Dear everyone,
You are invited to this week’s online Lunch2B, which will take place on Friday, March 19th, at 12:00 PM, Italian time.
Here is the link for this session; we look forward to seeing you online: https://univiu-org.zoom.us/j/92814546692?pwd=TVdDc3JLZFA5TDZkYVdNYVk1ZHNYUT09
Machine learning algorithms learn from the training data they are given, including any incorrect assumptions embedded in that data. In this way, these algorithms can reflect, or even magnify, the biases present in the data assembled by the people who build them.
For example, if an algorithm is trained on data that encodes racist or sexist patterns, the resulting predictions will reflect them. Some existing algorithms have mislabeled Black people as “gorillas” or charged Asian Americans higher prices for school tutoring.
Algorithms are already being used to make creditworthiness and hiring decisions, yet they may not pass the disparate impact test traditionally used to identify discriminatory practices.
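For the curious, here is a minimal Python sketch of the “four-fifths rule,” one common way to operationalize that test: a group whose selection rate falls below 80% of the most-favored group’s rate is flagged as potential disparate impact. The group names and counts below are entirely hypothetical, for illustration only.

# Minimal sketch of the "four-fifths rule" for disparate impact.
# All groups and numbers are hypothetical, for illustration only.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / total

def disparate_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical hiring outcomes from an algorithmic screener.
outcomes = {
    "group_a": selection_rate(selected=48, total=100),  # 0.48
    "group_b": selection_rate(selected=30, total=100),  # 0.30
}

for group, ratio in disparate_impact_ratios(outcomes).items():
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")

Running this flags group_b, whose ratio (0.30 / 0.48 ≈ 0.62) falls below the 0.8 threshold; of course, a real audit would involve far more than this single statistic.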
How can we make sure algorithms are fair, especially when they are privately owned by corporations (think of the Amazon hiring-algorithm scandal!) and not open to public scrutiny? How can we balance openness with intellectual property?
We want to hear your opinions and thoughts.
Let’s chat about all these questions and more!
Your three beloved REDs, Francesca, Olivia, and Luka