
What Google’s firing of researcher Timnit Gebru means for AI ethics



Google sparked an uproar earlier this month when it fired Timnit Gebru, co-lead of a team of researchers at the company that studies the ethical implications of artificial intelligence. Google claims that it accepted her resignation, but Gebru, who is Black, says she was fired for drawing unwelcome attention to the lack of diversity in Google’s workforce. She had also clashed with her supervisors over their demand that she withdraw a paper she had co-authored on the ethical issues raised by certain types of AI models that are central to Google’s business.

In this week’s Trend Lines podcast, WPR’s Elliot Waldman was joined by Karen Hao, senior AI reporter for the MIT Technology Review, to discuss Gebru’s dismissal and its implications for the increasingly important field of AI ethics.

Listen to the full interview with Karen Hao on the Trend Lines podcast:

If you like what you hear, you can subscribe to Trend Lines:
Google Podcasts | Apple Podcasts | Spotify

The following is a partial transcript of the interview. It has been lightly edited for clarity.

World Politics Review: First, can you tell us a little about Gebru and the stature she holds in the AI field, given the groundbreaking research she has done, and how she eventually ended up at Google?

Karen Hao: Timnit Gebru, you could say, is one of the cornerstones of the AI ethics field. She received her doctorate at Stanford under the guidance of Fei-Fei Li, who is one of the pioneers of the entire AI field. After completing her doctorate, she went to Microsoft for a postdoc, before ending up at Google, which approached her on the strength of the impressive work she had done. Google was starting its Ethical AI team, and they thought she would be a great person to lead it. One of the studies she is best known for is a paper she co-authored with another Black female researcher, Joy Buolamwini, on the algorithmic discrimination that shows up in commercial facial recognition systems.

The paper was published in 2018, and at the time its revelations were quite shocking, because it audited commercial facial recognition systems that were already being sold by tech giants. The findings showed that these systems, which were marketed as highly accurate, were in fact extremely inaccurate, especially on dark-skinned and female faces. In the two years since the paper was published, a series of events has ultimately led these tech giants to take down or suspend the sale of facial recognition products to police. The seeds of those actions were planted by the paper that Timnit co-authored. So she is a very big presence in the field of AI ethics, and she has done a lot of groundbreaking work. She also founded a nonprofit organization called Black in AI that advocates for diversity in tech, and in AI specifically. She is a force of nature and a very well-known name in the space.

We should think about how we can develop new AI systems that do not rely on this brute-force method of scraping billions and billions of sentences from the internet.

WPR: What are the ethical issues that Gebru and her co-authors identified in the paper that led to her dismissal?

Hao: The paper was about the risks of large language models, which are essentially AI algorithms trained on enormous amounts of text. You can imagine that they are trained on everything published on the internet – all the Reddit threads, the Twitter and Instagram captions – everything. They try to learn how we construct sentences in English, and how they can then generate sentences in English themselves. One of the reasons Google is so interested in this technology is that it helps power its search engine. For Google to give you relevant results when you type in a search query, it has to be able to capture or interpret the context of what you are saying, so that if you type three random words, it can piece together the intent of what you are looking for.
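Editor’s note: As a rough illustration of the kind of model Hao describes, the short Python sketch below uses the open-source Hugging Face transformers library and the small, publicly available GPT-2 model. Both are chosen here purely for illustration; they are assumptions of this sketch, not tools Google has said it uses.

    from transformers import pipeline

    # Load a small, publicly available language model (GPT-2) through the
    # text-generation pipeline. This is an illustration only; production
    # search models are far larger and are not publicly available.
    generator = pipeline("text-generation", model="gpt2")

    # A few loosely related, search-style words. The model continues the
    # fragment with plausible context, hinting at how a language model can
    # help infer the intent behind a terse query.
    query = "cheap flights to Tokyo in"
    completions = generator(
        query, max_new_tokens=15, num_return_sequences=3, do_sample=True
    )

    for c in completions:
        print(c["generated_text"])

Running the sketch prints three possible continuations of the query fragment, which is the same next-word-prediction capability that, at vastly larger scale, underpins the systems discussed in the paper.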

What Timnit and her co-authors point out in this paper is that this relatively recent line of research is beneficial, but it also has some quite significant downsides that need to be talked about more. One of them is that these models take an enormous amount of electricity to train and run, because they live in very large data centers. And given that we are in a global climate crisis, the field should be thinking about how this kind of research can exacerbate climate change and then have downstream effects that disproportionately affect marginalized communities and developing countries. Another risk they point out is that these models are so large that they are very difficult to scrutinize, and they also absorb large swaths of the internet that are very toxic.

So they end up normalizing a lot of sexist, racist or violent language that we do not want to perpetuate into the future. Yet because these models are so hard to examine, we are not able to dissect exactly what they are learning and then weed those things out. Finally, the conclusion of the paper is that there are great benefits to these systems, but there are also great risks. And as a field, we should spend more time thinking about how we can develop new language AI systems that do not rely so heavily on this brute-force method of just training them on billions and billions of sentences scraped from the internet.

WPR: And how did Gebru’s superiors at Google react to that?

Hao: What is interesting is that Timnit has said – and this has been corroborated by her former teammates – that the paper had actually been approved for submission to a conference. This is a very standard process for her team and the broader Google AI team. The whole point of doing this research is to contribute to the academic discourse, and the best way to do that is to submit it to an academic conference. They had prepared this paper with some external collaborators and submitted it to one of the leading AI ethics conferences for next year. It had been approved by her manager and by others, but at the last minute she was told by superiors above her manager that she needed to withdraw the paper.

Very little was revealed to her about why she needed to withdraw the paper. She went on to ask many questions about who had asked her to withdraw it, why they had asked, and whether modifications could be made to make it more palatable for submission. She was stonewalled and did not receive further clarification, so just before she left for the Thanksgiving holiday, she sent an email saying that she would not withdraw the paper unless certain conditions were met first.

Silicon Valley has a notion of how the world works based on the disproportionate representation of a particular subset of the world. They are usually upper-class, straight white men.

She asked who had given the feedback and what the feedback was. She also asked for meetings with various leaders to explain what had happened, because the way her research was treated was extremely disrespectful and not the way researchers had traditionally been treated at Google, and she wanted an explanation for why they had done it. If they did not meet these conditions, she would then have an honest conversation with them about a last day at Google, so that she could make a transition plan, leave the company smoothly and publish the paper outside the Google context. Then she went on vacation, and in the middle of it, one of her direct reports texted her that they had received an email saying that Google had accepted her resignation.

WPR: As for the issues Gebru and her co-authors address in their paper, what does it mean for the field of AI ethics that there seems to be this enormous level of moral hazard, where the communities most at risk from the effects that Gebru and her co-authors identified, such as the environmental consequences, are marginalized and often lack a voice in the tech field, while the engineers who build these AI models are largely insulated from those risks?

Hao: I think this gets to the core of what has been an ongoing discussion in this community over the last couple of years, which is that Silicon Valley has a notion of how the world works based on the disproportionate representation of a particular subset of the world. They are usually upper-class, straight white men. The values they hold, drawn from that narrow cross-section of lived experience, have somehow become the values that everyone else has to live by. But it does not always work that way.

They make a cost-benefit analysis that it is worth building these very large language models, and worth spending all that money and electricity to get the benefits of that kind of research. But that calculation is based on their values and their lived experience, and it may not be the same cost-benefit analysis that someone in a developing country would make, where they would rather not have to deal with the consequences of climate change later. This was one of the reasons Timnit was so determined to ensure there was more diversity at the decision-making table. If you have more people with different lived experiences analyzing the effects of these technologies through their own lenses and bringing their voices into the conversation, we might end up with more technologies that do not skew their benefits so heavily toward one group at the expense of others.

Editor’s note: The top photo is available under a CC BY 2.0 license.

