Google will change its procedures for reviewing researchers’ work before July, according to a town hall recording heard by Reuters, part of an effort to quell internal turmoil over the integrity of its artificial intelligence (AI) research.
In remarks at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company forced out two prominent women and rejected their work, according to an hour-long recording, the contents of which were confirmed by two sources.
The teams are already trying out a questionnaire that will assess projects for risk and help researchers navigate reviews, said Maggie Johnson of the research unit.
Reuters reported in December that Google had introduced a “sensitive topics” review for studies involving dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be changed to refrain from casting Google technology in a negative light, Reuters reported.
Jeff Dean, the Google vice president who oversees the division, said Friday that the “sensitive topics” review had been and remains confusing, and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules, according to the recording.
Ghahramani, a professor at the University of Cambridge who joined Google in September from Uber, said during the town hall: “We must be comfortable with that discomfort” of self-critical research.
Google declined to comment on Friday’s meeting.
An internal email, seen by Reuters, provided new details about Google researchers’ concerns, showing exactly how Google’s legal department had modified one of the three AI papers, titled “Extracting Training Data from Large Language Models.”
The email, dated February 8 and written by a co-author of the paper, Nicholas Carlini, went to hundreds of colleagues and sought to draw attention to what he called “deeply insidious” edits by the company’s lawyers.
“Let’s be clear here,” the roughly 1,200-word email said. “When we as academics write that we have a ‘concern’ or find something ‘worrying’ and a Google lawyer demands that we change it to sound nicer, this is very much Big Brother.”
Required edits, according to his email, included “negative-to-neutral” swaps such as changing the word “concerns” to “considerations” and “dangers” to “risks.” Lawyers also demanded the deletion of references to Google technology; of the authors’ finding that AI leaked copyrighted content; and of the words “breach” and “sensitive,” the email said.
Carlini did not respond to requests for comment. Google, in response to questions about the email, disputed its claim that lawyers were trying to control the tone of the paper. The company said it had no issue with the topics the paper examined, but found that some legal terms were used inaccurately and carried out a thorough edit as a result.
Racial equity audit
Google last week also named Marian Croak, a pioneer in internet audio technology and one of Google’s few Black vice presidents, to consolidate and manage ten teams studying issues such as racial disparities in algorithms and technology for people with disabilities.
Croak said at Friday’s meeting that it would take time to address concerns among AI ethics researchers and to repair the damage to Google’s brand. “Please hold me fully responsible for trying to turn that situation around,” she said on the recording.
Johnson added that the AI organization had brought in a consulting firm to conduct a comprehensive racial-equity impact assessment. The first-of-its-kind audit for the department will lead to recommendations “that are going to be quite difficult,” she said.
Tensions in Dean’s division had deepened in December after Google fired Timnit Gebru, co-lead of the ethical AI research team, when she refused to withdraw a paper on language-generating AI. Gebru, who is Black, accused the company at the time of reviewing her work differently because of her identity and of marginalizing employees from under-represented backgrounds. Almost 2,700 employees signed an open letter in support of Gebru.
At the town hall, Dean elaborated on what scholarship the company would support. “We want responsible AI and ethical AI research,” Dean said, giving the example of studying the environmental costs of technology. But he said it was problematic to cite data that was off by “close to a factor of a hundred” while ignoring more accurate statistics, as well as Google’s efforts to reduce emissions. Dean had previously criticized Gebru’s paper for omitting important findings on environmental impact.
Gebru defended the paper’s citations. “It is a very bad look for Google to come out of this defensively against a paper that was quoted by so many of their peer institutions,” she told Reuters.
In recent months, employees have continued to post about their frustrations on Twitter as Google investigated and then fired Margaret Mitchell, an ethical AI lead, for moving electronic files outside the company. Mitchell said on Twitter that she had acted “to raise concerns about racial and gender inequity, and to speak about Google’s problematic firing of Dr. Gebru.”
Mitchell had collaborated on the paper that led to Gebru’s departure, and a version published online last month without a Google affiliation listed “Shmargaret Shmitchell” as a co-author.
Asked for comment, Mitchell, through a lawyer, expressed disappointment with Dean’s criticism of the paper and said her name was removed on a corporate order.