Bookmarked What Really Happened When Google Ousted Timnit Gebru by Tom Simonite (WIRED)

She was a star engineer who warned that messy AI can spread racism. Google brought her in. Then it forced her out. Can Big Tech take criticism from within?

Tom Simonite digs into the complex series of events that led to Timnit Gebru (and Margaret Mitchell) being ousted from Google’s AI team. The story starts with Gebru fleeing Ethiopia during the conflict with Eritrea, follows her path to Stanford and Apple, and then turns to her PhD in Fei-Fei Li’s lab exploring computer vision, deep learning and artificial intelligence. All along she grappled with questions of gender and race. Later, working for Microsoft, Gebru argued for a framework called Datasheets for Datasets.

Datasheets for datasets is a tool for documenting the datasets used for training and evaluating machine learning models. The aim of datasheets is to increase dataset transparency and facilitate better communication between dataset creators and dataset consumers.
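
To make the idea concrete, here is a minimal sketch of what a datasheet record might look like in code. The `Datasheet` class, its field names and the example values are purely illustrative assumptions of mine; they only loosely echo the question categories in the Datasheets for Datasets paper (motivation, composition, collection process, uses, maintenance), each of which covers far more questions than shown here.

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json


@dataclass
class Datasheet:
    """A minimal, illustrative datasheet record for a dataset.

    The fields loosely follow the question categories proposed in
    "Datasheets for Datasets"; a real datasheet answers many more
    questions under each heading.
    """
    name: str
    motivation: str          # Why was the dataset created, and by whom?
    composition: str         # What do the instances represent? Known gaps or skews?
    collection_process: str  # How, when and from where was the data gathered?
    preprocessing: str       # Cleaning, labelling or filtering that was applied
    recommended_uses: List[str] = field(default_factory=list)
    uses_to_avoid: List[str] = field(default_factory=list)
    maintenance: str = "unspecified"

    def to_json(self) -> str:
        """Serialise the datasheet so it can ship alongside the data."""
        return json.dumps(asdict(self), indent=2)


# Hypothetical example: a datasheet accompanying a face-image dataset.
sheet = Datasheet(
    name="example-faces-v1",
    motivation="Benchmark gender classification; assembled by a research lab.",
    composition="10,000 web photos of faces; skewed toward lighter-skinned subjects.",
    collection_process="Scraped from publicly available web pages in 2017.",
    preprocessing="Faces detected, cropped and resized; duplicates removed.",
    recommended_uses=["academic benchmarking"],
    uses_to_avoid=["identification or surveillance systems"],
)
print(sheet.to_json())
```

The point of such a record is less the particular schema than that it travels with the data, so that anyone training or evaluating a model on the dataset can see how it was made and where it should not be used.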

This all led to Mitchell inviting Gebru to work with her at Google. Although there were some wins with regard to artificial intelligence, there were also many cultural challenges along the way.

Inside Google, researchers worked to build more powerful successors to BERT and GPT-3. Separately, the Ethical AI team began researching the technology’s possible downsides. Then, in September 2020, Gebru and Mitchell learned that 40 Googlers had met to discuss the technology’s future. No one from Gebru’s team had been invited, though two other “responsible AI” teams did attend. There was a discussion of ethics, but it was led by a product manager, not a researcher.

In part, this led to the ill-fated paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

The paper was not intended to be a bombshell. The authors did not present new experimental results. Instead, they cited previous studies about ethical questions raised by large language models, including about the energy consumed by the tens or even thousands of powerful processors required when training such software, and the challenges of documenting potential biases in the vast data sets they were made with. BERT, Google’s system, was mentioned more than a dozen times, but so was OpenAI’s GPT-3.

It is unclear where this leaves research into and development of artificial intelligence.

Bookmarked Timnit Gebru’s Exit From Google Exposes a Crisis in AI (WIRED)

This crisis makes clear that the current AI research ecosystem—constrained as it is by corporate influence and dominated by a privileged set of researchers—is not capable of asking and answering the questions most important to those who bear the harms of AI systems. Public-minded research and knowledge creation isn’t just important for its own sake, it provides essential information for those developing robust strategies for the democratic oversight and governance of AI, and for social movements that can push back on harmful tech and those who wield it. Supporting and protecting organized tech workers, expanding the field that examines AI, and nurturing well-resourced and inclusive research environments outside the shadow of corporate influence are essential steps in providing the space to address these urgent concerns.

Alex Hanna reports on Timnit Gebru’s exit from Google and the implications it has for research into artificial intelligence. The piece highlights the dark side of being funded by the very company you are researching:

Meredith Whittaker, faculty director at New York University’s AI Now institute, says what happened to Gebru is a reminder that, although companies like Google encourage researchers to consider themselves independent scholars, corporations prioritize the bottom line above academic norms. “It’s easy to forget, but at any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest,” she says.

In an interview with Karen Hao, Gebru questions Google’s response, pointing out that the company treats people who have actually engaged in gross misconduct better than it treated her.

I didn’t expect it to be in that way—like, cut off my corporate account completely. That’s so ruthless. That’s not what they do to people who’ve engaged in gross misconduct. They hand them $80 million, and they give them a nice little exit, or maybe they passive-aggressively don’t promote them, or whatever. They don’t do to the people who are actually creating a hostile workplace environment what they did to me.

John Naughton suggests that this is no different to what has happened in the past with oil and tobacco.

And my question is: why? Is it just that the paper provides a lot of data which suggests that a core technology now used in many of Google’s products is, well, bad for the world? If that was indeed the motivation for the original dispute and decision, then it suggests that Google’s self-image as a technocratic force for societal good is now too important to be undermined by high-quality research which suggests otherwise. In which case, it suggests that there’s not that much difference between big tech companies and tobacco, oil and mining giants. They’re just corporations, doing what corporations always do.

This all reminds me of Jordan Erica Webber’s discussion from a few years ago about the push for more ethics and whether it is just a case of public relations.