Bookmarked What Really Happened When Google Ousted Timnit Gebru by Tom Simonite (WIRED)

She was a star engineer who warned that messy AI can spread racism. Google brought her in. Then it forced her out. Can Big Tech take criticism from within?

Tom Simonite digs into the complex series of events that led to Timnit Gebru (and Margaret Mitchell) being ousted from Google’s Ethical AI team. The piece traces Gebru’s path: fleeing Ethiopia during the conflict with Eritrea, studying at Stanford and working at Apple, then returning to complete her PhD in Fei-Fei Li’s lab, exploring computer vision, deep learning and artificial intelligence. All along she grappled with questions of gender and race. Later, working at Microsoft, Gebru proposed a framework called Datasheets for Datasets.

This all led to Mitchell inviting Gebru to join her at Google. Although there were some wins on the artificial intelligence front, there were also many cultural challenges along the way.

In part, this led to the ill-fated paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

It is unclear where this leaves research into, and the development of, artificial intelligence.

Bookmarked Timnit Gebru’s Exit From Google Exposes a Crisis in AI (WIRED)

This crisis makes clear that the current AI research ecosystem—constrained as it is by corporate influence and dominated by a privileged set of researchers—is not capable of asking and answering the questions most important to those who bear the harms of AI systems. Public-minded research and knowledge creation isn’t just important for its own sake, it provides essential information for those developing robust strategies for the democratic oversight and governance of AI, and for social movements that can push back on harmful tech and those who wield it. Supporting and protecting organized tech workers, expanding the field that examines AI, and nurturing well-resourced and inclusive research environments outside the shadow of corporate influence are essential steps in providing the space to address these urgent concerns.

Alex Hanna reports on Timnit Gebru’s exit from Google and the implications this has for research into artificial intelligence. The piece highlights the dark side of having your research funded by the very company you are scrutinising.

In an interview with Karen Hao, Gebru questions Google’s response, suggesting that the company treats those accused of gross misconduct better than it treated her.

John Naughton suggests that this is no different from what the oil and tobacco industries have done in the past.

This all reminds me of Jordan Erica Webber’s discussion from a few years ago about the push for more ethics in technology and whether it is just a case of public relations.