Bookmarked What Really Happened When Google Ousted Timnit Gebru by Tom Simonite (WIRED)

She was a star engineer who warned that messy AI can spread racism. Google brought her in. Then it forced her out. Can Big Tech take criticism from within?

Tom Simonite digs into the complex series of events that led to Timnit Gebru (and Margaret Mitchell) being ousted from Google’s AI team. The story begins with Gebru fleeing Ethiopia during the conflict with Eritrea, follows her to Stanford and Apple, and then to her PhD in Fei-Fei Li’s lab, where she explored computer vision, deep learning and artificial intelligence. All along, she grappled with questions of gender and race. Later, while working at Microsoft, Gebru made the case for a framework called Datasheets for Datasets.

Datasheets for datasets is a tool for documenting the datasets used for training and evaluating machine learning models. The aim of datasheets is to increase dataset transparency and facilitate better communication between dataset creators and dataset consumers.
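As a rough illustration, a datasheet can be thought of as a structured record whose sections prompt a dataset’s creators to answer questions before anyone trains on the data. The section names below follow the headings in Gebru et al.’s paper; the class itself and the example values are my own hypothetical sketch, not an official schema:

```python
from dataclasses import dataclass


# A minimal, hypothetical sketch of a datasheet as a structured record.
# The section names follow Gebru et al.'s "Datasheets for Datasets";
# the class and its example values are illustrative, not an official format.
@dataclass
class Datasheet:
    motivation: str          # Why was the dataset created, and by whom?
    composition: str         # What do the instances represent? Any sensitive data?
    collection_process: str  # How was the data acquired and sampled?
    preprocessing: str       # What cleaning or labelling was applied?
    uses: str                # Intended tasks, and tasks it should NOT be used for
    distribution: str        # How is it shared, and under what licence?
    maintenance: str         # Who maintains it, and how are errata handled?

    def render(self) -> str:
        """Render the datasheet as plain text to ship alongside the dataset."""
        return "\n\n".join(
            f"## {name.replace('_', ' ').title()}\n{value}"
            for name, value in vars(self).items()
        )


# Example: a dataset consumer can read this before deciding to train on the data.
sheet = Datasheet(
    motivation="Benchmark face images for evaluating recognition bias.",
    composition="10,000 labelled portraits; contains biometric (sensitive) data.",
    collection_process="Collected from volunteers with documented consent.",
    preprocessing="Faces cropped and aligned; labels audited by two annotators.",
    uses="Bias evaluation only; not intended for deployment-grade training.",
    distribution="Research licence, gated download.",
    maintenance="Contact the maintainers to report labelling errors.",
)
print(sheet.render())
```

The point of the structure is that the questions travel with the data: a consumer who has never spoken to the creators still learns what the dataset should and should not be used for.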

This all led Mitchell to invite Gebru to join her at Google. Although there were some wins on the artificial intelligence front, there were also many cultural challenges along the way.

Inside Google, researchers worked to build more powerful successors to BERT and GPT-3. Separately, the Ethical AI team began researching the technology’s possible downsides. Then, in September 2020, Gebru and Mitchell learned that 40 Googlers had met to discuss the technology’s future. No one from Gebru’s team had been invited, though two other “responsible AI” teams did attend. There was a discussion of ethics, but it was led by a product manager, not a researcher.

In part, this led to the ill-fated paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

The paper was not intended to be a bombshell. The authors did not present new experimental results. Instead, they cited previous studies about ethical questions raised by large language models, including about the energy consumed by the tens or even thousands of powerful processors required when training such software, and the challenges of documenting potential biases in the vast data sets they were made with. BERT, Google’s system, was mentioned more than a dozen times, but so was OpenAI’s GPT-3.
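To make the energy question concrete, here is a back-of-the-envelope sketch. Every figure in it (processor count, power draw, training time, data-centre overhead, grid carbon intensity) is an assumed placeholder for illustration, not a number from the paper or from any real training run:

```python
# Back-of-the-envelope estimate of training energy and emissions.
# All numbers below are assumed, illustrative figures -- not measurements
# of BERT, GPT-3, or any actual training run.
num_processors = 512          # assumed count of accelerators (GPUs/TPUs)
power_per_processor_kw = 0.3  # assumed average draw per accelerator, in kW
training_hours = 24 * 14      # assumed two-week training run
pue = 1.5                     # assumed data-centre overhead (power usage effectiveness)
grid_kg_co2_per_kwh = 0.4     # assumed grid carbon intensity

energy_kwh = num_processors * power_per_processor_kw * training_hours * pue
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Energy: {energy_kwh:,.0f} kWh")    # ~77,414 kWh under these assumptions
print(f"CO2:    {co2_tonnes:,.1f} tonnes") # ~31.0 tonnes under these assumptions
```

Even with these modest assumed figures the totals are substantial, and they scale linearly with processor count and training time, which is part of why the paper flagged ever-larger models as an environmental concern.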

It is unclear where this leaves research into, and development of, artificial intelligence.

Bookmarked The Delicate Ethics of Using Facial Recognition in Schools (WIRED)

A growing number of districts are deploying cameras and software to prevent attacks. But the systems are also used to monitor students—and adult critics.

Tom Simonite and Gregory Barber discuss the rise of facial recognition in US schools. The software is often adapted from settings such as Israeli checkpoints. It serves as a ‘free’ and ‘efficient’ means of maintaining student safety, at the cost of normalising a culture of surveillance. Worse still is the argument that the use of facial recognition is a case of fighting fire with fire:

“You meet superior firepower with superior firepower,” Matranga says. Texas City schools can now mount a security operation appropriate for a head of state. During graduation in May, four SWAT team officers waited out of view at either end of the stadium, snipers perched on rooftops, and lockboxes holding AR-15s sat on each end of the 50-yard line, just in case. (source)

I am with Audrey Watters here: what is ‘delicate’ ethics?