Bookmarked The Secret History of Facial Recognition (Wired)

Sixty years ago, a sharecropper’s son invented a technology to identify faces. Then the record of his role all but vanished. Who was Woody Bledsoe, and who was he working for?

Shaun Raviv explores the secret history of Woody Bledsoe, Panoramic Research Incorporated, and the CIA-funded research into facial recognition. At the height of his work:

When Woody and Hart asked three people to cross-match subsets of 100 faces, even the fastest one took six hours to finish. The CDC 3800 computer completed a similar task in about three minutes, achieving a hundredfold reduction in time. The humans were better at coping with head rotation and poor photographic quality, Woody and Hart acknowledged, but the computer was “vastly superior” at tolerating the differences caused by aging. Overall, they concluded, the machine “dominates” or “very nearly dominates” the humans.

Due to the association with the government, Bledsoe’s papers and research remained confidential.

This was the greatest success Woody ever had with his facial-recognition research. It was also the last paper he would write on the subject. The paper was never made public—for “government reasons,” Hart says—which both men lamented. In 1970, two years after the collaboration with Hart ended, a roboticist named Michael Kassler alerted Woody to a facial-recognition study that Leon Harmon at Bell Labs was planning. “I’m irked that this second rate study will now be published and appear to be the best man-machine system available,” Woody replied. “It sounds to me like Leon, if he works hard, will be almost 10 years behind us by 1975.” He must have been frustrated when Harmon’s research made the cover of Scientific American a few years later, while his own, more advanced work was essentially kept in a vault.

Replied to My Way or the Highway (rtschuetz.net)

Just as with the auto repair shop, there are costs associated with ignoring research, experience, and observations. In an age of computer algorithms and artificial intelligence, how much value should we, do we, place on professional judgment?

Robert, I enjoyed your reflection on the balance between professional judgement and the use of technology. The world of algorithms and artificial intelligence is posing a lot of challenges for education at the moment. I like Simon Buckingham Shum’s challenge to define the education we want and go from there.

Bookmarked The Secretive Company That Might End Privacy as We Know It (nytimes.com)

Mr. Ton-That said his company used only publicly available images. If you change a privacy setting in Facebook so that search engines can’t link to your profile, your Facebook photos won’t be included in the database, he said.

But if your profile has already been scraped, it is too late. The company keeps all the images it has scraped even if they are later deleted or taken down, though Mr. Ton-That said the company was working on a tool that would let people request that images be removed if they had been taken down from the website of origin.

Kashmir Hill explores Clearview AI and the world of digital tracking. Although this seems like a case of bad faith, I am left wondering how it differs from Google and reverse image searches. What confuses me is the process for having images supposedly removed from the database: how would you even know they are there? This is a point discussed on the Download This podcast. There have been many different responses to this, including the call to ban scraping of the web. Cory Doctorow suggests that this is not necessarily the best approach and instead calls for laws that clarify what you can do with scraped data.

If we want to protect privacy, we should pass a federal privacy law — something Big Tech has fought tooth and nail — that regulates what you do with scraped data, without criminalizing an activity that is key to competition, user empowerment, academic and security research.

Listened Artificial intelligence, ethics and education from Radio National

AI holds enormous potential for transforming the way we teach, says education technology expert Simon Buckingham Shum, but first we need to define what kind of education system we want.

Also, the head of the UK’s new Centre for Data Ethics and Innovation warns democratic governments that they urgently need an ethics and governance framework for emerging technologies.

And Cognizant’s Bret Greenstein on when it would be unethical not to use AI.

Guests

Roger Taylor – Chair of the UK Government’s Centre for Data Ethics and Innovation

Simon Buckingham Shum – Professor of Learning Informatics, University of Technology Sydney, leader of the Connected Intelligence Centre; co-founder and former Vice-President of the Society for Learning Analytics Research

Bret Greenstein – Senior Vice President and Global Head of AI and Analytics, Cognizant

In this episode of RN Future Tense, Antony Funnell leads an exploration of artificial intelligence, educational technology and ethics. Simon Buckingham Shum discusses the current landscape and points out that we need to define the education we want, while Roger Taylor raises the concern that if democratic governments do not find a position that fits their own values, they will instead have the agenda dictated by either America’s market-based solutions or China’s focus on the state. This is a topic that has been discussed on a number of fronts, including by Erica Southgate. It also reminds me of Naomi Barnes’ 20 Thoughts on Automated Schooling.
Listened The role of humans in the technological age from Radio National

Forget the humans versus machine dichotomy. Our relationship with technology is far more complicated than that. To understand AI, first we need to appreciate the role humans play in shaping it.