Listened Have we lost our sense of reality? from Radio National

Are the systems we’ve developed to enhance our lives now impairing our ability to distinguish between reality and falsity?

Guests

Dr Laura D’Olimpio – Senior Lecturer in Philosophy, University of Notre Dame Australia

Andrew Potter – Associate Professor, Institute for the Study of Canada, McGill University; author of The Authenticity Hoax

Hany Farid – Professor of Computer Science, Dartmouth College, USA

Mark Pesce – Honorary Associate, Digital Cultures Programme, University of Sydney

Robert Thompson – Professor of Media and Culture, Syracuse University


This is an interesting episode with regard to augmented reality and fake news. One of the useful points was Hany Farid’s description of machine learning and deep fakes:

When you think about faking an image or faking a video you typically think of something like Adobe Photoshop: you think about somebody taking an image or the frames of a video and manually pasting somebody’s face into an image, or removing something from an image, or adding something to a video. That’s how we tend to think about digital fakery. And what Deep Fakes is, where that word comes from, by the way, is that there has been this revolution in machine learning called deep learning, which has to do with the structure of what are called neural networks that are used to learn patterns in data.

And what Deep Fakes are is a very simple idea. You hand this machine learning algorithm two things: a video, let’s say it’s a video of somebody speaking, and then a couple of hundred, maybe a couple of thousand, images of a person’s face that you would like to superimpose onto the video. And then the machine learning algorithm takes over. On every frame of the input video it automatically finds the face. It estimates the position of the face: is it looking to the left, to the right, up, down; is the mouth open, is the mouth closed; are the eyes open, are the eyes closed; are they winking; whatever the facial expression is.

It then goes into the sea of images of this new person that you have provided, either finds a face with a similar pose and facial expression or synthesises one automatically, and then replaces the face with that new face. It does that frame after frame after frame for the whole video. And in that way I can take a video of, for example, me talking and superimpose another person’s face over it.
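Farid’s description maps onto a simple per-frame pipeline, which the sketch below tries to make concrete. This is not a working deep-fake system: the helpers detect_face and estimate_pose are hypothetical placeholders for models a real implementation would train, and a nearest-neighbour lookup stands in for the learned face-synthesis step he mentions.

```python
import numpy as np


def detect_face(frame):
    """Hypothetical helper: return the face bounding box (x, y, w, h)."""
    ...


def estimate_pose(frame, box):
    """Hypothetical helper: return a vector encoding head pose and
    expression (looking left/right/up/down, mouth open, eyes open, ...)."""
    ...


def matching_face(pose, target_faces, target_poses):
    """Pick the provided target face whose pose and expression best match.
    A real deep fake would synthesise a novel face with a trained network;
    nearest-neighbour lookup stands in for that here."""
    distances = np.linalg.norm(target_poses - pose, axis=1)
    return target_faces[np.argmin(distances)]


def swap_faces(frames, target_faces, target_poses):
    """The per-frame loop from the quote: find the face, estimate its pose
    and expression, fetch a matching target face, paste it over the original."""
    output = []
    for frame in frames:
        x, y, w, h = detect_face(frame)
        pose = estimate_pose(frame, (x, y, w, h))
        new_face = matching_face(pose, target_faces, target_poses)
        swapped = frame.copy()
        swapped[y:y + h, x:x + w] = new_face  # naive paste; real systems blend
        output.append(swapped)
    return output
```

The striking part is how little of this is manual: once the detection, pose-estimation and synthesis models exist, the whole loop runs automatically over every frame.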

Bookmarked The Building Blocks of Interpretability by Chris Olah (Google Brain Team)
There is a rich design space for interacting with enumerative algorithms, and we believe an equally rich space exists for interacting with neural networks. We have a lot of work left ahead of us to build powerful and trustworthy interfaces for interpretability. But, if we succeed, interpretability promises to be a powerful tool in enabling meaningful human oversight and in building fair, safe, and aligned AI systems.
Is it just me, or is this article, which explores how feature visualization can be combined with other interpretability techniques to understand how networks make decisions, a case of creating a solution and then working out how or why it works? It seems reactive, or maybe I just don’t get it.
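For a concrete sense of the feature visualization building block the article starts from, here is a minimal sketch of activation maximisation: start from noise and run gradient ascent on the input image so one unit inside the network fires strongly. It assumes a recent torchvision; the choice of GoogLeNet, the inception4a layer, and channel 42 are all arbitrary illustrations, not anything from the article.

```python
import torch
from torchvision import models

# Pretrained image classifier to probe (arbitrary choice for illustration).
model = models.googlenet(weights="DEFAULT").eval()

activations = {}
def hook(module, inputs, output):
    activations["value"] = output

# Record the output of one internal layer on every forward pass.
model.inception4a.register_forward_hook(hook)

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimiser = torch.optim.Adam([image], lr=0.05)
channel = 42  # arbitrary feature map to maximise

for _ in range(256):
    optimiser.zero_grad()
    model(image)
    # Gradient ascent: minimise the negative mean activation of one channel.
    loss = -activations["value"][0, channel].mean()
    loss.backward()
    optimiser.step()

# `image` now approximates the pattern this channel responds to.
```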
Liked When It Comes to Gorillas, Google Photos Remains Blind (WIRED)
Google’s caution around images of gorillas illustrates a shortcoming of existing machine-learning technology. With enough data and computing power, software can be trained to categorize images or transcribe speech to a high level of accuracy. But it can’t easily go beyond the experience of that training. And even the very best algorithms lack the ability to use common sense, or abstract concepts, to refine their interpretation of the world as humans do.