Listened Will mind-controlled films change cinema? Chips with Everything podcast by Jordan Erica Webber from the Guardian

The movie industry has seen tech advances since its inception. But do audiences really want to have a say in a film's plot?

Jordan Erica Webber is joined by David Schwartz, chief curator of the Museum of the Moving Image in New York, and Dr Polina Zioga, director of the Interactive Filmmaking Lab at Staffordshire University. They look back at the beginnings of film, as well as ahead to the future of a personalised viewing experience.
Listened Have we lost our sense of reality? from Radio National

Are the systems we've developed to enhance our lives now impairing our ability to distinguish between reality and falsity?

Guests

Dr Laura D'Olimpio – Senior Lecturer in Philosophy, University of Notre Dame Australia

Andrew Potter – Associate Professor, Institute for the Study of Canada, McGill University; author of The Authenticity Hoax

Hany Farid – Professor of Computer Science, Dartmouth College, USA

Mark Pesce – Honorary Associate, Digital Cultures Programme, University of Sydney

Robert Thompson – Professor of Media and Culture, Syracuse University


This is an interesting episode with regard to augmented reality and fake news. One of the useful points was Hany Farid's description of machine learning and deep fakes:

When you think about faking an image or faking a video, you typically think of something like Adobe Photoshop: somebody taking an image or the frames of a video and manually pasting somebody's face into an image, or removing something from an image, or adding something to a video. That's how we tend to think about digital fakery. And where the word Deep Fakes comes from, by the way, is that there has been this revolution in machine learning called deep learning, which has to do with the structure of what are called neural networks that are used to learn patterns in data.

And what Deep Fakes are is a very simple idea. You hand this machine learning algorithm two things: a video, let's say it's a video of somebody speaking, and then a couple of hundred, maybe a couple of thousand images of a person's face that you would like to superimpose onto the video. And then the machine learning algorithm takes over. On every frame of the input video it automatically finds the face. It estimates the position of the face: is it looking to the left, to the right, up, down, is the mouth open, is the mouth closed, are the eyes open, are the eyes closed, are they winking, whatever the facial expression is.

It then goes into the sea of images of this new person that you have provided, either finds a face with a similar pose and facial expression or synthesises one automatically, and then replaces the face with that new face. It does that frame after frame after frame for the whole video. And in that way I can take a video of, for example, me talking and superimpose another person’s face over it.
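Farid's description maps onto a simple frame-by-frame loop. The sketch below is just my own illustration of that pipeline, not any particular deepfake implementation: detect_face, estimate_pose, find_or_synthesise_face and paste_face are hypothetical stand-ins for the trained models he refers to, and only the OpenCV video reading and writing is a real library call.

```python
# A rough sketch of the frame-by-frame face-swap loop Farid describes.
# The four helpers below are hypothetical placeholders for the learned
# models (face detection, pose/expression estimation, face selection or
# synthesis, and compositing); only the OpenCV video I/O is real.
import cv2


def detect_face(frame):
    """Placeholder: locate the face region in a frame."""
    raise NotImplementedError


def estimate_pose(face_region):
    """Placeholder: estimate gaze direction, mouth/eye state, expression."""
    raise NotImplementedError


def find_or_synthesise_face(target_images, pose):
    """Placeholder: pick or generate a target face matching the pose."""
    raise NotImplementedError


def paste_face(frame, face_region, new_face):
    """Placeholder: composite the new face over the original one."""
    raise NotImplementedError


def swap_faces(input_video, target_images, output_video):
    reader = cv2.VideoCapture(input_video)
    fps = reader.get(cv2.CAP_PROP_FPS)
    size = (int(reader.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(output_video,
                             cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    while True:
        ok, frame = reader.read()
        if not ok:
            break                                    # end of the input video
        face = detect_face(frame)                    # 1. find the face in this frame
        pose = estimate_pose(face)                   # 2. estimate pose and expression
        new_face = find_or_synthesise_face(target_images, pose)  # 3. match or synthesise
        writer.write(paste_face(frame, face, new_face))          # 4. replace and write out

    reader.release()
    writer.release()
```

The point the sketch makes is the same one Farid does: once the models are trained, the swap itself is just this mechanical loop repeated frame after frame for the whole video.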