Google’s last surviving VR product is dead. Today the company stopped selling the Google Cardboard VR viewer on the Google Store, the last move in a long wind-down of Google’s once-ambitious VR efforts. The message on the Google Store, which was first spotted by Android Police, reads, “We are no longer selling Google Cardboard on the Google Store.”
The intimate camerawork of its web broadcasts gives everyone the best seat in the house.
As the performance scholar Sarah Bay-Cheng points out, “mediated theatre” that’s edited for a screen offers a very different sense of space, movement, and time than an in-person performance. Eight or so cameras get positioned around the theater, and there are two camera rehearsals before the broadcast. The show is recorded in a single take in front of an audience and broadcast live that night to movie theaters, with some delay for audiences in different time zones.
Speaking from a US perspective, Pollack-Pelzner also situates such broadcasts within the literary canon and colonialism, suggesting that there is something to be said about which works in particular are chosen for broadcast.
broadcasts also reinforce a sense of the U.K. as the center of civilization, and cinematic outposts around the world as its fringes, a message reinforced by the particular plays NT Live chooses for export. Although the theater has, in recent years, become much more supportive of diverse artists, the broadcasts for NT at Home come straight out of the Victorian canon, a series of Shakespeare and 19th-century-novel adaptations: Twelfth Night, Antony and Cleopatra, Frankenstein, Jane Eyre, Treasure Island. What the National sends out under its banner, “the best of British theatre,” is more or less the same culture that the British empire used to enforce Englishness around the world a century and a half ago. London is still the metropole; I’m still regarding it fondly from a colonial outpost. It’s the very coziness, the domesticity, of NT at Home that makes its imperial echoes both so pervasive and so hard to hear.
This can be understood as part of a wider push-back against the limits of streaming. Although there is a plethora of content available, whether it be museums, zoos or concerts, a growing sense of fatigue has set in. Chris DeVille, for example, argues that musical performances are often underwhelming:
Livestreams suck. Livestreams have always sucked. There are exceptions — when your favorite artist logs on, when something incredibly charming and unexpected happens — but in general, watching musicians perform onscreen from home is underwhelming and sometimes depressing. By necessity, the format has become a mainstay of the music industry during the coronavirus pandemic, which has only underlined how much the format sucks. There’s a reason the streamed concert platform StageIt was in dire financial peril before COVID-19 struck and why the world’s best and most popular musical artists didn’t typically lower themselves to the level of YouTube struggle-folkies until they had to. Under normal circumstances, when the live concert experience is available and people can safely leave their homes, livestreams are clearly an inferior alternative. They suck.
Peter Schjeldahl, meanwhile, reflects on the mark that virtual tours of galleries will leave on us, accompanying us spectrally.
Online “virtual tours” add insult to injury, in my view, as strictly spectacular, amorphous disembodiments of aesthetic experience. Inaccessible, the works conjure in the imagination a significance that we have taken for granted. Purely by existing, they stir associations and precipitate meanings that may resonate in this plague time.
In the end, I am reminded of something that Audrey Watters wrote a few years ago about virtual field trips.
Virtual field trips are not field trips. Oh sure, they might provide educational content. They might, as Google’s newly unveiled “Expeditions” cardboard VR tool promises, boast “360° photo spheres, 3D images and video, ambient sounds — annotated with details, points of interest and questions that make them easy to integrate into curriculum already used in schools.” But virtual field trips do not offer physical context; they do not offer social context. Despite invoking the adjective “immersive,” they most definitely are not.
Even if the current crisis is not one of equity, it is still something worth stopping to consider.
The movie industry has seen tech advances since its inception. But do audiences really want to have a say in a film’s plot?
Are the systems we’ve developed to enhance our lives now impairing our ability to distinguish between reality and falsity?
Dr Laura D’Olimpio – Senior Lecturer in Philosophy, University of Notre Dame Australia
Andrew Potter – Associate Professor, Institute for the Study of Canada, McGill University; author of The Authenticity Hoax
Hany Farid – Professor of Computer Science, Dartmouth College, USA
Mark Pesce – Honorary Associate, Digital Cultures Programme, University of Sydney
Robert Thompson – Professor of Media and Culture, Syracuse University
This is an interesting episode with regard to augmented reality and fake news. One of the most useful points was Hany Farid's description of machine learning and deep fakes:
When you think about faking an image or faking a video you typically think of something like Adobe Photoshop, you think about somebody takes an image or the frames of a video and manually pastes somebody’s face into an image or removes something from an image or adds something to a video, that’s how we tend to think about digital fakery. And what Deep Fakes is, where that word comes from, by the way, is there has been this revolution in machine learning called deep learning which has to do with the structure of what are called neural networks that are used to learn patterns in data.
And what Deep Fakes are is a very simple idea. You hand this machine learning algorithm two things; a video, let’s say it’s a video of somebody speaking, and then a couple of hundred, maybe a couple of thousand images of a person’s face that you would like to superimpose onto the video. And then the machine learning algorithm takes over. On every frame of the input video it finds automatically the face. It estimates the position of the face; is it looking to the left, to the right, up, down, is the mouth open, is the mouth closed, are the eyes open, are the eyes closed, are they winking, whatever the facial expression is.
It then goes into the sea of images of this new person that you have provided, either finds a face with a similar pose and facial expression or synthesises one automatically, and then replaces the face with that new face. It does that frame after frame after frame for the whole video. And in that way I can take a video of, for example, me talking and superimpose another person’s face over it.
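The frame-by-frame loop Farid describes can be sketched as a toy Python program. To be clear, everything here is hypothetical scaffolding: the pose descriptors and the "face bank" are crude stand-ins for the neural networks he mentions, so this is a minimal sketch of the matching-and-replacement logic rather than anything resembling a working deepfake.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One frame of the input video: a placeholder for the detected
    face region, plus its estimated pose (yaw, pitch, mouth_open)."""
    face: str
    pose: tuple

@dataclass
class SourceFace:
    """One image of the new person, also tagged with a pose estimate."""
    image: str
    pose: tuple

def pose_distance(a, b):
    """Crude squared-error distance between two pose descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def swap_faces(video, face_bank):
    """Farid's 'frame after frame' loop: for every frame, find the
    source face whose pose best matches, then substitute it for the
    original face. (A real system would synthesise a face instead of
    just picking the nearest match.)"""
    output = []
    for frame in video:
        best = min(face_bank, key=lambda s: pose_distance(s.pose, frame.pose))
        output.append(Frame(face=best.image, pose=frame.pose))
    return output

# Toy usage: two input frames, a bank of three poses of the new person.
video = [Frame("orig_0", (0.0, 0.0, 1.0)),   # facing front, mouth open
         Frame("orig_1", (0.5, 0.0, 0.0))]   # looking left, mouth closed
bank = [SourceFace("new_left", (0.6, 0.0, 0.0)),
        SourceFace("new_front_open", (0.0, 0.0, 1.0)),
        SourceFace("new_down", (0.0, -0.5, 0.0))]

swapped = swap_faces(video, bank)
print([f.face for f in swapped])  # -> ['new_front_open', 'new_left']
```

The nearest-pose lookup stands in for the "finds a face with a similar pose and facial expression" step; the deep-learning part of a real pipeline is precisely that it learns to synthesise the matching face rather than retrieve one.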