Bookmarked These 3D models take you inside the shattered ruins of some of Ukraine’s cultural treasures (ABC News)

Durand hoped that by producing intricate 3D models, he could offer the world a unique perspective of what was happening to some of these Ukrainian sites.

Emmanuel Durand is documenting the war in Ukraine in a new way, creating 3D models of various heritage sites as a means of recording the impact. This reminds me of a piece I wrote a few years ago on imaging and imagining the past.
Watched History in 3D from YouTube

Welcome!

Our channel is dedicated to an interesting and promising topic: reviving history using 3D technologies. With their help we can see how a number of great ancient buildings and works of art looked, and we hope that in the near future we will all be able to enjoy exploring whole towns from the past and their life in all its diversity!

With best regards,
Danila Loginov and “History in 3D” creative team.

A collection of videos reimagining Ancient Rome and Greece using 3D technology.
Bookmarked Google’s VR dreams are dead: Google Cardboard is no longer for sale (Ars Technica)

Google’s last surviving VR product is dead. Today the company stopped selling the Google Cardboard VR viewer on the Google Store, the last move in a long wind-down of Google’s once-ambitious VR efforts. The message on the Google Store, which was first spotted by Android Police, reads, “We are no longer selling Google Cardboard on the Google Store.”

Google is putting an end to its work with Google Cardboard. I liked the idea of it, but always felt inhibited by how it would work practically in the classroom.
Bookmarked Why London’s National Theatre Is Hooking Online Viewers (The Atlantic)

The intimate camerawork of its web broadcasts gives everyone the best seat in the house.

Daniel Pollack-Pelzner reflects on the pivot of plays online. He explains how such mediated experiences differ from the feeling of being there in the theatre.

As the performance scholar Sarah Bay-Cheng points out, "mediated theatre" that's edited for a screen offers a very different sense of space, movement, and time than an in-person performance. Eight or so cameras get positioned around the theater, and there are two camera rehearsals before the broadcast. The show is recorded in a single take in front of an audience and broadcast live that night to movie theaters, with some delay for audiences in different time zones.

Writing from a US perspective, Pollack-Pelzner also situates such broadcasts within the literary canon and colonialism, suggesting that there is something to be said about which particular plays are chosen for export.

broadcasts also reinforce a sense of the U.K. as the center of civilization, and cinematic outposts around the world as its fringes, a message reinforced by the particular plays NT Live chooses for export. Although the theater has, in recent years, become much more supportive of diverse artists, the broadcasts for NT at Home come straight out of the Victorian canon, a series of Shakespeare and 19th-century-novel adaptations: Twelfth Night, Antony and Cleopatra, Frankenstein, Jane Eyre, Treasure Island. What the National sends out under its banner, "the best of British theatre," is more or less the same culture that the British empire used to enforce Englishness around the world a century and a half ago. London is still the metropole; I'm still regarding it fondly from a colonial outpost. It's the very coziness, the domesticity, of NT at Home that makes its imperial echoes both so pervasive and so hard to hear.

This can be understood as part of a wider pushback against the limits of streaming. Although there is a plethora of content available, whether it be museums, zoos or concerts, a growing sense of fatigue has set in. For example, Chris DeVille argues that streamed musical performances are often underwhelming:

Livestreams suck. Livestreams have always sucked. There are exceptions — when your favorite artist logs on, when something incredibly charming and unexpected happens — but in general, watching musicians perform onscreen from home is underwhelming and sometimes depressing. By necessity, the format has become a mainstay of the music industry during the coronavirus pandemic, which has only underlined how much the format sucks. There's a reason the streamed concert platform StageIt was in dire financial peril before COVID-19 struck and why the world's best and most popular musical artists didn't typically lower themselves to the level of YouTube struggle-folkies until they had to. Under normal circumstances, when the live concert experience is available and people can safely leave their homes, livestreams are clearly an inferior alternative. They suck.

Meanwhile, Peter Schjeldahl reflects on the mark that virtual tours of galleries will leave on us, accompanying us spectrally.

Online "virtual tours" add insult to injury, in my view, as strictly spectacular, amorphous disembodiments of aesthetic experience. Inaccessible, the works conjure in the imagination a significance that we have taken for granted. Purely by existing, they stir associations and precipitate meanings that may resonate in this plague time.

In the end, I am reminded of something that Audrey Watters wrote a few years ago about virtual tours.

Virtual field trips are not field trips. Oh sure, they might provide educational content. They might, as Google's newly unveiled "Expeditions" cardboard VR tool promises, boast "360° photo spheres, 3D images and video, ambient sounds — annotated with details, points of interest and questions that make them easy to integrate into curriculum already used in schools." But virtual field trips do not offer physical context; they do not offer social context. Despite invoking the adjective "immersive," they most definitely are not.

Even if the current crisis is not primarily one of equity, it is still something worth stopping to consider.

Listened Will mind-controlled films change cinema? Chips with Everything podcast by Jordan Erica Webber and Danielle Stephens from the Guardian

The movie industry has seen tech advances since its inception. But do audiences really want to have a say in a film’s plot?

Jordan Erica Webber is joined by David Schwartz, chief curator of the Museum of the Moving Image in New York, and Dr Polina Zioga, director of the Interactive Filmmaking Lab at Staffordshire University. They look back at the beginnings of film, as well as ahead to the future of a personalised experience.
Listened Have we lost our sense of reality? from Radio National

Are the systems we’ve developed to enhance our lives now impairing our ability to distinguish between reality and falsity?


Guests

Dr Laura D’Olimpio – Senior Lecturer in Philosophy, University of Notre Dame Australia

Andrew Potter – Associate Professor, Institute for the Study of Canada, McGill University; author of The Authenticity Hoax

Hany Farid – Professor of Computer Science, Dartmouth College, USA

Mark Pesce – Honorary Associate, Digital Cultures Programme, University of Sydney

Robert Thompson – Professor of Media and Culture, Syracuse University


This is an interesting episode with regard to augmented reality and fake news. One of the useful points was Hany Farid's description of machine learning and deep fakes:

When you think about faking an image or faking a video you typically think of something like Adobe Photoshop, you think about somebody takes an image or the frames of a video and manually pastes somebody’s face into an image or removes something from an image or adds something to a video, that’s how we tend to think about digital fakery. And what Deep Fakes is, where that word comes from, by the way, is there has been this revolution in machine learning called deep learning which has to do with the structure of what are called neural networks that are used to learn patterns in data.

And what Deep Fakes are is a very simple idea. You hand this machine learning algorithm two things; a video, let’s say it’s a video of somebody speaking, and then a couple of hundred, maybe a couple of thousand images of a person’s face that you would like to superimpose onto the video. And then the machine learning algorithm takes over. On every frame of the input video it finds automatically the face. It estimates the position of the face; is it looking to the left, to the right, up, down, is the mouth open, is the mouth closed, are the eyes open, are the eyes closed, are they winking, whatever the facial expression is.

It then goes into the sea of images of this new person that you have provided, either finds a face with a similar pose and facial expression or synthesises one automatically, and then replaces the face with that new face. It does that frame after frame after frame for the whole video. And in that way I can take a video of, for example, me talking and superimpose another person’s face over it.
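To make the frame-by-frame process Farid describes a little more concrete, here is a minimal sketch in Python. It uses OpenCV only for video I/O and basic face detection; `estimate_pose` and `pick_or_synthesise_face` are hypothetical placeholders standing in for the learned components (pose and expression estimation, face selection or synthesis), so this is an illustration of the pipeline's shape, not an actual deep-fake implementation.

```python
# Sketch of the frame-by-frame face-swap loop described above.
# OpenCV handles video reading/writing and face detection; the two helper
# functions below are hypothetical placeholders for the neural-network parts.
import cv2


def estimate_pose(face_img):
    """Placeholder: a real system would use a model to estimate head
    orientation and expression (gaze, mouth open/closed, etc.)."""
    return {"yaw": 0.0, "pitch": 0.0, "mouth_open": False}


def pick_or_synthesise_face(target_faces, pose):
    """Placeholder: find (or generate) a face of the target person whose
    pose and expression best match the current source frame."""
    return target_faces[0]


def swap_faces(source_video, target_faces, output_path):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(source_video)
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Find each face in the frame, estimate its pose, then paste in a
        # matching face of the target person.
        for (x, y, w, h) in detector.detectMultiScale(grey, 1.1, 5):
            pose = estimate_pose(frame[y:y + h, x:x + w])
            replacement = pick_or_synthesise_face(target_faces, pose)
            frame[y:y + h, x:x + w] = cv2.resize(replacement, (w, h))
        if writer is None:
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            writer = cv2.VideoWriter(output_path, fourcc, 25.0,
                                     (frame.shape[1], frame.shape[0]))
        writer.write(frame)
    cap.release()
    if writer is not None:
        writer.release()
```

The point of the sketch is simply that the process repeats for every frame: detect, estimate, match, replace; all of the hard work lives in the learned components the placeholders gloss over.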

Replied to Virtually the same? by Matthew Esterman (Medium)

What kind of learning experience can ā€˜other’ realities provide that our physical realities don’t?
What effects will (dramatically) reduced cost and much more prolific access to VR equipment mean for schools?
What professional learning will be required for teachers, parents and students to fully utilise these kinds of technologies?
How do we ensure that we don’t just create a new method of information consumption but critical thinking, collaboration and creativity?

I have written about VR before, from the perspective of Google Cardboard. Some ideas I explored were using it to support vocabulary, real-life learning, telling stories and sparking curiosity. It is an interesting space.