📺 Future Forum 2 – Knowledge and Ethics in the Next Machine Age

Watched
On the 8th of December at The Overseas Passenger Terminal in Sydney, Australia, BVN hosted its bi-annual conference – Futures Forum 2. The theme was ‘Knowledge and Ethics in the Next Machine Age’.

23:21 Larry Prusak: Knowledge and its Practices in the 21st Century

Prusak discusses the changes in knowledge over time and the impact that this has. This reminds me of Weinberger’s book Too Big To Know. Some quotes that stood out were:

Knowledge won’t flow without trust

and

Schools measure things they can measure even if it is not valuable

Again and again, Prusak talks about going wide, getting out and meeting new people.

1:21:59 Professor Genevieve Bell: Being Human in a Digital Age

Bell points out that computing has become about the creation, circulation, curation and resistance of data. All companies are data companies now. For example, Westfield used to be a real estate company, but it is now a data company.

The problem with algorithms is that they are based on the familiar and the retrospective; they do not account for wonder and serendipity.

As we design and develop standards for tomorrow, we need to think about the diversity of the boards and committees involved. If there are only white males at the table, how does this account for other perspectives?

We do want to be able to disconnect, even if Silicon Valley is built around being permanently connected. One of the things that we need to consider is what it means to have an analogue footprint.

Building on the discussion of data and trust, Bell makes the point:

The thing about trust is that you only get it once.

The question remains: who do we trust when our smart devices start selling our data?

In regard to the rise of the robots, our concern should be the artificial intelligence within them. One of the big problems is that robots follow rules and we don’t.

The future of technology that we need to be aspiring to is one where technology can support us in our art, wonder and curiosity.


A comment made during the presentation and shared after Bell had finished:

Is your current job the best place for you to make the world a better place?


2:49:51 Phillip Bernstein: The Future of Making Things: Design Practice in the Era of Connected Technology

Bernstein unpacks six technical disruptions – data, computational design, simulation analysis, the internet of things, industrial construction and machine learning – and looks at their implications for architecture.

3:51:44 Dr Simon Longstaff: Ethics in the Next Machine Age

Dr Longstaff explores the ethics associated with technology. This includes the consideration of ethical design, a future vision – Athens or Eden – and the purpose of making. Discussing the technology of WWII, Longstaff states:

Technical mastery devoid of ethics is the root of all evil

He notes that just because we can, it does not mean we ought.

A screenshot from Dr Longstaff: a collection of points to consider in regard to ethics in technology

He also used two ads from AOL to contrast the choices for tomorrow:


H/T Tom Barrett

9 responses on “📺 Future Forum 2 – Knowledge and Ethics in the Next Machine Age”

  1. Continuing the conversation about forgetting and ethics, Antony Funnell speaks with Kate Eichhorn and Kate Mannell about digital forgetting.
    Eichhorn, the author of The End of Forgetting, discusses the long and complicated history of childhood and the challenges associated with identity. She explains that our ability to control what is forgotten has diminished in the age of social media. Although new solutions may allow us to connect, this also creates its own problems and consequences, such as the calcification of polarised politics. Eichhorn would like to say things are going to change, but she argues that there is little incentive for big tech. Although young people are becoming more cynical and there may be resistance, there is little hope for a return to an equitable, utopian web.
    Kate Mannell explores the idea of forcing a sense of ethics through the form of a Hippocratic oath. Some of the problems with this are that there are many versions of the oath, it does not resolve the systemic problems, and it is hard to have an oath of no harm when it is not even clear what harms are actually at play. In the end, it risks being a soft form of self-regulation.
    I found Eichhorn’s comments about resistance interesting when thinking about my engagement with the IndieWeb and Domain of One’s Own. I guess sometimes all we have is hope. Mannell’s point about pledging no harm when it is not even clear what harm is at play reminds me of Zeynep Tufekci’s discussion of shadow profiles, the complications of inherited datasets and the challenges of the next machine age. In education, the issues concern artificial intelligence and facial recognition.

