Liked Cowriting an Album With AI – OneZero by Clive Thompson (clivethompson.medium.com)

During lockdown last year, Robin Sloan got cabin fever like the rest of us and started chatting with his friend Jesse Solomon Clark, a composer and music producer. They’d heard about OpenAI’s Jukebox and hatched a plan to craft an album of music by working with Jukebox as a creative partner.

Bookmarked Timnit Gebru’s Exit From Google Exposes a Crisis in AI (WIRED)

This crisis makes clear that the current AI research ecosystem—constrained as it is by corporate influence and dominated by a privileged set of researchers—is not capable of asking and answering the questions most important to those who bear the harms of AI systems. Public-minded research and knowledge creation isn’t just important for its own sake, it provides essential information for those developing robust strategies for the democratic oversight and governance of AI, and for social movements that can push back on harmful tech and those who wield it. Supporting and protecting organized tech workers, expanding the field that examines AI, and nurturing well-resourced and inclusive research environments outside the shadow of corporate influence are essential steps in providing the space to address these urgent concerns.

Alex Hanna reports on Timnit Gebru’s exit from Google and the implications this has for research into artificial intelligence. It highlights the dark side of being funded by the very company that you are researching:

Meredith Whittaker, faculty director at New York University’s AI Now institute, says what happened to Gebru is a reminder that, although companies like Google encourage researchers to consider themselves independent scholars, corporations prioritize the bottom line above academic norms. “It’s easy to forget, but at any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest,” she says.

In an interview with Karen Hao, Gebru questions Google’s response, pointing out that those who have actually engaged in gross misconduct are treated far better than she was:

I didn’t expect it to be in that way—like, cut off my corporate account completely. That’s so ruthless. That’s not what they do to people who’ve engaged in gross misconduct. They hand them $80 million, and they give them a nice little exit, or maybe they passive-aggressively don’t promote them, or whatever. They don’t do to the people who are actually creating a hostile workplace environment what they did to me.

John Naughton suggests that this is no different to what has happened in the past with oil and tobacco.

And my question is: why? Is it just that the paper provides a lot of data which suggests that a core technology now used in many of Google’s products is, well, bad for the world? If that was indeed the motivation for the original dispute and decision, then it suggests that Google’s self-image as a technocratic force for societal good is now too important to be undermined by high-quality research which suggests otherwise. In which case, it suggests that there’s not that much difference between big tech companies and tobacco, oil and mining giants. They’re just corporations, doing what corporations always do.

This all reminds me of Jordan Erica Webber’s discussion from a few years ago about the push for more ethics and whether it is just a case of public relations.

Listened
Antony Funnell speaks with Frank Pasquale about his new book New Laws of Robotics. Pasquale builds on the work of Isaac Asimov to propose a more human-first orientation to the development of artificial intelligence.

Pasquale says we must push much further, arguing that the old laws should be expanded to include four new ones:

  1. Digital technologies ought to “complement professionals, not replace them.”
  2. A.I. and robotic systems “should not counterfeit humanity.”
  3. A.I. should be prevented from intensifying “zero-sum arms races.”
  4. Robotic and A.I. systems need to be forced to “indicate the identity of their creator(s), controller(s), and owner(s).”

In a follow-up, Funnell speaks with Michael Evans about public opinion regarding AI and government strategy. He also discusses the report AI for Social Good with Neil Selwyn.

Bookmarked Verse by Verse (sites.research.google)
Google’s experiment using AI to create poems in the style of past poets. This reminds me of Ian Guest’s debate about poetry versus coding. I imagine some would worry that this might be considered ‘cheating’; however, what interests me is the opportunity to easily create and then deconstruct the structures associated with the text.

Another example of AI-generated text is Mark Riedl’s Generating Parody Lyrics.

via Clive Thompson

Bookmarked
Cory Doctorow discusses the ‘magic’ that is predictive policing.

Victoria police say they can’t disclose any details about the program because of “methodological sensitivities,” much in the same way that stage psychics can’t disclose how they guess that the lady in the third row has lost a loved one due to “methodological sensitivities.”

Doctorow quips that all this really tells the police is “how many crimes to charge the child with between now and their 21st birthday.”

Bookmarked
Donald Clark talks about the problems with theories of intelligence. He unpacks the tendency of the IQ test to prioritise logical and mathematical skills, the false hope of a single measure associated with Multiple Intelligences, and the confusion between personality and Emotional Intelligence. Clark suggests that we need to move on from discussing intelligence as something centred on the brain and instead focus on networks. This harks back to David Weinberger’s claim that ‘the smartest person in the room is the room’. In the end, the word ‘intelligence’ may need to be abandoned.

We would do well to abandon the word ‘intelligence’, as it carries with it so much bad theory and practice. Indeed AI has, in my view, already transcended the term, as it has had success across a much wider set of competences (previously intelligences), such as perception, translation, search, natural language processing, speech, sentiment analysis, memory, retrieval and many other domains. All of this was achieved without consciousness. It is all competence without comprehension.

This reminds me of Doug Belshaw’s discussion of dead metaphors.

via Stephen Downes

Replied to The best they can with what they’ve got by David Truss (Daily Ink)

In essence, it’s about giving the parent more information and resources than they arrived with, to deal with the situation better than an angry mama bear has defending a cub from danger. It’s about saying, ‘Your kid made a bad choice’, and separating their behaviour from their identity and the parent’s identity too. And then it’s about helping both of them get the strategies and resources they need to make the situation better.

It’s not easy. But when a mama bear sees that you want what’s best for their kid… and that’s really what you want even though the kid made a really bad choice… then the outcome becomes what you intended it to be. That same mama bear parent has, at times, even wanted to go harder on their kid than I do. If it comes to this point, they are still operating under the same pretence, they are doing the best they can with what they’ve got.

David, I enjoyed your response to Corrie’s prompt. Your differentiation between identity and behaviour had me thinking about the introduction of automation in education and the challenges associated with ‘messy problems’, as you say. Responding to the situation you describe in a black-and-white manner may supposedly resolve the issue, but it would have flow-on effects for learning and community connections. This is why, I guess, context is always so important.

Replied to Technology is the name we give to stuff that doesn’t work properly yet (Doug Belshaw’s Thought Shrapnel)

Three things that, to be honest, make me a bit concerned about the next few years…

As a ‘technology coach’ (I think that is what I am), it is an interesting space to be in. There are so many educators out there praising these innovations and the affordances they bring. I think the least we can do is be better informed, even if our understanding remains fallible and somewhat naive.

One of the challenges that really intrigues me is when someone else gives consent on your behalf without asking, often without even realising it. In some ways shadow profiles touch upon this, but the worst is DNA tests.

It is also kind of funny how in education the discussion seems to be about banning smartphones. However, as you touch upon with microphones and wearables, we will not even know what is and is not being captured. A part of me thinks that as a teacher you need to be mindful of this.

What concerns me most are those who feel that the capture of biometric data should become standard.

We live in wicked times.

Replied to New AI Systems Are Here to Personalize Learning by Aaron Frank (Singularity Hub)

Can the technologies automating jobs also help workers learn the skills they’ll need to find new work in the changing economy? This AI learning startup thinks so.

The idea of AI tracking every movement in education and providing our next step is an interesting proposition. I am just concerned that ethics comes after the supposed solution:

“Our goal is to build an ethics review board that has teeth, is diverse in both gender and background but also in thought and belief structures. The idea is to have our ethics review panel ensure we’re building things ethically,” Talebi said.

What happens if the ethics board says the whole thing is unethical?

Personally, I am left wondering whether the supposed personalized ‘results’ are worth the cost. There is talk of capturing even more data:

Going forward, Ahura hopes to add to its suite of biometric data capture by including things like pupil dilation and facial flushing, heart rate, sleep patterns, or whatever else may give their system an edge in improving learning outcomes.

Will we next be measuring the pupils of every staff member to maximise market gains? Is this what education is for?

Bookmarked Artificial intelligence in Schools: An Ethical Storm is Brewing (EduResearch Matters)

‘Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in …

Erica Southgate discusses a new report and project produced for the Australian Government Department of Education to support the analysis of artificial intelligence in education. It touches on some of the concerns around AI, including:

  • Bias
  • The ‘black box’ nature of AI systems
  • Digital human rights issues
  • Deep fakes
  • The potential lack of independent advice for educational leaders