Bookmarked Where are the crescents in AI? (blogs.lse.ac.uk)

For me, being critical goes beyond critique and scepticism: it includes subscribing to critical theory and critical pedagogy – developing awareness of social justice issues and cultivating in learners a disposition to redress them. The elements of critical AI literacy in my view are:

  • Understanding how GenAI works
  • Recognising inequalities and biases within GenAI
  • Examining ethical issues in GenAI
  • Crafting effective prompts
  • Assessing appropriate uses of GenAI

Where are the crescents in AI? by Maha Bali

Maha Bali discusses the need for cultivating critical AI literacy. She reflects on ideas and exercises that she has used as part of her course on digital literacies and intercultural learning. After unpacking each of these areas with elaborations and examples, she ends with a series of questions to consider:

I think we should always question the use of AI in education for several reasons. Can we position AI as a tutor that supports learning, when we know AI hallucinates often? Even when we train AI as an expert system that has expert knowledge, are we offering this human-less education to those less privileged while keeping the human-centric education to more privileged populations? Why are we considering using technology in the first place – what problems does it solve? What are alternative non-tech solutions that are more social and human? What do we lose from the human socioemotional dimensions of teacher-student and student-student interactions when we replace these with AI? Students, teachers, and policymakers need to develop critical AI literacy in order to make reasonable judgments about these issues.

Where are the crescents in AI? by Maha Bali

This discussion of being critical as more than just critique reminds me of Doug Belshaw’s digital literacies:

  • Digital literacies are about process as much as product
  • Let’s move beyond good and evil and focus on choice and consequence
  • Literacy starts with you, curate rather than be curated

In Search of an Understanding of Digital Literacies Worth Having by Aaron Davis

It also reminds me of my piece on Cambridge Analytica and the need to critically reflect and ask questions.

I think that the most important thing we can do is wonder. This helps go beyond the how-to to the how-do-they-do-that.

Secret, Safe and Informed: A Reflection on Facebook, Cambridge Analytica and the Collection of Data by Aaron Davis

🤔 How Spoutible’s Leaky API Spurted out a Deluge of Personal Data

During my 14 years at Pfizer, I once reviewed an iOS app built for us by a low-cost off-shored development shop. I proxied the app through Fiddler, watched the requests and found an API that was returning every user record in the system and for each user, their corresponding password in plain text. When quizzing the developers about this design decision, their response was – and I kid you not, this isn’t made up – “don’t worry, our users don’t use Fiddler” 🤦‍♂️

Source: How Spoutible’s Leaky API Spurted out a Deluge of Personal Data by @troyhunt
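
The underlying anti-pattern is easy to sketch. Below is a minimal, hypothetical Python illustration (the records, field names and functions are invented, not Spoutible’s or Pfizer’s actual code): the leaky handler serialises the whole user record, credentials included, while the safe one whitelists the fields a client actually needs, on the assumption that anyone can watch the traffic.

```python
import json

# Hypothetical user store, standing in for the systems Hunt describes.
USERS = [
    {"id": 1, "handle": "alice", "email": "alice@example.com",
     "password": "hunter2", "2fa_secret": "JBSWY3DPEHPK3PXP"},
]

def leaky_profile(user: dict) -> str:
    # The anti-pattern: return the entire record and trust that
    # "our users don't use Fiddler".
    return json.dumps(user)

def safe_profile(user: dict) -> str:
    # Whitelist the fields the client needs; secrets never leave
    # the server, no matter who is proxying the traffic.
    public = ("id", "handle")
    return json.dumps({k: user[k] for k in public})

print(leaky_profile(USERS[0]))  # exposes the password and 2FA secret
print(safe_profile(USERS[0]))   # {"id": 1, "handle": "alice"}
```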

Bookmarked
Cory Doctorow discusses the magic that is predictive policing.

Victoria police say they can’t disclose any details about the program because of “methodological sensitivities,” much in the same way that stage psychics can’t disclose how they guess that the lady in the third row has lost a loved one due to “methodological sensitivities.”

Doctorow explains that all this tells the police is “how many crimes to charge the child with between now and their 21st birthday.”

Bookmarked VINYL (kenziemurphy.github.io)

About the Data: The data visualized here were pulled from the Spotify API. Most data attributes are computed by Spotify’s audio analysis algorithms.

Kenzie Murphy provides a tool for visualising Spotify’s data. This is an interesting example of big data.

via Ian O’Byrne
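
For anyone curious where such attributes come from, here is a minimal sketch of the relevant Spotify Web API endpoint. It assumes you already hold an OAuth access token from Spotify’s authorisation flow, and the track ID is just an arbitrary example; VINYL’s own code may well differ.

```python
import requests

ACCESS_TOKEN = "your-oauth-token"    # obtained via Spotify's auth flow
TRACK_ID = "11dFghVXANMlKmJXsNCbNl"  # any Spotify track ID

# Fetch the per-track attributes computed by Spotify's audio
# analysis algorithms (danceability, energy, valence, tempo, ...).
resp = requests.get(
    f"https://api.spotify.com/v1/audio-features/{TRACK_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
features = resp.json()

for key in ("danceability", "energy", "valence", "tempo"):
    print(key, features[key])
```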

Liked The Government Protects Our Food and Cars. Why Not Our Data? (nytimes.com)

Now some consumer groups and members of Congress are calling for a sweeping data protection law, along with a dedicated federal regulator to enforce it. The idea is to provide Americans with the same level of safeguards for apps as they have for appliances.

Bookmarked AltSchool’s out: Zuckerberg-backed startup that tried to rethink education calls it quits (SFChronicle.com)

A San Francisco startup that tried to reinvent school is handing off operations of its schools and turning into a software company.

Melia Russell reports on the further closure of AltSchool; however, it will continue as a software company. This all makes you wonder whether this was the intent from the beginning.

Bookmarked
Clive Thompson discusses the power of big data to support clearer decision-making around climate change. In New Dark Age, James Bridle argues that there is a certain irony in using technology to solve the problems of technology.

Thinking about climate change is degraded by climate change itself, just as communications networks are undermined by the softening ground, just as our ability to debate and act on entangled environmental and technological change is diminished by our inability to conceptualise complex systems. And yet at the heart of our current crisis is the hyperobject of the network: the internet and the modes of life and ways of thinking it weaves together (Page 79)

The other problem is when the data gets manipulated to support vested interests.

Bookmarked Learning lessons from data controversies by Ben Williamson (codeactsineducation.wordpress.com)

By learning lessons from past controversies with data in education, and anticipating the controversies to come, we can ensure we have good answers to these hard questions. We can also ensure that good, ethical data practices are built in to educational technologies, hopefully preventing problems before they become full-blown public data controversies.

In a talk delivered at OEB2018 in Berlin on 7 December 2018, Ben Williamson discusses a number of topics associated with the use of big data in education:

  • Software can’t ‘solve’ educational ‘problems’
  • Global edtech influence raises public concern
  • Data leaks break public trust
  • Algorithmic mistakes & encoded politics cause social consequences
  • Transparency, not algorithmic opacity, is key to building trust with users
  • Psychological surveillance raises fears of emotional manipulation
  • ‘Reading the brain’ poses risks to human rights
  • Genetic datafication could lead to dangerous ‘Eugenics 2.0’

This is a good introduction to Williamson’s book on the same topic, which unpacks these issues in more detail. Along with Audrey Watters’ year in review, these posts provide a useful snapshot of educational technology in 2018. You can also watch the talk.

https://youtu.be/C0cs4IGEz_c?t=3055

Bookmarked We Don’t Know What ‘Personal Data’ Means (uncomputing)

It’s Not Just What We Tell Them. It’s What They Infer. Many of us seem to think that “personal data” is a straightforward concept. In discussions about Facebook, Cambridge Analytica, GDPR, and the rest of the data-drenched world we live in now, we proceed from the assumption that personal data means something like “data about myself that I provide to a…

David Golumbia provides a list of six types of personal data: provided, observed, derived, inferred, anonymised and aggregate. In unpacking the work of Virginia Eubanks and Cathy O’Neil, he warns about how much we share without really knowing who is collecting such information.

Yes, we should be very concerned about putting direct personal data out onto social media. Obviously, putting “Democrat” or even “#Resist” in your public Twitter profile tells anyone who asks what party we are in. We should be asking hard questions about whether it is wise to allow even that minimal kind of declaration in public and whether it is wise to allow it to be stored in any form, and by whom. But perhaps even more seriously, and much less obviously, we need to be asking who is allowed to process and store information like that, regardless of where they got it from, even if they did not get it directly from us. source

Golumbia says that governments need to get on top of issues associated with data, because the public is struggling to keep up.
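
Golumbia’s distinction between provided, observed and inferred data is easy to lose in the abstract, so here is a toy Python sketch of it. The rules are invented for illustration and stand in for the real classifiers profiling firms use; no actual system is this crude, which is rather the point: the inference happens out of sight, on data we never handed over in that form.

```python
# Toy illustration of Golumbia's categories (invented rules, not a
# real profiling system).
provided = {"twitter_bio": "Teacher. #Resist. She/her."}  # we typed this
observed = {"active_hours": [22, 23, 0, 1]}               # logged about us

def infer(profile: dict, activity: dict) -> dict:
    """Manufacture 'inferred' data from provided and observed data."""
    inferred = {}
    # One hashtag becomes a stored political label.
    if "#resist" in profile["twitter_bio"].lower():
        inferred["party"] = "Democrat"
    # Activity timestamps become a behavioural segment.
    if set(activity["active_hours"]) & {0, 1, 2, 3}:
        inferred["segment"] = "night owl"
    return inferred

print(infer(provided, observed))
# {'party': 'Democrat', 'segment': 'night owl'}
```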

Bookmarked Cory Doctorow: Zuck’s Empire of Oily Rags (Locus Online)

For 20 years, privacy advocates…

Cory Doctorow provides a commentary on the current state of affairs involving Facebook and Cambridge Analytica. Rather than blaming the citizens of the web, he argues that the fault lies with the mechanics in the garage and the corruption they have engaged in. The question that seems to remain: if this is so and we still want our car fixed, where do we go?

Marginalia

Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice. source

The comparison between Cambridge Analytica (and big data in general) and the stage mentalist is intriguing. I am left wondering about the disappointment, and the disbelief, that comes with learning the truth. Sometimes there is a part of us that oddly wants to be mesmerised and to believe.


It’s fashionable to treat the dysfunctions of social media as the result of the naivete of early technologists, who failed to foresee these outcomes. The truth is that the ability to build Facebook-like services is relatively common. What was rare was the moral recklessness necessary to go through with it. source

Facebook and Cambridge Analytica raise the question of whether, just because we can, we should.


Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters. source

In relation to the question of mind-control versus corruption, I wonder where the difference lies. Does corruption involve some element of ‘mind-control’ to convince somebody that this is the answer?

Bookmarked Personalized precision education and intimate data analytics (code acts in education)

Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.

Ben Williamson breaks down the idea of precision through the use of data and how it might apply to education.

Liked Cambridge Analytica: the data analytics industry is already in full swing by David Beer (The Conversation)

If we want a full and comprehensive debate about the role of data in our lives, we need to first appreciate that the analysis and use of our data is not restricted to the types of figures that we have been reading about in these recent stories – it is deeply embedded in the structures in which we live.

Bookmarked Silicon Valley Has Failed to Protect Our Data. Here’s How to Fix It by Paul Ford (Bloomberg.com)

The activist and internet entrepreneur Maciej Ceglowski once described big data as “a bunch of radioactive, toxic sludge that we don’t know how to handle.” Maybe we should think about Google and Facebook as the new polluters. Their imperative is to grow! They create jobs! They pay taxes, sort of! In the meantime, they’re dumping trillions of units of toxic brain poison into our public-thinking reservoir. Then they mop it up with Wikipedia or send out a message that reads, “We take your privacy seriously.”

Paul Ford proposes the creation of a Digital Protection Agency to clean up the toxic data spill. This touches on what Mike Caulfield calls Info-Environmentalism.

[Image: a quote from Paul Ford on the toxic data spill. Background image “CIMG5200” by Phil LaCombe (https://flickr.com/photos/phillacombe/3625101565), licensed under CC BY-NC-SA.]

Bookmarked Next Big Thing in Education: Small Data by Pasi Sahlberg (pasisahlberg.com)

It is becoming evident that Big Data alone won’t be able to fix education systems. Decision-makers need to gain a better understanding of what good teaching is and how it leads to better learning in schools. This is where information about details, relationships and narratives in schools become important. These are what Martin Lindstrom calls Small Data: small clues that uncover huge trends. In education, these small clues are often hidden in the invisible fabric of schools. Understanding this fabric must become a priority for improving education.

The ‘compulsive collector of clues’, Martin Lindstrom, defines small data as:

Seemingly insignificant behavioral observations containing very specific attributes pointing towards an unmet customer need. Small data is the foundation for breakthrough ideas or completely new ways to turn around brands.

Sahlberg takes this concept and applies it to education. Some ‘small data’ practices he suggests include:

  • Focus on formative assessment over standardised testing
  • Develop collective autonomy and teamwork in schools
  • Involve students in assessing and reflecting on their own learning, and then incorporating that information into collective human judgment about teaching and learning

This move away from standardisation is something championed by people like Greg Whitby.

Bookmarked Beyond the Rhetoric of Algorithmic Solutionism by danah boyd (Points)

Rather than thinking of AI as “artificial intelligence,” Eubanks effectively builds the case for how we should think that AI often means “automating inequality” in practice.

danah boyd reviews Automating Inequality, Virginia Eubanks’ book on the ways that algorithms work within particular communities. Along with Weapons of Math Destruction and Williamson’s Big Data in Education, it provides a useful starting point for discussing big data today.