That presentation is worth watching (or listening to) as well.
How is it that it's not necessarily [technology's] intentions, but the structuring configuration, that causes the pain?
danah boyd continues her investigation of algorithms and the way in which our data is being manipulated. This is very much a wicked problem with no clear answer. Data & Society have also published a primer on the topic. I wonder if it starts by being aware of the systemic nature of it all? Alternatively, Jamie Williams and Lena Gunn provide five questions to consider when using algorithms.
- “Actively taking things out of context can be helpful for analysis”
- “help students truly appreciate epistemological differences”
- “help students see how they fill in gaps when the information presented to them is sparse and how hard it is to overcome priors [confirmation bias and selective attention]”
Benjamin Doxtdator raises the concern that focusing on the individual:
Would boyd’s cognitive strength training exercises have helped here? No. Turning inwards to psychology, rather outwards to the political context, is precisely what gives us ‘lone wolf’ analyses of white supremacy.
Instead, Doxtdator suggests considering the technical infrastructure. Interestingly, boyd does touch on platforms in the Q&A at the end:
One of the things that is funny is that these technologies get designed for a very particular idea of what they could be used for and then they twist in different ways.
I’ve never been one to feel the need to put on a lot of makeup in order to leave the house and I haven’t been someone who felt the need to buy bots to appear cool online. But I find it deeply hypocritical to listen to journalists and politicians wring their hands about fake followers and bots given that they’ve been playing at that game for a long time. Who among them is really innocent of trying to garner attention through any means possible?
Many people have unhealthy habits and dynamics in their life. Some are rooted in physical addiction. Others are habitual or psychological crutches. But across that spectrum, most people are aware of when something that they’re doing isn’t healthy. They may not be able to stop. Or they may not want to stop. Untangling that is part of the challenge. When you feel as though your child has an unhealthy relationship with technology (or anything else in their life), you need to start by asking if they see this the same way you do. When parents feel as though what their child is doing is unhealthy for them, but the child does not, the intervention has to be quite different than when the child is also concerned about the issue.
Parents don’t like to see that they’re part of the problem or that their efforts to protect and help their children might backfire.
In response, she suggests two things for parents to do:
- Verbalize what you're doing with your phone
- Create a household contract
After reading this, I tried verbalising my actions, and it soon became apparent when the phone could perhaps go away.
Rather than thinking of AI as “artificial intelligence,” Eubanks effectively builds the case that, in practice, AI often means “automating inequality.”
Published in DLTV Journal 1.2 December 2014