Liked Invisible Labor and Digital Utopias by Audrey Watters (Hack Education)
The efficiency of teaching and learning – that means we need to talk about labor, in this illustration, in our imagined futures, in our stories. Because it’s not just the machine (or it’s not the machine alone) – in this depiction or in our practices – that is doing “the work.” There is invisible labor here. Not depicted. Not imagined. Not theorized or commented upon by Asimov.
Bookmarked Email Is Dangerous (The Atlantic)
Email has changed since then, but not much. Most of what’s changed in the last 45 years is email clients—the software we use to access email. They’ve clumsily bolted on new functionality onto the old email, without fixing any of the underlying protocols to support that functionality.
In my work with schools there are a lot of conversations that seem to end with “just email [insert content] to them”. Although this is convenient, it is not always the best practice. In this post in The Atlantic, Quinn Norton shares why. Continuing to remind us how everything is broken, Norton gives a history of email and many of its inherent flaws. This comes on the back of the latest discovery of bugs associated with supposedly encrypted email.
Bookmarked Digitally Connected and Proficient at Three by Mal Lee
The bit of being digital that is set in stone from age three is the absolute awareness that being connected aids their learning, and that connectedness is highly visual and aural, as well as being textual, and includes connection with people as well as information. They have probably also internalised that they can interact creatively with the digital environment and everything in it, to aid their learning. Hence the comparison with learning to speak, in that it is messy, diverse, involves a lot of trial and error and has concepts built and rebuilt from a multitude of influences.
Mal Lee and Roger Broadie discuss the relationship between infants and the digital world. One of the points that they make is that, by three, children brought up in digital environments will be largely directing their own learning with the digital. This raises so many questions for me, such as what is lost in this transfer to swiping on a tablet and talking to search engines, as well as who or what the children are actually connecting with. It is interesting to think about this in regard to Google’s ‘selfish ledger’.
Liked Hack Education Weekly News by Audrey Watters (Hack Education)

Michael Horn writes in Edsurge about “Why Google Maps – not Netflix or Amazon – Points to the Future of Education.” Funny, it was just a few years ago that he wrote that, indeed, Netflix and Amazon did point the way.

It’s almost as though there are zero consequences in ed-tech for being full of shit.

Bookmarked Digital Readiness

Open Lecture 2018: Steve Wheeler – Literacies and competencies for learning in the digital age, from Educational Development Unit on Vimeo.

The rapid proliferation and deployment of smart mobile, pervasive computing, social and personal technologies is changing the higher education landscape. In this presentation I will argue that new media present new opportunities for learning through digital technologies, but that such opportunities will require new literacies. This is not just my view - it reflects the views of many other commentators including Lea & Jones (2011), Beetham et al (2009) and Lankshear & Knobel (2006). Essentially, the traditional literacies that have dominated higher education in the past are thought to no longer be sufficient in the face of recent changes. I will explore a range of new 'digital literacies and competencies', discuss the concept of 'digital fluency' and highlight some new and emergent pedagogical theories, including connectivism, heutagogy, paralogy and rhizomatic learning, that seek to explain how students are learning in the first part of the 21st Century.

Steve Wheeler is a Learning Innovations Consultant and former Associate Professor of Learning Technologies at the Plymouth Institute of Education, where he chaired the Learning Futures group and led the Computing and science education teams. He continues to research into technology supported learning and distance education, with particular emphasis on the pedagogy underlying the use of social media and Web 2.0 technologies, and also has research interests in mobile learning and cybercultures. He has given keynotes to audiences in more than 35 countries and is author of more than 150 scholarly articles, with over 6000 academic citations. An active and prolific edublogger, his blog Learning with 'e's is a regular online commentary on the social and cultural impact of disruptive technologies, and the application of digital media in education, learning and development. In the last few years it has attracted in excess of 7.5 million unique visitors.
Steve Wheeler’s presentation is not necessarily a definition of digital literacies/fluencies, but rather a wander through education today. For Wheeler, the key is finding your desire lines and personalised learning. This not only touches on what is learned, but also how the learning occurs – negotiated, blended, socially. It is interesting to think of some of these ideas alongside Peter Hutton’s work and calls to reform Australian education.
re:publica 2018 – danah boyd: Opening Keynote: How an Algorithmic World Can Be Undermined

How is it that it’s not necessarily [technologies’] intentions, but the structuring configuration that causes the pain

danah boyd continues her investigation of algorithms and the way in which our data is being manipulated. This is very much a wicked problem with no clear answer. Data & Society have also published a primer on the topic. I wonder if it starts by being aware of the systemic nature of it all? Alternatively, Jamie Williams and Lena Gunn provide five questions to consider when using algorithms.

via Jenny Mackness and Ian O’Byrne.

Liked Meme Histories – Learning the Web So We Can Make It Better by dave dave
I believe that people sometimes need to learn to work building their objectives on the fly given what they’ve been confronted with. So how do I design activities that allow for people to learn to persist through that uncertainty and still be willing to accept half answers when that’s as far as they will get? Meme histories. That’s how.
Liked To work for society, data scientists need a hippocratic oath with teeth by Tom Upchurch (WIRED UK)

The first question is, are the algorithms that we deploy going to improve the human processes that they are replacing? Far too often we have algorithms that are thrown in with the assumptions that they’re going to work perfectly, because after all they’re algorithms, but they actually end up working much worse than the system that they’re replacing. For example in Australia they implemented an algorithm that sent a bunch of ridiculously threatening letters to people saying that they had defrauded the Australian Government. That’s a great example where they actually just never tested it to make sure it worked.

The second question is to ask, for whom is the algorithm failing? We need to be asking, “Does it fail more often for women than for men? Does it fail more often for minorities than for whites? Does it fail more often for old people than for young people?” Every single class should get a question and an answer. The big example I have for this one is the facial recognition software that the MIT Media Lab found worked much better for white men than black women. That is a no-brainer test that every single facial recognition software company should have done and it’s embarrassing that they didn’t do it.

The third category of question is simply, is this working for society? Are we tracking the mistakes of the system? Are we inputting these mistakes back into the algorithm so that it’ll work better? Is it causing some other third unintended consequence? Is it destroying democracy? Is it making people worse off?