The efficiency of teaching and learning – that means we need to talk about labor, in this illustration, in our imagined futures, in our stories. Because it’s not just the machine (or it’s not the machine alone) – in this depiction or in our practices – that is doing “the work.” There is invisible labor here. Not depicted. Not imagined. Not theorized or commented upon by Asimov.
Email has changed since then, but not much. Most of what’s changed in the last 45 years is email clients—the software we use to access email. They’ve clumsily bolted new functionality onto the old email, without fixing any of the underlying protocols to support that functionality.
The bit of being digital that is set in stone from age three is the absolute awareness that being connected aids their learning, and that connectedness is highly visual and aural, as well as being textual, and includes connection with people as well as information. They have probably also internalised that they can interact creatively with the digital environment and everything in it, to aid their learning. Hence the comparison with learning to speak, in that it is messy, diverse, involves a lot of trial and error and has concepts built and rebuilt from a multitude of influences.
Michael Horn writes in Edsurge about “Why Google Maps – not Netflix or Amazon – Points to the Future of Education.” Funny, it was just a few years ago that he wrote that, indeed, Netflix and Amazon did point the way.
It’s almost as though there are zero consequences in ed-tech for being full of shit.
The rapid proliferation and deployment of smart mobile, pervasive computing, social and personal technologies is changing the higher education landscape. In this presentation I will argue that new media present new opportunities for learning through digital technologies, but that such opportunities will require new literacies. This is not just my view - it reflects the views of many other commentators, including Lea & Jones (2011), Beetham et al (2009) and Lankshear & Knobel (2006). Essentially, the traditional literacies that have dominated higher education in the past are thought to no longer be sufficient in the face of recent changes. I will explore a range of new 'digital literacies and competencies', discuss the concept of 'digital fluency' and highlight some new and emergent pedagogical theories, including connectivism, heutagogy, paralogy and rhizomatic learning, that seek to explain how students are learning in the first part of the 21st Century.
Steve Wheeler is a Learning Innovations Consultant and former Associate Professor of Learning Technologies at the Plymouth Institute of Education, where he chaired the Learning Futures group and led the Computing and science education teams. He continues to research technology-supported learning and distance education, with particular emphasis on the pedagogy underlying the use of social media and Web 2.0 technologies, and also has research interests in mobile learning and cybercultures. He has given keynotes to audiences in more than 35 countries and is author of more than 150 scholarly articles, with over 6000 academic citations. An active and prolific edublogger, his blog Learning with 'e's is a regular online commentary on the social and cultural impact of disruptive technologies, and the application of digital media in education, learning and development. In the last few years it has attracted in excess of 7.5 million unique visitors.
More about Steve Wheeler https://steve-wheeler.net/
There’s nothing wrong with scissors, glue, and cardboard paper – I hope schools are not so quick to discard such fun, fulfilling, and slowed down activities.
How is it that it’s not necessarily [technologies’] intentions, but the structuring configuration that causes the pain?
danah boyd continues her investigation of algorithms and the way in which our data is being manipulated. This is very much a wicked problem with no clear answer. Data & Society have also published a primer on the topic. I wonder if it starts by being aware of the systemic nature of it all? Alternatively, Jamie Williams and Lena Gunn provide five questions to consider when using algorithms.
I believe that people sometimes need to learn to build their objectives on the fly, given what they’ve been confronted with. So how do I design activities that allow people to learn to persist through that uncertainty and still be willing to accept half answers when that’s as far as they will get? Meme histories. That’s how.
The first question is, are the algorithms that we deploy going to improve the human processes that they are replacing? Far too often we have algorithms that are thrown in with the assumption that they’re going to work perfectly, because after all they’re algorithms, but they actually end up working much worse than the system that they’re replacing. For example, in Australia they implemented an algorithm that sent a bunch of ridiculously threatening letters to people saying that they had defrauded the Australian Government. That’s a great example where they actually just never tested it to make sure it worked.
The second question is to ask, for whom is the algorithm failing? We need to be asking, “Does it fail more often for women than for men? Does it fail more often for minorities than for whites? Does it fail more often for old people than for young people?” Every single class should get a question and an answer. The big example I have for this one is the facial recognition software that the MIT Media Lab found worked much better for white men than black women. That is a no-brainer test that every single facial recognition software company should have done, and it’s embarrassing that they didn’t do it.
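The “for whom does it fail?” question can be turned into a very small check: compare the algorithm’s error rate within each demographic group rather than in aggregate. A minimal sketch, with invented group names and toy data purely for illustration:

```python
# Hypothetical fairness audit: compare an algorithm's error rate across
# groups, as the passage recommends. Groups and records are made up.

def error_rate(records):
    """Fraction of (predicted, actual) pairs where the prediction was wrong."""
    errors = sum(1 for predicted, actual in records if predicted != actual)
    return errors / len(records)

# Each record is a (predicted_label, actual_label) pair.
results_by_group = {
    "group_a": [(1, 1), (0, 0), (1, 0), (0, 0)],  # one error in four
    "group_b": [(1, 0), (0, 1), (1, 0), (0, 0)],  # three errors in four
}

for group, records in sorted(results_by_group.items()):
    print(f"{group}: error rate {error_rate(records):.0%}")
```

A large gap between groups (here 25% versus 75%) is exactly the kind of disparity this question is meant to surface before, not after, deployment.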
The third category of question is simply, is this working for society? Are we tracking the mistakes of the system? Are we inputting these mistakes back into the algorithm so that it’ll work better? Is it causing some other third unintended consequence? Is it destroying democracy? Is it making people worse off?