The bot can act as a guide on the side and suggest resources that may help. It can recognize the learner's prior achievement and adjust the level of support it provides. It can also offer real-time reassurance by walking through the assignment with the learner, collecting the assignment, providing feedback and a chance to resubmit, or granting a deadline extension if things get too pressing.
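The decision logic described above could be sketched as follows. This is a hypothetical illustration only; the function name, score thresholds, and response strings are my own assumptions, not any real product's behaviour.

```python
# Hypothetical sketch of a teaching bot scaling its support based on a
# learner's prior achievement and how pressing the deadline is.
# All thresholds and responses are illustrative assumptions.

def support_level(prior_score: float, deadline_days: int) -> str:
    """Choose a level of scaffolding for a learner.

    prior_score: average result on earlier assignments (0.0 to 1.0).
    deadline_days: days remaining before the assignment is due.
    """
    if deadline_days <= 0:
        # Things have become too pressing: collect what exists,
        # offer feedback and resubmission, or extend the deadline.
        return "offer extension or feedback-and-resubmit"
    if prior_score >= 0.8:
        return "light touch: share optional resources"
    if prior_score >= 0.5:
        return "guide on the side: walk through the assignment"
    return "high support: step-by-step walkthrough and check-ins"

print(support_level(0.9, 5))   # learner with a strong history
print(support_level(0.4, -1))  # struggling learner past the deadline
```

The point of the sketch is simply that such a bot is a set of conditionals over data about the learner, which is why the questions of culture and high touch raised below still matter.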
Done well, the use of bots in education offers better scaffolding for learners while freeing educators from the traditional frustrations of data collection, report filing, and administrative tasks.
Technology provides the starting point, but we cannot lose high touch when we move to high tech. Culture and professional development for learners, instructors, and support staff are even more important.
This reminds me of Bill Ferriter’s argument that technology makes learning more doable. I guess the question then becomes what sort of learning is supported and made more doable. Maybe sometimes friction actually serves a purpose?
Because not everything that is meaningful can be measured.
Before you optimize a task or function, take a step back and consider the goal. If extreme efficiency is the only goal, by all means optimize away — because that will make you happy.
But if a personal component is involved — purpose, or meaning, or satisfaction, or fulfillment, or self-awareness, or any number of other emotional rather than quantifiable outcomes — then make sure optimization doesn’t require too high of a cost.
In 2018, Rafaela Vasquez was working as a “safety driver” for Uber in Arizona. Employed to sit in a “self-driving car”, and seize control if something went wrong, she was behind the wheel when the car, a modified Volvo, hit and killed a pedestrian. The details, as they always are, are messy. The car had been altered to disable Volvo’s own automatic braking function, so as to test Uber’s machine learning system. The pedestrian was crossing the road outside of a designated spot. Arizona had passed wildly permissive laws allowing testing of self-driving vehicles with minimal oversight, in an effort to tempt valuable engineering jobs from companies like Google and Uber. And Vasquez, at the time of the collision, was watching TV.
Waymo now says that experience was crucial in guiding how it approached self-driving cars. Rather than aiming for so-called “level 4” autonomy, where the car can mostly drive itself but a human needs to take over in emergencies, the company decided to jump straight to “level 5” – where a human driver is never needed. Their experience was that human drivers simply weren’t capable of serving as a back-up to a nearly-but-not-entirely infallible robot.
The reality is that although full automation is the goal, companies like Uber still rely on humans to step in when needed, and this is easier said than done.
Did you catch that? Apparently Google has a new mission — to bring a more helpful Google to you. So much for organizing the world’s information!
It’s been almost six years since I rode in one of Google’s self-driving cars. I think about all the data that Google has amassed since then – all the mapping data and geolocation data and sensor data and historical data and traffic data and all the machine learning that their machines are supposedly doing with that. Why, it’s almost as if the problems of navigating the world with AI are much, much harder than engineers imagined.
Personally, I’d prefer to see greater investment in public transportation than in cars, and I’d rather hear stories that predict that sort of future.
Interestingly, that might be a more logical space for automation, especially trains.
Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?
Our technologies are extensions of ourselves, codified in machines and infrastructures, in frameworks of knowledge and action. Computers are not here to give us all the answers, but to allow us to put new questions, in new ways, to the universe.
This is part of a series of posts from Bridle going around at the moment, including a reflection on technology whistleblowers and YouTube's response to last year's exposé. Some of these ideas remind me of the concerns raised in Martin Ford's Rise of the Robots and Cathy O'Neil's Weapons of Math Destruction.
Going forward we need to be aware of the inherent limitations of AI and the very human challenges of using algorithms and big data. They are human inventions, embedded in political, economic and social contexts that come with their own biases and ideologies. AI can definitely augment our profession and help us become better teachers, but as teachers and students we need to be aware of the context in which this change is playing out. We need to understand it and use it where it will benefit us all.
All the personal tasks in our lives are being made easier. But at what cost?
The paradoxical truth I’m driving at is that today’s technologies of individualization are technologies of mass individualization. Customization can be surprisingly homogenizing. Everyone, or nearly everyone, is on Facebook: It is the most convenient way to keep track of your friends and family, who in theory should represent what is unique about you and your life. Yet Facebook seems to make us all the same. Its format and conventions strip us of all but the most superficial expressions of individuality, such as which particular photo of a beach or mountain range we select as our background image.
I do not want to deny that making things easier can serve us in important ways, giving us many choices (of restaurants, taxi services, open-source encyclopedias) where we used to have only a few or none. But being a person is only partly about having and exercising choices. It is also about how we face up to situations that are thrust upon us, about overcoming worthy challenges and finishing difficult tasks — the struggles that help make us who we are. What happens to human experience when so many obstacles and impediments and requirements and preparations have been removed?
Wu argues that struggling and working things out are about identity:
We need to consciously embrace the inconvenient — not always, but more of the time. Nowadays individuality has come to reside in making at least some inconvenient choices. You need not churn your own butter or hunt your own meat, but if you want to be someone, you cannot allow convenience to be the value that transcends all others. Struggle is not always a problem. Sometimes struggle is a solution. It can be the solution to the question of who you are.
I recently reflected on the impact of convenience on learning. I guess that is a part of my 'identity'.
via Audrey Watters
It’s not tools, culture or communication that make humans unique but our knack for offloading dirty work onto machines
There are two ways to give tools independence from a human, I’d suggest. For anything we want to accomplish, we must produce both the physical forces necessary to effect the action, and also guide it with some level of mental control. Some actions (eg, needlepoint) require very fine-grained mental control, while others (eg, hauling a cart) require very little mental effort but enormous amounts of physical energy. Some of our goals are even entirely mental, such as remembering a birthday. It follows that there are two kinds of automation: those that are energetically independent, requiring human guidance but not much human muscle power (eg, driving a car), and those that are also independent of human mental input (eg, the self-driving car). Both are examples of offloading our labour, physical or mental, and both are far older than one might first suppose.
Although it can be misconstrued as making us stupid, the intent of automation is complexity:
The goal of automation and exportation is not shiftless inaction, but complexity. As a species, we have built cities and crafted stories, developed cultures and formulated laws, probed the recesses of science, and are attempting to explore the stars. This is not because our brain itself is uniquely superior – its evolutionary and functional similarity to other intelligent species is striking – but because our unique trait is to supplement our bodies and brains with layer upon layer of external assistance.
My question is whether some automation today is actually intended to be stupid, or too convenient, as a means of control. This touches on Douglas Rushkoff's warning to 'program or be programmed'. I therefore wonder what the balance is between automating tasks and completing them manually in order to create more complexity.