From experience, when people bypass the AI or the proper filtering of the various applications, they fall back on who they know, which sometimes promotes certain types over others.
There has to be a better way, just not sure what it is.
Also on: Read Write Collect
We offer up this map and essay as a way to begin seeing across a wider range of system extractions. The scale required to build artificial intelligence systems is too complex, too obscured by intellectual property law, and too mired in logistical complexity to fully comprehend in the moment. Yet you draw on it every time you issue a simple voice command to a small cylinder in your living room: “Alexa, what time is it?”
Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch.
Smartphone batteries, for example, usually have less than eight grams of this material [lithium]. 5 Each Tesla car needs approximately seven kilograms of lithium for its battery pack. 6
There are deep interconnections between the literal hollowing out of the materials of the earth and biosphere, and the data capture and monetization of human practices of communication and sociality in AI.
Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product.
Media technologies should be understood in the context of a geological process, from the creation and the transformation processes, to the movement of natural elements from which media are built.
According to research by Amnesty International, during the excavation of cobalt, which is also used in the lithium batteries of 16 multinational brands, workers are paid the equivalent of one US dollar per day for working in conditions hazardous to life and health, and are often subjected to violence, extortion and intimidation. 16 Amnesty has documented children as young as 7 working in the mines. In contrast, Amazon CEO Jeff Bezos, at the top of our fractal pyramid, made an average of $275 million a day during the first five months of 2018, according to the Bloomberg Billionaires Index. 17
A child working in a mine in the Congo would need more than 700,000 years of non-stop work to earn the same amount as a single day of Bezos’ income.
The most severe costs of global logistics are borne by the atmosphere, the oceanic ecosystem and all it contains, and the lowest paid workers.
In the same way that medieval alchemists hid their research behind cyphers and cryptic symbolism, contemporary processes for using minerals in devices are protected behind NDAs and trade secrets.
Hidden among the thousands of other publicly available patents owned by Amazon, U.S. patent number 9,280,157 represents an extraordinary illustration of worker alienation, a stark moment in the relationship between humans and machines. 37 It depicts a metal cage intended for the worker, equipped with different cybernetic add-ons, that can be moved through a warehouse by the same motorized system that shifts shelves filled with merchandise. Here, the worker becomes a part of a machinic ballet, held upright in a cage which dictates and constrains their movement.
As human agents, we are visible in almost every interaction with technological platforms. We are always being tracked, quantified, analyzed and commodified. But in contrast to user visibility, the precise details about the phases of birth, life and death of networked devices are obscured. With emerging devices like the Echo relying on a centralized AI infrastructure far from view, even more of the detail falls into the shadows.
At every level contemporary technology is deeply rooted in and running on the exploitation of human bodies.
The new gold rush in the context of artificial intelligence is to enclose different fields of human knowing, feeling, and action, in order to capture and privatize those fields.
At this moment in the 21st century, we see a new form of extractivism that is well underway: one that reaches into the furthest corners of the biosphere and the deepest layers of human cognitive and affective being. Many of the assumptions about human life made by machine learning systems are narrow, normative and laden with error. Yet they are inscribing and building those assumptions into a new world, and will increasingly play a role in how opportunities, wealth, and knowledge are distributed.
via Doug Belshaw
I think there’s a lot to say about machine learning and the push for “personalization” in education. And the historian in me cannot help but add that folks have been trying to “personalize” education using machines for about a century now. The folks building these machines have, for a very long time, believed that collecting the student data generated while using the machines will help them improve their “programmed instruction” – this, decades before Mark Zuckerberg was born.
I think we can talk about the labor issues – how this continues to shift expertise and decision making in the classroom, for starters, but also how students’ data and students’ work is being utilized for commercial purposes. I think we can talk about privacy and security issues – how sloppily we know that these companies, and unfortunately our schools as well, handle student and teacher information.
But I’ll pick two reasons that we should be much more critical about education technologies.
Anytime you hear someone say “personalization” or “AI” or “algorithmic,” I urge you to replace that phrase with “prediction.”
We need to pursue a political philosophy that was embraced in ’68, of living the new society through authentic action in the here and now.
Perhaps the revolution will not be televised, but it will certainly be subject to algorithmic analysis.
Like global warming, AI has become a hyperobject so massive that its totality is not realised in any local manifestation, a higher dimensional entity that adheres to anything it touches, whatever the resistance, and which is perceived by us through its informational imprints.
When people deliberately feed AI the wrong kind of data it makes surreal classifications. It’s a lot of fun, and can even make art that gets shown in galleries but, like the Situationist dérive through the Harz region of Germany while blindly following a map of London, it can also be a poetic disorientation that coaxes us out of our habitual categories.
A counterculture of AI must be based on immediacy. The struggle in the streets must go hand in hand with a detournement of machine learning; one that seeks authentic decentralization, not Uber-ised serfdom, and federated horizontalism not the invisible nudges of algorithmic governance.
We want a fun yet anti-fascist AI, so we can say “beneath the backpropagation, the beach!”.
via Cory Doctorow
Social credit will be affected by more than just internet browsing and shopping decisions.
Who your friends and family are will affect your score. If your best friend or your dad says something negative about the government, you’ll lose points too.
Who you date and ultimately partner with will also affect social credit.
via Audrey Watters
Ex-Intel executive Milena Marinova is sifting through a mountain of learning data to build the world’s most-effective virtual tutor.
Forget the humans versus machine dichotomy. Our relationship with technology is far more complicated than that. To understand AI, first we need to appreciate the role humans play in shaping it.
I am also reminded of Kin Lane’s point about storytelling:
90% of what you are being told about AI, Blockchain, and automation right now isn’t truthful. It is only meant to allocate space in your imagination, so that at the right time you can be sold something, and distracted while your data, privacy, and security can be exploited, or straight up swindled out from under you.
This flows on from Audrey Watters’ argument:
The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.
China is reversing the commonly held vision of technology as a great democratizer, bringing people more freedom and connecting them to the world. In China, it has brought control.
Ready or not, technologies such as online surveys, big data, and wearable devices are already being used to measure, monitor, and modify students’ emotions and mindsets.
For years, there’s been a movement to personalize student learning based on each child’s academic strengths, weaknesses, and preferences. Now, some experts believe such efforts shouldn’t be limited to determining how well individual kids spell or subtract. To be effective, the thinking goes, schools also need to know when students are distracted, whether they’re willing to embrace new challenges, and if they can control their impulses and empathize with the emotions of those around them.
Something that Martin E. P. Seligman has discussed in regard to Facebook. Having recently been a part of a demonstration of SEQTA, I understand Ben Williamson’s point that this “could have real consequences.” The concern is that not all consequences are good. Will Richardson shares his concern that we have forgotten about learning and the actual lives of the students. Providing his own take on the matter, Bernard Bull has started a seven-part series looking at the impact of AI on education, while Neil Selwyn asks the question, “who does the automated system tell the teacher to help first – the struggling girl who rarely attends school and is predicted to fail, or a high-flying ‘top of the class’ boy?” Selwyn also explains why teachers will never be replaced.
Going forward we need to be aware of the inherent limitations of AI and the very human challenges of using algorithms and big data. They are human inventions, embedded in political, economic and social contexts that come with their own biases and ideologies. AI can definitely augment our profession and help us become better teachers, but as teachers and students we need to be aware of the context in which this change is playing out. We need to understand it and use it where it will be to the benefit of us all.
Machine learning often boils down to the art of developing an intuition for where something went wrong (or could work better) when there are many dimensions of things that could go wrong (or work better). This is a key skill that you develop as you continue to build out machine learning projects: you begin to associate certain behavior signals with where the problem likely is in your debugging space.
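That intuition can be caricatured as a lookup from coarse behaviour signals to regions of the debugging space. A minimal, hypothetical sketch (the signals, thresholds, and labels here are invented for illustration, not taken from the quoted piece):

```python
def diagnose(train_loss, val_loss, gap_tolerance=0.1):
    """Map two coarse behaviour signals to a likely fault region.

    Illustrative only: real projects watch many more signals, and the
    thresholds below are arbitrary placeholders.
    """
    if train_loss > 1.0:
        # The model cannot even fit its own training data.
        return "underfitting: try a larger model or longer training"
    if val_loss - train_loss > gap_tolerance:
        # A large generalisation gap between training and validation.
        return "overfitting: try regularisation or more data"
    return "losses look healthy: inspect the data pipeline next"
```

For example, `diagnose(0.2, 0.9)` points toward overfitting, while `diagnose(1.5, 1.6)` points toward underfitting; experienced practitioners carry a much richer version of this mapping in their heads.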
There is a rich design space for interacting with enumerative algorithms, and we believe an equally rich space exists for interacting with neural networks. We have a lot of work left ahead of us to build powerful and trustworthy interfaces for interpretability. But, if we succeed, interpretability promises to be a powerful tool in enabling meaningful human oversight and in building fair, safe, and aligned AI systems.
(Crossposted on the Google Open Source Blog)
In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open sourced our code as De…
Watch Dr Simon Longstaff’s presentation for more on ethics.
Rather than thinking of AI as “artificial intelligence,” Eubanks effectively builds the case that in practice AI often means “automating inequality.”