Replied to The Challenge Of Work (email.mg2.substack.com)

ATS systems filter applications based on keywords, skills, former employers, years of experience, schools … you name it. If you really need ‘3 years of full stack development’ for that job you are looking to fill – then the system weeds out any resume that doesn’t reveal that the candidate has a minimum of three years doing just that.

So, the winning candidate needs to be just as skilled at tuning their resume, cover letters and conversations to maximize the chance that the AI picks them … as they are at actually doing the job they are applying to fill!

John, this was an interesting read, especially in light of Malcolm Gladwell’s call, in a recent podcast, to remove the name of the university from applications.

From experience, when people bypass the AI rather than properly filtering the various applications, they fall back on who they know, which sometimes promotes certain types of candidates over others.

There has to be a better way; I’m just not sure what it is.
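To make the brittleness concrete, here is a minimal sketch of the kind of keyword screen described above. The keywords, threshold and parsing rule are hypothetical; no real ATS publishes its logic:

```python
import re

# Hypothetical screening rules, invented for illustration.
REQUIRED_KEYWORDS = {"full stack", "python", "react"}
MIN_YEARS = 3

def passes_screen(resume_text: str) -> bool:
    text = resume_text.lower()
    # every required keyword must appear verbatim
    if not all(kw in text for kw in REQUIRED_KEYWORDS):
        return False
    # crude: take the largest "N years" figure mentioned anywhere
    years = [int(m) for m in re.findall(r"(\d+)\+?\s*years?", text)]
    return bool(years) and max(years) >= MIN_YEARS

print(passes_screen("5 years of full stack development with Python and React"))  # True
print(passes_screen("Full stack engineer, Python and React, since 2013"))        # False: no "N years" phrase
```

Note that the second candidate may be perfectly qualified; they simply never wrote the words the filter was told to count.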

Replied to AI and Human Freedom by Cameron Paterson (learningshore.edublogs.org)

Historian Yuval Noah Harari writes, “The algorithms are watching you right now. They are watching where you go, what you buy, who you meet. Soon they will monitor all your steps, all your breaths, all your heartbeats. They are relying on Big Data and machine learning to get to know you bette…

This is a useful provocation, Cameron. In part it reminds me of James Bridle’s contribution to the rethinking of human rights for the 21st century. I think we are entering, or are already in, a challenging time when consuming (or prosuming) comes before being informed, something I have elaborated on elsewhere. With AI, do we know the consequences anymore, and what does it mean to discuss this in the humanities, not just the tech class?


Bookmarked Anatomy of an AI System by Kate Crawford and Vladan Joler (Anatomy of an AI System)

We offer up this map and essay as a way to begin seeing across a wider range of system extractions. The scale required to build artificial intelligence systems is too complex, too obscured by intellectual property law, and too mired in logistical complexity to fully comprehend in the moment. Yet you draw on it every time you issue a simple voice command to a small cylinder in your living room: ‘Alexa, what time is it?’

This dive into the world of the Amazon Echo provides an insight into the way that it engages with a vast planetary network of systems in a complicated assemblage. This includes the use of rare metals, data mining, slavery and a black box of secrets. These are topics touched upon by others, such as Douglas Rushkoff and Kin Lane; where this piece differs, though, is in the depth it goes to. Through its numerous anecdotes, it is also a reminder of why history matters.

Marginalia

Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch.

Smartphone batteries, for example, usually have less than eight grams of this material [lithium]. Each Tesla car needs approximately seven kilograms of lithium for its battery pack.

There are deep interconnections between the literal hollowing out of the materials of the earth and biosphere, and the data capture and monetization of human practices of communication and sociality in AI.

Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product.

Media technologies should be understood in context of a geological process, from the creation and the transformation processes, to the movement of natural elements from which media are built.

According to research by Amnesty International, during the excavation of cobalt which is also used for lithium batteries of 16 multinational brands, workers are paid the equivalent of one US dollar per day for working in conditions hazardous to life and health, and were often subjected to violence, extortion and intimidation. Amnesty has documented children as young as 7 working in the mines. In contrast, Amazon CEO Jeff Bezos, at the top of our fractal pyramid, made an average of $275 million a day during the first five months of 2018, according to the Bloomberg Billionaires Index.
A child working in a mine in the Congo would need more than 700,000 years of non-stop work to earn the same amount as a single day of Bezos’ income.
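(Checking that figure: $275 million a day against one dollar a day is 275 million days of work, and 275,000,000 ÷ 365 ≈ 753,000, so “more than 700,000 years” holds.)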

The most severe costs of global logistics are borne by the atmosphere, the oceanic ecosystem and all it contains, and the lowest paid workers.

In the same way that medieval alchemists hid their research behind cyphers and cryptic symbolism, contemporary processes for using minerals in devices are protected behind NDAs and trade secrets.

Hidden among the thousands of other publicly available patents owned by Amazon, U.S. patent number 9,280,157 represents an extraordinary illustration of worker alienation, a stark moment in the relationship between humans and machines. It depicts a metal cage intended for the worker, equipped with different cybernetic add-ons, that can be moved through a warehouse by the same motorized system that shifts shelves filled with merchandise. Here, the worker becomes a part of a machinic ballet, held upright in a cage which dictates and constrains their movement.

As human agents, we are visible in almost every interaction with technological platforms. We are always being tracked, quantified, analyzed and commodified. But in contrast to user visibility, the precise details about the phases of birth, life and death of networked devices are obscured. With emerging devices like the Echo relying on a centralized AI infrastructure far from view, even more of the detail falls into the shadows.

At every level contemporary technology is deeply rooted in and running on the exploitation of human bodies.
The new gold rush in the context of artificial intelligence is to enclose different fields of human knowing, feeling, and action, in order to capture and privatize those fields.

At this moment in the 21st century, we see a new form of extractivism that is well underway: one that reaches into the furthest corners of the biosphere and the deepest layers of human cognitive and affective being. Many of the assumptions about human life made by machine learning systems are narrow, normative and laden with error. Yet they are inscribing and building those assumptions into a new world, and will increasingly play a role in how opportunities, wealth, and knowledge are distributed.

via Doug Belshaw

Bookmarked Machine Teaching, Machine Learning, and the History of the Future of Public Education (Hack Education)

I think there’s a lot to say about machine learning and the push for “personalization” in education. And the historian in me cannot help but add that folks have been trying to “personalize” education using machines for about a century now. The folks building these machines have, for a very long time, believed that collecting the student data generated while using the machines will help them improve their “programmed instruction” – this decades before Mark Zuckerberg was born.


I think we can talk about the labor issues – how this continues to shift expertise and decision making in the classroom, for starters, but also how students’ data and students’ work is being utilized for commercial purposes. I think we can talk about privacy and security issues – how sloppily we know that these companies, and unfortunately our schools as well, handle student and teacher information.


But I’ll pick two reasons that we should be much more critical about education technologies.

In some prepared remarks, delivered on a panel titled “Outsourcing the Classroom to Ed Tech & Machine-learning: Why Parents & Teachers Should Resist” at the Network for Public Education conference in Indianapolis, Audrey Watters highlights two major points of concern. Firstly, (ed)tech is a black box providing little understanding of how it works. Secondly, AI is biased; machine learning is better considered as machine prediction.

Anytime you hear someone say “personalization” or “AI” or “algorithmic,” I urge you to replace that phrase with “prediction.”
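Taking that substitution literally, here is a toy sketch of what sits behind the word “personalization”: a model predicting an outcome from past data. The features, numbers and use of scikit-learn are all my own invention, purely for illustration:

```python
# A toy "personalization" engine: strip away the branding and what
# remains is a prediction from past data. All data here is invented.
from sklearn.linear_model import LogisticRegression

# features per student: [minutes on task, prior quiz score]
X = [[10, 40], [35, 70], [50, 85], [15, 55], [60, 90], [25, 45]]
# label: did the student pass the next quiz?
y = [0, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# the "personalized pathway" for a new student is just this prediction
print(model.predict([[20, 60]]))  # e.g. [0] -> route to remediation
```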

Bookmarked Rethinking AI through the politics of 1968 (openDemocracy)

We need to pursue a political philosophy that was embraced in ’68, of living the new society through authentic action in the here and now.

Dan McQuillan looks at AI and discusses some of the possibilities of a counter-culture built around humans and play.

Marginalia

Perhaps the revolution will not be televised, but it will certainly be subject to algorithmic analysis.

like global warming, AI has become a hyperobject[11] so massive that its totality is not realised in any local manifestation, a higher dimensional entity that adheres to anything it touches, whatever the resistance, and which is perceived by us through its informational imprints

When people deliberately feed AI the wrong kind of data it makes surreal classifications. It’s a lot of fun, and can even make art that gets shown in galleries[21] but, like the Situationist dérive through the Harz region of Germany while blindly following a map of London, it can also be a poetic disorientation that coaxes us out of our habitual categories

A counterculture of AI must be based on immediacy. The struggle in the streets must go hand in hand with a detournement of machine learning; one that seeks authentic decentralization, not Uber-ised serfdom, and federated horizontalism not the invisible nudges of algorithmic governance.

We want a fun yet anti-fascist AI, so we can say “beneath the backpropagation, the beach!”.

via Cory Doctorow

Bookmarked Leave no dark corner – ABC News (Australian Broadcasting Corporation) (mobile.abc.net.au)

Social credit will be affected by more than just internet browsing and shopping decisions.

Who your friends and family are will affect your score. If your best friend or your dad says something negative about the government, you’ll lose points too.

Who you date and ultimately partner with will also affect social credit.

Matthew Carney provides an insight into the digital dictatorship that China is exerting over its citizens through the use of “social credit”. This is part of a wider push to use facial recognition in schools, universities and shopping centres. Yu Hua provides a different perspective on China’s rise, looking at the changes across generations. Foreign Correspondent also dives into the topic.

via Audrey Watters

Listened The role of humans in the technological age from Radio National

Forget the humans versus machine dichotomy. Our relationship with technology is far more complicated than that. To understand AI, first we need to appreciate the role humans play in shaping it.


This episode of Future Tense raised so many questions. Just because we could doesn’t always mean we should. For me, this is the point of the Black Mirror series.

I am also reminded of Kin Lane’s point about storytelling:

90% of what you are being told about AI, Blockchain, and automation right now isn’t truthful. It is only meant to allocate space in your imagination, so that at the right time you can be sold something, and distracted while your data, privacy, and security can be exploited, or straight up swindled out from under you.

This flows on from Audrey Watters’ argument:

The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

Replied to Microsoft’s Ethical Reckoning Is Here (WIRED)

On Sunday, critics noted a blog post from January in which Microsoft touted its work with US Immigration and Customs Enforcement (ICE). The post celebrated a government certification that allowed Microsoft Azure, the company’s cloud-computing platform, to handle sensitive unclassified information for ICE. The sales-driven blog post outlined ways that ICE might use Azure Government, including enabling ICE employees to “utilize deep learning capabilities to accelerate facial recognition and identification,” Tom Keane, a general manager at Microsoft wrote. “The agency is currently implementing transformative technologies for homeland security and public safety, and we’re proud to support this work with our mission-critical cloud,” the post added.

I am currently reading James Bridle’s New Dark Age and wonder if the partnership between ICE and Azure is just technology returning home?

Bookmarked How (and Why) Ed-Tech Companies Are Tracking Students’ Feelings by Benjamin Herold (Education Week)

Ready or not, technologies such as online surveys, big data, and wearable devices are already being used to measure, monitor, and modify students’ emotions and mindsets.

Benjamin Herold takes a dive into the rise of edtech to measure the ‘whole’ student, with a particular focus on wellbeing.

For years, there’s been a movement to personalize student learning based on each child’s academic strengths, weaknesses, and preferences. Now, some experts believe such efforts shouldn’t be limited to determining how well individual kids spell or subtract. To be effective, the thinking goes, schools also need to know when students are distracted, whether they’re willing to embrace new challenges, and if they can control their impulses and empathize with the emotions of those around them.

Something that Martin E. P. Seligman has discussed in regard to Facebook. Having recently been a part of a demonstration of SEQTA, I understand Ben Williamson’s point that this “could have real consequences.” The concern is the assumption that all consequences will be good. Will Richardson shares his concern that we have forgotten about learning and the actual lives of the students. Providing his own take on the matter, Bernard Bull has started a seven-part series looking at the impact of AI on education, while Neil Selwyn asks the question, “who does the automated system tell the teacher to help first – the struggling girl who rarely attends school and is predicted to fail, or a high-flying ‘top of the class’ boy?” Selwyn also explains why teachers will never be replaced.

Replied to Too Long; Didn’t Read #152 (W. Ian O’Byrne)

Well, amazing things happen when you dump that repository of info into one of the world’s best machine learning engines.

I find all these seemingly incidental combinations intriguing, for lack of a better word. When Google hit the books, is this what they had in mind? I was reading today about the move away from ‘research’ to ‘AI’. I wonder what the consequences of Sidewalk Labs will be for military surveillance?

Liked Artificial Intelligence and education: moving beyond the hype by Jelmer Evers (Medium)

Going forward we need to be aware of all the inherent limitations of what AI is and the very human challenges of using algorithms and big data. They are human inventions and are embedded in political, economic and social contexts that come with their own biases and ideologies. AI can definitely augment our profession and help us become better teachers, but as teachers and students we need to be aware of the context in which this change is playing out. We need to understand it and use it where it will be to the benefit of us all.

Liked Why is machine learning ‘hard’? (ai.stanford.edu)

Machine learning often boils down to the art of developing an intuition for where something went wrong (or could work better) when there are many dimensions of things that could go wrong (or work better). This is a key skill that you develop as you continue to build out machine learning projects: you begin to associate certain behavior signals with where the problem likely is in your debugging space.
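As a sketch of what that debugging intuition can look like in practice, here are a few first-pass checks, assuming a scikit-learn style estimator with integer class labels; the checks and their framing are my own illustration, not the article’s:

```python
import numpy as np

def first_pass_debug(model, X_train, y_train, X_val, y_val):
    # 1. Data/model dimension: can the model memorise a tiny slice?
    #    If not, suspect the model or optimiser before the dataset.
    model.fit(X_train[:10], y_train[:10])
    print("tiny-slice accuracy:", model.score(X_train[:10], y_train[:10]))

    # 2. Feature dimension: does it beat a majority-class baseline?
    #    If not, the features may carry no signal.
    majority = np.bincount(y_train).argmax()
    print("baseline accuracy:", np.mean(y_val == majority))

    # 3. Capacity dimension: a large train/validation gap suggests
    #    overfitting; two similar but poor scores suggest underfitting.
    model.fit(X_train, y_train)
    print("train:", model.score(X_train, y_train),
          "validation:", model.score(X_val, y_val))
```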

Bookmarked The Building Blocks of Interpretability by Chris Olah and Arvind Satyanarayan (Google Brain Team)

There is a rich design space for interacting with enumerative algorithms, and we believe an equally rich space exists for interacting with neural networks. We have a lot of work left ahead of us to build powerful and trustworthy interfaces for interpretability. But, if we succeed, interpretability promises to be a powerful tool in enabling meaningful human oversight and in building fair, safe, and aligned AI systems.

(Crossposted on the Google Open Source Blog)

In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open sourced our code as De…

Is it just me, or is this new article, which explores how feature visualization can combine with other interpretability techniques to understand aspects of how networks make decisions, a case of creating a solution and then working out how or why it works? It seems reactive, or maybe I just don’t get it.
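For what it is worth, the feature visualization the article builds on reduces to gradient ascent on an input image. A minimal PyTorch sketch, with the layer and channel chosen arbitrarily for illustration (the paper’s interfaces layer much more on top of this):

```python
import torch
from torchvision import models

model = models.vgg16(pretrained=True).eval()

# capture the activations of one (arbitrary) convolutional layer
acts = {}
model.features[10].register_forward_hook(
    lambda module, inputs, output: acts.update(out=output))

img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    model(img)
    # maximise the mean activation of one channel (negated for ascent)
    loss = -acts["out"][0, 42].mean()
    loss.backward()
    optimizer.step()

# `img` now roughly shows what channel 42 responds to
```
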
Listened Who needs ethics anyway? – Chips with Everything podcast from theguardian.com (https://www.theguardian.com/technology/audio/2018/mar/02/who-needs-ethics-anyway-chips-with-everything-podcast)
This conversation, led by Jordan Erica Webber, is a useful introduction to the debate about ethics and technology. One of the interesting points made was in regard to Google and the situation where Google Photos mislabelled people with dark skin as gorillas. This is a consequence of years of racism and a focus on whiteness within technology.
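The mechanism behind that kind of failure is easy to show in miniature: when one group dominates the training data, the errors concentrate on the group the model rarely saw. A toy sketch with invented numbers, not a claim about Google’s actual system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(centre, n):
    X = rng.normal(centre, 1.0, size=(n, 2))
    # the 'correct' boundary differs by group, so a model fitted
    # mostly to group A transfers poorly to group B
    y = (X.sum(axis=1) > 2 * centre).astype(int)
    return X, y

Xa, ya = make_group(0.0, 950)   # group A: 95% of the training data
Xb, yb = make_group(3.0, 50)    # group B: 5%
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

Xa_test, ya_test = make_group(0.0, 500)
Xb_test, yb_test = make_group(3.0, 500)
print("accuracy, group A:", model.score(Xa_test, ya_test))
print("accuracy, group B:", model.score(Xb_test, yb_test))
```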

Watch Dr Simon Longstaff’s presentation for more on ethics.

Bookmarked Beyond the Rhetoric of Algorithmic Solutionism by danah boyd (Points)

Rather than thinking of AI as “artificial intelligence,” Eubanks effectively builds the case for how we should think that AI often means “automating inequality” in practice.

danah boyd reviews a book by Virginia Eubanks which takes a look at the way(s) that algorithms work within particular communities. Along with Cathy O’Neil’s Weapons of Math Destruction and Ben Williamson’s Big Data in Education, it provides a useful starting point for discussing big data today.