Bookmarked Algorithmic Autobiographies and Fictions – How to Write With Your Digital Self (Medium)

In this blog post, we provide an outline for researchers who want to run our practical workshops to understand how audiences conceive of their personal relationships with their data.

Sophie Bishop and Tanya Kant share their approach for helping people to grapple with personal relationships with their data. Their activity involves users:

  • Exploring their ad preferences
  • Jotting down some notes about what “interest categories” stand out for them
  • Drawing a picture based on their preferences
  • Writing a short story, poem, or play (or any format of writing they like) describing a “day out” with their digital self

This relates back to Neil Selwyn’s discussion of algorithmic literacies in an interview with Antony Funnell on the Future Tense podcast.

Bookmarked 12 unexpected ways algorithms control your life by Sasha Lekach (Mashable)

These are just some of the ways hidden calculations determine what you do and experience.

Sasha Lekach unpacks a number of examples of algorithms and their impacts, including getting into university, getting a mortgage and getting hired. This builds on Dana Simmons’ discussion of gaming the automated grading process.
Bookmarked

Dana Simmons shares some advice for improving grades with Edgenuity and gaming the algorithm.

via Cory Doctorow

Listened Machine-enhanced decision making; and clapping, flapping drones from RN Future Tense

Artificial Intelligence and other advanced technologies are now being used to make decisions about everything from family law to sporting team selection. So, what works and what still needs refinement?


Also, they’re very small, very light and very agile – they clap as they flap their wings. Biologically-inspired drones are now a reality, but how and when will they be used?

I am left wondering about the implications of such developments in machine learning. What do they enhance, reverse, retrieve and make obsolete? I am reminded of the work being done in regards to monitoring mental health using mobile phones.

To Neguine Rezaii, it’s natural that modern psychiatrists should want to use smartphones and other available technology. Discussions about ethics and privacy are important, she says, but so is an awareness that tech firms already harvest information on our behavior and use it—without our consent—for less noble purposes, such as deciding who will pay more for identical taxi rides or wait longer to be picked up.

“We live in a digital world. Things can always be abused,” she says. “Once an algorithm is out there, then people can take it and use it on others. There’s no way to prevent that. At least in the medical world we ask for consent.”

Maybe the inclusion of a personal device changes the debate; however, I am intrigued by the open declaration of data to a third-party entity. Although such solutions bring a certain sense of ease and efficiency, I imagine they also involve handing over a lot of personal information. I wonder what checks and balances have been put in place.

Bookmarked I Am a Book Critic. Here’s What Is Wrong With “Black Lists” — and What Is Good. (theintercept.com)

It’s difficult to know, in the typical chicken-and-egg conundrum, the extent to which Amazon is driving the public discussion on race, or our public debate is driving Amazon sales. Are the “Black Lists” pushing traffic on Amazon to particular books, and then those books pick up steam through the Amazon algorithm and get even more prominence? Or are loafing critics and readers cribbing from Amazon? At any rate, the online behemoth continues to hawk products by prioritizing them according to strong sales history and high conversion rates. The tyranny of the algorithm worsens our collective mental sloth where race is concerned. This mixture of culture, publishing, and code conflates traffic analytics with quality, and algorithmic recommendations with urgency.

Rich Benjamin discusses some of the problems with and limitations of lists of books responding to political turmoil, particularly the impact of recommendation algorithms.

Liked Review of Hello World: How to Be Human in the Age of the Machine by Neil Mather (doubleloop.net)

when people talk about whether algorithms are good or bad, they pretty much always mean decision-making algorithms – something that makes a decision that affects a human in some way. So for example long division is an algorithm, but it’s not really having any decision making effect on society. We’re talking more about things like putting things in a category, making an ordered list, finding links between things, and filtering stuff out. And they might be ‘rule-based’ expert systems, in that the creator programs in a set of rules that the system then executes, or more recently machine learning algorithms, where you train an algorithm on a dataset by reinforcing ‘good’ or ‘bad’ behaviour. Often with these we can’t always be sure how the algorithm has come to a conclusion.

So what the book is really focused on is the effect our increased use of decision-making algorithms like these is having on things like power, advertising, medicine, crime, justice, cars and transport, basically stuff that makes up the fabric of society, and where we’re starting to outsource these decisions to algorithms.
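The families of decision-making algorithms the review names – putting things in categories, making ordered lists, filtering stuff out – can each be sketched in a few lines. This is a toy illustration with invented data and thresholds, not anything from the book; the rule-based `categorise` function stands in for the hand-written "expert system" style the review contrasts with machine learning.

```python
# Toy illustrations of three decision-making patterns: categorising,
# ordering (ranking), and filtering. All data and thresholds are invented.

posts = [
    {"title": "Cat video", "topic": "pets", "score": 87},
    {"title": "Tax tips", "topic": "finance", "score": 42},
    {"title": "Dog video", "topic": "pets", "score": 65},
]

# 1. Categorising: a rule-based "expert system" -- the rules are hand-written.
def categorise(post):
    return "entertainment" if post["topic"] == "pets" else "practical"

# 2. Ordering: rank posts by an engagement score, highest first.
ranked = sorted(posts, key=lambda p: p["score"], reverse=True)

# 3. Filtering: drop anything below a visibility threshold.
visible = [p for p in ranked if p["score"] >= 50]

print([categorise(p) for p in posts])  # ['entertainment', 'practical', 'entertainment']
print([p["title"] for p in visible])   # ['Cat video', 'Dog video']
```

Even at this scale the review's point holds: the rule-based version is legible (you can read the `if`), while a learned version of the same decision would only give you weights.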

Listened The artist and the algorithm: how YouTube is changing our relationship with music from abc.net.au

You often hear about artists under-appreciated in their time, who don’t find recognition until long after they’ve died.

Little known Japanese composer Hiroshi Yoshimura was one of those people.

Despite being a pioneer of the unique genre of kankyo ongaku – ambient music produced in Japan in the 1980s and 90s – most of his airplay came from the speakers of art galleries, museums and show homes.

He died in 2003, with most of his albums sitting as rare vinyls on the shelves of obscure record collectors.

That was, until a few years ago, when Hiroshi suddenly found millions of fans in the most unlikely place – YouTube.

Miyuki Jokiranta explores the way in which YouTube’s algorithms promote certain types of music to hold our time and attention on the platform. This is something touched upon by the Rabbit Hole podcast.

Alternatively, Jez reflects on Music for Nine Post Cards.

Replied to Everyone has a mob self and an individual self, in varying proportions – Doug Belshaw’s Thought Shrapnel (Doug Belshaw’s Thought Shrapnel)

At the moment it’s not the tech that’s holding people back from such decentralisation but rather two things. The first is the mental model of decentralisation. I think that’s easy to overcome, as back in 2007 people didn’t really ‘get’ Twitter, etc. The second one is much more difficult, and is around the dopamine hit you get from posting something on social media and becoming a minor celebrity. Although it’s possible to replicate this in decentralised environments, I’m not sure we’d necessarily want to?

Doug, I find the ‘I don’t get x’ an interesting discussion. Personally speaking, I thought I got Twitter five years ago, but now I am not so sure. Has Twitter changed? I guess. But what is more significant is that I have changed, along with my thinking about the web. I therefore wonder how long the dopamine model will last until it possibly loses its shine? In part, it feels like this is something Cal Newport touched upon recently in regards to Facebook:

The thought that keeps capturing my attention, however, is that perhaps in making this short term move toward increased profit, Facebook set itself up for long term trouble.

When this platform shifted from connection to distraction it abdicated its greatest advantage: network effects. If Facebook’s main pitch is that it’s entertaining, it must then compete with everything else that’s entertaining.

I am not exactly sure how moderation works on decentralised networks, but I am more interested in streams that we are able to manage ourselves.

Liked https://dogtrax.edublogs.org/2020/02/21/grappling-with-algorithms-and-justice-oh-the-humanity/ (dogtrax.edublogs.org)

There is no real solution — the algorithmic genie is long gone from its bottle. But we can be aware, and make some decisions about what information we share and how we are being manipulated by technology.

Bookmarked VINYL (kenziemurphy.github.io)

About the Data The data visualized here were pulled from Spotify API. Most data attributes are computed by Spotify’s audio analysis algorithms.

Kenzie Murphy provides a tool for visualising Spotify’s data. This is an interesting example of big data.

via Ian O’Byrne

Listened The strange world of TikTok: viral videos and Chinese censorship – podcast from the Guardian

The Guardian’s Alex Hern tells Anushka Asthana about a series of leaked documents he has seen that showed the company’s moderation policies. They included guidance to censor videos that mention Tiananmen Square, Tibetan independence and the banned religious group Falun Gong.

Anushka Asthana and Alex Hern discuss the social video app TikTok, including unpacking the censorship associated with the algorithm central to the app. One of the challenges is that without the leaked documentation it is very difficult to know what has been blocked, as videos are not actually removed; instead they are simply not promoted by the timeline algorithm. This all comes back to the timeline at the heart of the app.

Bookmarked Engaged Reading Time – Issue #48 (engagedreadingtime.com)

So, let’s break this down, for clarity’s sake:

  1. Google are changing the news and general search algorithms to prioritise high quality original reporting — and display it for longer
  2. They have also updated their human reviewer guidelines to suggest that awards are one metric of “high quality”

Adam Tinworth discusses the news that Google is changing the algorithm associated with Google News to prioritise award-winning original news. This reminds me of Seth Godin’s question about who controls the future and whether more needs to be done to influence such decisions.
Liked How Does Spotify Know You So Well? (Medium)

To create Discover Weekly, there are three main types of recommendation models that Spotify employs:

  • Collaborative Filtering models (i.e. the ones that Last.fm originally used), which analyze both your behavior and others’ behaviors.
  • Natural Language Processing (NLP) models, which analyze text.
  • Audio models, which analyze the raw audio tracks themselves.
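The first of these, collaborative filtering, can be sketched in a few lines: represent listening behaviour as a user-by-track matrix and recommend tracks whose play patterns resemble what a user already plays. This is a toy sketch with invented data, not Spotify’s actual implementation, which works at vastly larger scale.

```python
import numpy as np

# Rows = users, columns = tracks; 1 means the user played the track. Invented data.
plays = np.array([
    [1, 1, 0, 0],   # user 0 plays tracks A and B
    [1, 1, 1, 0],   # user 1 plays A, B and C
    [0, 0, 1, 1],   # user 2 plays C and D
])

# Item-item collaborative filtering: cosine similarity between track columns.
norms = np.linalg.norm(plays, axis=0)
sim = (plays.T @ plays) / np.outer(norms, norms)

# Score unheard tracks for user 0 by their similarity to tracks already played.
user = plays[0]
scores = sim @ user.astype(float)
scores[user == 1] = -1          # never re-recommend what they already play
recommendation = int(np.argmax(scores))
print(recommendation)  # 2 -- track C, which co-occurs with A and B via user 1
```

Track C wins because user 1 shares user 0’s taste for A and B and also plays C — "people who liked what you liked also liked this", which is the whole idea.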
Bookmarked What will happen when machines write songs just as well as your favorite musician? (Mother Jones)

On the upside, the rise of AI tools could spur entirely new genres. Fresh music technologies often do. The electric guitar gave us rock, the synth helped create new wave, electronic drum machines and samplers catalyzed the growth of hip-hop. Auto-Tune was a dirty little secret of the record industry, a way to clean up bad singing performances, until artists like Cher and T-Pain used it to craft entirely new, wild vocal styles. The next great trend in music could be sparked by an artist who takes the AI capabilities and runs with them. “Someone can make their own and really develop an identity of, I’m the person who knows how to use this,” says Magenta project engineer Adam Roberts. “A violin—this was technology that when you give it to Mozart, he goes, ‘Look what I can do with this piece of technology!’” exclaims Cohen, the Orchard co-founder. “If Mozart was a teenager in 2019, what would he do with AI?”

Clive Thompson looks at the marriage of music and machine learning to create tracks on demand. He discusses some of the possibilities, such as generating hours of ambient music on the fly or creating quick and easy soundtracks. It is interesting to think about this alongside software music and the innovation driven by broken machines.

Bookmarked Testifying at the Senate about A.I.-Selected Content on the Internet—Stephen Wolfram Blog (blog.stephenwolfram.com)

In a hearing of the US Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation and the Internet, Stephen Wolfram suggests that instead of breaking up the platforms we should open up content selection to third-party algorithms that people can choose between. This is all part of what Wolfram describes as a movement towards an AI constitution. The post is useful in that it not only unpacks what is involved in creating such an algorithm, but also explains a range of computational terms, such as data deducibility, computational irreducibility, non-explainability and ethical incompleteness.

Marginalia

Why does every aspect of automated content selection have to be done by a single business? Why not open up the pipeline, and create a market in which users can make choices for themselves?

Social networks get their usefulness by being monolithic: by having “everyone” connected into them. But the point is that the network can prosper as a monolithic thing, but there doesn’t need to be just one monolithic AI that selects content for all the users on the network. Instead, there can be a whole market of AIs, that users can freely pick between

I don’t think it’s realistic that everyone will be able to set up everything in detail for themselves. So instead, I think the better idea is to have discrete third-party providers, who set things up in a way that appeals to some particular group of users.

I wish we were ready to really start creating an AI Constitution. But we’re not (and it doesn’t help that we don’t have an AI analog of the few thousand years of human political history that were available as a guide when the US Constitution was drafted). Still, issue by issue I suspect we’ll move closer to the point where having a coherent AI Constitution becomes a necessity

there’s a “final ranking” problem. Given features of videos, and features of people, which videos should be ranked “best” for which people? Often in practice, there’s an initial coarse ranking. But then, as soon as we have a specific definition of “best”—or enough examples of what we mean by “best”—we can use machine learning to learn a program that will look at the features of videos and people, and will effectively see how to use them to optimize the final ranking.

As a variant of the idea of blocking all personal information, one can imagine blocking just some information—or, say, allowing a third party to broker what information is provided. But if one wants to get the advantages of modern content selection methods, one’s going to have to leave a significant amount of information—and then there’s no point in blocking anything, because it’ll almost certainly be reproducible through the phenomenon of data deducibility.

One feature of my suggestions is that they allow fragmentation of users into groups with different preferences. At present, all users of a particular ACS business have content that is basically selected in the same way. With my suggestions, users of different persuasions could potentially receive completely different content, selected in different ways.

I have lost count of the number of times that the art of making a sandwich has been used as the ultimate example of a human algorithm. Although I agree it can be useful, I do not think that it provides the nuance needed to appreciate machine learning. For me, that comes in the form of music.

I love listening to music with my daughters. One minute it might be a Disney classic, the next some pop song off the radio. What interests me is the response when I introduce something new. Each decision influences the next choice. This, rather than sandwiches, captures the challenges and complexities associated with ‘algorithms’ and ‘machine learning’.