Bookmarked Opinion | We’re Banning Facial Recognition. We’re Missing the Point. (nytimes.com)

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.

Bruce Schneier argues that simply banning facial recognition is far too simplistic.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

As Cory Doctorow summarises,

Schneier says that we need to regulate more than facial recognition, we need to regulate recognition itself — and the data-brokers whose data-sets are used to map recognition data to peoples’ identities.

Liked ‘Absolutely outrageous’ Trivago misled customers on hotel pricing (ABC News)

Both the ACCC and Trivago called computer science experts to express opinions on the algorithm used by Trivago to select the ‘Top Position Offer’.

Trivago’s own data showed that in more than 66 per cent of listings, higher priced hotel offers were selected as the ‘Top Position Offer’ over alternative lower priced offers.

Justice Mark Moshinsky found that Trivago had “engaged in conduct that was misleading and deceptive or likely to mislead or deceive”, and therefore had broken the law.

The court also looked at Trivago’s use of strike-through prices.

It found hotel room rate comparisons that used strike-through prices, or text in different colours (for example green versus red), gave consumers a false impression of savings.

“They weren’t comparing like with like, they were often comparing a luxury room price to a standard room price,” Mr Sims said.

Replied to Digitally Literate #229

Fragmented Digital Lives
Digitally Lit #229 – 1/18/2020

Ian, the piece about Clearview AI is disconcerting, yet not a surprise. In particular, though, I liked your point about shadow profiles.

This is the stuff that really concerns me when we think about surveillance of our data online. It’s not so much the companies that I know are collecting our content (e.g., Google, Facebook, Amazon) as it is these shadowy, secretive groups that are collecting and archiving our content…and connecting the dots between all of our content.

Whether it be Google, Facebook or the plethora of any other secretive startups hoovering up the data, I am left wondering about the right to be forgotten.

It is also interesting to consider this alongside Seth Godin’s recent argument about privacy and permission marketing. Maybe it has always been this way, and it is just that in the digital age we are becoming more aware of it all?

Listened Fresh Cambridge Analytica leak ‘shows global manipulation is out of control’ from the Guardian

Company’s work in 68 countries laid bare with release of more than 100,000 documents

Just when you think you have heard everything there is to know about Cambridge Analytica, someone goes and leaks over 100,000 documents.

The release of documents began on New Year’s Day on an anonymous Twitter account, @HindsightFiles, with links to material on elections in Malaysia, Kenya and Brazil. The documents were revealed to have come from Brittany Kaiser, an ex-Cambridge Analytica employee turned whistleblower, and to be the same ones subpoenaed by Robert Mueller’s investigation into Russian interference in the 2016 presidential election.

Liked Airbnb claims its AI can predict whether guests are psychopaths (Futurism)

According to patent documents reviewed by the Evening Standard, the tool takes into account everything from a user’s criminal record to their social media posts to rate their likelihood of exhibiting “untrustworthy” traits — including narcissism, Machiavellianism, and even psychopathy.

Bookmarked The PISA Illusion (Education in the Age of Globalization)

PISA successfully marketed itself as a measure of educational quality with the claim to measure skills and knowledge that matters in modern economies and in the future world. Upon closer examination, the excellence defined by PISA is but an illusion, a manufactured claim without any empirical evidence. Furthermore, PISA implies a monolithic and espouses a distorted and narrow view of purpose for all education systems in the world. The consequence is a trend of global homogenization of education and celebration of authoritarian education systems for their high PISA scores, while ignoring the negative consequences on important human attributes and local cultures of such systems.

In response to the latest release of PISA results, Yong Zhao highlights some of the problems associated with the program. This includes concern about what is measured and the purpose of education. For more on the representation of PISA, read Aspa Baroutsis and Bob Lingard.

Bookmarked The Old Internet Died And We Watched And Did Nothing (BuzzFeed News)

Quick: Can you think of a picture of yourself on the internet from before 2010, other than your old Facebook photos? How about something you’ve written? Maybe some old sent emails in Gmail or old Gchats?

But what about anything NOT on Facebook or Google?

Most likely, you have some photos that are lost somewhere, some old posts to a message board or something you wrote on a friend’s wall, some bits of yourself that you put out there on the internet during the previous decade that is simply gone forever.

The internet of the 2010s will be defined by social media’s role in the 2016 election, the rise of extremism, and the fallout from privacy scandals like Cambridge Analytica. But there’s another, more minor theme to the decade: the gradual dismantling and dissolution of an older internet culture.

This purge comes in two forms: sites or services shutting down or transforming their business models. Despite the constant flurries of social startups (Vine! Snapchat! TikTok! Ello! Meerkat! Peach! Path! Yo!), when the dust was blown off the chisel, the 2010s revealed that the content you made — your photos, your writing, your texts, emails, and DMs — is almost exclusively in the hands of the biggest tech companies: Facebook, Google, Microsoft, Amazon, or Apple.

The rest? Who knows? I hate to tell you, but there’s a good chance it’s gone forever.

Katie Notopoulos discusses the sites that came and went during the 2010s. The IndieWeb has a more extensive list of site deaths.

Bookmarked The sad state of personal data and infrastructure | Mildly entertainingᵝ

Jestem Króliczkie unpacks the challenges associated with keeping a record of your personal data and digital traces. His solution is a data mirror app, “that merely runs in background on client side and continuously/regularly sucks in and synchronizes backend data to the latest state.” One of the problems is that the ‘average’ user is often not motivated enough to make such requests.

This reminds me of Kin Lane’s discussion of personal APIs from a few years ago and Tom Woodward’s attempt at a dashboard. I also wonder where data mirroring fits within Cory Doctorow’s discussions of adversarial interoperability. Although Kin Lane warns that interoperability is a myth.

Sadly, my current method is manual til it hurts. And it hurts.
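The data mirror idea quoted above can be sketched in a few lines: a client-side job that regularly pulls the latest records from a backend and merges them into a local copy. This is only a toy illustration of the concept, not the author’s actual app; the function names and the fake in-memory backend are my own invention.

```python
import json
import time
from pathlib import Path


def mirror_once(fetch_latest, store_path):
    """Pull the latest records from a backend and merge them into a local
    JSON mirror, so the local copy converges on the service's state.
    `fetch_latest` is any callable returning a {record_id: record} dict."""
    store = Path(store_path)
    mirror = json.loads(store.read_text()) if store.exists() else {}
    mirror.update(fetch_latest())  # newer records overwrite older ones
    store.write_text(json.dumps(mirror, indent=2))
    return mirror


def run_mirror(fetch_latest, store_path, interval_s=3600, cycles=None):
    """Background-style loop: re-sync every `interval_s` seconds.
    `cycles=None` runs forever; an integer bounds it (handy for testing)."""
    n = 0
    while cycles is None or n < cycles:
        mirror_once(fetch_latest, store_path)
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval_s)


# Demo with a fake backend standing in for a real service API:
path = "mirror.json"
Path(path).unlink(missing_ok=True)          # start from a clean slate
backend = {"post-1": {"text": "hello"}}
mirror_once(lambda: backend, path)
backend["post-2"] = {"text": "world"}       # the service gains new data
state = mirror_once(lambda: backend, path)  # the next sync picks it up
print(sorted(state))                        # ['post-1', 'post-2']
```

The point of the design is that each sync is idempotent: running it again never loses local data, it only folds in whatever the backend has added since the last pull.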

Liked API Evangelist | The Instructure LMS Data Points (API Evangelist)

This isn’t about shaming Instructure and its shareholders. This is about pointing out that we do not have any policies in place to prevent the exploitation of our schools and the students they serve. There is no approach to business or technology that will prevent the exploitation of student data. There is only a need to establish and strengthen federal and state policies that protect the privacy of students and their data, and minimizing the damage any platform can cause–no matter who owns it.

Bookmarked Kids’ YouTube as we know it is over. Good. (Vox)

On January 1, YouTube videos for kids will look much different. But will it be better?

With the policy changes requiring creators to classify whether their content is for children, Rebecca Jennings dives into the world of YouTube for children.

Throughout its history, YouTube has stubbornly maintained that it’s a site aimed at users 13 and over, freeing the platform from obtaining parental consent to track user data. Yet the FTC’s investigation found that Google had been touting YouTube’s popularity with children to toy brands like Mattel and Hasbro in order to sell ads, including the assertion that YouTube is the No. 1 website regularly visited by kids.

The FTC’s fine is arguably a pittance of what Google owes. Though it may be a record-breaking fine for the organization, as Recode’s Peter Kafka explains, $170 million is basically “a rounding error” in YouTube’s profit, which could reach around $20 billion this year. Two of the FTC’s five commissioners voted against the settlement, with one arguing the fine should have been in the billions.

One of the biggest concerns is that much of this content is driven by algorithms, rather than the recommendations of educational specialists.

Maybe, though, the problem isn’t that the YouTube algorithm serves up stupid or bad videos to kids, but that an algorithm is in charge of what kids are watching at all. Toddlers are always going to click on the video with the brightest, most bonkers thumbnail with words they might recognize. Moving kids’ content to separate streaming apps — made specifically for children, with fewer commercials, more gatekeepers in charge of quality control, and fair, clear payment structures — seems like a change for good.

Alexis Madrigal (‘Raised by YouTube‘) and James Bridle (‘The nightmare videos of children’s YouTube — and what’s wrong with the internet today‘) also unpack some of the issues associated with YouTube.

Bookmarked Colleges are turning students’ phones into surveillance machines, tracking the locations of hundreds of thousands (Washington Post)

The systems highlight how widespread surveillance has increasingly become a fact of life: Students “should have all the rights, responsibilities and privileges that an adult has. So why do we treat them so differently?”

As someone who supports schools with attendance, I understand to a degree where this is all coming from. However, this does not mean it is right. Along with the take-up of video surveillance promoted by companies such as Looplearn, the use of phones as a means of tracking raises a lot of questions about the purpose and place of technology within learning.

The Chicago-based company has experimented with ways to make the surveillance fun, gamifying students’ schedules with colorful Bitmoji or digital multiday streaks. But the real value may be for school officials, who Carter said can split students into groups, such as “students of color” or “out-of-state students,” for further review. When asked why an official would want to segregate out data on students of color, Carter said many colleges already do so, looking for patterns in academic retention and performance, adding that it “can provide important data for retention. Even the first few months of recorded data on class attendance and performance can help predict how likely a group of students is to” stay enrolled.

What is most disconcerting is the hype around such data.

The company also claims to see much more than just attendance. By logging the time a student spends in different parts of the campus, Benz said, his team has found a way to identify signs of personal anguish: A student avoiding the cafeteria might suffer from food insecurity or an eating disorder; a student skipping class might be grievously depressed. The data isn’t conclusive, Benz said, but it can “shine a light on where people can investigate, so students don’t slip through the cracks.”

Here I am reminded of the work by Cathy O’Neil in regards to big data.

Replied to Opinion | Be Paranoid About Privacy (nytimes.com)

We need to take back our privacy from tech companies — even if that means sacrificing convenience.

I really like your point about being sloppy Kara.

We’re digitally sloppy, even if it can be very dangerous, as evidenced by a disturbing New York Times story this week about an Emirati secure messaging app called ToTok, which is used by millions across the Middle East and has also recently become one of the most downloaded in the United States.

This is what I was trying to touch upon in my post on being better informed.

I think the challenge is that there is always more that we can do. I guess something is better than nothing.

Liked Messaging app ToTok is reportedly a secret UAE surveillance tool (Mashable)

Rather than sticking to strictly messaging-app-like activities, ToTok reportedly intended to use that access to surveil its users. And by blocking other chat apps in the country, the U.A.E. practically ensured the app’s success.


“You don’t need to hack people to spy on them if you can get people to willingly download this app to their phone,” Wardle told the New York Times. “By uploading contacts, video chats, location, what more intelligence do you need?”

Bookmarked HEWN, No. 333

It is mind-boggling — MIND-BOGGLING — that folks want to argue that the value of a technology company has little to do with its accrual of data. The argument for the past decade has been “data is the new oil” — its extraction and analysis necessary for predictions and profit. Now data is the new nothing-burger, I guess.

Audrey Watters discusses the news that private equity firm Thoma Bravo is to acquire Instructure for $2 billion and asks how this cannot be about data.

Replied to Digitally Literate #225

Parents at a public school district in Maryland have won a major victory for student privacy. Tech companies that work with the school district now have to purge the data they have collected on students once a year.

Experts say the district’s “Data Deletion Week” may be the first of its kind in the country.

We have to wonder why this doesn’t happen elsewhere in Pre-K up through higher education.

Another great read Ian.

I was particularly taken by the piece about erasing kids’ data. In particular, I was intrigued by what data is deleted.

While not all student data is deleted that week, the district works to clean much of students’ digital slates over the summer, including data collected by Google and by GoGuardian, which tracks students’ web searches, according to Peter Cevenini, the district’s chief technology officer.

The district demands more than a vague assurance from tech companies that the data has been erased: “They send us a certification that officially confirms legally that the information has been deleted from their servers,” Cevenini said.

I would assume that students still have access to and ownership over their content, and that it is only the peripheral data that is stripped out? Imagine if, instead of simply deleting it, students were actually given insight into the data that is both captured and deleted?

Hope all is well,

Aaron

Liked The Government Protects Our Food and Cars. Why Not Our Data? (nytimes.com)

The United States was not always a data protection laggard. In 1974, Congress passed a law, the Privacy Act, regulating how federal agencies handled personal information. It was based on a credo, known as fair information practices, that people should have rights over their data. The law enabled Americans to see and correct the records that federal agencies held about them. It also barred agencies from sharing a person’s records without their permission.

Congress never passed a companion law giving Americans similar rights over the records that private companies have on them. Historically, Americans have feared big government more than big business. The European Union, by contrast, established a directive in 1995 governing the fair processing of personal data by both companies and government agencies.

Today, the European Union has an even more comprehensive law, the General Data Protection Regulation, and each member state has a national agency to enforce it. Those agencies in Belgium, France, Germany and other European countries have recently acted to curb data exploitation at Facebook, Google and other tech giants.

It’s not just the European Union. Australia, Canada, Japan and New Zealand have also established stand-alone data agencies. By contrast, American consumers have to rely largely on the F.T.C. to safeguard their personal information, a data protection system that privacy advocates consider as airtight as Swiss cheese.

Bookmarked The Next Big Cheap (Real Life)

Borrowing a term from Marxist geographer Jason Moore, I propose that data is the new big “cheap thing” — the new commodity class that is emerging to reshape the world and provide a new arena for accumulation and enclosure. Following Erich Hörl, whose essay “The Environmentalitarian Situation” briefly mentions data as a potential new entry in Moore’s litany of “cheap things,” I want to explore how framing data as a new cheap thing — rather than “the new oil” or “the new soil” or “the new nuclear waste” — gives us a way of looking directly at the process by which things become available for use and profiteering. Thinking about data in line with other cheap commodities throughout the history of capitalism might help us imagine better frameworks for its management and regulation, and provide models for how to successfully push back against the capture and exploitation of yet another aspect of our lives and the world that sustains us.

Kelly Pendergrast borrows from Jason Moore in proposing that data is the new big ‘cheap thing’. What has made data big is the lowering of costs across the board.

Just as data wasn’t always “big,” it wasn’t always cheap enough to accumulate like giant fatbergs in AWS’s digital sewers (data is the new fatberg). Governments, corporations, and institutions have long collected large data sets and wielded them as a tool of power, but those data weren’t nearly as interconnected, accessible, or easy to analyze as they are today. The transformation of data into “cheap data” required massive computing power, algorithmic accuracy, and cheap storage. Each of these was built on the backs of other cheaps: cheap energy (from fossil fuels), cheap money (often from Silicon Valley), cheap labor, and cheap nature (in the form of extracted minerals and metals) were all enlisted in the development of powerful and omnipresent computing technology used to transform data from just a collection of info points into an omnipresent strategy for profit making. This litany of enabling conditions didn’t conjure cheap data into existence. But I suspect that they created an imaginative fissure through which a new frontier could be glimpsed.

This touches on the idea of technology as a system, with a part of this system being cheap work.

At the cheap data frontiers, industrial workers (cheap labor) like those working in Amazon fulfillment centers are tracked and monitored, doing double time for employers who profit from their labor while also accumulating screeds of data about the movement of their bodies in space, their time spent per task, and their response to incentives. Friends and families provide uncompensated but necessary social support (cheap care) for one another on digital platforms like Facebook, helping maintain social cohesion and reproducing labor forces while also producing waterfalls of valuable data for the platform owners. This magic trick, where cheap data is gleaned as a byproduct of different kinds of cheap work, is a great coup for capital and one more avenue for extraction from the rest of us.

These demands on cheap work also bring with them further costs to employees who wear the mental costs.

Recent research has highlighted the stress and horror experienced by precarious workers in the digital factory, who annotate images of ISIS torture or spend their days scanning big social platforms for hate speech and violent videos. As with all cheap things, cheap data relies on massive externalities, the ability to offload risk and harm onto other people and natures, while the profits all flow in the opposite direction.

All in all, Pendergrast calls for a review of data collection, with a focus on small data and sovereignty.

These demands that Indigenous peoples retain sovereignty over their own data, refuse to let it be stored by AWS or reused without their consent, and re-inscribe it with Indigenous principles point towards an alternative data future in which data is slower, smaller, and less alienated. In this future, some kinds of data collection and use may be abolished entirely, as Ruha Benjamin suggests for algorithms and surveillance that amplify racial hierarchies; while other kinds of collection may continue, but in a less-networked way that is controlled and decided by the communities to whom the data pertain.

John Philpin frames this all around energy.

Liked net.wars: The choices of others

A lawyer friend corrects my impression that GDPR does not apply. The Information Commissioner’s Office is clear that cameras should not be pointed at other people’s property or shared spaces, and under GDPR my neighbor is now a data controller. My friends can make subject access requests. Even so: do I want to pick a fight with people who can make my life unpleasant? All over the country, millions of people are up against the reality that no matter how carefully they think through their privacy choices they are exposed by the insouciance of other people and robbed of agency not by police or government action but by their intimate connections – their neighbors, friends, and family.

Yes, I mind. And unless my neighbor chooses to care, there’s nothing I can practically do about it.

Listened Digital Technology and the lonely from Radio National

The CSIRO’s Paul Tyler on the risks associated with data “re-identification”; and engineer Andrew Rae explains how the new aircraft he’s created can stay airborne for months on end without the need for an engine.

In light of the recent Myki data leaks, Antony Funnell talks with Paul Tyler about the challenges of data de-identification.
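The Myki episode illustrated why “de-identified” trip data is risky: records keep a persistent pseudonymous card ID, so knowing just a couple of someone’s trips from outside the dataset can single out their entire travel history. Here is a minimal toy of that linkage attack; all the card IDs, stops and timestamps are invented for illustration, not drawn from the actual release.

```python
# Toy illustration of linkage re-identification. A "de-identified" trip log
# keeps a pseudonymous card ID per record (all data below is made up):
trips = [
    ("card_A", "Flinders St",    "2019-07-01 08:03"),
    ("card_A", "Parliament",     "2019-07-01 17:45"),
    ("card_A", "Flinders St",    "2019-07-02 08:01"),
    ("card_B", "Southern Cross", "2019-07-01 08:03"),
    ("card_B", "Richmond",       "2019-07-01 18:10"),
]

# Two events the attacker knows about the target from outside the dataset,
# e.g. from a shared calendar or a social media post:
known = {("Flinders St", "2019-07-01 08:03"),
         ("Parliament", "2019-07-01 17:45")}

# Find every pseudonym whose trip set contains both known events:
candidates = {
    card for card, _, _ in trips
    if known <= {(stop, ts) for c, stop, ts in trips if c == card}
}
print(candidates)  # only card_A is consistent with both events

# Once the pseudonym is unique, the target's whole history leaks:
history = [t for t in trips if t[0] in candidates]
```

The attack needs no hacking at all: two externally known events are enough here to reduce the candidate set to one card, at which point every other trip on that card is attributed to the target.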