Liked The Government Protects Our Food and Cars. Why Not Our Data? (nytimes.com)

The United States was not always a data protection laggard. In 1974, Congress passed a law, the Privacy Act, regulating how federal agencies handled personal information. It was based on a credo, known as fair information practices, that people should have rights over their data. The law enabled Americans to see and correct the records that federal agencies held about them. It also barred agencies from sharing a person’s records without their permission.

Congress never passed a companion law giving Americans similar rights over the records that private companies have on them. Historically, Americans have feared big government more than big business. The European Union, by contrast, established a directive in 1995 governing the fair processing of personal data by both companies and government agencies.

Today, the European Union has an even more comprehensive law, the General Data Protection Regulation, and each member state has a national agency to enforce it. Those agencies in Belgium, France, Germany and other European countries have recently acted to curb data exploitation at Facebook, Google and other tech giants.

It’s not just the European Union. Australia, Canada, Japan and New Zealand have also established stand-alone data agencies. By contrast, American consumers have to rely largely on the F.T.C. to safeguard their personal information, a data protection system that privacy advocates consider as airtight as Swiss cheese.

Bookmarked The Next Big Cheap — Real Life (Real Life)

Borrowing a term from Marxist geographer Jason Moore, I propose that data is the new big “cheap thing” — the new commodity class that is emerging to reshape the world and provide a new arena for accumulation and enclosure. Following Erich Hörl, whose essay “The Environmentalitarian Situation” briefly mentions data as a potential new entry in Moore’s litany of “cheap things,” I want to explore how framing data as a new cheap thing — rather than “the new oil” or “the new soil” or “the new nuclear waste” — gives us a way of looking directly at the process by which things become available for use and profiteering. Thinking about data in line with other cheap commodities throughout the history of capitalism might help us imagine better frameworks for its management and regulation, and provide models for how to successfully push back against the capture and exploitation of yet another aspect of our lives and the world that sustains us.

Kelly Pendergrast borrows from Jason Moore in proposing that data is the new big ‘cheap thing’. What has made data big is the lowering of costs across the board.

Just as data wasn’t always “big,” it wasn’t always cheap enough to accumulate like giant fatbergs in AWS’s digital sewers (data is the new fatberg). Governments, corporations, and institutions have long collected large data sets and wielded them as a tool of power, but those data weren’t nearly as interconnected, accessible, or easy to analyze as they are today. The transformation of data into “cheap data” required massive computing power, algorithmic accuracy, and cheap storage. Each of these was built on the backs of other cheaps: cheap energy (from fossil fuels), cheap money (often from Silicon Valley), cheap labor, and cheap nature (in the form of extracted minerals and metals) were all enlisted in the development of powerful and omnipresent computing technology used to transform data from just a collection of info points into an omnipresent strategy for profit making. This litany of enabling conditions didn’t conjure cheap data into existence. But I suspect that they created an imaginative fissure through which a new frontier could be glimpsed.

This touches on the idea of technology as a system, with a part of this system being cheap work.

At the cheap data frontiers, industrial workers (cheap labor) like those working in Amazon fulfillment centers are tracked and monitored, doing double time for employers who profit from their labor while also accumulating screeds of data about the movement of their bodies in space, their time spent per task, and their response to incentives. Friends and families provide uncompensated but necessary social support (cheap care) for one another on digital platforms like Facebook, helping maintain social cohesion and reproducing labor forces while also producing waterfalls of valuable data for the platform owners. This magic trick, where cheap data is gleaned as a byproduct of different kinds of cheap work, is a great coup for capital and one more avenue for extraction from the rest of us.

These demands on cheap work also bring with them further costs to employees who wear the mental costs.

Recent research has highlighted the stress and horror experienced by precarious workers in the digital factory, who annotate images of ISIS torture or spend their days scanning big social platforms for hate speech and violent videos. As with all cheap things, cheap data relies on massive externalities, the ability to offload risk and harm onto other people and natures, while the profits all flow in the opposite direction.

All in all, Pendergrast calls for a review of data collection, with a focus on small data and sovereignty.

These demands that Indigenous peoples retain sovereignty over their own data, refuse to let it be stored by AWS or reused without their consent, and re-inscribe it with Indigenous principles point towards an alternative data future in which data is slower, smaller, and less alienated. In this future, some kinds of data collection and use may be abolished entirely, as Ruha Benjamin suggests for algorithms and surveillance that amplify racial hierarchies; while other kinds of collection may continue, but in a less-networked way that is controlled and decided by the communities to whom the data pertain.

John Philpin frames this all around energy.

Liked net.wars: The choices of others

A lawyer friend corrects my impression that GDPR does not apply. The Information Commissioner’s Office is clear that cameras should not be pointed at other people’s property or shared spaces, and under GDPR my neighbor is now a data controller. My friends can make subject access requests. Even so: do I want to pick a fight with people who can make my life unpleasant? All over the country, millions of people are up against the reality that no matter how carefully they think through their privacy choices they are exposed by the insouciance of other people and robbed of agency not by police or government action but by their intimate connections – their neighbors, friends, and family.

Yes, I mind. And unless my neighbor chooses to care, there’s nothing I can practically do about it.

Listened Digital Technology and the lonely from Radio National

The CSIRO’s Paul Tyler on the risks associated with data “re-identification”; and engineer Andrew Rae explains how the new aircraft he’s created can stay airborne for months on end without the need for an engine.

In light of the recent Myki data leaks, Antony Funnell talks with Paul Tyler about the challenges of data and de-identification.
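A toy sketch of why de-identification is so fragile, the kind of linkage attack behind incidents like the Myki release. All data here is invented (the card IDs, timestamps, and station names are illustrative assumptions, not real records): an attacker who knows just a couple of trips their target made can often match them to a single pseudonymous card in the released dataset.

```python
# Invented "de-identified" trip log: card IDs are pseudonyms,
# but the trips themselves act as quasi-identifiers.
trips = [
    ("card_93f2", "2019-07-01 08:03", "Flinders St"),
    ("card_93f2", "2019-07-02 08:01", "Flinders St"),
    ("card_93f2", "2019-07-06 14:10", "Richmond"),
    ("card_7a1c", "2019-07-01 08:03", "Flinders St"),
    ("card_7a1c", "2019-07-03 19:45", "Southern Cross"),
]

# The attacker knows two trips the target definitely made
# (e.g. from travelling with them, or from social media posts).
known = {
    ("2019-07-01 08:03", "Flinders St"),
    ("2019-07-06 14:10", "Richmond"),
}

# Any card whose trip history contains all the known trips is a candidate.
candidates = {
    card for card, *_ in trips
    if known <= {(t, s) for c, t, s in trips if c == card}
}
print(candidates)  # {'card_93f2'} — one pseudonymous card singled out
```

With only two known trips the candidate set collapses to one card, and every other trip on that card is now attributable to the target.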
Replied to Digital Downsizing (part two) (rtschuetz.net)

What did I get in return for my one-hour investment? I reduced email spam from roughly sixty daily messages to two. I see very few pop-up ads, and my browser searches are more neutral. I have confidence that most of my web activity isn’t being tracked, although that’s difficult to fully quantify.

Another great reflection, Bob, on the importance of reviewing our settings regularly. Another interesting post you might want to check out is Doug Belshaw’s discussion of our digital estate.
Replied to An Online Student Dashboard by gregmiller68

Any student online dashboard will need to be far more interactive than a semester report. It will need to provide more information and be far more readily available, more often. However, 24/7 access will not be something that St Luke’s will provide. The last thing any child needs is a parent or teacher hovering over them for incremental steps that may take days or weeks to notice and record.

This sounds really interesting, Greg. I really like your point about being both interactive, but also managed in regards to when information is available. My question with a dashboard is always: what data? How is it structured? Is there any possibility it could be misinterpreted?

I also wonder how this fits with the idea of digital portfolios and student voice? Is it a case of who controls the data controls the learning?

Bookmarked Psychodata (code acts in education)

Overall, what I’ve tried to show in the article is that SEL is a policy field in-the-making and that it remains inchoate and in some ways incoherent. We can understand it as a policy infrastructure that is being assembled from highly diverse elements, and that is centrally focused on the production of ‘psychodata’. In fact, the potential of a SEL policy infrastructure depends to a great extent on the creation of the data infrastructure required to produce policy-relevant knowledge. In other words, the generation of psycho-economic calculations is at the very core of current international policy interest in social-emotional learning, which is already relaying into classroom practices globally, governing teachers’ practices, and shaping the priorities of education systems to be focused on the enumeration of student emotions.

Ben Williamson disassembles the growing world of social and emotional learning in an article published in Journal of Education Policy. In it he makes six points:

  1. SEL needs to be understood as the product of a ‘psycho-economic’ fusion of psychological and economics expertise
  2. There are sets of moving relations among think tanks, philanthropies and campaigning coalitions which have been central to establishing SEL as an emerging policy field
  3. SEL is a site of considerable movement of money
  4. A huge industry of SEL products, consultancy and technologies has emerged, which has allowed SEL practices to proliferate through schools
  5. SEL’s enactment is contingent on local, regional and national priorities
  6. The OECD overtly brings together psychology and economics, with their new test positioned as a way of calculating the contribution of social-emotional skills to ‘human capital’

This has me rethinking the book Counting What Counts and my reflections:

It feels like the real question in need of answering isn’t what needs to be counted, but why? Although it might be useful to measure how interested we may be or our global awareness, what seems more important is what purpose this actually achieves. In an age when counting seems to be a given and we only care about what we can count, the book at least offers a vision about what we can measure.

Bookmarked Education before Regulation: Empowering Students to Question Their Data Privacy

We must work not only toward providing better security around student data but also toward educating students about the need to critically evaluate how their data is used and how to participate in shaping data privacy practices and policies. These policies and practices will affect them for the rest of their lives, as individuals with personal data and also as leaders with power over the personal data of others. Regulation is necessary, but education is the foundation that enables society to recognize when its members’ changing needs require a corresponding evolution in its regulations. And for those of us in academia, unlike those in industry, education is our work.

Autumm Caines and Erin Glass discuss data privacy and the importance of educating students about the topic. To support this, the two authors provide a number of resources and references, including the Ethical EdTech wiki and a collaboratively created course statement.  This is also something Sonia Livingstone, Mariya Stoilova and Rishita Nandagiri discuss.
Replied to Facebook confirms data sharing with Chinese companies (U.S.)

Facebook allowed Apple and other device makers to have “deep” access to users’ personal data without their consent, according to the Times.

The Times said Facebook allowed companies access to the data of users’ friends without their explicit consent, even after it had declared it would no longer share the information with outsiders.

Archibong said the data was only shared with device makers in order to improve Facebook users’ access to the information. “These partners signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences.”

I am confused as to why Facebook would need to provide an authentic dataset for third-party development? Why wouldn’t they develop a de-identified dataset for this purpose?
Replied to How IBM’s Technology Powered the Holocaust (kottke.org)

It’s not difficult to see the relevance of this episode today. Should Microsoft-owned GitHub provide software to ICE for possible use in the agency’s state-sanctioned persecution of immigrants and asylum seekers? Should Twitter allow Donald Trump to incite terrorism on their service? Should Google provide AI to the Pentagon for the potential development of deadlier weapons? And Christ, where do you even start with Facebook? Palantir, Apple, and Amazon have also been criticized recently for allowing unethical usage of their technology and platforms. “It’s just business” and the belief in the neutrality of technology (and technology platforms) have combined to produce a shield that contemporary companies use to protect themselves from activists’ ethical criticisms. And increasingly, the customers and employees of these companies aren’t buying it because they don’t want history to repeat itself.
According to a book by human rights journalist Edwin Black, Hitler needed logistical help in carrying out the genocide of Europe’s Jews.

I wonder about the technology behind China’s social credit system and the links there. It would seem that what is different is that a lot of this technology is designed by the state for the state?
Replied to Friday flowerings by Doug Belshaw (Doug Belshaw’s Thought Shrapnel)

Google’s auto-delete tools are practically worthless for privacy (Fast Company) — “In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old.”

Doug, I will admit that maybe I do not truly understand how all this works, but isn’t there a difference between deleting the data and deleting the data about the data? For example, the Guardian recently highlighted that Facebook’s ‘delete data’ merely disconnects the data from the user profile? I would imagine that it would not wipe any sort of shadow profile that is inferred from the data?
Liked How to Set Your Google Data to Self-Destruct (nytimes.com)

In offering these privacy tools, Google is a step ahead of other internet giants like Facebook and Twitter, which don’t provide ways to easily delete large batches of dated posts.

Yet there’s no one-size-fits-all for how people should use Google’s privacy controls, since everyone has different lifestyles and levels of paranoia. To give an idea of how you can tailor these settings, here’s my personal setup:

  • I set my search history to auto-delete. I rarely use Google Assistant and don’t visit Google News, meaning I don’t benefit from personalized recommendations. But I’m often checking Google Maps, and it’s useful to have a recent history of those searches to revisit destinations. So I set Web & App Activity to automatically delete after three months.

  • I set my YouTube history to self-destruct. I go in and out of phases that involve cooking different types of foods, and I like it when YouTube surfaces new recipes based on recent searches. So I set my YouTube history to auto-delete after three months.

  • I set my location history to auto-delete, too. I use Google Maps regularly, and I go on big trips twice a year. It’s useful for me to let Google know where I have been recently so that its Maps app can load relevant addresses and remember places I have been. But it’s not useful for Google to continue to know that I went to Hawaii last month for vacation. So I set my location history to auto-purge after three months.

It’s difficult to imagine why anyone wouldn’t want to take advantage of Google’s auto-delete tools. There’s no practical benefit to letting Google keep a history of our online activities from years back. So don’t delay in wiping a tiny bit of your digital traces away.

Bookmarked Privacy matters because it empowers us all – Carissa Véliz | Aeon Essays (Aeon)

Don’t just give away your privacy to the likes of Google and Facebook – protect it, or you disempower us all

Carissa Véliz pushes back on the idea that anyone can say they have ‘nothing to hide’. Whether it be attention, money, reputation or identity, she argues that we all have something worth getting at.

You have your attention, your presence of mind – everyone is fighting for it. They want to know more about you so they can know how best to distract you, even if that means luring you away from quality time with your loved ones or basic human needs such as sleep. You have money, even if it is not a lot – companies want you to spend your money on them. Hackers are eager to get hold of sensitive information or images so they can blackmail you. Insurance companies want your money too, as long as you are not too much of a risk, and they need your data to assess that. You can probably work; businesses want to know everything about whom they are hiring – including whether you might be someone who will want to fight for your rights. You have a body – public and private institutions would love to know more about it, perhaps experiment with it, and learn more about other bodies like yours. You have an identity – criminals can use it to commit crimes in your name and let you pay for the bill. You have personal connections. You are a node in a network. You are someone’s offspring, someone’s neighbour, someone’s teacher or lawyer or barber. Through you, they can get to other people. That’s why apps ask you for access to your contacts. You have a voice – all sorts of agents would like to use you as their mouthpiece on social media and beyond. You have a vote – foreign and national forces want you to vote for the candidate that will defend their interests.

Protecting our privacy is therefore an important aspect in preventing others from being empowered with knowledge about us. This touches on John Philpin’s argument that ‘data is power‘.

Véliz argues that we need to disrupt platform capitalism and the data economy:

Privacy is not only about you. Privacy is both personal and collective. When you expose your privacy, you put us all at risk. Privacy power is necessary for democracy – for people to vote according to their beliefs and without undue pressure, for citizens to protest anonymously without fear of repercussions, for individuals to have freedom to associate, speak their minds, read what they are curious about. If we are going to live in a democracy, the bulk of power needs to be with the people. If most of the power lies with companies, we will have a plutocracy. If most of the power lies with the state, we will have some kind of authoritarianism. Democracy is not a given. It is something we have to fight for every day. And if we stop building the conditions in which it thrives, democracy will be no more. Privacy is important because it gives power to the people. Protect it.

This makes me wonder about the IndieWeb and other such movements, and where they fit within this disruption of power?

Filed an Issue Export more detailed listening statistics (Customer Feedback & suggestions for Podcast Addict)

I like the listening statistics feature, but I often want to know more about my habits, more explicitly, and to be able to work with that data a bit.
Being able to track/download events such as time of playback, time of stopping, episode timestamp of start/stop, sleep timer timeout, or sleep timer interrupted, would be really interesting for me.
I write scripts that graph and analyse my behaviour from exported data from mood tracker apps/my exported calendar, and often Podcast Addict is a good indicator of when I actually slept, or how I was feeling during my commute to/from work.

I think that this would be really useful. I would love to use this data to develop a monthly review of sorts.
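As a sketch of what such an export could enable, here is how a monthly review might be computed from playback events. The CSV columns and event names below are assumptions for illustration, not Podcast Addict’s actual export format: each play event is paired with the next stop-like event (stop or sleep-timer timeout) and the position delta is summed per month.

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical export: one row per playback event, with an ISO
# timestamp, an event type, and the episode position in seconds.
SAMPLE = """timestamp,event,position_seconds
2023-05-01T07:30:00,play,0
2023-05-01T08:05:00,stop,2100
2023-05-02T07:31:00,play,2100
2023-05-02T07:55:00,sleep_timer_timeout,3540
2023-06-03T18:00:00,play,0
2023-06-03T18:40:00,stop,2400
"""

def monthly_listening(rows):
    """Sum seconds listened per month by pairing each play event
    with the next stop-like event's position delta."""
    totals = defaultdict(int)
    start = None
    for row in rows:
        ts = datetime.fromisoformat(row["timestamp"])
        pos = int(row["position_seconds"])
        if row["event"] == "play":
            start = pos
        elif start is not None:
            totals[ts.strftime("%Y-%m")] += pos - start
            start = None
    return dict(totals)

rows = csv.DictReader(SAMPLE.splitlines())
print(monthly_listening(rows))  # {'2023-05': 3540, '2023-06': 2400}
```

From there it is a short step to plotting per-month totals, or cross-referencing stop times against a sleep or mood log.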
Bookmarked Learning Science: The Problem With Data, And How You Can Measure Anything (Julian Stodd’s Learning Blog)

The qualitative to quantitative switch means that you can measure anything; how you feel about breakfast, the weight of your shoes, or the validity of democracy. But it does not mean that the scale of measurement you choose, or the mechanism of measurement, is valid. So measure anything, but do it with care. And be both wary and careful of the measurements that people give you to prove a point. Especially when they are charging you for it.

Julian Stodd provides a useful introduction to quantitative and qualitative data. It is interesting to think about measurement alongside Nassim Nicholas Taleb’s Black Swans:

Now, there are other themes arising from our blindness to the Black Swan: We focus on preselected segments of the seen and generalize from it to the unseen: the error of confirmation. We fool ourselves with stories that cater to our Platonic thirst for distinct patterns: the narrative fallacy. We behave as if the Black Swan does not exist: human nature is not programmed for Black Swans. What we see is not necessarily all that is there. History hides Black Swans from us and gives us a mistaken idea about the odds of these events: this is the distortion of silent evidence. We “tunnel”: that is, we focus on a few well-defined sources of uncertainty, on too specific a list of Black Swans.(Page 49)

Another book on the topic of measurement and education is Counting What Counts.

Bookmarked Opinion | Don’t Trust Facebook With Your Love Life (nytimes.com)

Happiness, brought to you by the company that gave you the Cambridge Analytica Scandal™!

First Facebook flagged its own currency, now they are trying to capture our deep connections. The further this all goes, the more I think about The Circle and the platform as a social utility.

via Ian O’Byrne