Listened Does data science need a Hippocratic oath? from ABC Radio National

The use and misuse of our data can have enormous personal and societal consequences, so what ethical constraints are there on data scientists?

Continuing the conversation about forgetting and ethics, Antony Funnell speaks with Kate Eichhorn and Kate Mannell about digital forgetting.

Eichhorn, the author of The End of Forgetting, discusses the long and complicated histories that children now carry and the challenges these pose for identity. She explains that our ability to control what is forgotten has diminished in the age of social media. Although new platforms may allow us to connect, they also create their own problems and consequences, such as the calcification of polarised politics. Eichhorn would like to say things are going to change, but she argues that there is little incentive for big tech to do so. Although young people are becoming more cynical and there may be resistance, she holds little hope for a return to an equitable, utopian web.

Kate Mannell explores the idea of imposing a sense of ethics on data science through the form of a Hippocratic oath. Some of the problems with this are that there are many versions of the oath, that it does not resolve the systemic problems, and that it is hard to swear an oath of 'no harm' when it is not even clear what harms are actually at play. In the end, it risks being a soft form of self-regulation.

I found Eichhorn's comments about resistance interesting when thinking about my engagement with the IndieWeb and Domain of One's Own. I guess sometimes all we have is hope. Mannell's point about pledging 'no harm' when it is not even clear what harm is at play reminds me of Zeynep Tufekci's discussion of shadow profiles, the complications of inherited datasets and the challenges of the next machine age. In education, these issues play out around artificial intelligence and facial recognition.

Liked The ethical dilemmas of GoFundMe (ABC Religion & Ethics)

GoFundMe CEO Tim Cadogan has made clear that they won't necessarily de-platform any campaigns associated with the far-right or alt-right, but will intervene when certain lines are crossed – such as the promotion of white supremacist views. Of course, the problem is that such worldviews are not easily disentangled. They may even be inseparable. Moreover, in unfolding protests one permissible form of expression can quickly evolve into something that is in gross violation of GoFundMe's policies.

Liked The Questions Concerning Technology by L. M. Sacasas (The Convivial Society)

A set of 41 questions drafted with a view to helping us draw out the moral or ethical implications of our tools.

  1. What sort of person will the use of this technology make of me?
  2. What habits will the use of this technology instill?
  3. How will the use of this technology affect my experience of time?
  4. How will the use of this technology affect my experience of place?
  5. How will the use of this technology affect how I relate to other people?
  6. How will the use of this technology affect how I relate to the world around me?
  7. What practices will the use of this technology cultivate?
  8. What practices will the use of this technology displace?
  9. What will the use of this technology encourage me to notice?
  10. What will the use of this technology encourage me to ignore?
  11. What was required of other human beings so that I might be able to use this technology?
  12. What was required of other creatures so that I might be able to use this technology?
  13. What was required of the earth so that I might be able to use this technology?
  14. Does the use of this technology bring me joy? [N.B. This was years before I even heard of Marie Kondo!]
  15. Does the use of this technology arouse anxiety?
  16. How does this technology empower me? At whose expense?
  17. What feelings does the use of this technology generate in me toward others?
  18. Can I imagine living without this technology? Why, or why not?
  19. How does this technology encourage me to allocate my time?
  20. Could the resources used to acquire and use this technology be better deployed?
  21. Does this technology automate or outsource labor or responsibilities that are morally essential?
  22. What desires does the use of this technology generate?
  23. What desires does the use of this technology dissipate?
  24. What possibilities for action does this technology present? Is it good that these actions are now possible?
  25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
  26. How does the use of this technology shape my vision of a good life?
  27. What limits does the use of this technology impose upon me?
  28. What limits does my use of this technology impose upon others?
  29. What does my use of this technology require of others who would (or must) interact with me?
  30. What assumptions about the world does the use of this technology tacitly encourage?
  31. What knowledge has the use of this technology disclosed to me about myself?
  32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
  33. What are the potential harms to myself, others, or the world that might result from my use of this technology?
  34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
  35. Does my use of this technology encourage me to view others as a means to an end?
  36. Does using this technology require me to think more or less?
  37. What would the world be like if everyone used this technology exactly as I use it?
  38. What risks will my use of this technology entail for others? Have they consented?
  39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
  40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
  41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn't?
via “Alan Jacobs” in hubris – Snakes and Ladders
Replied to Should I blog about my studies? Some thoughts… (Marginal Notes)

I should also mention that you need to think through potential ethical issues. If any posts you write discuss people, whether authors of texts, fellow students or academics, and especially participants (or the groups to which they belong), it's crucial to think through potential ramifications and the impact your blogging might have. Then there's how you refer to organisations (such as schools, your own university, professional bodies, etc.). You will doubtless have been issued your university's ethical code of conduct and will likely have been (or will be) required to make an ethics submission. Revisit these in the light of your blogging.

Thank you, Ian, for your elaboration. I am really intrigued by the ethical side of things. I often wonder about this in a general sense when it comes to sharing online.
Bookmarked Timnit Gebru's Exit From Google Exposes a Crisis in AI (WIRED)

This crisis makes clear that the current AI research ecosystem – constrained as it is by corporate influence and dominated by a privileged set of researchers – is not capable of asking and answering the questions most important to those who bear the harms of AI systems. Public-minded research and knowledge creation isn't just important for its own sake, it provides essential information for those developing robust strategies for the democratic oversight and governance of AI, and for social movements that can push back on harmful tech and those who wield it. Supporting and protecting organized tech workers, expanding the field that examines AI, and nurturing well-resourced and inclusive research environments outside the shadow of corporate influence are essential steps in providing the space to address these urgent concerns.

Alex Hanna reports on Timnit Gebru's exit from Google and the implications this has for research into artificial intelligence. It highlights the dark side of being funded by the very company you are researching:

Meredith Whittaker, faculty director at New York University's AI Now Institute, says what happened to Gebru is a reminder that, although companies like Google encourage researchers to consider themselves independent scholars, corporations prioritize the bottom line above academic norms. “It's easy to forget, but at any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest,” she says.

In an interview with Karen Hao, Gebru questions Google's response, suggesting that the company treats those who have actually engaged in gross misconduct better than it treated her.

I didn't expect it to be in that way – like, cut off my corporate account completely. That's so ruthless. That's not what they do to people who've engaged in gross misconduct. They hand them $80 million, and they give them a nice little exit, or maybe they passive-aggressively don't promote them, or whatever. They don't do to the people who are actually creating a hostile workplace environment what they did to me.

John Naughton suggests that this is no different to what has happened in the past with oil and tobacco.

And my question is: why? Is it just that the paper provides a lot of data which suggests that a core technology now used in many of Google's products is, well, bad for the world? If that was indeed the motivation for the original dispute and decision, then it suggests that Google's self-image as a technocratic force for societal good is now too important to be undermined by high-quality research which suggests otherwise. In which case, it suggests that there's not that much difference between big tech companies and tobacco, oil and mining giants. They're just corporations, doing what corporations always do.

This all reminds me of Jordan Erica Webber's discussion from a few years ago about the push for more ethics and whether it is just a case of public relations.

Replied to Movement of Ideas Project: Approach (cpdin140.wordpress.com)

The compromise I settled on was to produce a ‘List’ of those accounts which appear to be interested in literacy in primary schools; there is then no potential pressure to follow back. By describing my list as “Teachers and organisations tweeting about literacy (within the (UK) Primary school context)”, when people were notified that someone had added them to a list, they could choose to follow it. As I write, ten people have done so, are hopefully learning something from the List members and as a consequence I feel slightly happier that I've made a modest contribution that might help the primary literacy community.

Ian, I like the idea of adding people to lists rather than merely ‘following’ them. I also like the possibility of being able to subscribe to other people’s lists. Personally speaking, I actually follow my lists in my feed reader using Granary to create the feed.
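To give a rough sense of that workflow, here is a minimal sketch in Python of how a Granary-generated feed might be read outside a dedicated feed reader. The feed URL below is a placeholder rather than a real granary.io endpoint, and feedparser is simply one convenient library for the job; neither is prescribed by Granary itself.

```python
# Minimal sketch: read an Atom feed that Granary has generated for a Twitter list
# and print the newest entries. The URL is a placeholder – substitute the actual
# feed URL that granary.io gives you for your list.
import feedparser  # pip install feedparser

FEED_URL = "https://granary.io/your-generated-atom-feed"  # hypothetical placeholder


def latest_entries(url, limit=10):
    """Fetch the feed and yield (published, title, link) for the newest entries."""
    feed = feedparser.parse(url)
    for entry in feed.entries[:limit]:
        yield entry.get("published", ""), entry.get("title", ""), entry.get("link", "")


if __name__ == "__main__":
    for published, title, link in latest_entries(FEED_URL):
        print(f"{published}  {title}\n  {link}")
```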
Bookmarked Ten weird tricks for resisting surveillance capitalism in and through the classroom . . . next term! (HASTAC)

Check out these ten weird tricks for resisting surveillance capitalism in and through the classroom . . . next term! Listed with handy difficulty levels because we know Teach is busy! Add your own brilliant ideas and strategies by commenting here or on this tweet. And remember only we, the people, can truly bring the world closer together.

Erin Glass shares a number of strategies for responding to surveillance capitalism. They include engaging with community-driven tools, exploring terms of service, owning your data and exploring the topic further. This touches on Audrey Watters' discussion of a domain of one's own, Glass' presentation with Autumm Caines and the reading list from LibrarianShipwreck.
Liked Further Defining Digital Literacies: The Ethics of Information Creation by Kevin's Meandering Mind (dogtrax.edublogs.org)

Do learners share information in ways that consider all sources?
Do learners consider the contributors and authenticity of all sources?
Do learners practice the safe and legal use of technology?
Do learners create products that are both informative and ethical?
Do learners avoid accessing another computer's system, software, or data files without permission?
Do learners engage in discursive practices in online social systems with others without deliberately or inadvertently demeaning individuals and/or groups?
Do learners attend to the acceptable use policies of organizations and institutions?
Do learners attend to the terms of service and/or terms of use of digital software and tools?
Do learners read, review, and understand the terms of service/use that they agree to as they utilize these tools?
Do learners respect the intellectual property of others and only utilize materials they are licensed to access, remix, and/or share?
Do learners respect and follow the copyright information and appropriate licenses given to digital content as they work online?

Liked Rethinking the Context of Edtech (er.educause.edu)

If we know that we have reached the limits of what education technology can do (edtech 2.0), we now need to think about what education technology should do (edtech 3.0). I strongly believe we should be grounding edtech in the core of the disciplinary conversation, rather than leaving it at the periphery.

Listened Artificial intelligence, ethics and education from Radio National

AI holds enormous potential for transforming the way we teach, says education technology expert Simon Buckingham Shum, but first we need to define what kind of education system we want.

Also, the head of the UK's new Centre for Data Ethics and Innovation warns democratic governments that they urgently need an ethics and governance framework for emerging technologies.

And Cognizant’s Bret Greenstein on when it would be unethical not to use AI.

Guests

Roger Taylor – Chair of the UK Government's Centre for Data Ethics and Innovation

Simon Buckingham Shum – Professor of Learning Informatics, University of Technology Sydney, leader of the Connected Intelligence Centre; co-founder and former Vice-President of the Society for Learning Analytics Research

Bret Greenstein – Senior Vice President and Global Head of AI and Analytics, Cognizant

In this episode of RN Future Tense, Antony Funnell leads an exploration of artificial intelligence, educational technology and ethics. Simon Buckingham Shum discusses the current landscape and points out that we need to define the education we want, while Roger Taylor raises the concern that if we do not find a position that fits our own context, we will instead have our approach dictated by either America's market-based solutions or China's focus on the state. This is a topic that has been discussed on a number of fronts, including by Erica Southgate. It also reminds me of Naomi Barnes' 20 Thoughts on Automated Schooling.
Bookmarked Unraveling the Secret Origins of an AmazonBasics Battery (Medium)

The battery becomes less trackable the further it progresses down the chain. This is overwhelmingly due to U.S. shipping rules that allow companies to move product virtually in secret. And as Amazon expands into all modes of transport – cars, trucks, air and ocean freight – its logistics will likely become even more invisible.

Sarah Emerson reflects on her experience with an AmazonBasics battery. In the process she follows the thread back to a Fujitsu factory in Indonesia. Although there is no concrete data on the environmental impact, she makes an effort to put the pieces of the puzzle together. For example, she discusses a paper co-authored by Jay Turner and Leah Nugent, in which they argued that:

“It takes more than 100 times the energy to manufacture an alkaline battery than is available during its use phase.” And when the entirety of a battery's emissions are added up – including sourcing, production, and shipping – its greenhouse gas emissions are 30 times that of the average coal-fired power plant, per watt-hour.

Beyond the problem that companies are not required to log such information, Emerson also highlights that discussion of impact is often focused only on the disposal of the item.
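To get a rough sense of the scale of that first claim, here is a back-of-the-envelope figure of my own (the per-battery capacity is my assumption, not a number from the article): a typical AA alkaline cell delivers somewhere in the order of 3 Wh over its life, so "more than 100 times the energy" implies upwards of 100 × 3 Wh ≈ 300 Wh, or roughly 1 MJ, of energy just to manufacture a single battery.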

Bookmarked The Delicate Ethics of Using Facial Recognition in Schools (Wired)

A growing number of districts are deploying cameras and software to prevent attacks. But the systems are also used to monitor students – and adult critics.

Tom Simonite and Gregory Barber discuss the rise of facial recognition within US schools. This software is often derived from contexts such as Israeli checkpoints. It serves as a 'free' and 'efficient' means of maintaining student safety, at the cost of normalising a culture of surveillance. What is worse is the argument that the use of facial recognition is a case of fighting fire with fire:

“You meet superior firepower with superior firepower,” Matranga says. Texas City schools can now mount a security operation appropriate for a head of state. During graduation in May, four SWAT team officers waited out of view at either end of the stadium, snipers perched on rooftops, and lockboxes holding AR-15s sat on each end of the 50-yard line, just in case. (source)

I am with Audrey Watters here: what is 'delicate' ethics?

Replied to The War On the Smartphone: Has Data Cherry-Picking Destroyed a Generation? by Mike Crowley (crowleym.com)

The truth is that most issues that are associated with “problem technology use” have their roots elsewhere. Bullying existed before smartphones, as did pornography, screen addiction, and social isolation. While it is true that smartphones can exacerbate or facilitate these things, they can also have significant positive benefits for learning, social connection, and communication. We can't teach students to balance their screen time with personal interaction by taking the choice away from them. It is difficult to pursue lessons in the pernicious reality of data privacy and surveillance capitalism without a real and critical engagement with these issues.

I am not so concerned about 'access' to smartphones, Mike, as I am about the opportunity for ethical technology. Although we can preach digital minimalism or rooting devices, why can't there be a solution that actually supports users' rights and privacy by default?
Liked The Internet Can Make Us Feel Awful. It Doesn’t Have to Be That Way by Eli Pariser (Time)

Over our history, we've found ways to create tools and spaces that call out and amplify the best parts of human nature. That's the great story of civilization – the development of technologies like written language that have moderated our animal impulses. What we need now is a new technological enlightenment – a turn from our behaviorally optimized dark age to an era of online spaces that embrace what makes us truly human. We need online spaces that treat us as the unique, moral beings we are – that treat us, and encourage us to treat one another, with care, respect and dignity.

Liked Ethical design is not superficial by Laura Kalbag (laurakalbag.com)

We should embrace being uncomfortable. We live in a political and social hellscape. The majority of us have no job security, we can't afford houses and we can't afford to have families. Many of us can't even afford healthcare. None of this is comfortable, so we may as well do something to change that for our futures, and for future generations.

Replied to AI and Human Freedom by Cameron Paterson (learningshore.edublogs.org)

Historian Yuval Noah Harari writes, “The algorithms are watching you right now. They are watching where you go, what you buy, who you meet. Soon they will monitor all your steps, all your breaths, all your heartbeats. They are relying on Big Data and machine learning to get to know you better…

This is a useful provocation, Cameron. In part it reminds me of James Bridle's contribution to the rethinking of Human Rights for the 21st century. I think we are entering, or are already in, a challenging time when consuming (or prosuming) comes before being informed, something I have elaborated on elsewhere. With AI, do we even know the consequences anymore, and what does it mean to discuss this in the humanities, not just the tech class?

Also on: Read Write Collect

Bookmarked You can't buy an ethical smartphone today (Engadget)

Right now, it’s impossible to buy a smartphone you can be certain was produced entirely ethically. Any label on the packaging wouldn’t stand a chance of explaining the litany of factors that go into its construction. The problem is bigger than one company, NGO or trade policy, and will require everyone’s effort to make things better.

Daniel Cooper explains the challenges associated with buying an ethical smartphone. He touches on the conditions under which devices are constructed (often in Shenzhen) and the number of rare materials involved.

Devices vary, but your average smartphone may use more than 60 different metals. Many of them are rare earth metals, so-called because they’re available in smaller quantities than many other metals, if not genuinely rare.

There are also limitations on the ability to recycle or refurbish devices, with significant challenges associated with replacing parts. This is something that Adam Greenfield also discusses in his book Radical Technologies.

via Douglas Rushkoff

Bookmarked Tools come and go. Learning should not. And what's a “free” edtech tool, anyway? by Lyn (lynhilt.com)

Do I need this tool? Why? How does it really support learning?
What are the costs, both monetary and otherwise, of using this service? Do the rewards of use outweigh the risks?
Is there a paid service I could explore that will meet my needs and better protect the privacy of my information and my students' information?
How can I inform parents/community members about our use of this tool and what mechanisms are in place for parents to opt their children out of using it?
When this tool and/or its plan changes, how will we adjust? What will our plans be to make seamless transitions to other tools or strategies when the inevitable happens?

Lyn Hilt reflects on Padlet's recent pivot to a paid subscription. She argues that if we stop and reflect on what we are doing in the classroom, there are often other options. Hilt also uses this as an opportunity to remind us what 'free' actually means, and it is not free as in beer. We therefore need to address some of the ethical questions around data and privacy, a point highlighted by the revelations of the ever-widening Cambridge Analytica breach.