Replied to Should I blog about my studies? Some thoughts… (Marginal Notes)

I should also mention that you need to think through potential ethical issues. If any posts you write discuss people, whether authors of texts, fellow students or academics, and especially participants (or the groups to which they belong), it’s crucial to think through potential ramifications and the impact your blogging might have. Then there’s how you refer to organisations such as schools, your own university, professional bodies etc. You will doubtless have been issued your university’s ethical code of conduct and will likely have been (or will be) required to make an ethics submission. Revisit these in the light of your blogging.

Thank you, Ian, for your elaboration. I am really intrigued by the ethical side of things. I often wonder about this in a general sense in regards to sharing online.
Bookmarked Timnit Gebru’s Exit From Google Exposes a Crisis in AI (WIRED)

This crisis makes clear that the current AI research ecosystem—constrained as it is by corporate influence and dominated by a privileged set of researchers—is not capable of asking and answering the questions most important to those who bear the harms of AI systems. Public-minded research and knowledge creation isn’t just important for its own sake, it provides essential information for those developing robust strategies for the democratic oversight and governance of AI, and for social movements that can push back on harmful tech and those who wield it. Supporting and protecting organized tech workers, expanding the field that examines AI, and nurturing well-resourced and inclusive research environments outside the shadow of corporate influence are essential steps in providing the space to address these urgent concerns.

Alex Hanna reports on Timnit Gebru’s exit from Google and the implications this has for research into artificial intelligence. It highlights the dark side of being funded by the very company you are researching:

Meredith Whittaker, faculty director at New York University’s AI Now institute, says what happened to Gebru is a reminder that, although companies like Google encourage researchers to consider themselves independent scholars, corporations prioritize the bottom line above academic norms. “It’s easy to forget, but at any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest,” she says.

In an interview with Karen Hao, Gebru questions Google’s response, pointing out that those involved in gross misconduct are treated better than she was.

I didn’t expect it to be in that way—like, cut off my corporate account completely. That’s so ruthless. That’s not what they do to people who’ve engaged in gross misconduct. They hand them $80 million, and they give them a nice little exit, or maybe they passive-aggressively don’t promote them, or whatever. They don’t do to the people who are actually creating a hostile workplace environment what they did to me.

John Naughton suggests that this is no different to what has happened in the past with oil and tobacco.

And my question is: why? Is it just that the paper provides a lot of data which suggests that a core technology now used in many of Google’s products is, well, bad for the world? If that was indeed the motivation for the original dispute and decision, then it suggests that Google’s self-image as a technocratic force for societal good is now too important to be undermined by high-quality research which suggests otherwise. In which case, it suggests that there’s not that much difference between big tech companies and tobacco, oil and mining giants. They’re just corporations, doing what corporations always do.

This all reminds me of Jordan Erica Webber’s discussion from a few years ago about the push for more ethics and whether it is just a case of public relations.

Replied to Movement of Ideas Project: Approach (cpdin140.wordpress.com)

The compromise I settled on was to produce a ‘List’ of those accounts which appear to be interested in literacy in primary schools; there is then no potential pressure to follow back. By describing my list as “Teachers and organisations tweeting about literacy (within the (UK) Primary school context)”, when people were notified that someone had added them to a list, they could choose to follow it. As I write, ten people have done so, are hopefully learning something from the List members and as a consequence I feel slightly happier that I’ve made a modest contribution that might help the primary literacy community.

Ian, I like the idea of adding people to lists rather than merely ‘following’ them. I also like the possibility of being able to subscribe to other people’s lists. Personally speaking, I actually follow my lists in my feed reader using Granary to create the feed.
Bookmarked Ten weird tricks for resisting surveillance capitalism in and through the classroom . . . next term! (HASTAC)

Check out these ten weird tricks for resisting surveillance capitalism in and through the classroom . . . next term! Listed with handy difficulty levels because we know Teach is busy! Add your own brilliant ideas and strategies by commenting here or on this tweet. And remember only we, the people, can truly bring the world closer together.

Erin Glass shares a number of strategies for responding to surveillance capitalism. They include engaging with community-driven tools, examining terms of service, owning your data and exploring the topic further. This touches on Audrey Watters’ discussion of a domain of one’s own, Glass’ presentation with Autumm Caines and the reading list from Librarianshipwreck.
Liked Further Defining Digital Literacies: The Ethics of Information Creation by Kevin’s Meandering Mind | Author | dogtrax (dogtrax.edublogs.org)

Do learners share information in ways that consider all sources?
Do learners consider the contributors and authenticity of all sources?
Do learners practice the safe and legal use of technology?
Do learners create products that are both informative and ethical?
Do learners avoid accessing another computer’s system, software, or data files without permission?
Do learners engage in discursive practices in online social systems with others without deliberately or inadvertently demeaning individuals and/or groups?
Do learners attend to the acceptable use policies of organizations and institutions?
Do learners attend to the terms of service and/or terms of use of digital software and tools?
Do learners read, review, and understand the terms of service/use that they agree to as they utilize these tools?
Do learners respect the intellectual property of others and only utilize materials they are licensed to access, remix, and/or share?
Do learners respect and follow the copyright information and appropriate licenses given to digital content as they work online?

Liked Rethinking the Context of Edtech (er.educause.edu)

If we know that we have reached the limits of what education technology can do (edtech 2.0), we now need to think about what education technology should do (edtech 3.0). I strongly believe we should be grounding edtech in the core of the disciplinary conversation, rather than leaving it at the periphery.

Listened Artificial intelligence, ethics and education from Radio National

AI holds enormous potential for transforming the way we teach, says education technology expert Simon Buckingham Shum, but first we need to define what kind of education system we want.

Also, the head of the UK’s new Centre for Data Ethics and Innovation warns democratic governments that they urgently need an ethics and governance framework for emerging technologies.

And Cognizant’s Bret Greenstein on when it would be unethical not to use AI.

Guests

Roger Taylor – Chair of the UK Government’s Centre for Data Ethics and Innovation

Simon Buckingham Shum – Professor of Learning Informatics, University of Technology Sydney, leader of the Connected Intelligence Centre; co-founder and former Vice-President of the Society for Learning Analytics Research

Bret Greenstein – Senior Vice President and Global head of AI and Analytics, Cognizant

In this episode of RN Future Tense, Antony Funnell leads an exploration of artificial intelligence, educational technology and ethics. Simon Buckingham Shum discusses the current landscape and points out that we first need to define the education we want, while Roger Taylor raises the concern that if democratic governments do not find a position of their own, the agenda will instead be dictated by either America’s market-based solutions or China’s focus on the state. This is a topic that has been discussed on a number of fronts, including by Erica Southgate. It also reminds me of Naomi Barnes’ 20 Thoughts on Automated Schooling.
Bookmarked Unraveling the Secret Origins of an AmazonBasics Battery (Medium)

The battery becomes less trackable the further it progresses down the chain. This is overwhelmingly due to U.S. shipping rules that allow companies to move product virtually in secret. And as Amazon expands into all modes of transport — cars, trucks, air and ocean freight — its logistics will likely become even more invisible.

Sarah Emerson reflects on her experience with the AmazonBasics battery. In the process she follows the thread back to a Fujitsu factory in Indonesia. Although there is no concrete data in regards to the environmental impact, she makes an effort to piece the puzzle back together. For example, she discusses a paper co-authored by Jay Turner and Leah Nugent, in which they argued that:

It takes more than 100 times the energy to manufacture an alkaline battery than is available during its use phase. And when the entirety of a battery’s emissions are added up — including sourcing, production, and shipping — its greenhouse gas emissions are 30 times that of the average coal-fired power plant, per watt-hour.

Beyond the problem that companies are not required to log such information, Emerson also highlights that the impact is often only considered at the point of disposal.

Bookmarked The Delicate Ethics of Using Facial Recognition in Schools (Wired)

A growing number of districts are deploying cameras and software to prevent attacks. But the systems are also used to monitor students—and adult critics.

Tom Simonite and Gregory Barber discuss the rise of facial recognition within US schools. This software is often derived from contexts such as Israeli checkpoints. It serves as a ‘free’ and ‘efficient’ means of maintaining student safety at the cost of standardising a culture of surveillance. What is worse is the argument that the use of facial recognition is a case of fighting fire with fire:

“You meet superior firepower with superior firepower,” Matranga says. Texas City schools can now mount a security operation appropriate for a head of state. During graduation in May, four SWAT team officers waited out of view at either end of the stadium, snipers perched on rooftops, and lockboxes holding AR-15s sat on each end of the 50-yard line, just in case. (source)

I am with Audrey Watters here: what are ‘delicate’ ethics?

Replied to The War On the Smartphone: Has Data Cherry-Picking Destroyed a Generation? by Mike Crowley (crowleym.com)

The truth is that most issues that are associated with “problem technology use” have their roots elsewhere. Bullying existed before smartphones, as did pornography, screen addiction, and social isolation. While it is true that smartphones can exacerbate or facilitate these things, they can also have significant positive benefits for learning, social connection, and communication. We can’t teach students to balance their screen time with personal interaction by taking the choice away from them. It is difficult to pursue lessons in the pernicious reality of data privacy and surveillance capitalism without a real and critical engagement with these issues.

I am not so concerned about ‘access’ to smartphones, Mike, as I am about the opportunity for ethical technology. Although we can preach digital minimalism or root our devices, why can’t there be a solution that actually supports users’ rights and privacy by default?
Liked The Internet Can Make Us Feel Awful. It Doesn’t Have to Be That Way by Eli Pariser (Time)

Over our history, we’ve found ways to create tools and spaces that call out and amplify the best parts of human nature. That’s the great story of civilization—the development of technologies like written language that have moderated our animal impulses. What we need now is a new technological enlightenment—a turn from our behaviorally optimized dark age to an era of online spaces that embrace what makes us truly human. We need online spaces that treat us as the unique, moral beings we are—that treat us, and encourage us to treat one another, with care, respect and dignity.

Liked Ethical design is not superficial by Laura Kalbag (laurakalbag.com)

We should embrace being uncomfortable. We live in a political and social hellscape. The majority of us have no job security, we can’t afford houses and we can’t afford to have families. Many of us can’t even afford healthcare. None of this is comfortable, so we may as well do something to change that for our futures, and for future generations.

Replied to AI and Human Freedom by Cameron Paterson (learningshore.edublogs.org)

Historian Yuval Noah Harari writes, “The algorithms are watching you right now.  They are watching where you go, what you buy, who you meet.  Soon they will monitor all your steps, all your breaths, all your heartbeats.  They are relying on Big Data and machine learning to get to know you bette…

This is a useful provocation, Cameron. In part it reminds me of James Bridle’s contribution to the rethinking of Human Rights for the 21st century. I think we are entering, or are already in, a challenging time when consuming (or prosuming) comes before being informed, something I elaborated on elsewhere. With AI, do we even know the consequences anymore, and what does it mean to discuss this in the humanities, not just the tech class?

Also on: Read Write Collect

Bookmarked You can’t buy an ethical smartphone today (Engadget)

Right now, it’s impossible to buy a smartphone you can be certain was produced entirely ethically. Any label on the packaging wouldn’t stand a chance of explaining the litany of factors that go into its construction. The problem is bigger than one company, NGO or trade policy, and will require everyone’s effort to make things better.

Daniel Cooper explains the challenges associated with buying an ethical smartphone. He touches on the conditions of construction (often in Shenzhen) and the number of rare materials involved.

Devices vary, but your average smartphone may use more than 60 different metals. Many of them are rare earth metals, so-called because they’re available in smaller quantities than many other metals, if not genuinely rare.

There are also limitations on the ability to recycle or refurbish devices, with significant challenges associated with replacing parts. This is something that Adam Greenfield also discusses in his book Radical Technologies.

via Douglas Rushkoff

Bookmarked Tools come and go. Learning should not. And what’s a “free” edtech tool, anyway? by Lyn (lynhilt.com)

Do I need this tool? Why? How does it really support learning?
What are the costs, both monetary and otherwise, of using this service? Do the rewards of use outweigh the risks?
Is there a paid service I could explore that will meet my needs and better protect the privacy of my information and my students’ information?
How can I inform parents/community members about our use of this tool and what mechanisms are in place for parents to opt their children out of using it?
When this tool and/or its plan changes, how will we adjust? What will our plans be to make seamless transitions to other tools or strategies when the inevitable happens?

Lyn Hilt reflects on Padlet’s recent pivot to a paid subscription. She argues that if we stop and reflect on what we are doing in the classroom, there are often other options. Hilt also uses this as an opportunity to remind us what ‘free’ actually means, and it is not free as in beer. We therefore need to address some of the ethical questions around data and privacy, a point highlighted by the revelations of the ever-expanding Cambridge Analytica breach.
Listened
This is a useful introduction to the debate about ethics and technology. One of the interesting points made was in regards to Google and the incident where Google Photos mislabelled people with dark skin as gorillas. This is a consequence of years of racism and a focus on whiteness within technology.

Watch Dr Simon Longstaff’s presentation for more on ethics.

Watched
On the 8th of December at the Overseas Passenger Terminal in Sydney, Australia, BVN hosted its bi-annual conference, Futures Forum 2. The theme was ‘Knowledge and Ethics in the Next Machine Age’.

23:21 Larry Prusak: Knowledge and Its Practices in the 21st Century

Prusak discusses the changes in knowledge over time and the impact that this has. This reminds me of Weinberger’s book Too Big To Know. Some quotes that stood out were:

Knowledge won’t flow without trust

and

Schools measure things they can measure even if it is not valuable

Again and again Prusak talks about going wide, getting out and meeting new people.

1:21:59 Professor Genevieve Bell: Being Human in a Digital Age

Bell points out that computing has become about the creation, circulation, curation and resistance of data. All companies are data companies now. For example, Westfield used to be a real estate company, but they are now a data company.

The problem with algorithms is that they are based on the familiar and the retrospective; they do not account for wonder and serendipity.

As we design and develop standards for tomorrow, we need to think about the diversity of those boards and committees. If there are only white males at the table, how does this account for other perspectives?

We do want to be disconnected, even if Silicon Valley is built around being permanently connected. One of the things we need to consider is what it means to have an analogue footprint.

Building on the discussion of data and trust, Bell makes the point:

The thing about trust is that you only get it once.

The question remains: who do we trust when our smart devices start selling our data?

In regards to the rise of the robots, our concern should be the artificial intelligence within them. One of the big problems is that robots follow rules and we don’t.

The future we need to be aspiring to is one where technology can support us with our art, wonder and curiosity.


A comment made during the presentation and shared after Bell had finished:

Is your current job the best place for you to make the world a better place?


2:49:51 Phillip Bernstein: The Future of Making Things: Design Practice in the Era of Connected Technology

Bernstein unpacks six technical disruptions – data, computational design, simulation analysis, the internet of things, industrial construction and machine learning – and looks at their implications for architecture.

3:51:44 Dr Simon Longstaff: Ethics in the Next Machine Age

Dr Longstaff explores the ethics associated with technology. This includes the consideration of ethical design, a future vision – Athens or Eden – and the purpose of making. Discussing the technology of WWII, Longstaff states:

Technical mastery devoid of ethics is the root of all evil

He notes that just because we can, it does not mean we ought.

A screenshot from Dr Longstaff: a collection of points to consider in regards to ethics in technology

He also used two ads from AOL to contrast the choices for tomorrow:


H/T Tom Barrett