Liked: To work for society, data scientists need a Hippocratic oath with teeth by Tom Upchurch (WIRED UK)

The first question is: are the algorithms that we deploy going to improve the human processes they are replacing? Far too often, algorithms are thrown in with the assumption that they will work perfectly, because, after all, they're algorithms, but they actually end up working much worse than the system they replace. For example, Australia implemented an algorithm that sent a batch of ridiculously threatening letters to people, claiming they had defrauded the Australian Government. That's a great example where they simply never tested it to make sure it worked.
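That first question suggests a concrete pre-deployment check: compare the algorithm's error rate against the human process it would replace, on the same held-out cases. A minimal sketch, where the file name and column names are all hypothetical:

```python
# Hypothetical sketch: compare a model's error rate against the human
# process it would replace, on the same held-out cases.
import pandas as pd

# Assumed columns: 'truth' (correct outcome), 'human_decision',
# 'model_decision' -- all hypothetical names for illustration.
cases = pd.read_csv("holdout_cases.csv")

human_error = (cases["human_decision"] != cases["truth"]).mean()
model_error = (cases["model_decision"] != cases["truth"]).mean()

print(f"Human process error rate: {human_error:.1%}")
print(f"Model error rate:         {model_error:.1%}")

# Only deploy if the model is actually an improvement on the status quo.
if model_error >= human_error:
    print("Model does not beat the existing process -- do not deploy.")
```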

The second question to ask is: for whom is the algorithm failing? We need to be asking, "Does it fail more often for women than for men? Does it fail more often for minorities than for whites? Does it fail more often for old people than for young people?" Every single class should get a question and an answer. The big example here is the facial recognition software that the MIT Media Lab found worked much better for white men than for black women. That is a no-brainer test that every single facial recognition software company should have done, and it's embarrassing that they didn't do it.
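That per-class check amounts to breaking a single aggregate failure rate out by group. A minimal sketch, assuming a labelled evaluation set with a demographic column (all file and column names hypothetical):

```python
# Hypothetical sketch: report the failure rate per demographic group
# instead of a single aggregate number.
import pandas as pd

df = pd.read_csv("eval_results.csv")  # assumed columns: 'group', 'correct'

# Failure rate per class: the question "for whom does it fail?"
failure_by_group = (
    df.assign(failed=~df["correct"].astype(bool))
      .groupby("group")["failed"]
      .mean()
      .sort_values(ascending=False)
)
print(failure_by_group)

# Flag any group that fails markedly more often than the best-served one.
worst, best = failure_by_group.max(), failure_by_group.min()
if best > 0 and worst / best > 1.5:  # hypothetical tolerance
    print("Failure rates differ substantially across groups -- investigate.")
```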

The third category of question is simply: is this working for society? Are we tracking the mistakes of the system? Are we feeding those mistakes back into the algorithm so that it works better? Is it causing some other unintended consequence? Is it destroying democracy? Is it making people worse off?
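"Tracking the mistakes" can be as simple as logging each confirmed error with enough context to audit it later and to fold it back into the next training run. A sketch of that idea, where the schema and file name are assumptions for illustration:

```python
# Hypothetical sketch: record confirmed mistakes so they can be audited
# and fed back into future training data.
import csv
from datetime import datetime, timezone

def log_mistake(case_id, prediction, actual, path="mistakes.csv"):
    """Append one confirmed error to a running log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), case_id, prediction, actual]
        )

# Later, the log doubles as corrective training data: rows where the
# model was wrong, with the true label attached.
```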