Bookmarked The Building Blocks of Interpretability by Chris Olah (Google Brain Team)
There is a rich design space for interacting with enumerative algorithms, and we believe an equally rich space exists for interacting with neural networks. We have a lot of work left ahead of us to build powerful and trustworthy interfaces for interpretability. But, if we succeed, interpretability promises to be a powerful tool in enabling meaningful human oversight and in building fair, safe, and aligned AI systems.

(Crossposted on the Google Open Source Blog) In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open sourced our code as De...
Is it just me, or is this new article, which explores how feature visualization can combine with other interpretability techniques to understand how networks make decisions, a case of creating a solution and then working out how or why it works? It seems reactive, or maybe I just don’t get it.
Bookmarked Beyond the Rhetoric of Algorithmic Solutionism by danah boyd (Points)
Rather than thinking of AI as “artificial intelligence,” Eubanks effectively builds the case for how we should think that AI often means “automating inequality” in practice.
danah boyd reviews a book by Virginia Eubanks which looks at the ways that algorithms work within particular communities. Along with Weapons of Math Destruction and Williamson’s Big Data in Education, it provides a useful starting point for discussing big data today.