🤔 Neural Network Building Blocks

Bookmarked The Building Blocks of Interpretability by Chris Olah (Google Brain Team)
There is a rich design space for interacting with enumerative algorithms, and we believe an equally rich space exists for interacting with neural networks. We have a lot of work left ahead of us to build powerful and trustworthy interfaces for interpretability. But, if we succeed, interpretability promises to be a powerful tool in enabling meaningful human oversight and in building fair, safe, and aligned AI systems. (Crossposted on the Google Open Source Blog) In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open sourced our code as De...
Is it just me, or is this new article (which explores how feature visualization can be combined with other interpretability techniques to understand aspects of how networks make decisions) a case of creating a solution first and then working out how or why it works? It seems reactive, or maybe I just don't get it.
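For context, the feature visualization the article builds on is usually done by activation maximization: optimize an input so a chosen unit fires as strongly as possible. A minimal sketch of the idea on a toy linear "network" (the network, unit index, and step sizes here are illustrative assumptions, not from the article):

```python
import numpy as np

# Toy "network": a single linear layer with 4 units over a 16-dim input.
# (Illustrative stand-in for a real convnet layer.)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))

def visualize_feature(unit, steps=200, lr=0.1):
    """Gradient ascent on the input to maximize one unit's activation,
    keeping the input on the unit sphere so it can't grow without bound."""
    x = rng.normal(size=16) * 0.01
    for _ in range(steps):
        x += lr * W[unit]           # gradient of (W[unit] @ x) w.r.t. x
        x /= np.linalg.norm(x)      # project back onto the unit sphere
    return x

x_star = visualize_feature(unit=2)
# The optimized input ends up aligned with the unit's weight vector.
print(np.dot(x_star, W[2] / np.linalg.norm(W[2])))
```

Real feature visualization does the same thing through a deep network with regularizers on the input; the point of the article is then combining these optimized inputs with attribution and other techniques, rather than the optimization itself.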
