This 'Microscope' From OpenAI Visualizes Neural Networks Using Millions Of Images

OpenAI Microscope

Artificial Intelligence is a broad subject, and even broken down into its parts, it remains a difficult topic to understand.

One way to make machines learn the way humans do is to model them loosely on the biological neural networks that constitute the human brain. The intention is to make computers solve problems in the same way that a human brain would.

For example, a neural network can be trained to recognize a 'cat' by identifying the animal's characteristics from examples it has been given, rather than by being told that cats have fur, tails, whiskers, vertical pupils and so forth.

In other words, this approach makes computers "learn" by seeing examples, generally without being programmed with explicit tasks or rules.
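This idea can be sketched with a toy example that is not part of OpenAI's work: a simple perceptron that learns to separate two classes of points purely from labeled examples (the data below is made up for illustration), without ever being given the separating rule directly.

```python
# Toy sketch (not OpenAI code): a perceptron "learns" a decision rule
# from labeled examples instead of being handed the rule itself.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += lr * err * x1         # nudge weights toward correct answers
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Made-up training examples: points near (1, 1) are class 1.
data = [((0.0, 0.0), 0), ((0.2, 0.3), 0), ((1.0, 1.0), 1), ((0.8, 0.9), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

After training, `predict` classifies the examples correctly even though no rule about what makes a point "class 1" was ever written into the program; it was inferred from the data alone.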

To visualize how these neural networks work, OpenAI introduces 'Microscope', a library of neuron visualizations built from a collection of millions of images.

In its blog post, OpenAI said:

"We’re introducing OpenAI Microscope, a collection of visualizations of every significant layer and neuron of eight vision 'model organisms' which are often studied in interpretability. Microscope makes it easier to analyze the features that form inside these neural networks, and we hope it will help the research community as we move towards understanding these complicated systems."
Initially, Microscope visualizes every significant layer and neuron of eight vision “model organisms”. (Credit: OpenAI)

OpenAI's Microscope systematically visualizes every single neuron in several commonly studied image-recognition AI models.

It does this by making every model, layer and neuron individually linkable.

Although these models and visualization tools have already been open-sourced by their respective developers, OpenAI wants to help enthusiasts, developers and researchers, since visualizing neurons is a tedious task.

For example, modern neural networks are the result of interactions among thousands, or even tens of thousands or more, of neurons. With that many, it would be difficult for anyone to understand how individual neurons connect to one another, and this is essentially what Microscope is trying to convey.

Like a traditional microscope in laboratories, OpenAI's Microscope is made to help AI researchers better understand the architecture and behavior of neural networks.
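The kind of neuron visualization Microscope collects is typically produced by feature visualization: searching, by gradient ascent, for an input that maximally excites a given neuron. The real technique runs over full images with millions of pixels; the sketch below is only a one-dimensional toy with a made-up "neuron" (it is not OpenAI's code), but it shows the same idea of climbing the activation surface toward the input the neuron responds to most.

```python
import math

# Toy sketch (not OpenAI's code): feature visualization as gradient
# ascent. We search for the scalar input that most excites a made-up
# "neuron" whose response happens to peak at x = 3.

def activation(x):
    """Hypothetical neuron response, strongest at x = 3."""
    return math.exp(-(x - 3.0) ** 2)

def visualize(steps=200, lr=0.5, eps=1e-5):
    x = 1.0                             # start inside the responsive range
    for _ in range(steps):
        # estimate the gradient numerically (finite differences)
        grad = (activation(x + eps) - activation(x - eps)) / (2 * eps)
        x += lr * grad                  # climb toward higher activation
    return x

best = visualize()                      # converges close to x = 3
```

In Microscope's source models the "input" is an image rather than a single number, and the optimization runs over every pixel, but the feedback principle is the same: the result is a picture of what a neuron is looking for.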

Here, Microscope changes the feedback loop of exploring neurons from minutes to seconds.

"This quick feedback loop has been essential for us in discovering unexpected features like high-low frequency detectors in the ongoing circuits project," explained OpenAI.

"Making models and neurons linkable allows immediate scrutiny and further exploration of research making claims about those neurons. It also removes potential confusion about which model and neuron is being discussed (which of the five versions of InceptionV1 are we talking about again?). This is really helpful for collaboration, especially when researchers are at different institutions."

Microscope visualizing a model. (Credit: OpenAI)

Initially, Microscope visualizes eight different neural networks:

  1. AlexNet: A landmark in computer vision, this winner of the 2012 ImageNet competition has over 50,000 citations.
  2. AlexNet (Places): The same architecture as the classic AlexNet model, but trained on the Places365 dataset.
  3. Inception v1: Also known as GoogLeNet, this network set the state of the art in ImageNet classification in 2014.
  4. Inception v1 (Places): The same architecture as the classic Inception v1 model, but trained on the Places365 dataset.
  5. VGG 19: Introduced in 2014, this network is simpler than Inception variants, using only 3x3 convolutions and no branches.
  6. Inception v3: Released in 2015, this iteration of the Inception architecture improved performance and efficiency.
  7. Inception v4: Released in 2016, this is the fourth iteration of the Inception architecture, focusing on uniformity.
  8. ResNet v2 50: ResNets use skip connections to enable stronger gradients in much deeper networks. This variant has 50 layers.

“While we’re making this available to anyone who’s interested in exploring how neural networks work, we think the primary value is in providing persistent, shared artifacts to facilitate long-term comparative study of these models. We also hope that researchers with adjacent expertise — neuroscience, for instance — will find value in being able to more easily approach the internal workings of these vision models,” OpenAI said.

"We hope that, by sharing our visualizations, we can help keep interpretability highly accessible."

Published: 
20/04/2020