Researchers From Google Reveal An Alternative To Traditional Neural Networks

The name is the 'capsule network.'

In the modern age of technology, Artificial Intelligence (AI) has seen tremendous growth and is now applied across many domains to solve many types of problems.

Much of AI's capability comes from neural networks, computer systems modeled loosely on the human brain and nervous system. As their name implies, neural networks mimic the way neurons in our brains are presumed to work together: many, many individual units, each handling some piece of an overall puzzle, linked by complex interconnections.

But there is growing concern here.

A neural network is a classifier that can sort an object into the correct category based on input data. But the fundamental requirement for such systems to work as designed is a huge quantity of data from which to learn. This is perhaps the biggest problem for AI: it needs to feed on data, and more data, just to get smarter.

For AI to be useful with more limited data sets, for example when analyzing medical imagery, it needs to learn more from less input data.

Below is how a traditional neural network works:
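As a rough Python sketch of the idea (not Google's actual code): an input image flows through layers of simple weighted units, and the final layer's output is a probability for each category. The layer sizes and random weights below are placeholders for illustration only.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy two-layer classifier: 784 pixel inputs -> 64 hidden units -> 10 classes.
# In a real system these weights would be learned from a large labeled data
# set; here they are random placeholders just to make the example run.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(64, 784)), np.zeros(64)
W2, b2 = rng.normal(scale=0.1, size=(10, 64)), np.zeros(10)

def classify(image):
    # Map a flattened 28x28 image to a probability for each of 10 categories.
    hidden = relu(W1 @ image + b1)   # each unit responds to some learned pattern
    scores = W2 @ hidden + b2        # combine patterns into per-class evidence
    return softmax(scores)

probs = classify(rng.random(784))
print(probs.argmax(), round(float(probs.max()), 3))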

Geoff Hinton from Google unveiled a new way for AI to learn beyond traditional neural networks. He calls it the 'capsule network.'

"I think the way we're doing computer vision is just wrong. It works better than anything else at present, but that doesn’t mean it’s right."

In his papers on arXiv and OpenReview, Hinton explained how it works.

Capsule networks are organized into layers that can identify things inside images as well as videos.

The layers are made up not of individual artificial neurons, but of small groups of neurons arranged in functional units called 'capsules.' Each capsule is programmed to detect a particular attribute of the object being classified, thus helping to get around the need for massive input data sets.

When capsules on one layer agree on having detected something, they activate capsules at a higher level, and so on, until the network is able to make a judgment about what it sees.

Each of those capsules is designed to detect a specific feature of an image in such a way that it can recognize that feature in different scenarios, for example from varying angles.
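Roughly speaking, a capsule's output can be thought of as a small vector rather than a single number: the vector's length says how likely the feature is to be present, and its direction encodes the feature's pose. The short Python sketch below follows the 'squash' nonlinearity described in Hinton's 2017 paper with Sara Sabour and Nicholas Frosst; the eight-number raw output is a made-up example.

import numpy as np

def squash(s, eps=1e-9):
    # Shrink a capsule's raw output vector so its length lies between 0 and 1.
    # The length then reads as the probability that the feature is present,
    # while the direction encodes the feature's pose (angle, scale, and so on).
    norm_sq = float(np.sum(s * s))
    return (norm_sq / (1.0 + norm_sq)) * s / (np.sqrt(norm_sq) + eps)

# A made-up raw output for one capsule: eight numbers describing a detected feature.
raw = np.array([0.2, -1.3, 0.7, 0.1, 0.0, 0.5, -0.2, 0.9])
v = squash(raw)
print(np.linalg.norm(v))  # close to 1.0 means the feature is almost certainly present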

This makes capsule networks an alternative to the 'let them teach themselves' approach of traditional neural networks.

Hinton claims that the approach, which has been in the making for decades, should enable the networks to recognize objects using less data than regular neural networks need.

In his paper, capsule networks were initially able to keep up with regular neural networks at identifying handwritten characters, and they had the advantage of making fewer mistakes when trying to recognize previously observed toys from different angles.

Below is how the capsule network works:
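Below is a simplified Python sketch of the 'routing by agreement' step described in the 2017 paper by Sabour, Frosst, and Hinton; the capsule counts, vector sizes, iteration count, and random inputs are illustrative placeholders, not the paper's actual architecture.

import numpy as np

def squash(s, axis=-1, eps=1e-9):
    norm_sq = np.sum(s * s, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / (np.sqrt(norm_sq) + eps)

def route(u_hat, iterations=3):
    # u_hat[i, j] is lower-level capsule i's prediction of higher-level
    # capsule j's output vector. Capsules whose predictions agree with a
    # higher capsule's actual output get their votes weighted more strongly
    # on each pass ("routing by agreement").
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))                          # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling weights
        s = np.einsum('ij,ijk->jk', c, u_hat)                  # weighted vote sums
        v = squash(s)                                          # higher-layer outputs
        b += np.einsum('ijk,jk->ij', u_hat, v)                 # reward agreement
    return v

# Toy setup: six lower-level capsules each predict the outputs of three
# higher-level capsules as 4-dimensional vectors (all numbers are placeholders).
rng = np.random.default_rng(1)
u_hat = rng.normal(size=(6, 3, 4))
print(route(u_hat).shape)  # (3, 4): one output vector per higher-level capsule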

Capsule networks aim to address a weakness of machine-learning systems that limits their effectiveness.

Image-recognition software in use by Google, for example, needs a large number of example photos fed into its many layers before it can reliably recognize objects in all kinds of situations. The reason is that software built on traditional neural networks isn't very good at generalizing what it learns to new scenarios.

To teach a computer to recognize a cat from many angles, for example, an AI needs thousands of photos covering a variety of perspectives. Human children, however, don’t need such extensive training to learn to recognize the household pet.

Hinton’s idea aims to narrow the gap between the best AI systems and the abilities of ordinary toddlers.

But in its initial state, the capsule network method is a bit slower than its traditional neural network counterpart.

The researchers are pushing the boundaries to see how such deep-learning alternatives stack up against, or could even replace, older technologies.

Published: 
11/11/2017