This 'RawNeRF' AI From Google Can Shine More Than Just Light Into Darkness

RawNeRF

Light is the key to everything. Without it, living things cease, because nothing can function.

Only nocturnal animals and deep-sea creatures are equipped to live and function in darkness. The rest of living organisms cannot survive without sources of light. The Sun, the Moon, and the stars were humanity's sources of light long before artificial lights were discovered, from fire to electricity.

Light does not just allow humans to live their lives; photography also requires light to operate.

As a matter of fact, with great lighting, anyone with steady hands and a decent camera can capture great photos. But without light, even professional photographers will give up.

Even though camera hardware has improved over the years, without a source of light, nothing can be photographed properly.

Pushing the sensor too hard is possible, but doing so creates what's called digital noise.

Digital noise, or electronic noise, is the randomness caused by a camera's sensor and the internal electronics that work with it, which introduces imperfections into an image.

Sometimes, digital noise has visible patterns, which make it appear like grains scattered throughout an image.
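To get an intuition for this, one can simulate the two main sources of sensor noise: shot noise from the random arrival of photons, and read noise from the electronics. Here is a minimal numpy sketch with made-up numbers (not real sensor data); the noise level and pixel values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A hypothetical "true" scene: a dim-to-bright gradient in linear
# photon counts. The values are illustrative, not from a real sensor.
true_signal = np.linspace(2.0, 50.0, num=8)  # mean photons per pixel

# Shot noise: photon arrival is random, modeled as a Poisson process.
shot_noisy = rng.poisson(true_signal).astype(float)

# Read noise: the electronics add roughly Gaussian randomness on top.
read_noise_std = 3.0  # assumed standard deviation
noisy = shot_noisy + rng.normal(0.0, read_noise_std, size=true_signal.shape)

# Signal-to-noise ratio: the dimmest pixels are dominated by the noise
# "backdrop", while bright pixels rise well above it.
snr = true_signal / np.sqrt(true_signal + read_noise_std**2)
print(np.round(snr, 2))
```

Notice that the signal-to-noise ratio grows with brightness, which is exactly why dark scenes look grainy: the signal never overpowers the noise backdrop.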

Noise in photography is essentially a 'backdrop' that is present in every image a camera takes. The goal of photography, in this case, is to overpower that backdrop.

The best way to do this is by making the camera capture more light.

If more light is not possible, one must rely on advanced computation, and in this case, that includes AI.

And this AI from Google is taking denoising to a whole different level.

The company has released an open source project it calls 'MultiNeRF'.

One project under MultiNeRF in particular is called RawNeRF.

What this tool does is use AI-powered algorithms to figure out what footage "should have" looked like without the distinctive noise generated by imaging sensors.

The technology scans every single detail inside the footage or images, and then reconstructs a 3D render of the scene.

This is where the NeRF comes into play.

NeRF is short for neural radiance field, a technique in which a neural network takes 2D images and creates a 3D scene from them.
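To make the idea concrete, here is a toy numpy sketch of what a radiance field and its volume rendering look like. This is not Google's code: the network weights are random placeholders (a real NeRF trains them from 2D photos), and the function names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A toy stand-in for the NeRF network: it maps a 3D point plus a
# viewing direction to a density and an RGB color. The weights are
# random placeholders, not a trained model.
W1 = rng.normal(size=(6, 32))
W2 = rng.normal(size=(32, 4))

def field(point, direction):
    x = np.concatenate([point, direction])
    h = np.maximum(W1.T @ x, 0.0)           # one ReLU hidden layer
    out = W2.T @ h
    density = np.log1p(np.exp(out[0]))      # softplus keeps density >= 0
    color = 1.0 / (1.0 + np.exp(-out[1:]))  # sigmoid keeps RGB in [0, 1]
    return density, color

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=32):
    """NeRF-style volume rendering: sample points along a camera ray and
    alpha-composite their colors, weighted by remaining transmittance."""
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    pixel = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        density, color = field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-density * delta)
        pixel += transmittance * alpha * color
        transmittance *= 1.0 - alpha
    return pixel

# One pixel of a rendered image: shoot a ray from the camera origin.
pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

Rendering every pixel this way, from any camera position, is what lets a trained NeRF produce views of the scene that were never actually photographed.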

This way, the AI is able to understand the changing camera positions and fine-tune the exposure, and even the focus. RawNeRF then "combines images taken from many different camera viewpoints to jointly denoise and reconstruct a scene," said Google researcher Ben Mildenhall.
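A back-of-the-envelope illustration of why combining many captures helps: averaging N independent noisy measurements of the same point shrinks the noise by roughly a factor of the square root of N. A minimal numpy sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

true_value = 10.0  # hypothetical true brightness of one scene point
frames = 64        # number of noisy captures that see this point
noise_std = 4.0    # assumed per-capture noise level

# Each capture is the true value plus independent Gaussian noise.
captures = true_value + rng.normal(0.0, noise_std, size=frames)

# A single capture can be badly off; the average of 64 captures has
# its noise reduced by about sqrt(64) = 8x.
averaged_error = abs(captures.mean() - true_value)
print(round(averaged_error, 3))
```

RawNeRF's joint reconstruction is far more sophisticated than a plain average, since the viewpoints differ, but the underlying statistical advantage of pooling many observations is the same.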

And because it uses AI, RawNeRF is capable of handling scenes with a large dynamic range, allowing users to adjust tone mapping, exposure levels, and even the viewing angle after capture.

At the moment of its introduction, the project is under development and is considered research rather than a commercially available product.

Computational photography is already present in all modern smartphones to some degree, and it's only a matter of time before this kind of AI is fully integrated for the masses.

In short, the technology is able to blur the lines between photography and computer graphics.

Published: 
28/08/2022