Nvidia AI Tech Lets Computers Understand the 3D World From 2D Photos

Sam Fried

Nvidia AI tech constructs 3D models out of a collection of 2D photographs.


Nvidia; animation by Stephen Shankland/CNET

Graphics chips are good at taking 3D scenes like video game battlefields or airplane designs and rendering them as 2D images on a screen. Nvidia, a top maker of such chips, now is using AI to do the exact opposite.

In a talk at Nvidia’s GTC, the company’s annual GPU Technology Conference, researchers described how they can reconstruct a 3D scene from a few camera images. To do so, Nvidia uses a processing technique called a neural radiance field, or NeRF. Nvidia’s version is far faster than earlier methods — fast enough to run at video’s 60 frames per second.

A NeRF ingests photo information and trains a neural network, an AI processing system somewhat like a human brain, to understand the scene, including how light rays travel from it to any given point surrounding it. That means you can place a virtual camera anywhere to get a new view of that scene.
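The core rendering idea described above can be sketched in a few lines. The toy field below is hand-written and purely illustrative — a real NeRF trains a neural network to produce the density and color values from photos — but the ray-marching and alpha-compositing step that turns the learned field into a pixel is shown as commonly described in NeRF literature. All names and the example scene are assumptions for illustration.

```python
import numpy as np

def toy_radiance_field(points):
    """Stand-in for a trained NeRF network: maps 3D points to
    (density, RGB color). A real NeRF learns this mapping from photos."""
    # Hypothetical scene: a soft reddish blob centered at the origin.
    dist = np.linalg.norm(points, axis=-1)
    density = np.exp(-4.0 * dist**2)            # denser near the center
    color = np.tile([1.0, 0.2, 0.2], (len(points), 1))
    return density, color

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume rendering: march a camera ray through the field and
    alpha-composite the colors it passes through."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, color = toy_radiance_field(points)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)       # opacity of each segment
    # Transmittance: fraction of light surviving to reach each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)

# Place a virtual camera at z = -3 looking toward the scene.
pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

Because the scene is stored as a field rather than a fixed mesh, moving the virtual camera just means choosing a different ray origin and direction — which is why a trained NeRF can produce views from positions no photo was taken from.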

That may not sound useful on its own, but reconstructing 3D scenes is important for computers trying to understand the real world. One example Nvidia showed off at GTC is autonomous vehicle technology that turns video into a 3D model of streets, so developers can replay many variations of a scene to improve their vehicles’ behavior.

Creating computer models of the real world also could be useful in building the 3D realms called the metaverse that the tech industry is eager for you to inhabit for entertainment, shopping, work, chats and games. Nvidia, with its Omniverse technology, is keen on making it easier to create interactive “digital twins” of real-world areas like roads and warehouses.

Nvidia’s work also showcases the growing capability of artificial intelligence technology. By aping real brains and the way they learn from real-world data, the computing industry has found a way to program computers to recognize patterns in complex data. You’ll likely be familiar with some AI uses, like detecting faces for camera focusing or processing Amazon Alexa voice commands. But AI is spreading everywhere, like detecting fraudulent financial transactions nearly instantly, designing computer chips and scrubbing bogus businesses off Google Maps.

Chip circuitry to accelerate AI is spreading across the tech world, too, from Nvidia’s enormous new H100 processor designed to train neural network AI models to Apple iPhones that run those models.

