In the future, your video game engine won't generate the graphics; artificial intelligence will.
NVIDIA just revealed an impressive demo of a car driving through a world generated in real time by an artificial neural network. Rendering game scenes is computationally expensive, and current engines rely on complex mathematical models to do it. But as this demo shows, a central "brain" could now design the virtual environments that applications use.
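To make the idea concrete, here is a minimal sketch of that inference loop. The real demo uses a trained conditional GAN as the generator; the `toy_generator` below is a hypothetical stand-in (a simple colour lookup) just to show the data flow, where the game engine supplies only a coarse semantic layout and the network paints the frame:

```python
import numpy as np

# Hypothetical palette: semantic class id -> colour the toy "generator" paints.
# In the actual demo, a trained conditional GAN replaces this lookup and
# synthesizes photorealistic pixels instead of flat colours.
PALETTE = {
    0: (50, 120, 200),   # sky
    1: (90, 90, 90),     # road
    2: (30, 150, 60),    # vegetation
}

def toy_generator(label_map: np.ndarray) -> np.ndarray:
    """Stand-in for a GAN generator: maps an (H, W) array of semantic
    class ids to an (H, W, 3) uint8 RGB frame."""
    h, w = label_map.shape
    frame = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, rgb in PALETTE.items():
        frame[label_map == cls] = rgb
    return frame

# The engine provides only the coarse per-frame layout...
layout = np.zeros((4, 8), dtype=np.int64)   # everything starts as sky
layout[2:, :] = 1                           # bottom half is road
layout[0, :3] = 2                           # some vegetation on the horizon

# ...and the "generator" renders the actual pixels.
frame = toy_generator(layout)
print(frame.shape)  # (4, 8, 3)
```

The key design point is the division of labour: the engine keeps doing what it is good at (layout, physics, game state), while the learned model takes over the expensive step of turning that layout into a believable image.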
GAN algorithms have already shown they can deepfake images our brains cannot distinguish from reality. It has long been assumed that virtual environments would eventually look "real," and this may be the path to getting there. Considering the demo runs on a single GPU (granted, it's the $3,000 Titan V), we are not that far off from a hardware perspective.
Furthermore, games like "No Man's Sky" have already demonstrated generating game environments in real time, effectively making the explorable space infinite. But their domain is narrow: procedurally generated planets with limited terrain and interaction. With this technology, that concept could expand to an algorithm that renders cities, vehicles, building interiors, and more.