Are neural networks any good for games, movies or virtual reality?
From the very beginning, neural networks were used for complex mathematical, physical, biological and medical calculations and forecasting. But technology evolves all the time. Sometimes it's scary, and sometimes – dope as fuck. That is why you can now use neural networks and machine learning for some ordinary, mundane things – like entertainment. They have only just started showing up in that sphere, but they are already delivering some awesome results.
Let’s check a couple of examples.
Few of us know that remastering video is a hell of a complex, time-consuming job. It can be a real pain in the ass. But thanks to neural networks and machine learning, you can achieve some tremendous results even at home. For example, one YouTube guy by the name of Stefan Rumen (also known as CaptRobau) decided to show how neural networks can help remaster the old Star Trek: Deep Space Nine TV series (sorry, Star Wars fans).
Stefan had already run a similar experiment with his Remako Mod, an HD remaster of Final Fantasy VII, one of the most popular Japanese RPGs. He used AI Gigapixel to upscale the original images fourfold into HD resolution without visible quality loss. So instead of waiting for Square Enix to release an official remaster of the best entry in the series, you can use Stefan Rumen's mod. Link to download
And such remastering is a trending thing. More and more mod creators are working on making old games look up to date and much cooler without losing their original gameplay and soul.
Check out ESRGAN (Enhanced Super-Resolution Generative Adversarial Networks), which can upscale an image 2-8 times without visible quality loss. You simply feed the algorithm a low-quality image, and it will not only raise the base resolution but also improve image quality by adding realistic details and making textures look more "natural".
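For contrast, here is what classical (non-neural) upscaling does: it only repeats or interpolates the pixels that are already there, while an ESRGAN-style network is trained to invent plausible new detail on top of that. A minimal NumPy sketch of the classical baseline (the function name and the tiny test image are just illustrative):

```python
import numpy as np

def upscale_nearest(img, factor=4):
    """Classical nearest-neighbor upscaling: each pixel is simply
    repeated factor x factor times, so no new detail appears."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A tiny fake 2x2 grayscale "image"
low_res = np.array([[10, 200],
                    [60, 130]], dtype=np.uint8)

high_res = upscale_nearest(low_res, factor=4)
print(high_res.shape)  # (8, 8)
```

The resolution goes up 4x, but every new pixel is a copy of an old one – which is exactly why blown-up footage looks blocky, and why a super-resolution network that synthesizes texture is such a big deal.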
The original one is on the left, processed by a neural network – on the right
So, after remastering FFVII, Stefan decided to take the next step and use the same machine-learning technology to remaster an epic space saga, ST:DS9.
According to Stefan, upscaling the "live" footage of a TV series is quite different from upscaling the pre-rendered imagery of Final Fantasy VII. That is why, although the processed image looks better than the original, it is still far from ideal – you can definitely spot some artifacts.
Rumen used the same AI Gigapixel technology for this experiment. That said, it shows that neural networks can be applied to very different kinds of source material.
But neural networks aren't just for processing old images. Hell no! With VR and 360-degree panoramic cameras gaining more and more popularity, developers have started looking in that direction too, to explore the potential there. One of the latest developments is a neural network that can add sound to a static panoramic image. Mindblowing. It was created by researchers from the University of Massachusetts, Columbia University and George Mason University.
The algorithm detects the environment and the objects in the photo, then selects sounds and matches them to those sources based on calculated distance and size. Thanks to that, the panoramic image gets realistic surround sound, giving you an astonishing new experience.
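The core idea of matching a sound to a detected object can be sketched with two classic audio tricks: loudness falling off with distance, and constant-power panning between the left and right channels. This is a simplified illustration under my own assumptions, not the researchers' actual model:

```python
import math

def spatialize(distance_m, pan):
    """Toy spatial audio (illustrative only, not the paper's method):
    overall loudness falls off with distance, and constant-power
    panning splits it between the two ears.
    pan ranges from -1 (hard left) to +1 (hard right)."""
    gain = 1.0 / max(distance_m, 1.0)       # farther object => quieter
    theta = (pan + 1.0) * math.pi / 4.0     # map pan to [0, pi/2]
    left = gain * math.cos(theta)
    right = gain * math.sin(theta)
    return left, right

# A car detected 4 m away, slightly to the right of the camera:
left, right = spatialize(distance_m=4.0, pan=0.5)
```

Here the right channel comes out louder than the left, so the car "sits" to the right of the viewer – in the real system, the detector's distance and position estimates would drive parameters like these for every sound source in the panorama.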
According to the developers, this technology could attract serious interest from VR content designers in both movies and games – a huge boost for the whole industry.
And I’m sure it’s only the beginning. More cool stuff will come out of neural networks and machine learning. Agree?