
AI upscaling and the future of content delivery



The rumor mill has been buzzing recently about Nintendo’s plans to introduce a new version of their extremely popular Switch console in time for the holidays. A faster CPU, more RAM, and an improved OLED display are pretty much a given, as you’d expect for a mid-generation refresh. Those upgraded specs will almost certainly come with an inflated price tag as well, but given the incredible demand for the current Switch, a $50 or even $100 bump is unlikely to discourage many potential buyers.

But according to a report from Bloomberg, the new Switch may have a bit more going on under the hood than you’d expect from the technologically conservative Nintendo. Their sources claim the new system will use an NVIDIA chipset capable of Deep Learning Super Sampling (DLSS), a feature currently only available on high-end GeForce RTX 20 and RTX 30 series GPUs. The technology, which has already been used in several notable PC games over the last few years, uses machine learning to upscale rendered images in real time. So rather than tasking the GPU with producing a native 4K image, the engine can render the game at a lower resolution and let DLSS make up the difference.

The current model Nintendo Switch

The implications of this technology, especially for small, battery-powered computing devices, are enormous. For the Switch, which doubles as a handheld when removed from its dock, the use of DLSS could allow it to produce visuals comparable to the much larger and more expensive Xbox and PlayStation systems it competes with. If Nintendo and NVIDIA can prove that DLSS is viable on something as small as the Switch, we’ll likely see the technology come to future smartphones and tablets to compensate for their relatively limited GPUs.

But why stop there? If artificial intelligence systems like DLSS can scale up a video game, it stands to reason that the same techniques could be applied to other forms of content. Rather than saturating your internet connection with a 16K video stream, will the TVs of the future simply make the best of what they’re given, using a machine learning algorithm trained on popular shows and movies?

How low can you go?

Obviously, you don’t need machine learning to resize an image. You can take a standard definition video and scale it up to high definition easily enough, and indeed your TV or Blu-ray player does exactly that when you watch older content. But it doesn’t take a particularly keen eye to spot the difference between a DVD that’s been blown up to fit an HD screen and modern content actually produced at that resolution. Taking a 720 x 480 image and stretching it to 1920 x 1080, or even 3840 x 2160 in the case of 4K, results in a fairly obvious degradation of the picture.
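For reference, here’s a minimal sketch of that kind of conventional, interpolation-only upscaling in Python using Pillow; the file name is just a placeholder:

```python
from PIL import Image

# Conventional upscaling: interpolate between existing pixels only.
# "dvd_frame.png" is a placeholder for any 720 x 480 source image.
src = Image.open("dvd_frame.png")

# Bicubic interpolation is roughly what a TV or Blu-ray player does:
# it smooths between existing samples but cannot invent new detail.
hd = src.resize((1920, 1080), Image.BICUBIC)
uhd = src.resize((3840, 2160), Image.BICUBIC)

hd.save("dvd_frame_1080p.png")
uhd.save("dvd_frame_2160p.png")
```

No matter which interpolation filter you pick, the result can only ever be a smoothed-out version of the pixels that were already there.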

To tackle this fundamental problem, AI-enhanced upscaling actually creates new visual data to fill in the gaps between the source and target resolutions. In the case of DLSS, NVIDIA trained their neural network by taking low and high resolution images of the same game and having their own supercomputer analyze the differences. To maximize the results, the high resolution images were rendered at a level of detail that would be computationally impractical or even impossible to achieve in real time. Combined with motion vector data, the neural network was tasked not only with filling in the visual information needed to bring the low resolution image closer to that idealized target, but also with predicting what the next frame of animation might look like.
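DLSS itself is proprietary, but the core idea of learning from matched low and high resolution pairs can be illustrated with a generic single-image super-resolution sketch in PyTorch. Everything below is a stand-in: the tiny network and the random “frames” are only there to show the shape of the training loop, not NVIDIA’s actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy super-resolution network: upscale 2x, then learn a residual correction.
class TinySR(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, lr):
        # Start from a plain bicubic upscale and add the learned detail on top.
        up = F.interpolate(lr, scale_factor=2, mode="bicubic", align_corners=False)
        return up + self.body(up)

model = TinySR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    # Stand-in training pairs: pretend these are crops of "ground truth"
    # frames rendered at high resolution...
    hr = torch.rand(8, 3, 64, 64)
    # ...and their low-resolution counterparts, produced by downscaling.
    lr = F.interpolate(hr, scale_factor=0.5, mode="bicubic", align_corners=False)

    loss = F.l1_loss(model(lr), hr)   # penalize distance from the ideal image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A real system like DLSS also feeds in the motion vectors mentioned above, which this toy example leaves out entirely.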

NVIDIA’s DLSS 2.0 architecture

While fewer than 50 PC games support the latest version of DLSS at the time of this writing, the results so far have been extremely promising. The technology will allow current computers to run newer and more complex games for longer, and on current titles it can deliver significantly improved frame rates (FPS). In other words, if you have a computer powerful enough to run a game at 30 FPS in 1920 x 1080, the same machine could potentially hit 60 FPS if the game were rendered at 1280 x 720 and scaled up with DLSS.
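The back-of-the-envelope math behind that claim is simply the drop in rendered pixels:

```python
# The GPU renders about 2.25x fewer pixels per frame at 720p than at 1080p,
# which is roughly where the 30 FPS -> 60 FPS headroom comes from; the DLSS
# pass itself adds some per-frame overhead of its own.
native = 1920 * 1080    # 2,073,600 pixels
scaled = 1280 * 720     #   921,600 pixels
print(native / scaled)  # 2.25
```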

There have been plenty of opportunities to gauge the real-world performance of DLSS on supported titles over the last year or two, and YouTube is filled with comparisons that show what the technology is capable of. In a particularly extreme test, 2kliksphilip ran 2019’s Control and 2020’s Death Stranding at just 427 x 240 and used DLSS to scale them up to 1280 x 720. While the results weren’t perfect, both games ended up looking far better than they had any right to, considering they were being rendered at a resolution we’d more likely associate with the Nintendo 64 than a modern gaming PC.

AI-enhanced entertainment

While these are still early days, it seems pretty clear that machine learning systems like Deep Learning Super Sampling hold a lot of promise for gaming. But the idea isn’t limited to video games. There’s also a big push towards using similar algorithms to enhance older films and TV shows for which no higher resolution version exists. Both proprietary and open source software is now available that leverages the computational power of modern GPUs to upscale still images as well as video.

Of the open source tools in this arena, the Video2X project is well known and under active development. This Python 3 framework makes use of the waifu2x and Anime4K upscalers, which, as you may have gathered from their names, are designed to work primarily with anime. The idea is that you can take an animated film or series that was only ever released in standard definition, and by running it through a neural network specifically trained on visually similar content, bring it up to 1080p or even 4K resolution.

While getting the software up and running can be a bit tricky given the different GPU acceleration frameworks available depending on your operating system and hardware platform, this is something that anyone with a reasonably modern computer can do on their own. As an example, I’ve taken a 640 x 360 frame from Big Buck Bunny and scaled it up to 1920 x 1080 using the default settings of the waifu2x upscaling backend in Video2X.
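If you want to reproduce a test like this, one way to generate the 640 x 360 input and a native 1080p reference frame is with ffmpeg, called here from Python. The file names and timestamp are placeholders, and the exact Video2X command line varies between releases, so check its documentation for the upscaling step itself:

```python
import subprocess

# Pull a single frame from a local 1080p copy of Big Buck Bunny (placeholder
# file name) to use as the native-resolution reference image.
subprocess.run([
    "ffmpeg", "-ss", "00:05:00", "-i", "big_buck_bunny_1080p.mp4",
    "-frames:v", "1", "bbb_1080p.png",
], check=True)

# Produce the 640 x 360 test input by downscaling that same frame.
subprocess.run([
    "ffmpeg", "-ss", "00:05:00", "-i", "big_buck_bunny_1080p.mp4",
    "-vf", "scale=640:360", "-frames:v", "1", "bbb_360p.png",
], check=True)
```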

Compared to the original 1920 x 1080 image, we can see some subtle differences. The shading of the rabbit’s fur isn’t quite as nuanced, the eyes lack a certain luster, and most notably the grass has gone from individual blades to something closer to an oil painting. But would you really have noticed any of that if the two images weren’t side by side?
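If you’d rather not rely on eyeballing it, a rough numerical check is to compute the peak signal-to-noise ratio (PSNR) of each upscaled frame against the original. A small sketch with NumPy and Pillow, where the file names refer to the hypothetical frames from the previous step:

```python
import numpy as np
from PIL import Image

def psnr(a, b):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Placeholder file names: the native 1080p reference, the Video2X/waifu2x
# output, and a plain bicubic upscale of the same 640 x 360 source.
reference = np.array(Image.open("bbb_1080p.png").convert("RGB"))
ai_up     = np.array(Image.open("bbb_360p_upscaled.png").convert("RGB"))
bicubic   = np.array(
    Image.open("bbb_360p.png").convert("RGB").resize((1920, 1080), Image.BICUBIC)
)

print(f"AI upscale vs reference:      {psnr(ai_up, reference):.2f} dB")
print(f"Bicubic upscale vs reference: {psnr(bicubic, reference):.2f} dB")
```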

Some assembly required

In the previous example, the AI was able to triple the resolution of an image with negligible visual artifacts. But what’s perhaps more impressive is that the file for the 640 x 360 frame is only one fifth the size of the original 1920 x 1080 frame. Extrapolate that difference over the length of a feature film, and it’s clear how this technology could have a huge impact on the enormous bandwidth and storage costs associated with streaming video.
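As a purely illustrative calculation (real codecs compress motion between frames, so the ratio for actual video will differ), here’s what a flat 5x saving would mean for a two hour film at an assumed bitrate:

```python
# Illustrative only: extrapolate a flat ~5x per-frame saving to a whole film.
# The bitrate is an assumption, and real video codecs won't scale this simply.
bitrate_mbps   = 16      # assumed bitrate of the full-resolution stream
savings_factor = 5       # the ~1/5 file size observed for the single frame
runtime_hours  = 2

full_gb = bitrate_mbps * runtime_hours * 3600 / 8 / 1000
low_gb  = full_gb / savings_factor
print(f"full resolution: {full_gb:.1f} GB, reduced stream: {low_gb:.1f} GB")
```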

Imagine a future where, rather than streaming an ultra-high resolution movie over the internet, your device is instead sent a video stream at 1/2 or even 1/3 of the target resolution, along with a neural network model that has been trained on that specific piece of content. Your AI-enabled player could then take this “dehydrated” video and scale it in real time to whatever resolution is appropriate for your screen. Rather than saturating your internet connection, it would be a bit like how they delivered pizza in Back to the Future Part II.
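To make the idea concrete, here’s a purely speculative sketch of what such a player loop might look like. None of this corresponds to a real service or API, and the nearest-neighbour “upscaler” is just a stand-in for the per-title neural network:

```python
import numpy as np

class PerTitleUpscaler:
    """Stand-in for a neural network shipped alongside the low-resolution stream."""
    def upscale(self, frame, factor=3):
        # Placeholder: a nearest-neighbour blow-up instead of learned inference.
        return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def play_dehydrated_stream(frames, upscaler):
    # The player receives "dehydrated" frames and reconstructs them on-device.
    for low_res_frame in frames:                 # e.g. 640 x 360 off the network
        yield upscaler.upscale(low_res_frame)    # rebuilt to 1920 x 1080 locally

# Fake stream: ten random 640 x 360 RGB frames standing in for decoded video.
stream = (np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8) for _ in range(10))
for hd_frame in play_dehydrated_stream(stream, PerTitleUpscaler()):
    assert hd_frame.shape == (1080, 1920, 3)
```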

The biggest technical challenge standing in the way is the time it takes to perform this sort of upscaling: when running Video2X, even on fairly high-end hardware, a rendering speed of 1 or 2 FPS is considered fast. It would take a huge bump in computational power to do AI upscaling in real time, but the progress NVIDIA has made with DLSS is certainly encouraging. Of course, film purists may argue that such a reproduction wouldn’t fit the director’s intent, but when people are watching movies 30 minutes at a time on their phones while commuting to work, it’s safe to say that ship has already sailed.

