
Don't Buy the Ray-Traced Hype Around the Nvidia RTX 2080


On Monday, Nvidia announced a new set of GPUs in a presentation focused on ray tracing and the advent of ray tracing in today's games. Nvidia has built a whole new capability around ray tracing and announced a brand-new SM (Streaming Multiprocessor) architecture to go with it. But Nvidia also debuted these GPUs at significantly higher prices than previous generations, and it showed no benchmark data that wasn't about ray tracing.

Buying CPUs and GPUs for a first-generation feature is almost always a bad idea. If you purchased an Nvidia Maxwell or Pascal video card because you thought DX12 and Vulkan were the future, do you feel you've gotten what you paid for as far as that feature is concerned? Probably not. AMD doesn't get a pass on this either. True, DX12 has been kinder to Team Red than to Team Green, but if you bought a Radeon in 2013 thinking Mantle would take over the game industry, you didn't see many shipping titles before it was retired in favor of other APIs. If you bought a Radeon in 2013 believing you were getting in at the start of a new gaming era, well, you were wrong.

The list continues. The first DX10 cards were not very fast, including models like Nvidia's GeForce 8800 Ultra. The first AMD GPUs that supported tessellation under DX11 were not very good at it. If you bought a VR headset and a top-end Pascal, Maxwell, or AMD GPU to run it, guess what? By the time VR is well-established, if it ever is, you'll be playing it on very different and significantly improved hardware. The first strike against buying into RTX specifically is that by the time ray tracing is well-established, genuinely useful, and running in modern games, the RTX 2080 will be a lousy GPU. That's not a knock on Nvidia; it's a consequence of the significant lead time between when a new GPU feature launches and when enough games take advantage of it to make it a serious asset.

But there's also reason to ask just how much performance these GPUs will deliver, period, and Nvidia left significant questions on the table on that point. The company showed no benchmarks that didn't involve ray tracing. To try to predict what we might see from this new generation, let's look at what previous cards delivered. We're helped here by [H]ardOCP, which recently published a massive generational comparison of the GTX 780 against the GTX 980 and GTX 1080. They tested a 14-game suite ranging from Crysis 3 to Far Cry 5. Let's compare the GPUs on their rates of performance improvement and see what we can glean:


Chart: GTX 780 vs. GTX 980 vs. GTX 1080 generational comparison (click to enlarge)

There's a lot going on in this chart, so let's break it down. When Nvidia moved from Kepler to Maxwell, we see evidence that it made the core much less dependent on raw memory bandwidth (the GTX 980 has significantly less than the 780), yet Nvidia lost nothing in overall performance. Maxwell was a better-balanced architecture than Kepler, and Nvidia successfully delivered large performance improvements without a process-node change. However, while Maxwell used less bandwidth than Kepler did, it still enjoyed a big increase in fill rate, and the overall improvement across the 14 games tracks that fill-rate increase. Clock speeds also rose significantly. The percentage-comparison data from [H]ardOCP reflect the 14-game improvement for the GTX 980 over the GTX 780, and then for the GTX 1080 over the GTX 980.

Pascal over Maxwell repeats this pattern. Fill rate increases a monstrous 1.6x, thanks to higher clocks (ROP counts were identical). Bandwidth increased thanks to the adoption of GDDR5X, and the overall improvement in gaming performance is directly in line with these gains. The point is this: while a given game may depend more or less on the specific characteristics of its engine and design, the average trend shows a strong relationship between increases in bandwidth and fill rate and the gaming performance of those titles.

Now we come to the RTX 2080. Its fill rate is actually a bit lower than the GTX 1080's. The core-count increase is smaller than in either of the previous two generations. The bandwidth increase is smaller. These facts alone suggest that unless Nvidia has delivered the mother of all IPC improvements by rebuilding its GPU core, the RTX 2080 family is unlikely to deliver a big improvement in gaming. This preliminary conclusion is further strengthened by the company's refusal to show any game data this week that did not focus on ray tracing.
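To make the arithmetic concrete, here's a back-of-the-envelope Python sketch using the published spec figures for both cards. The scaling model itself (the geometric mean of the shader-throughput and bandwidth ratios) is a simplifying assumption for illustration, not something Nvidia or [H]ardOCP endorses:

```python
# Back-of-the-envelope scaling estimate, NOT a benchmark. The model
# (geometric mean of compute and bandwidth ratios) is an assumption;
# the spec figures are published values for each card.

def throughput_ratio(cores_new, clock_new, cores_old, clock_old):
    """Ratio of raw shader throughput (CUDA cores x boost clock)."""
    return (cores_new * clock_new) / (cores_old * clock_old)

# GTX 1080: 2560 CUDA cores, ~1733 MHz boost, 320 GB/s GDDR5X
# RTX 2080: 2944 CUDA cores, ~1710 MHz boost, 448 GB/s GDDR6
compute = throughput_ratio(2944, 1710, 2560, 1733)   # ~1.13x
bandwidth = 448 / 320                                # 1.40x

# Crude predictor: assume gaming performance scales with the geometric
# mean of compute and bandwidth gains, absent per-SM efficiency changes.
estimate = (compute * bandwidth) ** 0.5
print(f"compute {compute:.2f}x, bandwidth {bandwidth:.2f}x, "
      f"estimated uplift ~{estimate:.2f}x")
```

On these assumptions the estimate lands in the 1.2x to 1.3x range, which is the same rough ceiling the generational comparison above implies.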

The ray-traced future isn't here yet

Furthermore, when you look at what Nvidia is actually using RTX for, it's clear the company is not delivering fully ray-traced games. Instead, the focus is on using ray tracing to handle certain specific tasks, such as denoising or shadowing. And that's fine as far as it goes. Screenshots from PCGamesN (produced with Nvidia's comparison tool) show that RTX can make a nice difference in certain scenes:

Right: RTX on. Left: RTX off.

But the RTX hardware in Nvidia's GPUs, up to and including the RTX 2080 Ti, will not be fast enough to fully ray trace an entire AAA game. Even if it were, game engines aren't built for it. This point simply cannot be emphasized enough. There are no ray-traced game engines right now. It will take time to build them. At this stage, the goal of RTX and Microsoft's DXR is to deploy ray tracing in the specific areas of game engines where rasterization falls short and ray tracing can provide better visual fidelity at an acceptable performance cost.
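To see why the hybrid approach is so much cheaper than full ray tracing, here's a toy Python ray-budget comparison. The resolution, sample, and bounce counts are illustrative assumptions, not measurements of any shipping engine or of RTX hardware:

```python
# Toy ray-budget comparison, purely illustrative. It counts rays for a
# fully path-traced frame vs. a hybrid frame where the rasterizer
# resolves visibility and rays answer only one question per pixel
# (here: "is this point in shadow?"). All figures are assumptions.

WIDTH, HEIGHT = 1920, 1080   # assumed 1080p target
SAMPLES_PER_PIXEL = 4        # assumed sample count
BOUNCES = 3                  # assumed bounce depth

pixels = WIDTH * HEIGHT

# Full path tracing: every pixel needs samples x (primary + bounces) rays.
full_rays = pixels * SAMPLES_PER_PIXEL * (1 + BOUNCES)

# Hybrid: rasterization handles primary visibility for free (in ray
# terms); only a single shadow ray is traced per pixel.
hybrid_rays = pixels * 1

print(f"full: {full_rays / 1e6:.0f}M rays/frame, "
      f"hybrid: {hybrid_rays / 1e6:.1f}M rays/frame, "
      f"ratio: {full_rays // hybrid_rays}x")
```

Under these assumptions the hybrid frame needs a sixteenth of the rays of the fully path-traced one, which is exactly why RTX and DXR target isolated effects rather than whole scenes.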

This is not a new realization. When I wrote about ray tracing in 2012, one point I made was that there are certain areas where ray tracing can actually be faster than rasterization while delivering a higher-quality result. Combining the two techniques in the same engine is difficult, and it's hard to keep the ray tracing fast enough to run in real time, so Nvidia and Microsoft deserve credit for pulling it off. But remember exactly what you're buying here. Despite the implications of the hype train, you will not be playing a game that looks like the ray-traced Star Wars demo any time soon, because the GPU that could deliver that kind of fidelity and resolution does not exist. Demos will always look better than shipping products, because demos don't have to simulate an entire game world, just the pretty parts.

Look for RTX's features to deliver a modest increase in image quality, but don't expect the moon. And never, ever buy a GPU for a feature someone promises will appear later. Buy a GPU for the features it offers today, in shipping titles, where you can definitely benefit from them.

What can we say about RTX performance?

I'm not willing to declare the RTX 2080's performance an open-and-shut case, because raw numbers don't always tell the whole story. When Nvidia overhauled its GPUs from Fermi to Kepler, it moved to a dramatically different architecture, and the ability to predict performance by comparing core counts and bandwidth broke down as a result. I haven't seen any indication that Turing is as big a departure from Pascal as Kepler was from Fermi, but absent formal benchmark data it's always best to err on the side of caution. If Nvidia has fundamentally reworked its GPU cores, it's possible the gains could be much greater than the simple math suggests.

Nevertheless, the simple math suggests that the gains here are not particularly strong. Combine that with the real-but-less-than-awe-inspiring improvements from the incremental addition of ray tracing to shipping engines, plus the significant price increases Nvidia has tacked on, and there's good reason to keep your wallet in your pocket and wait to see how this plays out. The only way the RTX 2080 will deliver performance improvements over Pascal significantly beyond the 1.2x to 1.3x suggested by its core counts and bandwidth gains is if Nvidia has pulled off a huge efficiency gain in how much work can be done per SM.

Now Read: How Nvidia RTX Works, Nvidia Unveils Turing GPU Architecture, and Nvidia Launches the New RTX GPU Family
