When Nvidia's Turing generation of GPUs launched in 2018, it introduced some fascinating new features for gamers around the world. Ray tracing is easy enough to wrap your head around, but deep learning supersampling (DLSS) is a little more nebulous.

While more complicated to understand, DLSS is one of Nvidia's most important graphical features. It delivers higher frame rates and resolutions while using fewer GPU resources. To help you understand how it works, here's our guide to everything you need to know about Nvidia's RTX DLSS technology, so you can decide whether it's reason enough to upgrade to one of the new RTX 30-series GPUs.

What is DLSS?

Deep Learning Super Sampling uses artificial intelligence and machine learning to produce an image that looks like a higher-resolution image without the rendering overhead. Nvidia's algorithm learns from tens of thousands of rendered image sequences created on a supercomputer. That trains the algorithm to produce similarly beautiful images without making the graphics card work as hard.

DLSS also incorporates more traditional image-enhancement techniques, such as anti-aliasing, to create a final image that looks as though it were rendered at a much higher resolution and level of detail, without sacrificing frame rate.

This is all possible thanks to Nvidia's Tensor cores, which are only available in RTX GPUs (outside of data center solutions such as the Nvidia A100). While the RTX 20-series GPUs contain Tensor cores, the RTX 3070, 3080, and 3090 are equipped with Nvidia's third-generation Tensor cores, which offer higher performance per core.

Where DLSS originally launched with little competition, other sharpening technologies from AMD and even Nvidia itself now compete with it for mindshare and effective use in 2020 – even if they don't work in the same way.

What does DLSS actually do?

DLSS is the end result of an extensive process of teaching Nvidia's A.I. algorithm to create better-looking games. After the game is rendered at a lower resolution, DLSS draws on its training knowledge base of high-resolution images to generate an image that still looks like it's running at a higher resolution. The idea is to make games rendered at 1440p look like they're running at 4K, or games rendered at 1080p look like 1440p. DLSS 2.0 offers up to 4x upscaling, letting you render games at 1080p and output them at 4K.
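The "4x" figure is simple pixel arithmetic: a 4K frame contains four times as many pixels as a 1080p frame, so the GPU only shades a quarter of the output pixels. A quick check:

```python
# Pixel-count arithmetic behind the "4x" claim: rendering at 1080p
# and outputting at 4K means the GPU shades only 1 in 4 output pixels,
# and DLSS infers the rest.
def pixel_count(width: int, height: int) -> int:
    return width * height

render = pixel_count(1920, 1080)   # internal render resolution (1080p)
output = pixel_count(3840, 2160)   # display resolution (4K UHD)

print(output / render)  # 4.0
```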

More traditional super-resolution techniques can introduce artifacts and errors into the final image. DLSS, however, was designed to work around these errors and produce an even better-looking image. It's still being tweaked, and Nvidia claims that DLSS will keep improving in the months and years to come. Given the right circumstances, though, it can already provide significant performance gains without affecting how a game looks.

While early DLSS games like Final Fantasy XV provided modest frame rate improvements of just five to 15 FPS, far greater improvements have come in newer versions. With games like Deliver Us the Moon and Wolfenstein: Youngblood, Nvidia introduced a new A.I. engine for DLSS that improves image quality, especially at lower resolutions like 1080p, and can increase frame rates by over 50% in some cases.

There are also new quality modes that DLSS users can choose from. You can select between performance, balanced, and quality, focusing the RTX GPU's Tensor core performance on a different aspect of DLSS.
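Each mode trades internal render resolution (GPU load) against reconstruction quality. The article doesn't state the exact internal resolutions per mode, so the per-axis scale factors below are illustrative assumptions, not official Nvidia values, but a mode picker might look like this:

```python
# Illustrative sketch only: the per-axis scale factors are assumptions
# for demonstration, not official Nvidia values. Lower scale = lower
# internal render resolution = less GPU work, more inference.
DLSS_MODES = {
    "quality":     2 / 3,   # assumed: favors image quality
    "balanced":    0.58,    # assumed: middle ground
    "performance": 1 / 2,   # assumed: favors frame rate
}

def internal_resolution(output_w: int, output_h: int, mode: str) -> tuple:
    """Return the (width, height) the game actually renders at."""
    scale = DLSS_MODES[mode]
    return round(output_w * scale), round(output_h * scale)

# Under this sketch, 4K output in performance mode renders internally at 1080p
print(internal_resolution(3840, 2160, "performance"))  # (1920, 1080)
print(internal_resolution(3840, 2160, "quality"))      # (2560, 1440)
```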

How does DLSS work?

DLSS 2.0

DLSS 2.0 performance

DLSS 2.0 quality

DLSS has a game render at a lower resolution (typically 1440p) and then uses its trained A.I. algorithm to infer what the frame would look like if rendered at a higher resolution (usually 4K). To do this, it applies anti-aliasing effects (probably Nvidia's own TAA) and some automated sharpening. It also irons out visual artifacts that wouldn't be there at higher resolutions, and even uses them to infer the detail that should be in an image.
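DLSS itself is a trained neural network, but the naive baseline it improves upon is easy to sketch. A toy nearest-neighbor upscaler (my own illustration, not Nvidia's method) just duplicates pixels, adding no new detail; DLSS replaces this inference-free duplication with a network that predicts plausible high-resolution detail:

```python
# Toy baseline: what a naive, non-A.I. upscaler does. Each low-res pixel
# is simply copied into a factor x factor block, so no detail is gained.
def nearest_neighbor_upscale(img, factor):
    """img: 2-D list of grayscale values; returns an image factor times larger."""
    return [
        [img[y // factor][x // factor]
         for x in range(len(img[0]) * factor)]
        for y in range(len(img) * factor)
    ]

low_res = [[10, 200],
           [200, 10]]

# Each source pixel becomes a 2x2 block of identical values
for row in nearest_neighbor_upscale(low_res, 2):
    print(row)
```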

As Eurogamer explains, the A.I. algorithm is trained by viewing certain games at extremely high resolutions (reportedly 64x supersampling), then compressed down to just a few megabytes before being bundled into the latest Nvidia driver releases and made available to players around the world. Originally, Nvidia had to go through this process game by game. With DLSS 2.0, Nvidia now offers a generalized solution, so the A.I. model no longer needs to be trained for every individual game.

In effect, DLSS is like a real-time version of Nvidia's screenshot-enhancing Ansel technology. It renders the image at a lower resolution for better performance, then applies various effects to produce an overall result roughly comparable to an increase in resolution.

The end result can be a mixed bag, but it generally delivers higher frame rates without significant loss of visual fidelity. Nvidia claims that frame rates in Remedy Entertainment's Control can improve by up to 75% when using both DLSS and ray tracing. The gain is usually less pronounced, and not everyone is a fan of the final look of a DLSS-processed game, but the option is certainly there for those looking to beautify their games without paying the cost of a higher rendering resolution.
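That 75% figure is easy to put in concrete terms with a bit of arithmetic (the 40 FPS baseline below is an illustrative number, not from Nvidia):

```python
# What "up to 75% higher frame rate" means in practice.
# base_fps is an illustrative example value, not an Nvidia benchmark.
def with_dlss(base_fps: float, improvement: float = 0.75) -> float:
    return base_fps * (1 + improvement)

print(with_dlss(40))  # 70.0 -- a 40 FPS scene becomes 70 FPS at the claimed peak
```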

In Death Stranding, we saw significant improvements over native rendering at 1440p. Performance mode lost some of the finer details on the character's backpack, especially on the tape. Quality mode kept most of the details and smoothed out some of the rough edges of the native rendering. Our "DLSS off" screenshot shows the quality without anti-aliasing. While DLSS doesn't maintain that level of detail, it's very effective at fighting aliasing while preserving most of it.

We didn't see any excessive sharpening with Death Stranding, but that could happen when using DLSS.

Better over time

Deep learning supersampling has the potential to let gamers who can't achieve comfortable frame rates above 1080p do so through inference, and it could be the most powerful feature of future Nvidia RTX GPUs. Ray tracing effects are pretty, but they take a significant toll on performance; DLSS could give us the best of both worlds: better-looking games that also run faster.

The best place for this type of technology might be lower-priced cards, but unfortunately it's only supported on RTX graphics cards, the weakest of which is the RTX 2060 – a $300 card. The new RTX 3000 GPUs offer a glimpse of how Nvidia plans to use DLSS going forward: resolutions beyond 4K at stable frame rates.

Nvidia showed off the RTX 3090, a $1,500 GPU with 24GB of memory, rendering games like Wolfenstein: Youngblood at 8K with ray tracing and DLSS turned on. While 8K is still far from widespread adoption, 4K displays are becoming more common. Instead of rendering at native 4K and hoping for 50-60 FPS, gamers can render at 1080p or 1440p and let DLSS fill in the missing information. The result is higher frame rates with no noticeable loss of image quality.

Because DLSS works via a neural network, it gets better over time. DLSS 2.0 already produces far fewer artifacts than the original DLSS, so games like Death Stranding can deliver a cleaner image than other image-reconstruction techniques like checkerboard rendering. The problem now is game support.

There are currently only 15 games that support DLSS 2.0 – even fewer than the number of games that support ray tracing. Fortunately, wider adoption should come soon. Upcoming releases like Cyberpunk 2077 and Call of Duty: Black Ops Cold War will support DLSS. And with the arrival of Ampere GPUs, developers will likely look for ways to save system resources while displaying at high resolutions. DLSS offers a tested, generalized solution.

It could be that in a year or two, DLSS will be a common feature in most games, thanks to its ease of implementation and the dominance of RTX GPUs in gaming systems.
