The fundamental fact about 3D graphics that you need to remember is that there isn't any canonical, right way to do it. The goal is to make something that looks good and performs well. There's a whole lot of fakery going on under the hood, but if it looks good and performs well, that's a sufficient justification for it.
The problem is that what looks good is a matter of opinion. On the performance side, average frame rates are more objective, but how much you value consistency versus high averages is also partially a matter of opinion. There are a lot of graphical effects that will certainly make a game run slower, but people can reasonably disagree on how much better they make the game look, and whether it justifies the performance hit.
My own personal opinion is that rasterized shadows look awful, so I usually turn them off if I can. Depth of field brings a performance hit to make games look worse, not better, so that gets turned off, too. Ambient occlusion also brings a performance hit, and makes games look different, but not really better or worse, so I also turn that off. But having enough samples to preserve the intended detail and avoid artifacting from interpolation is hugely important to how good a game looks. Other people may reasonably disagree with some or all of those opinions, and that's fine. They can set their graphical settings differently from how I do.
As 4K monitors have become more common, there have been people who argue that the higher resolution doesn't matter, as you can't tell the difference. Depending on how large the monitor is, how far away from it you sit, and how good your vision is, that argument has more merit in some situations than others. Nvidia has introduced DLSS, arguing that rendering at a lower resolution and using their algorithm to upscale it to 4K is good enough, while offering the performance benefits of rendering at the lower resolution. While it has gotten less attention, variable rate shading does something similar, allowing games to increase performance by declining to generate a new sample for every pixel of every frame, the way that 3D graphics traditionally has.
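As a rough sketch of where that performance gain comes from, here's a back-of-the-envelope calculation with made-up numbers; the 2x2 shading rate and the 80% coverage are my own assumptions, not figures from any particular game:

```python
# Rough illustration of the savings from variable rate shading (VRS).
# Hypothetical scenario: most of a 4K frame is shaded at a coarse 2x2 rate
# (one new sample per 2x2 block of pixels), with the full 1x1 rate kept
# for the remaining areas where fine detail matters.

width, height = 3840, 2160
pixels = width * height

coarse_fraction = 0.8   # assumed share of the frame shaded at the 2x2 rate
full_rate = pixels      # traditional rendering: one new sample per pixel
with_vrs = pixels * (1 - coarse_fraction) + pixels * coarse_fraction / 4

print(f"Full rate: {full_rate:,} shading samples per frame")
print(f"With VRS:  {int(with_vrs):,} samples ({with_vrs / full_rate:.0%} of full rate)")
```

Under those assumptions, the game only shades about 40% as many samples per frame, which is the sort of headroom that makes the trade-off worth arguing about in the first place.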
All three of these are making effectively the same argument. They aren't arguing that a 2560x1440 resolution looks just as good as 3840x2160, or that DLSS upscaling to 4K looks just as good as native 4K, or that variable rate shading looks just as good as getting a new sample for every pixel of every frame. Rather, they're arguing that the hit to image quality is small enough that the large gain in performance justifies it. And that's a plausible argument, though again, whether you buy it is a matter of opinion.
Well, usually they're not arguing that it's just as good. Yesterday, this site ran a remarkably awful review of a video card (MSI GeForce RTX 3090 Suprim) that argued that DLSS upscaling gives better image quality than native 4K. That review has been mercifully removed from the site.
On another note, I'm going to use "DLSS" somewhat loosely to refer to the general process of rendering a game at a lower resolution and then using some fancy algorithm to upscale it to a higher resolution. That can be done well or badly, and even the fans of Nvidia's DLSS 2.0 seem to mostly agree that DLSS 1.0 was garbage. My usage of "DLSS" also includes AMD's upcoming FidelityFX Super Resolution and any other DLSS-like algorithms that may arise in the future.
It's not a coincidence that DLSS and variable rate shading arrived as the transition to 4K monitors was underway. There's no reason why you couldn't have used something much like DLSS a decade ago to render a game at 1280x720 and upscale it to 1920x1080. The reason nobody did is that it would have looked terrible, and far inferior to rendering the game at native 1920x1080. For that matter, you could use DLSS today to render a game at 1280x720 and upscale that all the way to 4K, but it will look terrible if you do.
And no, DLSS wasn't enabled by Turing's tensor cores. That's marketing garbage and an attempt at convincing gamers that they should pay extra for some stupid chunk of silicon that they don't have any real use for. Nvidia put the tensor cores in because they wanted to sell the same GPUs for compute, such as in the Tesla T4, and some machine learning algorithms that they wanted to sell such cards for benefit tremendously from the use of tensor cores. Convincing gamers that this was something that they should pay extra for was a marketing problem, and DLSS "requiring" tensor cores was the marketing solution that they came up with.
This is readily demonstrated by some back of the envelope arithmetic. Let's suppose that you're running a game at 3840x2160 and 144 Hz, and let's suppose that computing each color of each pixel from DLSS involves taking a linear combination of 100 other samples. In that case, you're looking at about 0.7 TFLOPS at half-precision to do the computations for DLSS, or less than 1% of what the GeForce RTX 3090 is rated at using only packed half math and not tensor operations. Or for another comparison, less than 3% of what the older Radeon RX Vega 64 can do without having tensor cores at all. And that's probably an overestimate of the brute computational work involved.
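For the curious, here is that arithmetic spelled out. The 100-sample linear combination and counting each multiply-add as two FLOPs are the assumptions behind the estimate, not anything from Nvidia's published numbers:

```python
# Back-of-the-envelope estimate of the raw math in DLSS-style upscaling.
# Assumed model: each color channel of each output pixel is a linear
# combination of 100 samples, and each multiply-add counts as 2 FLOPs.

width, height = 3840, 2160   # 4K output
fps = 144                    # target frame rate
channels = 3                 # R, G, B
samples = 100                # assumed size of the linear combination
flops_per_sample = 2         # one multiply + one add

total = width * height * fps * channels * samples * flops_per_sample
print(f"DLSS-style upscaling: {total / 1e12:.2f} TFLOPS of half-precision math")
# -> about 0.72 TFLOPS, a tiny fraction of any modern GPU's peak FP16 throughput
```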
Now, there is a lot of other stuff to do as part of DLSS. At least as used in Nvidia's DLSS 2.0 algorithm, it requires computing motion vectors for each pixel that is rendered, and storing them somewhere. It requires loading the right data into the right caches at the right time, which is likely to be rather complicated. But those portions of the work do not and cannot use tensor cores at all.
In objective terms, the image quality loss from rendering at 1280x720 and upscaling to 1920x1080 is the same as from rendering at 2560x1440 and upscaling to 3840x2160 ("4K"): both upscale by the same factor of 1.5 in each dimension. But the two cases aren't perceived the same way by the human eye. The smaller the pixels get, the less important each individual pixel is, and the more acceptable it becomes for some pixels to be a little wrong. A similar argument has been made about anti-aliasing: if individual pixels are small enough that you can't see them, do you really need it? Individual pixels were very visible at the NES's resolution of 256x240, but at 4K, you have to look awfully closely to see the individual pixels along the edge of a curve.
Comments
tl;dr
DLSS is marketing fluff (and I totally agree)
What I called marketing garbage is the narrower claim that DLSS proves the value of tensor cores. That's wrong even if you think DLSS is the greatest graphical feature ever. You can do DLSS or something much like it just fine without tensor cores.