DLSS is now free


Benchboy

Do you want DLSS?  

329 members have voted

  1. Do you want DLSS?

    • Yes: 293
    • No: 36


Recommended Posts

2 minutes ago, twistking said:

i think one important thing to understand is that DLSS works fundamentally differently than AMD FSR.

AMD FSR is indeed nothing more than a fancy upscaler. It does some combination of vector scaling, smoothing, sharpening, etc. It has no "concept" of what is in the scene; it just applies a combination of effects to create clarity in an upscaled image that would otherwise look blurrier and/or aliased.

When it was first shown off by AMD i honestly expected nothing, because this type of non-aware upscaling has been around for ages and never looked good. However, most reviewers found it to work surprisingly well, definitely better than expected and at least comparable to Nvidia DLSS, which is quite an achievement considering that DLSS is technically way more sophisticated.

 

so from that you could definitely argue that AMD FSR would be the sensible choice for DCS, because it is hardware-agnostic and at least comparable to DLSS in quality. HOWEVER, i fear that FSR would not work particularly well in DCS, while i believe that DLSS would.

The reason is that in DCS it's not just about getting rid of jagged edges and creating a smooth picture with great clarity. In DCS you need detail (detail is not the same as clarity).

Since FSR has no concept of the scene and no temporal element, it is less capable of maintaining precise detail like small text in cockpit instruments or aircraft in the distance. There is a high chance that those details would get lost. The scene would still look nice, with smooth edges, but you'd definitely lose pixel detail, which would be more of a critical issue in DCS than it would be in other titles.
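
(to make that concrete, here is a toy Python sketch of scene-unaware spatial upscaling - purely illustrative, not actual FSR code; the bilinear kernel and sharpening strength are made-up stand-ins:)

```python
# Toy sketch of scene-unaware spatial upscaling (NOT real FSR code):
# interpolate up with a fixed kernel, then sharpen. Detail that fell
# below the source resolution cannot be recovered by either step.
import numpy as np
from scipy import ndimage

def toy_spatial_upscale(img: np.ndarray, scale: float = 1.5) -> np.ndarray:
    # 1) interpolate to the target resolution (bilinear here; FSR uses a
    #    fancier edge-adaptive kernel, but it is still a fixed filter)
    up = ndimage.zoom(img, zoom=scale, order=1)
    # 2) unsharp mask to restore apparent clarity
    blurred = ndimage.gaussian_filter(up, sigma=1.0)
    return np.clip(up + 0.5 * (up - blurred), 0.0, 1.0)

# A one-pixel-wide gauge label or a distant aircraft in the low-res source
# can only be re-interpolated, never re-created: the filter does not know
# what it was.
```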

 

DLSS, on the other hand, is equipped to maintain pixel-level detail and even create new pixel-level detail (hyper-resolution) through its temporal element and its ability to get clues from both cameras of a stereoscopic renderer. I don't know how well that would work within DCS, but i'm very optimistic after everything i've learned about it.
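
(as an illustration of that temporal element, a minimal sketch of the usual jitter-and-accumulate scheme - an assumption about the principle, not nVidia's actual network:)

```python
# Toy sketch of temporal accumulation (assumed principle, NOT nVidia's
# network): each low-res frame is rendered with a different sub-pixel
# jitter, and prior samples are re-projected and blended in, so detail
# that no single frame contains builds up over time.
import numpy as np

def temporal_accumulate(history: np.ndarray, current: np.ndarray,
                        motion: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    # Re-project last frame's accumulated image to where those pixels are
    # now (motion holds integer per-pixel offsets here for simplicity;
    # a real implementation uses sub-pixel motion vectors).
    h, w = current.shape
    ys, xs = np.indices((h, w))
    prev_y = np.clip(ys - motion[..., 1], 0, h - 1)
    prev_x = np.clip(xs - motion[..., 0], 0, w - 1)
    reprojected = history[prev_y, prev_x]
    # Mostly history (accumulated detail), a little of the new frame
    # (fresh samples, plus correction of stale pixels).
    return (1.0 - alpha) * reprojected + alpha * current
```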

 

I want to say again that AMD FSR might generally be better - if only for the fact that it's vendor-agnostic - i just think that because of how it operates, it won't work too well with DCS and its unique properties.

I hope the ED devs take a serious look at both technologies and make a more educated decision about it than we can.

well said

  • Like 2

DLSS 2.0 and AMD FSR are both AI upscaling, but AMD FSR is not a direct competitor to nVidia DLSS 2.0.
While DLSS 2.0 does look better, AMD FSR is way above DLSS 1.0 and a hair behind DLSS 2.0, AT LAUNCH.
Give them time to tweak it.

They both use algorithms (call it AI learning, call it pre-programmed, they are both code).
Either way, they both analyze the scene. DLSS 2.0 just uses those Tensor Cores that nVidia shoved into gamers' faces with little to no use for them.

DLSS 2.0 is limited to specific GPUs; FSR is not.

FSR will work on AMD GPUs and APUs, and on nVidia GPUs all the way back to the 900 series.

This is very important.
 


Edited by SkateZilla
  • Like 2

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


On 7/25/2021 at 10:02 AM, BIGNEWY said:

I have already mentioned it to the team, but I have no news to share with you. 

 

thanks

PLEASE PLEASE PLEASE. With DLSS, VR will finally be the ultimate experience.

Ryzen 3700x - 2080ti - 16GB 3200 - 500G SSD - OCULUS RIFT S


1 hour ago, despinoza said:

With DLSS, VR will finally be the ultimate experience

If you’re limited by the CPU, which VR mostly is, then DLSS will be of no benefit.

  • Like 3

i9-13900K @ 6.2GHz oc | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | 24GB GeForce RTX 4090 | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5


1 hour ago, SharpeXB said:

If you’re limited by the CPU, which VR mostly is, then DLSS will be of no benefit.

In VR no one is limited by the CPU... no one

 

i own a 3080 Ti and i'm limited by the GPU

  • Like 1
  • Thanks 1

Ryzen 3700x - 2080ti - 16GB 3200 - 500G SSD - OCULUS RIFT S


Is ED planning DLSS for DCS sometime in the near future?

PC: i7 9700K, 32 GB RAM, RTX 2080 SUPER, Tir 5, Hotas Warthog Throttle, VPC MongoosT-50CM2 Base with VPC MongoosT-50CM2 Grip, VKB-SIM T-RUDDER PEDALS MK.IV. Modules : NEVADA, F-5E, M-2000C, BF-109K4, A-10C, FC3, P-51D, MIG-21BIS, MI-8MTV2, F-86F, FW-190D9, UH-1H, L-39, MIG-15BIS, AJS37, SPITFIRE-MKIX, AV8BNA, PERSIAN GULF, F/A-18C HORNET, YAK-52, KA-50, F-14,SA342, C-101, F-16, JF-17, Supercarrier,I-16,MIG-19P, P-47D,A-10C_II


26 minutes ago, despinoza said:

In VR no one is limited by the CPU... no one

 

i own a 3080 Ti and i'm limited by the GPU

In a free flight, sure. Try flying over a city. And you’re running a 3080 Ti. The point is DLSS isn’t a magic button for VR any more than other settings are. Are you getting a solid 90 fps from your CPU if you turn down the graphics?
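
Back-of-envelope illustration of that point (toy Python, made-up frame times, not measurements):

```python
# A frame takes as long as the slower of the two sides of the pipeline.
def frame_ms(cpu_ms: float, gpu_ms: float) -> float:
    return max(cpu_ms, gpu_ms)

cpu = 11.0         # ms the CPU needs to prepare a frame (hypothetical)
gpu_native = 16.0  # ms the GPU needs at native resolution (hypothetical)
gpu_dlss = 9.0     # ms at the lower internal resolution (hypothetical)

print(1000 / frame_ms(cpu, gpu_native))  # ~62 fps: GPU-bound, DLSS helps
print(1000 / frame_ms(cpu, gpu_dlss))    # ~91 fps: now pinned by the CPU
# Once gpu_ms drops below cpu_ms, further GPU savings change nothing.
```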

  • Like 2

i9-13900K @ 6.2GHz oc | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | 24GB GeForce RTX 4090 | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5


Having an RTX 3090 and an 8KX, VR rips the GPU apart... at native 8K resolution... DLSS for VR in DCS would be a big game changer... and DCS is a really good game... with superb VR performance it would be another step in the right direction...


1 hour ago, SharpeXB said:

In a free flight, sure. Try flying over a city. And you’re running a 3080 Ti. The point is DLSS isn’t a magic button for VR any more than other settings are. Are you getting a solid 90 fps from your CPU if you turn down the graphics?

 

the point for me is that unless you're willing to compromise on graphics (including, in my case, reducing PD to 60%), in VR it's hard to even get to a solid 45. DCS is getting beautiful and it's a shame to miss all of that in VR no matter what GPU you own.

Ryzen 3700x - 2080ti - 16GB 3200 - 500G SSD - OCULUS RIFT S


Just now, despinoza said:

 

the point for me is that unless you're willing to compromise on graphics (including, in my case, reducing PD to 60%), in VR it's hard to even get to a solid 45. DCS is getting beautiful and it's a shame to miss all of that in VR no matter what GPU you own.

Apparently a 3080 is bottlenecked by the CPU, so DLSS would do nothing for this situation.

 

i9-13900K @ 6.2GHz oc | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | 24GB GeForce RTX 4090 | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5


50 minutes ago, SharpeXB said:

Apparently a 3080 is bottlenecked by the CPU, so DLSS would do nothing for this situation.

Actually, it means DLSS would do even more good — you can use that excess GPU oomph to transform a lower-quality scene to a higher-quality one without any loss.

  • Like 2

❧ ❧ Inside you are two wolves. One cannot land; the other shoots friendlies. You are a Goon. ❧ ❧


DLSS might even work very well.
The way i understand it, it works with very high-res 'screenshots' which are uploaded in advance, and then the algorithm predicts what a scene should look like instead of rendering it all.

Considering that the scenes in DCS are very repetitive (same trees, same houses, same everything actually), this could immensely increase performance with basically no negative graphical impact.

I'm probably completely wrong though :d 


22 hours ago, SkateZilla said:

DLSS 2.0 and AMD FSR are both AI upscaling, but AMD FSR is not a direct competitor to nVidia DLSS 2.0.
While DLSS 2.0 does look better, AMD FSR is way above DLSS 1.0 and a hair behind DLSS 2.0, AT LAUNCH.
Give them time to tweak it.

They both use algorithms (call it AI learning, call it pre-programmed, they are both code).
Either way, they both analyze the scene. DLSS 2.0 just uses those Tensor Cores that nVidia shoved into gamers' faces with little to no use for them.

DLSS 2.0 is limited to specific GPUs; FSR is not.

FSR will work on AMD GPUs and APUs, and on nVidia GPUs all the way back to the 900 series.

This is very important.
 

 

 

FSR makes no use of deep learning techniques, so there is no AI involved; DLSS and FSR are two completely different approaches to the same objective.

NZXT H9 Flow Black | Intel Core i5 13600KF OCed P5.6 E4.4 | Gigabyte Z790 Aorus Elite AX | G.Skill Trident Z5 Neo DDR5-6000 32GB C30 OCed 6600 C32 | nVidia GeForce RTX 4090 Founders Edition |  Western Digital SN770 2TB | Gigabyte GP-UD1000GM PG5 ATX 3.0 1000W | SteelSeries Apex 7 | Razer Viper Mini | SteelSeries Artics Nova 7 | LG OLED42C2 | Xiaomi P1 55"

Virpil T-50 CM2 Base + Thrustmaster Warthog Stick | WinWing Orion 2 F16EX Viper Throttle  | WinWing ICP | 3 x Thrustmaster MFD | Saitek Combat Rudder Pedals | Oculus Quest 2

DCS World | Persian Gulf | Syria | Flaming Cliff 3 | P-51D Mustang | Spitfire LF Mk. IX | Fw-109 A-8 | A-10C II Tank Killer | F/A-18C Hornet | F-14B Tomcat | F-16C Viper | F-15E Strike Eagle | M2000C | Ka-50 BlackShark III | Mi-24P Hind | AH-64D Apache | SuperCarrier


7 hours ago, 5ephir0th said:

 

FSR makes no use of deep learning techniques, so there is no AI involved; DLSS and FSR are two completely different approaches to the same objective.

 

Correct, FSR and DLSS are not the same.

However, FSR is AI upscaling.

AI upscaling, by definition, is the use of heuristic or advanced algorithms running on hardware to create a higher-resolution image from a lower-resolution source.

FSR might not use deep learning (which DLSS 2.0 does not use locally either), but the core of FSR is the use of advanced algorithms to up-sample and filter the image. FSR utilizes hardware to achieve this as well, so it's not a "software accelerated" solution either.

Deep learning isn't needed for real-time rendering; no frame is ever going to be the same. So no matter how much DLSS or the Tensor Cores "learn", the scene is always going to be different; the learning curve has no limit, and the learning is done on nVidia's end and sent out in driver updates, not on the consumer's end.

DLSS relies on the same use of filters and algorithms to up-sample the image and fill in missing data, using the explosively fast but error-prone Tensor Cores to crunch the numbers (which is what they are good for).

DLSS is nVidia's way of making consumers feel like they are getting something back for those expensive Tensor Cores that were crammed down their throats and used as a selling point.

The core upscaling algorithm used for each supported title still has to be computed by nVidia's supercomputer; the Tensor Cores then analyze scenes locally and adjust the filters.

Keyword: algorithm.

 

Oh, and that verbiage came directly from nVidia's DLSS 2.x support documents.
There is no "deep learning" being done on your end; it's a profile generated by nVidia's AI and further enhanced using local Tensor Cores.


Edited by SkateZilla
  • Like 1

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


8 hours ago, Tippis said:

Actually, it means DLSS would do even more good — you can use that excess GPU oomph to transform a lower-quality scene to a higher-quality one without any loss.

 

Actually, it means the GPU isn't being fed DirectX instructions fast enough by the overloaded CPU, which has to process the DirectX overhead of the scene as more objects populate it.

DLSS will do nothing to fix the problem of DirectX API Overhead.

  • Like 1

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


4 minutes ago, SkateZilla said:

DLSS will do nothing to fix the problem of DirectX API Overhead.

…assuming that's where the problem lies to begin with.

  • Like 1

❧ ❧ Inside you are two wolves. One cannot land; the other shoots friendlies. You are a Goon. ❧ ❧


6 minutes ago, Tippis said:

…assuming that's where the problem lies to begin with.

 

DirectX API overhead isn't an assumption.

  • Like 2

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


On 7/25/2021 at 10:07 PM, twistking said:

it is indeed the case that DLSS can add detail, mainly because it's temporal and uses information from already rendered frames.

in VR mode it can also use information from both eyes to reconstruct an image.

i think the following video gives a good understanding of how DLSS works by letting it upscale from ridiculously low resolutions.

 

the main question will be how well it can handle the specifics of DCS. i'm very optimistic that it will shine in VR because of the extra raw data the stereoscopic rendering produces.
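
(a toy sketch of that stereo idea, with a hypothetical per-pixel disparity map - a guess at the principle, not how DLSS actually uses the second eye:)

```python
# Toy sketch: a detail missing from the left eye's low-res frame may exist
# in the right eye's frame, shifted horizontally by the stereo disparity.
import numpy as np

def borrow_from_other_eye(left: np.ndarray, right: np.ndarray,
                          disparity: np.ndarray) -> np.ndarray:
    # disparity holds integer per-pixel horizontal offsets (hypothetical).
    h, w = left.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip(xs + disparity, 0, w - 1)  # same point, right-eye coords
    return 0.5 * (left + right[ys, src_x])     # naive blend of both samples
```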

@Tom Kazansky you could theoretically simply view it as an anti-aliasing technique, because you could use it to output an even higher resolution and downsample again, thereby reducing aliasing. have a look at the video; i think your questions are covered quite well in it. in the end it comes down to marketing, but that doesn't change the fact that it's a very interesting technology.
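
(to illustrate the downsample-again idea, a minimal sketch; it assumes the upscaler already handed back an image at twice the display resolution:)

```python
# Averaging 2x2 blocks of a higher-than-display image back down to display
# resolution smooths jagged edges - supersampling's classic AA effect.
import numpy as np

def box_downsample_2x(img: np.ndarray) -> np.ndarray:
    h, w = img.shape  # assumes even dimensions
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```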

 

Thanks again for that video. I tried the game that benefits most from DLSS 2.0 according to the table shown in that video, and now I'm a believer! The result on a 4K monitor with ray tracing, (DLSS 1440 ->) 2160, ultra settings, 4xAA**, SSAO... is mind-blowing compared to no DLSS 2.0. I don't want to discuss AAA games; I'm sure flight sims are another story, but the potential in that tech is huge. It may not look better (there are some very small artefacts at the edges from time to time*), but the performance gain in FPS is so overwhelming that using DLSS 2.0 is not even a question in this case.

 

I really want to see what it does for DCS+VR.

 

(my system i7 8700, 32 GB, RTX2080)

 

EDIT: *those artefacts are a non-factor while playing the game, and I thought about deleting that part of my post completely. But I'm wondering what the small gauges of a cockpit will look like. Never mind; I really want to see it with DCS.

 

Edit 2: **AA is grayed out when DLSS is active. I suppose there is no extra AA possible, nor needed, with DLSS!?


Edited by Tom Kazansky
  • Like 3

@SkateZilla i don't get what you are trying to convey. all i get from your posts is that you think nvidia is kinda evil and plays marketing mind games with its customers.

i wouldn't even argue too hard against that; however, that is not the point of this discussion, is it?

it's absolutely irrelevant whether DLSS is actually employing deep-learning algorithms on your card or not. names are made up by marketing departments. the only proof you'll get by dissecting those names is that they are, well... made up by marketing departments.

however, you cannot deny that DLSS is technically way more sophisticated than AMD FSR. you cannot deny that DLSS is in theory better equipped to handle the requirements of DCS: to not only produce a nice and stable image, but also provide the necessary pixel-level detail.

https://www.youtube.com/watch?v=_gQ202CFKzA&t=266s (the video from my previous post, but this time with a time-stamp that shows how it generates extra detail from temporal and vector data)

DLSS's obvious achilles' heel is the fact that it only runs on tensor-core-equipped GPUs; that is its one very big disadvantage. while this is bad enough, all other arguments against it feel like straw-man arguments to me. well, at least in the realm of DCS, where the requirements of such a technology go beyond just producing a temporally stable, clear image.

 

 

  • Like 2

2 hours ago, twistking said:

those names is that they are, well... made up by marketing departments.

That’s for certain. Deep Learning Super Sampling isn’t super sampling. It’s upscaling. They shoulda called it DLUS but that just doesn’t sound sexy…

  • Like 1

i9-13900K @ 6.2GHz oc | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | 24GB GeForce RTX 4090 | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5


5 hours ago, freehand said:

lol, so what are you saying?

13 hours ago, Tippis said:

…assuming that's where the problem lies to begin with.

 

That we shouldn't flatly assume that DX API overhead is what keeps the GPU starved of data.

That we shouldn't flatly assume that the GPU needs to be continuously fed API instructions at all to do its temporal reconstruction of details.

That we shouldn't flatly assume that just because a single core is choked by DCS, the DX API is also choked.

 

Basically,  that we shouldn't flatly assume that DX API overhead is even an issue in this case — hence “assuming that's where the problem lies to begin with” — not that it doesn't exist, but that its existence doesn't particularly matter for the outcome we're looking for.

 

  

6 hours ago, SharpeXB said:

Deep Learning Super Sampling isn’t super sampling


It is if you tell it to be.

 


Edited by Tippis

❧ ❧ Inside you are two wolves. One cannot land; the other shoots friendlies. You are a Goon. ❧ ❧


1 hour ago, Tippis said:

 

That we shouldn't flatly assume that DX API overhead is what keeps the GPU starved of data.

That we shouldn't flatly assume that the GPU needs to be continuously fed API instructions at all to do its temporal reconstruction of details.

That we shouldn't flatly assume that just because a single core is choked by DCS, the DX API is also choked.

 

Basically,  that we shouldn't flatly assume that DX API overhead is even an issue in this case — hence “assuming that's where the problem lies to begin with” — not that it doesn't exist, but that its existence doesn't particularly matter for the outcome we're looking for.

 

  


It is if you tell it to be.

 

 

 

If you understood the history and backstory of the DirectX 5-11 API CPU overhead problem, you would see that it's already been proven.

 

DirectX call sent to CPU -> CPU processes -> instructions sent to GPU.

 

It has nothing to do with rendering resolution; it's about draw calls, period: objects, textures, shaders, etc.

 

If the draw calls back up, the CPU gets behind on processing, the GPU waits for instructions, and GPU utilization drops, as does FPS. DLSS or FSR can do whatever it wants, but it will still have to wait for the scene frame to be rendered before it can fill in the data. It will just be another function tacked onto the back half of the rendering pipeline, sitting idle and waiting for DX API functions to be processed.
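
A toy model of that backlog (made-up numbers, not a DirectX profile):

```python
# Per-frame CPU cost grows with the number of draw calls, while an
# upscaler only shrinks the GPU's per-pixel cost.
def frame_ms(draw_calls: int, cpu_us_per_call: float, gpu_ms: float) -> float:
    cpu_ms = draw_calls * cpu_us_per_call / 1000.0
    return max(cpu_ms, gpu_ms)  # the GPU idles whenever cpu_ms dominates

# Free flight: few objects, GPU-bound, so lowering gpu_ms pays off.
print(frame_ms(draw_calls=3_000, cpu_us_per_call=4.0, gpu_ms=14.0))   # 14.0
# Over a dense city: the submission thread becomes the wall.
print(frame_ms(draw_calls=40_000, cpu_us_per_call=4.0, gpu_ms=14.0))  # 160.0
# Halving gpu_ms with DLSS/FSR leaves the second case at 160 ms anyway.
```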

 

If DX API CPU overhead wasn't an issue, then MS wouldn't have ditched 4 years of DX11 API multicore revamping for DX12 and a complete API rewrite.

 

Nor would AMD have sunk billions into the development of Mantle, and Khronos wouldn't have taken the Mantle source to use in the creation of Vulkan.

 

DX12 and Vulkan are both low-level, low-overhead APIs; most graphics commands are sent directly to the GPU to process instead of waiting in line for the CPU thread to process them.

 

As for nVidia, they seem to have adopted 3dfx's core M.O. of creating a lot of proprietary API functions that only work on specific GPUs, then giving developers money to implement these features and using them as free advertising, as those titles get nVidia logos slapped on their launch and options screens.

 

That M.O. worked really well at causing 3dfx's downfall.


Edited by SkateZilla
  • Like 1

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs

