AMD has released FSR 2.0’s source code on GPUOpen, available for anyone to download and use — part of its commitment to making FSR fully open source. The download contains all the APIs and libraries needed to integrate the upscaling algorithm into DirectX 12 and Vulkan titles, along with a quick start checklist. AMD says developers who want DirectX 11 support need to discuss it with AMD representatives, which suggests DirectX 11 is either not officially supported or more difficult to implement.
Implementing FSR 2.0 will apparently take developers anywhere from under three days to four weeks (or more), depending on the features already supported by the game engine. FSR 2.0 uses temporal upscaling, which requires additional data inputs to generate a quality image: motion vectors, depth buffers, and color buffers. Games will need to add these structures to their engine if they’re not already available.
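As a rough illustration, the per-frame inputs a temporal upscaler consumes might be grouped like this. Note that every name here is a hypothetical placeholder for this sketch, not the actual FSR 2.0 API:

```cpp
#include <cstdint>

// Hypothetical resource handle -- stands in for a GPU texture.
struct Texture { uint32_t width = 0, height = 0; };

// Illustrative sketch (made-up names) of the per-frame data a temporal
// upscaler such as FSR 2.0 needs from the engine:
struct TemporalUpscaleInputs {
    Texture colorBuffer;    // current frame, rendered at the lower resolution
    Texture depthBuffer;    // per-pixel depth, used for reprojection
    Texture motionVectors;  // per-pixel screen-space motion since the last frame
    float   jitterX = 0.0f; // sub-pixel camera jitter applied this frame
    float   jitterY = 0.0f;
    uint32_t renderWidth = 0,  renderHeight = 0;  // input (render) resolution
    uint32_t displayWidth = 0, displayHeight = 0; // output (display) resolution
};
```

Engines that already track motion vectors and depth for TAA have most of this on hand, which is why AMD expects those titles to integrate faster.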
Games that already support 2.0 versions of DLSS will be the easiest to integrate, typically requiring less than three days of development, according to AMD. Next up are Unreal Engine 4 and 5 titles, thanks to the new FSR 2.0 plugin. Games that support decoupled display and render resolutions sit in the middle of AMD’s development timeline, a group that includes most games with temporal anti-aliasing (TAA) support. Finally, games with none of FSR 2.0’s required inputs will take four weeks or longer.
Game developers will need to implement FSR 2.0 right in the middle of the frame rendering pipeline, because it fully replaces the duties of temporal anti-aliasing. That means any post-processing effects that need anti-aliasing must be handled later in the pipeline, after FSR 2.0 upscaling takes effect.
At the beginning of the pipeline come scene rendering (at the lower, pre-upscale resolution) and any post-processing effects that don’t require anti-aliasing. FSR 2.0 upscaling takes place in the middle, after which post-upscale, anti-aliased post-processing effects are handled. Finally, HUD rendering takes place after everything else is completed.
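The ordering described above can be sketched as a frame-render function. Every step name here is an illustrative placeholder, not a real engine or FSR 2.0 API call:

```cpp
#include <string>
#include <vector>

// Illustrative frame pipeline showing where FSR 2.0 sits. Each step appends
// its (made-up) name so the ordering can be inspected; a real engine would
// issue GPU work at each stage instead.
std::vector<std::string> renderFrame() {
    std::vector<std::string> steps;
    steps.push_back("render_scene_at_render_resolution"); // lower-res scene pass
    steps.push_back("post_fx_no_aa_needed");  // effects that tolerate aliasing
    steps.push_back("fsr2_upscale");          // replaces TAA, runs mid-pipeline
    steps.push_back("post_fx_needs_aa");      // effects applied to the upscaled image
    steps.push_back("hud_render");            // drawn last, at display resolution
    return steps;
}
```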
AMD Says Machine Learning Is Overrated
Perhaps the most controversial aspect of AMD’s GPUOpen article is its view on machine learning. AMD says machine learning is not a prerequisite for good image quality, and that it is often used only to combine previous frames into the upscaled image, nothing more. In other words, there’s no AI algorithm actually recognizing shapes or objects within a scene, which is what we would expect from an “AI upscaler.”
This statement is a direct attack on Nvidia’s Deep Learning Super Sampling (DLSS) technology, as well as Intel’s upcoming XeSS upscaling algorithm, both of which rely on AI upscaling. Nvidia in particular has heavily promoted DLSS’ AI underpinnings, suggesting they’re a necessity for generating native-like image quality.
However, we can’t verify AMD’s claim that machine learning is only used to combine previous frame data rather than to analyze objects in the scene. Nvidia has stated that DLSS training uses pairs of lower- and higher-resolution images, and with DLSS 2.0 and later that data gets combined with depth buffers and motion vectors. Pinning down exactly what a trained network does and doesn’t do isn’t really possible with most machine learning algorithms.
Regardless, AMD has demonstrated with FSR 2.0 that you do not need machine learning hardware (i.e., Nvidia’s Tensor cores or Intel’s upcoming Matrix Engines) to generate native-like image quality. FSR 2.0 has proven itself nearly as good as DLSS 2.x in tests we have conducted in both God of War and Deathloop, and more importantly it runs on everything from current-generation RX 6000- and RTX 30-series GPUs down to cards like the GTX 970 that launched back in 2014.
Even if we give Nvidia’s DLSS a slight edge in image quality, restricting it to RTX cards makes it potentially far less useful for gamers. Going forward, we hope any game that supports DLSS 2.x or FSR 2.0 will also support the other, giving all users access to one upscaling solution or the other.