- ‘We were a bit too ambitious’ – how AMD is upping the game with FidelityFX Super Resolution | TechRadar
-
ID: hbnvgl8
-
Fucking Google ruined the internet with AMP.
ID: hbn17yx
People who post AMP links ruined the internet*
ID: hbnsqoy
TBH it's so deeply rooted in some places that I can't be bothered to learn how not to do it. The other day I was sharing a bunch of stuff through my phone and every single link was an AMP one.
ID: hbnlurz
You can install a small extension for your browser called "Redirect AMP to HTML" in case you are browsing on your desktop. All those stupid AMP links will be gone and you will be a happier person 😀
ID: hbp704e
AMP is mainly a problem on mobile; it's Accelerated Mobile Pages, after all.
ID: hbnn1wt
AMP?
ID: hbnzdez
It's a Google service that redirects the website you want to read through Google's servers.
This allows Google to monitor everything you do without having to provide cookies, and it circumvents the GDPR in Europe.
ID: hbnunt7
What even is that?
ID: hbo0g12
Google "optimizing" web pages in a way that conveniently breaks many non-Google things, allows them to track more people, and potentially changes the way the whole web works...
ID: hbpmc3a
Here's a simple fix for Android phones..
-
I find it interesting how a few years ago everyone was busy jacking off over the ability to play games at 4K downscaled to 1080p as a kind of super anti-aliasing measure, and now we are doing the same thing in reverse.
ID: hboew0j
It makes sense. A few years ago, the games of the time (and even more so older ones) were easier to run at 4K on the hardware of the time than today's games are on today's hardware, because of a) the massive resolution-scaling cost of ray tracing and b) price-to-performance being basically frozen since the RX 580. Meanwhile, 4K screens back then were basically all capped at 60 fps and generally had other big disadvantages; now the price of native 4K screens has been dropping rapidly and they are no longer hot garbage in terms of response time and such, so it may actually make more sense to own one.
ID: hbq1dod
Still too expensive for mainstream use, especially with cards so expensive. At least high-refresh-rate 4K screens are.
ID: hboyz6g
I was never very impressed with the results from downsampling. These upscaling techniques are much more impressive in that regard.
-
Dead link
ID: hbmpwn5
Link works, but it might be because it's an AMP link.
Try this
-
I actually found this a surprisingly forthright interview.
1) Yeah it's just spatial. This means it's trivial to implement and it's even in the XDK. 2) We heard a lot of feedback that we should be using temporal and motion data, and we're still working on other stuff that I can't talk about in this interview.
Seemed completely reasonable to me. FSR is what it is, it may not be competitive with DLSS2 but it offers real advantages, works on every GPU, and was instantly widely adopted. Hopefully new tech comes that will directly compete with DLSS2. Either that or XeSS takes off as an open standard and AMD supports it.
ID: hboyg2l
I think it is pretty competitive with DLSS 2.x in image quality, and without the ghosting that can come with DLSS. It's also much easier to integrate and supports a much broader range of hardware, which are two key advantages. Having more good-quality options is always better.
Amazing work from AMD given FSR is working with much less information than DLSS is.
ID: hbozalf
I wouldn't really say competitive with DLSS 2.x in image quality - DLSS does actual (and better) reconstruction, but FSR also delivers decent results, at least on Ultra Quality. With the easier integration and broad hardware support it's a win for gamers anyway and a nice option to have. But it's not as good as DLSS, I would say; having no ghosting artifacts is nice, though.
ID: hbq6kc8
Putting FSR and DLSS 2.x into the same category for comparison is insulting. The latter relies heavily on inference to do temporal reconstruction of the frame at a higher resolution, whilst the former simply upscales, sharpens, and does some color correction.
The difference between the two is as wide as the ocean is deep.
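To make the "simply upscales and sharpens" point concrete, here is a toy numpy sketch of a purely spatial pipeline: upscale a low-resolution frame, then sharpen it. This is an illustration only, not AMD's algorithm - real FSR 1.0 uses dedicated edge-adaptive upscaling (EASU) and sharpening (RCAS) shader passes, while this sketch substitutes plain bilinear interpolation and an unsharp mask. Note there is no history or motion data anywhere: each frame is processed in isolation, which is exactly the gap temporal solutions like DLSS 2 fill.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: int) -> np.ndarray:
    """Upscale a 2D grayscale frame by an integer factor with bilinear interpolation."""
    h, w = img.shape
    # Sample positions in source coordinates (pixel-center convention).
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    top = img[y0][:, x0] * (1 - fx) + img[y0][:, x1] * fx
    bot = img[y1][:, x0] * (1 - fx) + img[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy

def unsharp_mask(img: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Crude sharpening: boost the difference from a 3x3 box blur."""
    padded = np.pad(img, 1, mode="edge")
    blur = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

low_res = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # pretend 4x4 "frame"
high_res = unsharp_mask(bilinear_upscale(low_res, 2))
print(high_res.shape)  # (8, 8)
```

The whole thing is a per-pixel function of the current frame, which is why it is cheap and trivial to slot into any engine - and also why it cannot reconstruct detail that was never rendered.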
ID: hbou9ze
Instantly widely adopted? WTF did I miss? Please tell me how you came to this conclusion, because it's only in a couple of games I've heard of.
ID: hbov4bl
Every engine is adding it, and it's in the XDK, so every Xbox game will have it too.
ID: hboylfk
Yeah, probably not well worded, but a lot of studios are on board and it's in some engines (at least Unity), so it's more like fast acceptance and a positive response due to the easy implementation. So it's not widely adopted yet, but it definitely has the potential to catch up fast and gain wider adoption than DLSS. But then there's also XeSS coming, so let's see. 🙂
ID: hbpxudg
There's this program on Steam called Lossless Scaling that you can use to apply FSR to virtually any PC game. That seems like very instant adoption to me.
ID: hbq7x8m
How many games used DLSS within a few months of release? It was also not good in terms of image quality until DLSS 2.0.
-
Pretty bad interview, TBH. All the questions were softballs, basically setting up marketing responses. It brought up DLSS's adoption issues and its proprietary nature, but made no mention of XeSS, even though Raja was willing to tweet positively about FSR. No questions about new game announcements, whether they would officially support something like XeSS, what their FSR roadmap looks like, etc.
If you read between the lines, I think AMD knows FSR as we know it is going to be a backup solution and fade out of the limelight. There are numerous mentions of FSR being a first attempt because temporal and ML approaches were too difficult (Intel has more software engineers than AMD has employees, and Nvidia also has a lot more), and that another, temporal version might come later (likely a collaboration on an open-source XeSS). Words like 'balance', 'good', and 'existing simple upscaling solutions' give away that they know it's not a replacement for the more complex and better solutions like DLSS or XeSS, but a backup. Which is perfectly fine: thousands of past games will never get DLSS or XeSS support, but you can hack FSR into them.
ID: hbnf7wy
I don't think XeSS is available yet, so how could AMD answer any questions about it? A temporal solution doesn't necessarily mean ML is used.
ID: hbnvukv
Well, when you live and breathe Intel, its absence is more strongly felt!
ID: hboztye
It's likely it will use ML to a degree, given Intel's focus on AI. How much benefit that brings in terms of real-world results is debatable.
ID: hbn4ojy
RDNA2 ain't great with ML inference; that's probably a huge reason. Newer-gen GPU? Probably.
ID: hbnur0l
RDNA3 is rumoured to come with AMD's Matrix AI cores, which perform like Tensor cores. Right now Matrix is locked to CDNA, but AMD should have no issues bringing it over for ML applications in gaming.
ID: hbn8oqf
I dunno. They made the decision to highly specialize their GPU architectures with the RDNA and CDNA split, and adding tensor units to their gaming architecture would mean backpedaling on some of that. And unless they foresaw this turn of events many years ago, it's unlikely we'll see a real DLSS competitor any time soon.
ID: hbo9830
Yeah, this article sucked. Pure fluff. I learned nothing from it and only wasted 10 minutes of my life I will never get back.
ID: hboehnz
I really hope that AMD joins Intel to develop their solution together, and does it fast! It would be the best for everyone (and worst for Nvidia).
ID: hbqp775
XeSS is first and foremost based on Intel's Matrix Engines (XMX) and is a temporal solution close to DLSS in concept and execution (it uses proprietary hardware). The DP4a fallback (supported after RDNA) has already been described as not as performant or as capable quality-wise as the XMX route.
Basically, if AMD expects a free ride from Intel, then they are going to be left in a distant third place in this regard. (And the open-sourcing of XeSS is only happening after it's "mature", whatever that means.)
ID: hbozb7r
If there is very little difference in image quality in a certain game between DLSS, XeSS and FSR, then most people aren't going to care how much fancy AI Nvidia and Intel threw at it, only whether they can use it to improve performance.
-
So, it's possible for FSR 2.0 to be different than 1.0, but both still exist as different methods of upscaling.
ID: hboqztq
I'm going to guess FSR 2 will use temporal techniques, though it will be slightly harder to implement than FSR 1's spatial methods. It'll be up to developers how many resources they wish to devote to upscaling and which is more appropriate for their project. They'll probably exist side by side, and AMD might well become the only vendor with solutions for both types of upscaling.
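For contrast with the spatial-only approach, here is a toy sketch of the core idea behind temporal accumulation, the family of techniques the comment above guesses FSR 2 will use. This is a hedged illustration, not any vendor's algorithm: real temporal upscalers (DLSS 2, and whatever FSR 2 turns out to be) also need per-pixel motion vectors, depth, sub-pixel jitter, and history-rejection heuristics. The sketch shows only the exponential history blend that lets noisy per-frame samples converge on a cleaner image over time.

```python
import numpy as np

def temporal_blend(history: np.ndarray, current: np.ndarray,
                   alpha: float = 0.1) -> np.ndarray:
    """Exponential moving average: keep (1 - alpha) of the history each frame."""
    return (1.0 - alpha) * history + alpha * current

rng = np.random.default_rng(0)
truth = np.full((8, 8), 0.5)        # the "real" image we are trying to resolve
history = np.zeros((8, 8))          # accumulation buffer, starts empty
for _ in range(100):                # each frame is a noisy sample of the truth
    frame = truth + rng.normal(0.0, 0.1, truth.shape)
    history = temporal_blend(history, frame)
# After many frames the history buffer is far less noisy than any single frame.
print(float(np.abs(history - truth).mean()))
```

The extra quality comes from integrating information across frames - which is exactly why it is harder to integrate into a game: the engine has to supply valid motion data every frame, and the blend must be reset wherever reprojection fails, or you get the ghosting people complain about with DLSS.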
-
Great interview, I found it very interesting how they developed FSR.
-
Has FSR been added to the PS5?
ID: hboxy7k
There is a PS5 game in development using it, according to the article.
-
.... yeah, that Horizon Zero Dawn screenshot with FSR on looks like complete trash to me.
ID: hbq29le
They always look like crap in screenshots. How often are you standing still in a game?
ID: hbqw6nd
There are way more issues than over-sharpening that make FSR suck in motion, and all of them are visible in most of these games.
If FSR were amazing or any good, you wouldn't have Intel also jumping into the scene with their universal/open-source upscaling.
-
FSR seems kind of useless to me considering it has to be done on a per-game basis.
ID: hbnu7ea
All of these solutions, with the exception of FSR, need to be added on a game-by-game basis.
Now, to get the best out of FSR it takes being added into the game directly, but that isn't a hard requirement like it is with the fancier upscalers.
ID: hbqwj4d
FSR is not the exception here! It has requirements just like DLSS and requires code changes.
Both are low complexity if the engine meets those requirements.
Source: https://www.reddit.com/r/Amd/comments/pi28ko/we_were_a_bit_too_ambitious_how_amd_is_upping_the/