JPMorgan Conference: Lisa Su confirms 3D stacking is on the AMD roadmap, will say more in the coming months

1 : Anonymous2021/05/24 22:40 ID: nkag6n

This is the transcript from the JPMorgan 49th Annual Global Technology, Media and Communications Conference, where AMD CEO Lisa Su gave a presentation.

- 3D stacking definitely on the roadmap
- Different technologies along the price curve
- Highest-performance products can afford different elements, but cost-sensitive products would be less complex
- 2.5D and 3D packaging a key element to unlock the next level of performance

Lisa Su

Absolutely, Harlan. Look, we've been sort of a leader in this idea of sort of advanced packaging and how do you use silicon for its best performance and feature set. Really, this is the key aspect of innovation. When you think about sort of all that's said about Moore's Law slowing, it means that you're getting performance gains by going to smaller geometries, but not necessarily the same gains that you got a few years ago.

We were very early in sort of the idea of using 2.5D packaging with high bandwidth memory together with our GPUs, as well as using a chiplet architecture to really get the incredible performance that we're seeing with each generation of EPYC. You'll see us continue to innovate on that roadmap.

3D chip stacking is definitely on the roadmap. We see it as another tool in the tool chest, as you think about how do you put these different pieces together. I think what you will also see is you might see different technologies used along the price curve. You can imagine the highest performance technologies can afford different elements, and then as you get into more cost sensitive, you might not be able to use all that complexity. Think about it as AMD will push the envelope on 2.5D and 3D packaging as we go forward because it's a key element to unlock that next level of performance. Again, we'll talk a little bit more about that as we go through the next number of months as we roll out the next phase of our roadmaps.

2 : Anonymous2021/05/25 05:52 ID: gzd1ksn

It'll be interesting to see how they leverage Xilinx's experience with 3D ICs into the Ryzen line (assuming they keep the Ryzen name for 3D CPUs).

I work with some Xilinx FPGAs, and while I've never worked with any of the 3D UltraScale parts, reading up on their performance has been pretty cool.

ID: gzdc0bn

I would think this is a not-so-distant possibility regarding Zen/FPGA integration:

And they're also already working on the software stack:

ID: gzd3kfo

I think they'll keep the Ryzen name. Ryzen and Epyc are what put AMD back on the map. Why would they not leverage those brands for their most innovative future product lines?

ID: gzd6v78

A new product name carries no existing price expectations. Maybe the low-end 3D IC stuff starts around Ryzen 7 pricing and goes up from there. They wouldn't want Ryzen customers to see it as a price hike.

E.g. the RTX 2060/GTX 1660 divergence after the GTX 1060: the 1660 is there to make the 2060's price jump more acceptable to 1060 customers.

3 : Anonymous2021/05/24 22:41 ID: gzbrvs6

Search for the source link on Google; the bot removes links in posts.

4 : Anonymous2021/05/24 23:05 ID: gzbuqa8

Or, in other words: "Don't expect this in consumer products anytime soon."

ID: gzc2e42

Cost-sensitive makes me think APUs and embedded devices.

(Though I'd really like to see an APU with an HBM cache.)

ID: gzd4auj

I don't think they will do it anytime soon. They have other solutions to the bandwidth problem for now. But in the long run it would be cool to see an APU that could be configured with an InFO bridge to a low-cost HBM chiplet. Just a single stack at 200GB/s would be a game changer for APUs. The only problem is that if you add enough GPU horsepower to take advantage of that bandwidth, it would be wasted die area on a chip configured without HBM.
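To see where that ~200GB/s figure comes from: peak DRAM bandwidth is just interface width times per-pin data rate. A minimal sketch (the pin rates are assumptions, typical speed bins for HBM2 and DDR4-3200):

```python
# Peak DRAM bandwidth (GB/s) = interface width (bits) * per-pin rate (Gbps) / 8
def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits * pin_rate_gbps / 8

# One HBM2 stack: 1024-bit interface at ~1.6 Gbps/pin (a conservative bin)
print(peak_bandwidth_gbs(1024, 1.6))    # 204.8 -> the "single stack ~200GB/s"
# Dual-channel DDR4-3200: 2 x 64-bit channels at 3.2 Gbps/pin
print(peak_bandwidth_gbs(2 * 64, 3.2))  # 51.2 -> what current APUs live on
```

So one modest HBM stack offers roughly four times the bandwidth an APU gets from dual-channel DDR4 today.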

ID: gzegp26

Speaking of APUs, Ryzen 4000s are still nowhere to be seen on the DIY market.

ID: gzcjglz

Which is fine, because AMD is quite competitive right now. Neck and neck with Nvidia in terms of pure GPU performance (minus the RT, of course; hopefully AMD's next-gen RT will be closer to Nvidia's next-gen offerings), and ahead of Intel in CPU performance.

I'd rather see this in consumer products once it has matured and the early issues are solved, instead of having it rushed out to market.

ID: gzd47ck

At low resolutions they're even ahead of Nvidia in pure GPU performance. But their real advantage is probably the architecture: they have an efficiency edge in both area and power, and that will have implications for scalability. Then again, Nvidia also knows Turing and Ampere were not its best work and is making uarch changes as well.

ID: gzcspxh

Neck and neck with Nvidia in terms of pure GPU performance

I wish this were true, but it's not. AMD is spending expensive die area (the large Infinity Cache) and cutting-edge fabrication (TSMC's 7nm node) to barely match Nvidia's rasterization performance on the vastly inferior Samsung 8nm node.

They won't catch up to Nvidia on tensor processing performance, because that has been Nvidia's bread and butter ever since they got into the machine-learning sector; high-dimensional tensor processing is basically Nvidia's core offering now.

For Nvidia to decide that AMD is a real threat in the consumer GPU market, they'd need to see a total inversion of market share, something like 80/20 in AMD's favour. That's highly unlikely: AMD have been ahead in price-to-performance many times in the past, but Nvidia's marketing kept them competitive, and AMD would need to be winning by a clear margin for several generations to alter the average consumer's impression. The safety net Nvidia have is their absolutely astronomical lead in HPC, where AMD don't have a single product that is even on the radar. Machine learning is the future and Nvidia utterly dominate that space; consumer GPUs are a sideshow.

ID: gzcpku0

No, it's "don't expect it on mobile" (basically the opposite of Intel's strategy)... we already have chiplet CPUs, and stacking is basically one notch above that. I expect next-gen GPUs may move some RT logic into the memory dies, similar to how Samsung now has an HBM variant that can do compute.

ID: gzdhl1z

Stacking a logic die on top of an interconnect is still a fair bit different from stacking a logic die on top of a logic die. We can say it's the next step in the same way that going to Mars is the next step after going to the Moon. It's not impossible, it's just likely to require some very expensive solutions.

I expect next-gen GPUs may move some RT logic into the memory dies, similar to how Samsung now has an HBM variant that can do compute.

I would similarly not assume that memory chips with compute on them will be consumer-focused solutions anytime soon. These are going to be lower-volume, more complex chips. For the same reason that HBM stopped being used for consumer GPUs, I don't see how this would be any different.

5 : Anonymous2021/05/25 05:36 ID: gzd08k8

What is 3D stacking/2.5D stacking?

Is that about the architecture of the die, or is it literally having chips in layers, like two entire boards/chips stacked on top of each other, kind of like CrossFire but as one product? TYIA

ID: gzd5z9b

Managing heat in such a device must be incredibly hard.

ID: gzd6sp6

Thanks for the link. Mostly what I was imagining from the name.

6 : Anonymous2021/05/25 01:56 ID: gzcefln

So that's a buy on the stock, right?

ID: gze29vo

As it has been for the past 4 years, always buy AMD.

ID: gzfogtd

It's been known since last year that AMD was coming with a 3D-stacked product, but it being only stacked memory, and exclusive to Epyc, is not exciting at all. Especially since it's Milan-X, i.e. Zen 3 being reused in 2022 for Epyc, instead of launching with Zen 4 cores. It's progress, but the most half-assed move.

7 : Anonymous2021/05/25 10:35 ID: gzdke89

I know this is probably nerdy of me, but when Lisa Su talks about "putting all the different pieces together" I just think about all the disparate technologies AMD has, and it's kind of fun to imagine where they might go.

Imagine a Threadripper-sized chip with 12 high-performance CPU cores, XXX high/medium-performance GPU cores, and HBM shared across them. (I use XXX because I have no idea what a good number of GPU cores would be.) One of the biggest bottlenecks in APUs has always been that they have to use system memory, DDRx, which is much slower than GDDR graphics memory; perhaps with HBM, AMD could start to close that performance gap a little bit.
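To put rough numbers on that gap, here is the same width-times-rate arithmetic (assuming the 6700XT's 192-bit bus with 16 Gbps GDDR6 and dual-channel DDR4-3200 as the reference points):

```python
# Peak bandwidth (GB/s) from bus width (bits) and per-pin data rate (Gbps)
gddr6_6700xt = 192 * 16.0 / 8  # 384.0 GB/s on the dGPU
ddr4_dual = 128 * 3.2 / 8      # 51.2 GB/s of system memory, shared with the CPU
print(f"dGPU has ~{gddr6_6700xt / ddr4_dual:.1f}x the bandwidth")  # ~7.5x
```

That ~7.5x gap (and the fact that the iGPU has to share its slice with the CPU) is the bottleneck HBM would attack.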

How cool would it be to go out and buy an AMD APU with the 2031 equivalent of a 5700X, a 6700XT, and 16GB of shared HBM on board? Especially if developers took the time to add multi-GPU optimizations to their games, so that a user could go out and get a dGPU and use the iGPU as a graphics booster (or something like that; it wouldn't be 100% scaling, nothing ever is, but even getting 50% of a 6700XT would be a hell of a boost for most gamers).

I think an enthusiast-class APU would be cool, especially if game developers went the extra mile. Unfortunately for me and my idea: there's not really market demand for enthusiast-class APUs; there are engineering limitations for the chip, like power delivery and heat; and unless AMD's new chiplet-based GPUs require coding from developers similar to multi-GPU setups, it's unlikely that the software will advance to the point where an enthusiast APU would be very useful.

A mini-ITX build with a 5700X CPU, a 6700XT iGPU, 16GB of HBM, 32GB of DDR4, and a 6900XT dGPU would be really, really cool; or at least I think so.

ID: gzdw67g

Especially if developers took the time to add multi-GPU optimizations to their games

You're thinking on the wrong level. What you want is for this stuff to be implemented by middleware developers so that game studios don't have to deal with any of it (or, at least, just have to follow some simple guidelines around assets and their render pipeline). If Unreal Engine, Unity, and the like can all offer these kinds of optimizations "out of the box", everyone wins.
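As a sketch of what "out of the box" might look like at the middleware layer (everything below is hypothetical; it is not quoting any real engine API), the idea is that game code submits a single draw list per frame while the engine decides how to spread frames across the available GPUs:

```python
from dataclasses import dataclass

@dataclass
class Adapter:
    """One GPU as the hypothetical middleware sees it."""
    name: str

@dataclass
class MultiGpuRenderer:
    """Alternate-frame rendering: whole frames round-robin across GPUs."""
    adapters: list
    frame: int = 0

    def submit(self, draw_list):
        # Game code never picks a GPU; the middleware routes each frame.
        gpu = self.adapters[self.frame % len(self.adapters)]
        self.frame += 1
        return f"frame {self.frame} ({len(draw_list)} draws) -> {gpu.name}"

renderer = MultiGpuRenderer([Adapter("6900XT dGPU"), Adapter("6700XT iGPU")])
for _ in range(4):
    print(renderer.submit(["shadow pass", "opaque pass", "post"]))
```

Plain round-robin is also exactly where a naive approach breaks with asymmetric GPUs: the weaker iGPU's frames take longer, so real middleware would have to weight the schedule, which is precisely the complexity you want hidden from game studios.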

ID: gzfpm92

But you see what I'm getting at, right?

AMD has a lot of interesting tech, and that tech could play very, very well together, if all the stars aligned.

Or, put differently, AMD has all the hardware more or less sorted, it's just a matter of whether or not the software catches up.

8 : Anonymous2021/05/25 08:29 ID: gzdcbi3

I wonder if AMD should start looking into RISC-V adoption down the line. AMD64 is kind of reaching a point where it's getting harder and harder to eke out more horsepower because of the complexity of the instruction set. It's going to be fine for maybe another 5 years, but we've already seen with Apple's use of ARM processors how much performance and how much lower energy usage a tighter spec can bring. It just seems like a no-brainer to put AMD technology into RISC-V and give us an interesting offering.
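One commonly cited mechanism behind that claim is instruction decode: fixed-length encodings (ARM/RISC-V style) let a wide front-end find many instruction boundaries in parallel, while x86's variable-length encoding makes boundary-finding inherently serial. A toy sketch (the byte lengths are made up purely for illustration):

```python
# Fixed 4-byte instructions: boundary k is simply 4*k, computable independently,
# so a wide decoder can crack many instructions per cycle in parallel.
fixed_starts = [4 * k for k in range(8)]

# Variable-length instructions (x86 allows 1-15 bytes): each boundary depends
# on the previous instruction's length, so finding them is serial work.
lengths = [3, 5, 2, 7, 4, 1, 6, 2]  # made-up lengths
variable_starts, pos = [], 0
for n in lengths:
    variable_starts.append(pos)
    pos += n

print(fixed_starts)     # [0, 4, 8, ...] all known up front
print(variable_starts)  # each entry depends on the one before it
```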

ID: gzdsmac

With such huge cores, the penalty of a more complex instruction set becomes relatively small. Apple's cores are good because they're well-designed cores for what they do, not because they're ARM.

ID: gzdtl0o

Well, it's a bit of both: there's a lot of cruft in x86, and Apple's cores are well designed. A lot of the lower power consumption does come from how ARM is designed, though. There's a reason Atom failed and ARM succeeded.

ID: gzdjx06

Easy to say, harder to do. Apple's lockdown on the software and the OS makes this far easier. Windows, on the other hand...

ID: gzdlfxa

Well, offering a chip and a mobo to target would be a start.

ID: gzdozlf

Nvidia is clearly up to something with ARM (aside from the acquisition), so it would follow that AMD is too.

ID: gzdpp9z

Well, if Nvidia started offering first-party CPUs (aside from Tegra, which is targeted more at mobile), I'd say that would be the next step. If AMD pushed RISC-V as the alternative, it would be fairly on-brand for them, given their Linux work as well.

ID: gze3b3u

They are definitely looking into it, but I doubt they will invest heavily before they are threatened by RISC-V products somewhere.

9 : Anonymous2021/05/25 11:05 ID: gzdmmrq

Nothing new then.

10 : Anonymous2021/05/25 04:12 ID: gzcsu3b

Makes sense. There's nowhere else to go in the short term.

11 : Anonymous2021/05/25 04:47 ID: gzcw2ks

I assume the caches, memory subsystems, and interconnects go nearest the PCB, and the CPU cores nearest the IHS?

12 : Anonymous2021/05/25 04:50 ID: gzcwd83

[deleted]

ID: gzd4zya

No one ever cancelled HBM in general, only for consumer products. It's still commonly used in professional and enterprise solutions; its biggest problem for consumer-oriented chips is cost.

ID: gzd6cia

Apple added a Radeon Pro 5600M with 8GB of HBM2 memory as a BTO option for the 16" MacBook Pro; that's a $700 upgrade over the base 5500M with 4GB of GDDR6. HBM is not dead, it's just expensive.

Source: https://www.reddit.com/r/Amd/comments/nkag6n/jpmorgan_conference_lisa_su_confirms_3d_stacking/
