- Nvidia CEO Jensen on competition from AMD.
-
Wtf is he actually saying?!
ID: hl4c34x
ID: hl5nu09Hardware is not enough? AMD won the exascale supercomputer contracts, and the national labs are handling the software with AMD.
ID: hl621rjI mean the 2nd one is partially true
ID: hl6nkmlHe's right.
ID: hl55r69As little as possible in as many words as possible.
Edit: goodness gracious!
-Square-3716 PM'd me quite a roast regarding my lack of formal education in this space. I'm "a dumbfuck reddit user that holds no education or important position," according to them. I was merely making light of the fact that companies generally try to give up as little information as possible on these earnings calls, since they know their competition eagerly listens in for any clues.
ID: hl6fakeAw, that's cute. -Square-3716 is so afraid of downvotes and of people seeing how horrible he is that he privately messaged you.
ID: hl6oto8Hello - sorry to hear that they have privately messaged you. I would recommend reporting them to Reddit and muting them. They have been permanently banned here, though the account is only a few days old, so it is likely an alt account.
ID: hl46oofExactly the point.
ID: hl4efn8He is saying that they are still the market leaders, since people are out to kill them, not vice versa.
ID: hl4xtpnWhen you're #1 in front of the line, all anyone else can see is your back and the target on it.
ID: hl5dfqlI think he said Nvidia doesn't have an answer to the MI200, but that it doesn't matter.
On the hardware front, I would think that if the A100's successor were a clear win against the MI200, he would have used different wording, even if he wanted to keep their response secret.
On the software front, he's right, it doesn't matter. AMD has a lot of work to do to crack this nut. Hardware wins are an important step, but that is just showing up to play.
ID: hl88sp5He's right though.
He's just repeating the meme at this point, which is that every time AMD announces a new GPU the world calls it an "Nvidia killer," but Nvidia just set new records with over $7 billion in quarterly revenue and continues to grow.
ID: hl77aizThe interviewer (11/16/2021, from OP's link below) is asking Jensen about the competition. You have to know what the competition is bringing to the table.
Here is what the competition is bringing to the table.
From Anandtech's 11/15/2021 article on Sapphire Rapids and Ponte Vecchio:

| | Intel | AMD | NVIDIA |
|---|---|---|---|
| Product | Ponte Vecchio | MI250X | A100 80GB |
| Transistors | 100 B | 58.2 B | 54.2 B |
| FP64 Vector | unknown | 47.9 TFLOPS | 9.5 TFLOPS |
| FP64 Matrix | unknown | 95.7 TFLOPS | 19.5 TFLOPS |
| VRAM Capacity | 128 GB | 128 GB | 80 GB |
| Manufacturing | Intel 7, TSMC N7, N5 | TSMC N6 | TSMC N7 |
Jensen says that datacenter GPU accelerators only need two things: FP64 performance and memory capacity. And cost, of course.
Comparing the competition in the table above, it looks as if Nvidia may be behind in a couple of those metrics. So competition is fierce. Nvidia also has to share TSMC capacity with AMD, Apple, Qualcomm, Intel, and others on TSMC's N7, N7+, N7P, and N5. It is not all available to Nvidia, or they would need to pay a higher premium.
While we don't know what FP64 performance Ponte Vecchio will bring to the table, we can already see that it will be manufactured on advanced nodes and is expected to have nearly double the number of transistors. So we can expect high FP64 throughput from it when it releases in 2022.
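As a rough back-of-the-envelope check using only the figures quoted in the table above (illustrative Python; these are the vendors' peak spec-sheet values, not measured performance):

```python
# Back-of-the-envelope FP64 comparison using the spec-sheet figures from the table above.
specs = {
    "MI250X":    {"fp64_vector_tflops": 47.9, "fp64_matrix_tflops": 95.7, "vram_gb": 128},
    "A100 80GB": {"fp64_vector_tflops": 9.5,  "fp64_matrix_tflops": 19.5, "vram_gb": 80},
}

vector_ratio = specs["MI250X"]["fp64_vector_tflops"] / specs["A100 80GB"]["fp64_vector_tflops"]
matrix_ratio = specs["MI250X"]["fp64_matrix_tflops"] / specs["A100 80GB"]["fp64_matrix_tflops"]
print(f"FP64 vector: MI250X ~{vector_ratio:.1f}x the A100")  # ~5.0x
print(f"FP64 matrix: MI250X ~{matrix_ratio:.1f}x the A100")  # ~4.9x
```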
So that answers the first part of Jensen's response. The second part of what Jensen talked about is Moore's Law, which basically predicts that the number of transistors in a dense integrated circuit roughly doubles about every two years.
But with this constant march of progress, we will eventually hit an actual physical limit to the number of transistors we can pack into a dense integrated circuit. This is because an atom is roughly 0.1 nm across, and we can't create transistors smaller than an atom, let alone transistors the size of one.
Intel has announced 20A by 2025, with Intel 20A replacing what would previously have been called 2nm in Intel's naming scheme. What comes after 2nm? And beyond? I think this is what Jensen is hinting at: a physical limit on transistor density, after which the only thing that can differentiate hardware will be software. And currently Nvidia's CUDA is the dominant software used with datacenter GPUs.
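To make the Moore's Law arithmetic concrete, here is a minimal sketch assuming the classic two-year doubling rule of thumb and using the A100's 54.2 B transistors from the table as an arbitrary starting point (not a prediction):

```python
# Project transistor counts forward assuming a doubling every two years (Moore's Law rule of thumb).
start_year = 2020
start_transistors = 54.2e9  # A100 figure from the table above, used only as an illustrative baseline

for year in range(start_year, start_year + 11, 2):
    doublings = (year - start_year) / 2
    count = start_transistors * 2 ** doublings
    print(f"{year}: ~{count / 1e9:.0f} B transistors")
```

The point is only that a fixed doubling cadence runs into atomic-scale feature sizes eventually, which is the physical limit Jensen is alluding to.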
edit: a lot of spelling and rephrasing.
-
Jensen is of course right. Kind of...
They have the software stack and the ecosystem.
But the interviewer is right too. They got to where they are because their hardware was without competition. That has changed now.
Believing the software and ecosystem side cannot change is foolish. Mind you, AMD or Intel will need to work hard to crack that nut, but it's not impossible.
ID: hl4emztFor HPC the nut is already cracked... and the systems sold. MI200 already takes the cake... it isn't something they need to "work on" in that context; they already did it... and saying otherwise is insulting to AMD's efforts in that sector over the last few years.
Yes, ROCm sucks if you want to run it on your workstation... but they aren't selling workstations, they are selling servers and supercomputers... smaller unit sales of those chips will take an uptick as that ecosystem improves (and they notably have integrated ROCm into the standard driver stack now, so progress is being made even in that direction).
ID: hl5nn74AMD software is lacking but has gotten better over the years. Still, if someone told me I could have a 6800 XT for £599 or the RTX 3080 for £649, I'd take the 3080 for its features, even if it is lacking in VRAM compared to the 6800 XT.
ID: hl5oqedWhere I am, getting a 3080 Ti was almost the same price as getting a 6800 XT... plus the last 6800 XT I got had terrible coil whine and couldn't overclock/undervolt at all.
I downgraded to a 3060 Ti, sold the 6800 XT at a £200 loss over what I paid for it, then sold the 3060 Ti to a miner for £450 profit on top of its purchase cost, made back the £200 loss mining on the side before I sold it, and ended up spending £890 for the 3080 Ti, minus the profits I made.
Really depends on the market.
-
"If you just give someone an accelerator, what are they going to accelerate?"
His point is that making the hardware itself is relatively simple, but making the software stack that enables the hardware to do things is what's actually hard. It's sort of a backhanded way of saying AMD did the easy part, but still can't compete in the software space compared to Nvidia.
ID: hl4u2fxWhich is somewhat true. Huang realised very early in the game (in the mid 90s) that software is the key. That's why he centralised drivers and bet big on wide adoption of DirectX.
NV isn't just a hardware company, they are a software and services company too.
ID: hl5oj7pI mean . . . Nvidia was almost killed off as a hardware company in their early days because their hardware didn't support the software being used (something about color support for 8bit vs 16bit or something to that effect). So they went around to companies and begged/convinced them to modify their software in order to support their hardware by eliminating certain colors from their software. Even from the very beginning I'd imagine Huang had a heavy appreciation that software was paramount.
ID: hl540ngI think I read somewhere that he said at some point that Nvidia is a software company, not a hardware company.
ID: hl5znq7Actually NV sees themselves more as a SW company now.
ID: hl57bo2If that were the be-all and end-all, then Nvidia would have won the "two exascale-class systems" and not AMD. But in reality, AMD won, even with "lacking software support," which makes you think: is AMD software really lacking?
ID: hl5hvjnThat's what Huang meant with:
Actually, I think this is the absolute easiest space
in the interview.
Software for supercomputers is always heavily modified, or written from scratch, to run on that specific hardware. These machines are one-off builds with different architectures and topologies; each organisation that operates one has its own team of low-level programmers to write optimised code for it.
Because of that, the drivers/libraries/frameworks provided by GPU makers are hardly relevant in supercomputing. Either they're not used at all - in favour of lower-level code - or the software stack will be written and optimised around their quirks.
By comparison, normal consumer and workstation apps are written to run generically on a wide range of hardware, across multiple generations, with as little special-case effort as possible. The focus is on minimising developer time and customer support/troubleshooting, not getting the absolute most out of the hardware.
That's where Nvidia have excelled with the CUDA ecosystem and things like PhysX and DLSS. By contrast AMD have been through a whole string of different compute solutions (OpenCL 1, 2, HSA, HIP, ...), none of which has ever been a reliable option for all recent AMD hardware or maintained for more than a few years.
ID: hl608uwGenerational supercomputer contracts have traditionally been spread among vendors; they are a way to subsidise US tech companies and hedge technological bets using public funds.
Other exascale projects are going to use NVIDIA and Intel accelerators.
A few years ago, Intel got some big wins for DOE/NSF contracts using Xeon Phis. And we saw where that all went.
-
I have no idea what the hell he's trying to say, lol. It seems like a real runaround answer to me. He could have led with saying, yes, it's a great product, but it doesn't have the software. This is just really nonsensical.
ID: hl4ea6eExcept that would be wrong, and insulting to HPC customers... because at this point it is Nvidia that is behind, because they don't provide an open-source solution, which has been a huge selling point for AMD... HPC users have been clamoring for this for at least a decade and even went so far as to write their own HPC drivers for Nvidia...
ID: hl4zuw2You mean HPC customers such as Google, which does not provide official support for AMD in their machine learning API? Or such as Meta (Facebook), which also doesn't provide non-beta support for AMD GPUs in their machine learning API? Or maybe HPC customers such as Amazon, which doesn't have any instance with high-end AMD GPUs for heavy workloads?
Reading your comments, you don't have any idea about the HPC market. Open-source code is a nice feature to have, but first you need software that works, has consistent performance, and has a big ecosystem around it. For example, if you want to do machine learning, an AMD GPU is as useful as a brick: ROCm barely works for machine learning, and when it works it is terribly slow. Nobody is interested in a GPU that requires beta software that may or may not work, and may or may not be fast if it works, even if that software is open source, because people at Google, Amazon, Meta... have better things to do than writing a CUDA clone that works for AMD; they can just buy Nvidia GPUs.
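For what it's worth, here is a minimal sketch of how one might check which GPU backend a PyTorch install is actually using (assuming PyTorch is installed; on ROCm builds PyTorch reuses the `torch.cuda` API surface, so the same calls work on both):

```python
import torch

# torch.version.cuda is set on Nvidia/CUDA builds; torch.version.hip is set on AMD/ROCm builds.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"GPU backend: {backend}, device: {torch.cuda.get_device_name(0)}")
else:
    print("No supported GPU backend found; falling back to CPU.")
```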
-
This is the first time I've seen Jensen acknowledge AMD's competition in this way.
-
He basically thinks that we will eventually reach a limit to how small transistors can get and how much performance we can get out of silicon, and then the rest will be only software. If that happens, the only way to get more performance will be to just put more GPUs in something, not better or faster ones, and once that happens, the hardware performance difference will be small; it's all going to be software.
He is right in a way: every new GPU or CPU is basically an improvement on the last one, with maybe a few new instructions and smaller transistors, accompanied by other new tech like memory, interfaces, etc. The software side of things doesn't get enough time for polish because it's not needed; why optimise drivers and software for a piece of hardware for 5 years if something else is coming next year anyway?
He is right, but also I think he is a bit naive if he thinks that we are close to that point in time.
ID: hl4egtdVery good explanation. But sadly node shrinks are coming to an end. No new tech has been invented yet that will replace current tech.
So, software and AI seem to be the only way to get better performance in coming times.
ID: hl4gstkI am not so sure about that, though. Even without node shrinks, there are still a lot of improvements possible in hardware design. Just recently AMD proved that they can improve their Zen cores on the same node; look at Zen 2 vs Zen 3, and that was in only one or two years. Now imagine if all the focus is just on that.
The number of transistors and their speed has been pushing development to new heights; every 2 years we have been getting a huge performance boost, unless a company was systematically failing at it (Intel). However, the microcode, architecture, and underlying systems have basically just been improved bit by bit, not in a major way. We still use the x86 CPU instruction set, for god's sake, and how old is it, more than 40 years? Now imagine a world where, let's say, 1nm is standard, everyone has it, yields are almost 100%, and we can't go lower. This is where actual hardware design comes into play, along with software and driver support. I would guess this is one reason why AMD is also interested in Xilinx and FPGAs, and they have also been dabbling in quantum computers. I believe Jensen is a bit naive here, or maybe he is looking for excuses because for the first time in a long while he has real competition.
ID: hl5imheIt looks like there will be pretty significant process advances over the next 5 years. Add on top of that a further push into multi chiplet processors, and there should be very large increases in hardware performance over the short and mid term. I don't have any problem seeing a potential ~10x improvement from where we are now.
I mean it did look like process tech was hitting a wall a few years ago. But not currently, there is a lot on the roadmap that looks quite achievable in the near and mid term.
Long term? Yeah, something is going to have to replace silicon. I think it's a good bet that something will probably be found. It would be extremely hard for me to believe that humanity is anywhere close to the limit of what is possible. Breakthroughs can happen tomorrow or 100 years from now; they are not predictable.
ID: hl6i34sLook at TSMC's roadmap... AMD is just hitting 5nm, but it's been out for years.
4nm is on track for next year, and 3nm for Q1 2023. That's a full node ahead of where AMD is heading.
2nm (another full node) should arrive in 2025.
Node shrinks are still here for at least another 3 years.
-
He has a point. ROCm is still not mature and has little adoption.
ID: hl4f5ibIt's not mature for workstations... it IS mature for HPC... where those cards sell.
-
The "but it's two chips" excuse is lame.
Look at number of transistors.
MI200 is within 10% of number of transistors in A100.
It's just smarter design (and different performance focus too, heavier on fp64)
ID: hl4aevxThe "but it's two chips" excuse is lame.
That was said by the interviewer, why would it be an excuse?
ID: hl55rdeThe "but it's two chips" excuse is lame.
There was no 'excuse' being dealt out here. It was just descriptive.
As usual, y'all are in endless persecution mode.
ID: hl4u79tIsn't wattage what's more important? 560 W versus 400 W, so not really double, but still an increase. Given that it is also a new gen 1.5-2 years later, it doesn't look too impressive. But yes, king in FP64 as of now, indeed. FP16 is barely a 20% increase for a 40% power increase, meh.
Still, a very good job by AMD; they are getting much closer, at least in hardware.
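A quick sanity check of that perf-per-watt point, using only the numbers quoted above (the ~20% FP16 gain is the commenter's rough figure, not an official one):

```python
# Perf-per-watt sketch using the wattages and the rough FP16 gain quoted in the comment above.
mi250x_watts, a100_watts = 560, 400
power_increase = mi250x_watts / a100_watts - 1            # ~0.40, i.e. "40% more power"

fp16_gain = 0.20                                          # commenter's rough "~20% faster FP16"
relative_perf_per_watt = (1 + fp16_gain) / (1 + power_increase)

print(f"Power increase: {power_increase:.0%}")                     # 40%
print(f"FP16 perf/watt vs A100: {relative_perf_per_watt:.2f}x")    # ~0.86x, i.e. slightly worse per watt
```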
-
Software is definitely Nvidia's strong point. CUDA nowadays has total dominance in many parts of the market. This is not an exaggeration, it's go CUDA or go home.
Nobody has managed to break this dominance so far, and several companies have tried, for many years.
ID: hl5pwhcBlender ditched OpenCL in the latest version, and the only things left are CUDA and CPU rendering.
AMD did offer a ROCm-based solution, but that thing only runs on RDNA2.
In research no one willingly uses AMD GPUs because CUDA was and still is the easiest, most documented API there is.
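Part of why CUDA is so sticky in research is how little code it takes to get a kernel running from Python. Here is a minimal sketch using Numba's CUDA support (assumes an Nvidia GPU, working drivers, and `numba`/`numpy` installed; the commenter's point is that AMD equivalents have historically been far less turnkey):

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(a, b, out):
    # Each GPU thread handles one element.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](a, b, out)  # Numba copies the NumPy arrays to and from the device

assert np.allclose(out, a + b)
print("CUDA kernel ran via Numba")
```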
-
Ok yeah, so the competitive pressure is not knowing what your competitors are up to and how far they will jump with their next product. Nvidia doesn't want to get stuck in a rut like Intel did, and they know the competition can catch up, as AMD has proved with CPUs.
As for the AMD dig, it's kinda funny coming from the reporter. It's all on the same package, so it doesn't really matter that there are two chips on it; it really helps increase the compute density you can fit in a single server.
-
Can confirm as the former owner of a 6900 XT: the "Nvidia killers" never quite live up to the hype.
-
It sounds like the ramblings of a crazy person.
-
What did Intel and Nvidia do when they achieved a comfortable leadership in hardware tech? They switched focus to software.
People are saying that software is the hard part. That's laughable to me. The talent pool for chip hardware R&D/engineering is far smaller than the software dev pool.
What history has shown (with Intel, Nvidia, and AMD) is that software for your specific hardware won't matter if your hardware can't compete. That's why it's not surprising to see AMD starting to catch up on the software front now in both the CPU space and the GPU space. And it's pretty clear to me that ML is next on AMD's hit list, given how much they talk about it of their own volition in their conversations about the future.
Nvidia had the luxury of at least 3 GPU generations of hardware lead that allowed them to shift focus to software. But lately AMD seems to be doing excellently on all fronts, and I suspect we will see them catch up on the software front.
Hardware is the bait and software is the hook that won't let go - the thing that keeps you wanting to stay on their hardware.
-
First off, Jensen speaks incredibly poorly... second, even if he were being crystal clear, I doubt AMD diehards would get the hint.
-
Old clip of Jensen talking about Moore's law.
-
I like the part where he says GPUs should be like Gucci bags.
-
at least the price is already there
-
Lol, two GPUs. Denial is strong in that one.
-
Do you realize Jensen didn't speak about the two GPUs at all, and that it was the guy who asked the question?
-
But why was it even brought up? It isn't a valid criticism of CDNA2 or RDNA3... neither of which will see much negative impact from the multi-chip approach.
-
Do you realize I didn't say a word about the deer leader?
-
Why would an interviewer be in denial?
-
I don't know? Probably for similar reasons the pricks at Intel were spewing bullshit about "glued" Zen. It was nice of INTC to give them a pink slip though.
-
Why else... interviewer is a shill.
Two GPUs is not a disadvantage for HPC; it's a density advantage. They don't care as much about performance per GPU as they do about performance per rack.
-
He's right
-
Here is a Petition to NVIDIA to continue providing driver support to GTX 600 and 700 series
-
Still not buying Nvidia cards, go figure... Jensen.
-
Ya see what I mean? This dude shows up here to post only Nvidia stuff, would say "idk what you mean, I'm here for competition because it helps us!", and lies about the multiple accounts he runs on Reddit to troll.
-
Both my PCs are AMD.
Did you even read it?
Nvidia's CEO said AMD is providing serious competition.
Also, this is a business, not for the weak-hearted, as anything can happen.
-
Why are you posting this like it's some sort of surprise?
AMD, and formerly ATI, has always been a huge competitor in the consumer GPU market and has topped Nvidia multiple times.
Shit, I remember when I got my HD 5870 for the Eyefinity, and Nvidia's top card at the time was the 9800 GTX, I think. Anyway, the 5870 destroyed it.
I remember articles talking about how Nvidia rushed out the X2, basically two 9800s in SLI on a single card, to compete with the 5870 and 5890.
-
And you denied having multiple Reddit accounts, but there was proof showing you lied. What's up with that? Weak-hearted people can't admit that they are a long-time troll with many accounts?
Did you even read it?
Did you see the proof? Why'd you lie if it's just business?
-
What
My brain is off, it's late, I'll read it tomorrow, cheers.
-
The industry always asks "who will have the Nvidia killer?" The real question someone should ask is, "Which alternative GPU will give Nvidia good competition?" In my opinion, the Navi 21 XTXH made them rethink some of their strategies, and Intel's GPUs will really present some marketing difficulties for Nvidia to stay dominant.
There is no killer. But there are market dominance disruptors.
-
I got really lucky finding an AMD GPU at an honest price, but it shocks me how high prices for AMD products are in some countries. Why is that? Is it because AMD CPUs are being snatched up by the new CPU-based crypto farmers?
-
*your operations
-
This reads a lot better if you read it in Trump's voice. It really gives you the subtext of the words.
-
Jensen is so far from Trump that you couldn't be more wrong. He built the most valuable semiconductor company on the planet.
-
I wasn't saying he was like Trump, just that the wording sounded like it. Very choppy, and almost like it jumped from thought to thought.
-
What would make AMD an Nvidia killer is if their drivers matched or beat the competing cards' drivers at launch and throughout the product's lifespan.
I love my 6900 XT, but it's had its fair share of driver issues.
-
Sorry to all AMD fans, but the supercomputer market is Nvidia's, and they won it many years ago. Why did Nvidia win it? Because they took it seriously.
The same happened with RTX and DLSS. Nvidia took them seriously. As proof that AMD doesn't take things seriously, just look at the FSR fiasco: Nvidia shipped a driver update with Nvidia Scaling for all games, while FSR requires work from the game publisher.
Nvidia is much better for the end user.
-
my 6900XT joins the chat: ayo bro I kill your top tier GPU with 80 watts less.
-
AMD is no slouch when it comes to software. Jensen should get a bit of a reality check. When Microsoft was unwilling to optimise DirectX around AMD's draw-call-heavy drivers, AMD simply made their own frickin' API (Mantle), and Microsoft was then forced to update DirectX. When it comes to the user interface, Nvidia's drivers are still stuck in 2003, while AMD pushed for impressive control over the hardware. And that was before the Zen server money started rolling in, back when AMD's GPU software team was really small!
-
Yeah, AMD software looks good, but it’s actually pretty unstable and bad TuT
Source: https://www.reddit.com/r/Amd/comments/qwlxla/nvidia_ceo_jensen_on_competition_from_amd/
Basically, his main points are:
* Every year some people say there is an "Nvidia killer"; Nvidia takes it as intense, serious competition.
* But hardware is not enough if you are behind in software and ecosystem.