AMD confirms Aldebaran CDNA2 GPU for Instinct MI200 has ‘primary’ and ‘secondary’ die

1 : Anonymous2021/06/09 12:25 ID: nvuo0l
AMD confirms Aldebaran CDNA2 GPU for Instinct MI200 has 'primary' and 'secondary' die
2 : Anonymous2021/06/09 13:42 ID: h15h29b

Please make your other GPUs capable of machine learning too, AMD. Not everyone can purchase server GPUs. Give people options. Don't forget to improve your stack and maintain your documentation as well.

A sincere request from someone who wishes to use your GPUs and write some code.

ID: h15txb7

+1

On NVIDIA cards I can run the same CUDA workload on cheap laptop GPUs or on enterprise-grade monster cards.
Sure, one will run much slower than the other, but I can still develop on a local chip.

I like my AMD card but this is a real downside.
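
To make that concrete (just a rough sketch, assuming a PyTorch-style workload; the toy model and sizes here are made up): the exact same script targets whatever CUDA device happens to be present, whether that's a cheap laptop chip or an A100.

```python
# Sketch: the same PyTorch code runs on any CUDA-capable GPU,
# from a low-end laptop chip to an A100 -- only the speed changes.
import torch
import torch.nn as nn

# Pick whatever accelerator is present; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:", torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU")

# A toy model and one training step, identical on every device.
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 128, device=device)          # fake batch
y = torch.randint(0, 10, (64,), device=device)   # fake labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```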

ID: h197i80

This matters a TON. I bought 4x Vega 64's at launch to test them with my typical ML workflows on my workstation. They were gorgeous cards but the software and drivers were a mess. They were up on eBay within a week.

AMD would sell a hell of a lot of cards if they could get OpenCL / ROCM or whatever they're calling it nowadays to work right.

ID: h15tihb

The importance of this can't be overstated... the ability to play around with CUDA on consumer hardware and then move up to enterprise has been instrumental in adoption.

AMD screwed the pooch on its OpenCL support, IMHO... they were doing so well.

ID: h160mc7

You would be amazed at the number of people who don't understand this. I have tried suggesting SR-IOV support for two partitions in consumer cards (at least higher-end ones) and people just shit on the idea because THEY don't need the feature, despite the fact that the hardware supports it for just this reason. It enables coding, debugging, and familiarization on lower-end hardware for lab use, with production workloads running on enterprise hardware. If you don't open up low-end hardware, you don't get the benefits of open-source enhancements people code on their own time.

ID: h17hast

Lowering the barrier to entry on a hardware level is immensely attractive for a variety of reasons

ID: h16r7yc

I am an AMD diehard. For years they ran my multi-threaded photogrammetry workloads like champs, but I just bought a Jetson Xavier XL for exactly the ML and CUDA applications we're all mentioning. I need that compute capability, and it has applications all over my research and work.

ID: h16f10j

This. Trying to build a watercooling solution for the MI25 I have lying around has been an absolute nightmare. Not everyone has the money to build an entire datacenter complete with hot/cold aisles to make up for this nonsensical market segmentation.

Some of us want an SR-IOV capable card with video outs and no bizarre PCB layouts.

ID: h16mwrw

Yeah, this is so embarrassing. My last couple GPUs were AMD but I had to buy an Nvidia card recently for work because I can't for the life of me get OpenCL working on my RX 6800 in Linux.

ID: h17bcxt

OpenCL works on my 6800 XT in Linux; it's the entire ROCm stack that isn't fully supported.
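
For what it's worth, here's a quick way to see what the OpenCL runtime actually exposes (a sketch assuming pyopencl is installed; the output obviously depends on your driver setup):

```python
# Sketch: list every OpenCL platform/device the runtime can see.
# On a 6800 XT this typically shows AMD's OpenCL driver even when
# the rest of the ROCm stack isn't officially supported.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
        print(f"    Compute units: {device.max_compute_units}")
        print(f"    Global memory: {device.global_mem_size // (1024 ** 2)} MiB")
```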

ID: h16vjay

Just to chime in, this is the main reason I do not buy AMD graphics cards. I prefer to dabble with an old card to make sure it works before I scale up.

ID: h178pl4

It is crazy that the "cheapest" AI/ML platform at the moment is probably the M1 Mac mini. Apple has invested a TON of die space and resources into their ML tooling, and having it freely available on a ~$600 full computer is really nice.

ID: h17lzoc

Bad Idea!

Apple has pretty much already demonstrated that optimized hardware is the way to go.

What AMD needs to do is put a smaller CDNA2 cell onto their APUs and discrete processor chips so that ML acceleration is available everywhere. You would only go to separate CDNA accelerators when high performance is needed.

ID: h16by9h

Yea, both Nvidia and, IIRC, to some extent Intel understand this. Intel's compute stack is very new so it's not as mature, but it'll run on anything from a 20 W laptop iGPU to their insane 48-tile supercomputer accelerator. Same for Nvidia: an old 900-series card can run CUDA, and an A100 can run the same stuff.

ID: h16loz0

Intel definitely understands this now, after crippling and botching their attempt at an x86 accelerator add-in card. Everything they've done on the GPU side recently has been about improving ease of adoption. Imagine if they had understood it over ten years ago, when Nvidia was just getting some market interest. Letting that Larrabee project crash and burn has to be one of their biggest mistakes of all time.

ID: h183z9i

It's not just about doing ML, it's about using GPU compute as a consumer. More and more regular applications are NVIDIA only because they rely on GPU compute to do image filtering, noise suppression, even game physics.

You want to edit video? NVIDIA. You want to use the new Blender renderer? CUDA only. Want your holiday picture edits to be snappy? Not if you have AMD. Remove noise during a zoom call? Take a guess.

It doesn't matter if the cards are good for games. GPUs are becoming necessary for general computer use, and AMD is unfortunately just not an alternative.

ID: h19fgny

Blender has non-CUDA paths, I don't know where you got that from. Denoising and video applications are accelerated on AMD GPUs. Not that much software relies on locked-in platforms like CUDA outside of the ML space.

ID: h16kdks

I can think of a certain pending acquisition that will help with that.

ID: h17m28f

Hi, will AMD FidelityFX FSR come to old-gen Xbox consoles without buying an Xbox Series X console?

ID: h18bxkl

Very likely not. The CPUs in last-gen consoles are really weak.

3 : Anonymous2021/06/09 19:39 ID: h16xn2z

I mustn't be the only one who read Alderaan

ID: h18bosk

Run to Dagobah

ID: h18ngfo

I felt a great disturbance in the Reddit, as if millions of upvotes suddenly cried out in terror and were suddenly silenced.

ID: h17csbv

Literally thought that was the name until I saw your comment.

ID: h18v4a6

I did and didn't even realize that's not what it says until I read your comment.

ID: h18nxj6

I sense a disturbance in the FLOPs.

ID: h19qzy6

Giving the codename "Tatooine" to one of the first high-end dual-die GPUs for consumers would be a perfect fit.

4 : Anonymous2021/06/09 15:55 ID: h1604yb

This is useless if their consumer GPUs don't have the same functionality for compute, because then programmers who use consumer GPUs can't learn to code for it. Not everyone just jumps into server GPUs. The biggest reason CUDA is so successful is that their consumer GPUs support it the same as their server compute GPUs, so programmers who learned to program on Nvidia consumer GPUs end up preferring to code for something they already know... Nvidia server GPUs that run CUDA the same.

ID: h16hh5u

I've worked at companies that used FPGAs and custom ASICs for our workloads. "Learn to code for it" is not the bottleneck for adoption. If you're a company that needs this type of compute, getting your engineers a new tool can be way more affordable than trying to use their competitor because they "already know it", especially if the concepts are generally the same.

ID: h16vhm7

Right, bet that's why no one is considering ROCm over CUDA

ID: h16l4gd

What kind of functionality does RDNA2 lack?

ID: h175uzs

Try to do OpenCL development or even machine learning on an RDNA2 GPU.

Even with the powerful 6900 XT, and even using the latest ROCm available on a supported Linux distro... you can't.

Meanwhile anyone with a GTX 1050 can develop CUDA code and use machine learning before eventually upgrading to an RTX 3090 or some big server-only NVIDIA GPU.
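
To illustrate the gap, here's roughly the sanity check people run (a sketch; it assumes a ROCm build of PyTorch, and whether the GPU shows up at all is exactly the problem on RDNA2):

```python
# Sketch: check whether this PyTorch build can actually use the GPU.
# ROCm builds of PyTorch expose the AMD GPU through the "cuda" device
# name, so the same check covers both vendors.
import torch

print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)                      # None on ROCm builds
print("HIP/ROCm build:", getattr(torch.version, "hip", None)) # None on CUDA builds

if torch.cuda.is_available():
    print("GPU visible:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)
else:
    # This is the branch many RDNA2 owners end up in.
    print("No supported GPU found; falling back to CPU.")
```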

ID: h19bocg

Why? Server applications can be so different and can scale so differently that you don't even need to "code for it." It's probably still going to use OpenCL, just with different coding techniques/libraries.
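
Rough example of that point (a sketch, not anyone's production code): a plain OpenCL kernel is vendor-neutral, and the "server" part is mostly about how you batch and schedule work around it.

```python
# Sketch: a vendor-neutral OpenCL vector add via pyopencl.
# The kernel itself doesn't care whether it runs on a laptop iGPU
# or a datacenter accelerator; only the scheduling around it changes.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()   # picks whatever device is available
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print("Max error:", np.max(np.abs(result - (a + b))))
```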

5 : Anonymous2021/06/09 20:54 ID: h178r2h

Advanced node, that must be 5 nm.

Also has HBM2E... I wonder how fast this will be.

6 : Anonymous2021/06/09 14:08 ID: h15kjx3

Is CDNA 1 even out yet? Haven't seen any info on it

ID: h15mpfk

Yes, it was launched late last year; the white paper is out too.

ID: h1636xw

It's an enterprise-grade card. Enormous die size, HBM, RAS features, etc. This means it's not really a card offered in retail eshops.

7 : Anonymous2021/06/09 14:07 ID: h15kfh3

AMD 295X2 is calling....

Jokes aside, I'm curious how it will scale and how big they'll build it.

ID: h17ipuh

[deleted]

8 : Anonymous2021/06/09 13:47 ID: h15hqgi

That's no moon...

9 : Anonymous2021/06/09 21:46 ID: h17g6gk

Isn't this just two compute dies glued together with Infinity Fabric instead of a PCIe switch?

It doesn't have anything to do with a GPU made up of independent dies.

ID: h19r9s1

The expectation is the dies share a substrate and have a unified memory pool.

10 : Anonymous2021/06/09 21:49 ID: h17gna4

Amusing how AMD's ML hardware comes up every few years. That ship sailed years ago; I was writing OpenCL kernels in 2016 in an attempt to gap-fill the lack of library support, but even then OpenCL was a dying standard for ML.

As far as I'm concerned, AMD's position in AI has only become more precarious. The software support is still bad (i.e., ROCm), and they have massively fallen behind on the hardware side due to Nvidia's Tensor cores. Don't get me wrong - I dislike Nvidia's business practices, but AMD needs a major strategy shift if they want to capture any of the AI and business GPU market.

11 : Anonymous2021/06/09 23:54 ID: h17wk5c

I can't wait for Izlude.

12 : Anonymous2021/06/10 02:53 ID: h18i8le

Likely for the Frontier supercomputer first, and it will be paired with Trento. A good chunk are already reserved for this.

A limited number might hit retail, but I'd expect them to be seriously expensive.

13 : Anonymous2021/06/10 06:34 ID: h192kcj

Hell yeah

14 : Anonymous2021/06/10 08:41 ID: h19b9ff

That's a fat-ass GPU

Source: https://www.reddit.com/r/Amd/comments/nvuo0l/amd_confirms_aldebaran_cdna2_gpu_for_instinct/
