AMD Instinct MI200 with CDNA2 ‘Aldebaran’ GPU could launch later this year

1 : Anonymous2021/05/25 12:36 ID: nkopby
AMD Instinct MI200 with CDNA2 'Aldebaran' GPU could launch later this year
2 : Anonymous2021/05/25 12:45 ID: gzdvw8n

The entire AMD Instinct Datacentre GPU lineup doesn't get anywhere near as much attention as it deserves. They are incredibly powerful cards.

ID: gze22np

No synergy with desktop cards.

With Nvidia you can run any CUDA code on desktop cards; it just won't be as fast. With AMD you can't run ROCm on Windows at all, and you can't run it on desktop cards on Linux until maybe sometime next year.

College students, who are the next generation of programmers, won't be using AMD hardware to play around with, since they can't use their Radeon GPU at home.

Edit: it doesn't need to be the same arch; AMD just needs to extend ROCm support.

ID: gze7udo

People really don't appreciate how important it is to enable features all the way down to hobbyist levels if you want market penetration. I recently posted how SR-IOV needs to be exposed in consumer cards, even if only for two partitions, just to provide exposure to people, and I was torn down as an idiot. Same thing with GPU compute functions--if people can't access the functions on the low-end, they won't use them on the high-end.

ID: gzei4vx

I was talking about this in another thread a few days ago: ROCm needs to hurry up and expand its support. I've been dying to try out ROCm on my 6900 XT, and I bet I am not the only one. Hobbyists will either go to normal CPU-based ML or CUDA-based TensorFlow/deep learning.

They even added ROCm support to PyTorch. It's like they are teasing me 🙁
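For what it's worth, the ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` API, so device-agnostic code works unchanged on either vendor. A minimal sketch (the fallback when torch isn't installed is my own addition):

```python
def pick_device() -> str:
    """Pick the best available compute device.

    On ROCm builds of PyTorch, AMD GPUs are exposed through the
    torch.cuda API, so "cuda" covers both Nvidia and AMD hardware.
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # torch not installed; nothing to probe
    if torch.cuda.is_available():
        # torch.version.hip is set on ROCm builds, None on CUDA builds
        backend = "ROCm" if getattr(torch.version, "hip", None) else "CUDA"
        print(f"Using GPU via {backend}")
        return "cuda"
    return "cpu"

print(pick_device())
```

The upside of this design is that most existing CUDA-targeted PyTorch code runs on a ROCm build without modification; the downside is you can't tell from the device string alone which vendor you're on.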

ID: gzf6xu8

This. Also, the cards are prohibitively expensive for students or hobbyists.

ID: gzffzjf

Well said. Versatility and broad platform access matter, and those barriers to entry need to stay low.

ID: gze3ufc

Not surprising when the only officially supported options to use their compute ecosystem are:

Vega 56/64 from 2017 (discontinued)

Radeon VII from 2019 (discontinued)

Instinct cards ($10k+ price tag, can only buy as part of OEM systems)

Not to mention Polaris support was recently 'deprecated' in ROCm. Meanwhile you can run CUDA stuff on everything from a GTX 460 from 2010, to an MX150 found on bargain bin laptops, to a $10k A100.

ID: gzeb1qb

Nvidia is proprietary vendor lock-in. Sure, AMD and Intel are behind with the open stack, but we will eventually get there, and it will happen sooner than we will see Nvidia hardware adopt open source. And to me that's all that matters.

ID: gzey5pr

Yo dawg... the Radeon VII still isn't discontinued; they just moved it into the pro lineup and raised the price, with added ECC and a blower cooler for enterprise use.

ID: gzf9pxe

It's a bit more than only those GPUs.

ID: gze782d

Because AMD is starting small and narrow in the compute market with their new generation of accelerators. They're investing in software and the ecosystem, but that's taking time.

ID: gzefghc

AMD still has nothing to compete with Nvidia's VDI dominance. Competition there is desperately needed... Nvidia wants an absolute fortune for VDI.

ID: gzel4v8

Hardware without software is useless. AMD's share in the GPU datacenter is minuscule.

ID: gzf0pjl

The datacentre is based on three kinds of products: GPUs, ASICs and FPGAs. I have made a list of them here.

The AMD Instinct is not mentioned as much because, although it is capable of doing large calculations efficiently, the stack necessary for designing algorithms and writing code to solve a given problem is either not present or, if present, not mature and well-maintained. If you go through the above link, you will see that these companies don't just have the chipsets but also a great stack alongside them.

AMD can't even maintain proper documentation; a mature stack is way too far off.

I have an Nvidia A30 GPU server at home for writing code that uses NumPy, Numba, CuPy and cuSignal. Nvidia's tooling helps me make that code run faster, something I can't expect from an equivalent AMD product.
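Part of what makes that stack pleasant is that CuPy deliberately mirrors the NumPy API, so the same array code can target CPU or GPU. A small sketch of the pattern (the CPU fallback is my own addition so it runs anywhere):

```python
import numpy as np

try:
    import cupy as xp  # GPU arrays with a NumPy-compatible API
except ImportError:
    xp = np  # fall back to NumPy on the CPU

# The exact same calls work against either backend: 5 full sine
# cycles sampled over a million points.
x = xp.linspace(0.0, 1.0, 1_000_000)
y = xp.sin(2 * xp.pi * 5 * x)
print(float(y.max()))  # very close to 1.0
```

Writing against the shared API surface like this is the usual way to keep one codebase that exploits a GPU when present without hard-depending on one.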

ID: gzeauu2

They need to invest in the software and dev-tools side of things. CUDA has a big first-mover advantage, and OpenCL seems to be pretty dead. ROCm has potential, but they need to throw money and dev resources at it to get adoption. They need teams that work with partner companies directly, like Nvidia has, to start driving adoption.

ID: gzez29p

OpenCL is as viable as it ever was and is actually up to date on AMD and Intel hardware.

ID: gzdw977

What does a datacenter use GPUs for? I thought they only needed loads of RAM and cores to push data around?

ID: gze5xev

In terms of cards like the Instinct, which are HPC cards:

AI workloads

Data processing and analytics (e.g. big data, SQL)

Simulation workloads (e.g. protein folding, astrophysical interactions)

3D rendering (e.g. fully animated films)

As a stopgap for new codec decoding/encoding until FPGAs and later ASICs are financially viable

You also see GPUs used to host gaming sessions, for example Microsoft's xCloud and Google Stadia both use Radeon GPUs in their datacentres to render games. Edit: just to clarify, they're gaming architecture GPUs. Instinct (CDNA), A100 (Nvidia Ampere) and other HPC GPUs aren't capable of rendering real-time 3D games.

ID: gze5ivw

Workloads like "machine learning" that involve a lot of matrix operations are well suited for GPUs.
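The reason those workloads fit GPUs is that every output element of a matrix product is an independent dot product, so the work spreads naturally across thousands of GPU threads. A toy CPU sketch of that independence:

```python
def matmul(a, b):
    """Naive matrix multiply. Each c[i][j] is an independent dot
    product of row i of a with column j of b; on a GPU, each one
    can be assigned to its own thread."""
    n, k, m = len(a), len(b), len(b[0])
    return [
        [sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
        for i in range(n)
    ]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

Real ML frameworks never use this naive loop, but the independence it illustrates is exactly what CUDA and ROCm kernels exploit.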

3 : Anonymous2021/05/25 17:04 ID: gzettmw

Dead in the water unless they can improve software support. When was the last time anyone here saw the previous-gen MI50 in action?

4 : Anonymous2021/05/25 14:11 ID: gze641a

If only AMD's stack were that mature and well-maintained, these would be ruling everything. What a shame. Don't even get me started on their documentation, perhaps some of the worst I have ever seen.

ID: gzge986

AMD can't shake its freewheeling "hacker" culture. Simply put, they never were a software-focused company, and they still aren't.

5 : Anonymous2021/05/25 14:18 ID: gze70xb

Yeah, already read about this in the JP Morgan transcript post yesterday.

Lisa Su also stated that they were hoping to reach $0.5B in revenue with compute GPUs over the next year...

So it's a growth vector for us. You'll see it grow in the second half of this year. And one of the major milestones that we talked about was getting this business to, let's call it, $0.5 billion or so from an annual revenue standpoint. And we see a good path to that over the next year to do that. So it is becoming more and more strategic, and you'll see it be a key layer on top of our CPU growth as we go forward later this year and into 2022 and 2023.

Go back to the JP Morgan article and read the transcript someone posted in a link.

6 : Anonymous2021/05/25 14:41 ID: gzea26p

Just name it Alderaan. It's what we're all thinking.

ID: gzfmuiw

If it were a chiplet GPU, it would fit like a glove.

7 : Anonymous2021/05/25 14:40 ID: gze9wjz

Love their names: EPYC and Instinct.

8 : Anonymous2021/05/25 17:01 ID: gzetfty

Canadian DNA is the best

9 : Anonymous2021/05/25 20:08 ID: gzfk2e6

Who the fuck cares ???

I wanted a gaming + compute GPU like the Radeon VII, but AMD doesn't care, so I don't care about them either.

These cards are extremely expensive for what I would need, plus AMD doesn't care about compute on Linux.

So, I couldn't care less!

10 : Anonymous2021/05/25 14:26 ID: gze816o

Great horn!!

I'm sorry, I'll read the article now.

11 : Anonymous2021/05/25 15:26 ID: gzeg516

Well, I won't be able to afford one of these bad boys. I'm just going to stick with my S9050.

12 : Anonymous2021/05/25 19:12 ID: gzfc5ea

Holy fk.. Now this thing can mine

13 : Anonymous2021/05/25 19:12 ID: gzfc5nn

HBM2E? Sign me up, I'll get like six of these for a new mining rig.

14 : Anonymous2021/05/25 19:52 ID: gzfhtkm

Yeah, I spelled it "Albanian" at first.

15 : Anonymous2021/05/25 14:55 ID: gzebxdd

Aldebaran? Is that a Re:Zero reference?

ID: gzee7kn

It's the brightest star in the Taurus constellation.

ID: gzehkvz

It's a Saint Seiya reference :P.

In actuality, RTG is picking red giant stars as codenames. They started this trend with Arcturus and now Aldebaran is its successor. Gacrux, Betelgeuse and Antares might be candidates for the next code name.

ID: gzewlay

Vega is also a star name, same as Polaris, with which AMD started those codenames. Before that they used island names for GPUs (Fiji, Hawaii, etc.).

16 : Anonymous2021/05/25 16:06 ID: gzelryj

To quote Richard Hammond: "The first thing you need to know is I have an erection."

