- AMD Instinct MI200 with CDNA2 'Aldebaran' GPU could launch later this year
-
The entire AMD Instinct Datacentre GPU lineup doesn't get anywhere near as much attention as it deserves. They are incredibly powerful cards.
ID: gze22np
ID: gze7udoPeople really don't appreciate how important it is to enable features all the way down to hobbyist level if you want market penetration. I recently posted about how SR-IOV needs to be exposed on consumer cards, even if only for two partitions, just to give people exposure to it, and I was torn down as an idiot. Same thing with GPU compute functions--if people can't access the functions on the low end, they won't use them on the high end.
ID: gzei4vxI was talking about this in another thread a few days ago: ROCm needs to hurry up and expand its support. I've been dying to try out ROCm on my 6900 XT, and I bet I'm not the only one. Hobbyists will either go to normal CPU-based ML or CUDA-based TensorFlow/deep learning.
They even added ROCm support to PyTorch; it's like they are teasing me 🙁
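(For anyone in the same boat, here's a minimal sketch of what the check looks like once you have a ROCm build of PyTorch and an officially supported card; the 6900 XT itself isn't supported yet, so treat this as illustrative only. The ROCm backend reuses the torch.cuda API, so CUDA-style code runs unchanged.)

```python
# Minimal sketch: confirming a ROCm build of PyTorch can see the GPU.
# torch.version.hip is a version string on ROCm builds and None otherwise.
import torch

print("HIP version:", torch.version.hip)
print("GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    x = torch.randn(4096, 4096, device="cuda")  # "cuda" maps to the ROCm device
    y = x @ x.t()                                # matmul runs on the GPU
    print(y.device, y.shape)
```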
ID: gzf6xu8This. Also, the cards are prohibitively expensive for students or hobbyists.
ID: gzffzjfWell said. Versatility and broad platform access matter, and those are barriers worth keeping low.
ID: gze3ufcNot surprising when the only officially supported options to use their compute ecosystem are:
Vega 56/64 from 2017 (discontinued)
Radeon VII from 2019 (discontinued)
Instinct cards ($10k+ price tag, can only buy as part of OEM systems)
Not to mention Polaris support was recently 'deprecated' in ROCm. Meanwhile you can run CUDA stuff on everything from a GTX 460 from 2010, to an MX150 found on bargain bin laptops, to a $10k A100.
ID: gzeb1qbNvidia is proprietary vendor lock-in. Sure, AMD and Intel are behind with the open stack, but we will eventually get there. It will happen sooner than we'll see Nvidia hardware adopt open source. And to me that's all that matters.
ID: gzey5prYo dawg... the Radeon VII still isn't discontinued; they just moved it into the pro lineup and raised the price, adding ECC and a blower cooler for enterprise use.
ID: gzf9pxeIt's a bit more than only those GPUs.
ID: gze782dBecause AMD's starting small and narrow in the compute market with their new generation of accelerators. They're investing in software and the ecosystem, but that takes time.
ID: gzefghcAMD still has nothing to compete with Nvidia's VDI dominance, and competition there is desperately needed... Nvidia wants an absolute fortune for VDI.
ID: gzel4v8Hardware without software is useless. AMD's share of the datacentre GPU market is minuscule.
ID: gzf0pjlThe datacentre is built around three kinds of products: GPUs, ASICs and FPGAs. I have made a list of them here.
The AMD Instinct is not mentioned as much because, although it is capable of doing large calculations efficiently, the stack necessary for designing algorithms and writing code to solve a given problem is either not present or, where present, not mature and well maintained. If you go through the above link, you will see that these companies don't just have the chipsets but also a great stack alongside them.
AMD can't even maintain proper documentation; a mature stack is far beyond that.
I have an Nvidia A30 GPU server at home for writing code that uses NumPy, Numba, CuPy and cuSignal. Nvidia's tooling helps me get that code written and running faster, something I can't expect from an equivalent AMD product.
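(As a rough illustration of what that stack buys you, not taken from the commenter's setup: CuPy mirrors the NumPy API, so moving an array computation onto the GPU is mostly a matter of converting the arrays. A minimal sketch, assuming CuPy is installed against a working CUDA toolkit.)

```python
# Rough sketch: CuPy as a near drop-in replacement for NumPy on an Nvidia GPU.
import numpy as np
import cupy as cp

a_cpu = np.random.rand(4096, 4096).astype(np.float32)

a_gpu = cp.asarray(a_cpu)   # copy the host array to the GPU
b_gpu = a_gpu @ a_gpu.T     # matrix multiply runs on the device
result = cp.asnumpy(b_gpu)  # copy the result back to host memory

print(result.shape, result.dtype)
```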
ID: gzeauu2They need to invest in the software and dev tools side of things. CUDA has a big first-mover advantage and OpenCL seems to be pretty dead. ROCm has potential, but they need to throw money and dev resources at it to get adoption. They need teams that work directly with partner companies, like NVIDIA has, to start driving adoption.
ID: gzez29pOpenCL is as viable as it ever was and is actually up to date on AMD and Intel hardware.
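(For what it's worth, it's easy to see what the installed OpenCL drivers actually expose. A quick sketch using PyOpenCL, assuming the pyopencl package and at least one vendor ICD are installed; it works the same against AMD, Intel or Nvidia runtimes.)

```python
# Quick sketch: enumerate whatever OpenCL platforms and devices the drivers expose.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}, "
              f"{device.global_mem_size // (1024 ** 2)} MiB, "
              f"{device.max_compute_units} compute units")
```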
ID: gzdw977What does a datacenter use GPUs for? I thought they only needed loads of RAM and cores to push data around?
ID: gze5xevIn terms of cards like the Instinct, which are HPC cards:
AI workloads
Data processing and analytics (e.g. big data, SQL)
Simulation workloads (e.g. protein folding, astrophysical interactions)
3D rendering (e.g. fully animated films)
As a stopgap for new codec decoding/encoding until FPGAs and later ASICs are financially viable
You also see GPUs used to host gaming sessions; for example, Microsoft's xCloud and Google Stadia both use Radeon GPUs in their datacentres to render games. Edit: just to clarify, they're gaming architecture GPUs. Instinct (CDNA), A100 (Nvidia Ampere) and other HPC GPUs aren't capable of rendering real-time 3D games.
ID: gze5ivwWorkloads like "machine learning" that involve a lot of matrix operations are well suited for GPUs.
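(A toy sketch of why that mapping works, assuming Numba and a CUDA-capable GPU: each GPU thread computes one element of the output matrix, which is exactly the shape of most matrix work. Illustrative only, not how you'd write a production matmul.)

```python
# Toy sketch: C = A @ B with one GPU thread per output element, via Numba's CUDA target.
import numpy as np
from numba import cuda

@cuda.jit
def matmul_kernel(A, B, C):
    row, col = cuda.grid(2)  # this thread's (row, col) in the output
    if row < C.shape[0] and col < C.shape[1]:
        acc = 0.0
        for k in range(A.shape[1]):
            acc += A[row, k] * B[k, col]
        C[row, col] = acc

n = 512
A = np.random.rand(n, n).astype(np.float32)
B = np.random.rand(n, n).astype(np.float32)

d_A = cuda.to_device(A)                        # explicit host-to-device copies
d_B = cuda.to_device(B)
d_C = cuda.device_array((n, n), dtype=np.float32)

threads = (16, 16)
blocks = ((n + 15) // 16, (n + 15) // 16)
matmul_kernel[blocks, threads](d_A, d_B, d_C)  # launch one thread per output element

C = d_C.copy_to_host()
print(np.allclose(C, A @ B, rtol=1e-3))
```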
-
Dead in the water unless they can improve software support. When's the last time anyone here has seen the previous-gen MI50 in action?
-
If only AMD's stack were that mature and well maintained, these would be ruling everything. What a shame. Don't even get me started on their documentation, perhaps some of the worst I have ever seen.
ID: gzge986AMD can't shake its freewheeling "hacker" culture; simply put, they never were a software-focused company and they still aren't.
-
Yeah, already read about this in the JP Morgan transcript post yesterday.
Lisa Su also stated that they were hoping to reach $0.5B in revenue from compute GPUs over the next year...
So it's a growth vector for us. You'll see it grow in the second half of this year. And one of the major milestones that we talked about was getting this business to, let's call it, $0.5 billion or so from an annual revenue standpoint. And we see a good path to that over the next year to do that. So it is becoming more and more strategic, and you'll see it be a key layer on top of our CPU growth as we go forward later this year and into 2022 and 2023.
Go back to the JP Morgan article and read the transcript someone posted in a link.
-
Just name it Alderaan. It's what we're all thinking.
ID: gzfmuiwIf it were a chiplet GPU, it would fit like a glove.
-
Love their names. Epyc and INSTINCT
-
Canadian DNA is the best
-
Who the fuck cares???
I wanted a gaming + compute GPU like the Radeon VII, but AMD doesn't care, so I don't care about them either.
These cards are extremely expensive for what I would need, plus AMD doesn't care about compute on Linux.
So, I couldn't care less!
-
Great horn!!
I'm sorry, I'll read the article now.
-
Well, I won't be able to afford one of these bad boys. I'm just going to stick with my S9050.
-
Holy fk.. Now this thing can mine
-
HBM2E? Sign me up, I'll get like 6 of these for a new mining rig.
-
Yeah, I read it as "Albanian" at first.
-
Aldebaran? Is that a Re:Zero reference?
ID: gzee7knIt's the brightest star in the Taurus constellation.
ID: gzehkvzIt's a Saint Seiya reference :P.
In actuality, RTG is picking red giant stars as codenames. They started this trend with Arcturus and now Aldebaran is its successor. Gacrux, Betelgeuse and Antares might be candidates for the next code name.
ID: gzewlayVega is also a star name, as is Polaris, with which AMD started those codenames. Before that they used island names for GPUs (Fiji, Hawaii, etc.).
ID: gzef6leBro.
-
To quote Richard Hammond: "The first thing you need to know is I have an erection."
Source: https://www.reddit.com/r/Amd/comments/nkopby/amd_instinct_mi200_with_cdna2_aldebaran_gpu_could/
No synergy with desktop cards.
With Nvidia you can run any CUDA code on the desktop cards; it just won't be as fast. With AMD you can't run ROCm on Windows at all, and you can't run it on desktop cards on Linux until maybe sometime next year.
College students, who are the next generation of programmers, won't be using AMD hardware to play around with, since they can't use their Radeon GPU at home.
Edit: it doesn't need to be the same arch; AMD just needs to extend support for ROCm.