AMD doesn't realise that the wide penetration and availability of CUDA is what makes the ecosystem so strong. Developers can build and test on the consumer hardware they already own, and that's what creates such a large software ecosystem around the expensive datacenter chips.
When I raised this feedback with our AMD Rep, they said it was intentional and that consumer GPUs are primarily meant for gaming. Absolutely shortsighted.
I can forgive AMD for not seeing how important CUDA was ten years ago. Nvidia was both smart and lucky.
But failing to see it five years ago is inexcusable. Missing it two years ago is insane. And still failing to treat ML as an existential threat is, IDK, I’ve got no words.
That's beside the point. They are offering ML solutions: PyTorch and most other frameworks work decently well on their datacenter/HPC GPUs these days. They just haven't managed to offer something attractive to small-scale enterprises and hobbyists, which costs them a lot of mindshare in discussions like these.
But they're definitely aware of AI/ML stuff, pitching it to their investors, acquiring other companies in the field and so on.
Meanwhile, the complete lack of enthusiast ML software for their consumer-grade cards means they can put gobs of memory on those cards without eating into their HPC business line.
I feel like that's something they would be explaining to their investors if it was intentional though.
Not sure which complete lack you're talking about. You can run the SotA open-source image and text generation models on the 7900 XTX. They might be one or two iterations behind their Nvidia counterparts and you will run into more issues, but there is a community.
One thing I wonder about with AMD: they know the history of how CUDA got to its current position, but even if you say competing in that market is fighting yesterday's war and they don't want to dedicate many resources to it, they don't seem to have the vision to make a long-term commitment to what could be the next big thing. What projects do they have that play to their strengths and can't be copied easily? The examples I would point to are Zen (a massive course correction after K10/Bulldozer) and early HSA after acquiring ATi.
I suspect that it is legal fears, tbh - it is almost certain that if AMD or anyone else tried to ship some kind of CUDA compatibility layer, Nvidia would fiercely sue them into the ground. This is almost certainly why both Intel and AMD bailed on ZLUDA.