Microsoft is exploring ways to leverage its stack of AMD GPUs for inference workloads, as the company develops toolkits that convert NVIDIA CUDA models into ROCm-supported code.
Microsoft Sees Massive Demand For Inference Over Training, Which Makes AMD's AI Chips a Lot More Attractive
One of the reasons NVIDIA has managed to retain its dominance in the AI space is its 'CUDA lock-in': CSPs and AI giants must use NVIDIA's AI chips to get optimal results from NVIDIA's CUDA software ecosystem. Efforts have been made in the past to break this barrier and enable cross-platform support, but no solution has gone mainstream. However, according to a 'high-ranking' Microsoft employee, the company is now developing toolkits that convert NVIDIA CUDA models into ROCm-supported code, letting its AMD GPUs serve inference workloads.
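For context, AMD already ships an open-source translation path along these lines: the ROCm HIPIFY tools rewrite CUDA source into HIP, AMD's portability API, largely by renaming runtime calls one-for-one while kernel code stays nearly identical. The sketch below illustrates what such a translation looks like; the kernel itself is a hypothetical example, not Microsoft's toolkit.

```cpp
// CUDA original (for comparison):
//   cudaMalloc(&d_x, n * sizeof(float));
//   scale<<<blocks, threads>>>(d_x, n);
//   cudaDeviceSynchronize();
//   cudaFree(d_x);

// HIP output after translation -- same structure, ROCm-backed runtime:
#include <hip/hip_runtime.h>

__global__ void scale(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // identical index math to CUDA
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    float* d_x;
    hipMalloc(&d_x, n * sizeof(float));        // was cudaMalloc
    scale<<<(n + 255) / 256, 256>>>(d_x, n);   // triple-chevron launch syntax unchanged
    hipDeviceSynchronize();                    // was cudaDeviceSynchronize
    hipFree(d_x);                              // was cudaFree
    return 0;
}
```

The mechanical nature of this renaming is why source-level porting is tractable; the harder problem, which the reported Microsoft effort targets, is converting already-built CUDA models and kernels rather than source code.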

WCCFTECH News