ROCm
ROCm (Radeon Open Compute) is AMD's open-source software platform for GPU-accelerated computing. It is the AMD equivalent of NVIDIA's CUDA and provides a complete stack for running AI, machine learning, and HPC workloads on AMD GPUs. ROCm supports major ML frameworks including PyTorch, TensorFlow, JAX, and ONNX Runtime, and includes the HIP (Heterogeneous-compute Interface for Portability) programming model for writing GPU code that runs on both AMD and NVIDIA hardware.
| General | Details |
|---|---|
| Author | AMD |
| Type | Open-source GPU Computing Platform |
| Documentation | ROCm Docs |
| Repository | github.com/ROCm |
| Installation | ROCm Installation Guide |
| Current Version | ROCm 7 |
| License | MIT and Apache 2.0 |
Start building with ROCm
ROCm gives you a complete software stack to run AI training and inference workloads on AMD GPUs. It integrates directly with PyTorch, TensorFlow, and JAX so most standard pipelines run with minimal changes from a CUDA environment. Hugging Face Optimum-AMD and vLLM both support ROCm, making it straightforward to run transformer inference and fine-tuning jobs on AMD hardware. Check out the community-built AMD Use Cases and Applications to see what developers are running on ROCm today.
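One practical consequence of this CUDA compatibility: ROCm builds of PyTorch expose AMD GPUs through the familiar `torch.cuda` API, so the usual CUDA-style device check carries over unchanged. A minimal sketch (the `pick_device` helper is illustrative, not part of any ROCm API; the guarded import lets the code fall back to CPU when PyTorch is not installed):

```python
import importlib.util

def pick_device() -> str:
    """Return "cuda" when PyTorch reports a usable GPU, else "cpu".

    On ROCm builds of PyTorch, AMD GPUs are surfaced through the
    familiar torch.cuda namespace, so this CUDA-style check works
    unchanged on AMD hardware.
    """
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"

device = pick_device()
print(f"Running on: {device}")
```

The same string can then be passed anywhere PyTorch expects a device, e.g. `model.to(device)`, which is why most CUDA-era training scripts run on ROCm with minimal changes.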
Documentation and Resources
- ROCm Documentation: Full reference for installation, APIs, and libraries
- ROCm Installation Guide: Step-by-step setup for supported Linux distributions
- ROCm GitHub: Open-source repositories, examples, and issue tracking
- HIP SDK: Toolkit for writing portable GPU code for both AMD and NVIDIA hardware
- AMD Developer Hub: Guides, training videos, and cloud credits
Framework Support
- PyTorch: Full support for training and inference, including integration with Hugging Face Accelerate and PEFT
- TensorFlow: GPU-accelerated training and inference on AMD hardware
- JAX: Supported via the ROCm JAX build
- ONNX Runtime: Cross-framework model deployment on AMD GPUs
- Hugging Face Optimum-AMD: Optimized inference and fine-tuning pipelines for transformer models
- vLLM: High-throughput LLM serving with a ROCm backend
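For ONNX Runtime, AMD support is selected through execution providers, such as `ROCMExecutionProvider` and `MIGraphXExecutionProvider`, with `CPUExecutionProvider` as the universal fallback. A minimal sketch of provider selection (the `pick_providers` helper and preference order are illustrative assumptions, not an ONNX Runtime API):

```python
# Hypothetical preference order for ONNX Runtime on AMD hardware:
# MIGraphX (AMD's graph optimizer), then the ROCm provider, then CPU fallback.
PREFERRED = ["MIGraphXExecutionProvider", "ROCMExecutionProvider", "CPUExecutionProvider"]

def pick_providers(available: list[str]) -> list[str]:
    """Keep only the preferred providers that are actually available,
    preserving the preference order."""
    return [p for p in PREFERRED if p in available]

# On a machine without a ROCm build of onnxruntime, only CPU remains:
print(pick_providers(["CPUExecutionProvider"]))  # -> ['CPUExecutionProvider']
```

In real use you would feed `onnxruntime.get_available_providers()` into this helper and pass the result to `onnxruntime.InferenceSession(model_path, providers=...)`, so the same script runs on AMD GPUs when the ROCm build is present and on CPU otherwise.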
Libraries
AMD ROCm AI Hackathon projects
Discover innovative solutions built with AMD ROCm AI technology by our community members during our hackathons.


