AI Platform Alliance

Accelerators

AI accelerator for automotive and vision

Untether AI's speedAI®240 Slim accelerator card features the next-generation speedAI240 IC for superior performance and enhanced accuracy in CNNs, attention networks, and recommendation systems. Its low-profile, 75-watt TDP PCIe design makes it one of the most power-efficient edge computing solutions available, delivering strong performance at reduced power consumption across a range of end applications.


NETINT Quadra Video Server – Ampere Edition + Whisper AI VTT

The Quadra Video Server – Ampere Edition was built in response to customers requesting a more powerful CPU so they could do more work on the same machine. This saves money and avoids the complexity of spreading a video processing pipeline across multiple servers while keeping streams synchronized.


AI accelerator for genAI – Ara-2

Meet the Kinara Ara-2 AI processor, the leader in edge AI acceleration. This 40 TOPS powerhouse tackles the massive compute demands of Generative AI and transformer-based models with unmatched cost-effectiveness.


AI accelerator for computer vision – Ara-1

Kinara Ara-1 edge AI processors are the engines powering the next generation of smart edge devices. Built around a flexible and efficient dataflow architecture and supported by a comprehensive SDK, Ara-1 processors deliver the performance and responsiveness needed for real-time AI computing and decision-making.


Ampere Optimized PyTorch

Ampere’s inference acceleration engine is fully integrated with the PyTorch framework. PyTorch models and software written with the PyTorch API run as-is, without any modifications.
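Because the acceleration engine sits underneath the stock PyTorch API, ordinary PyTorch code needs no changes. A minimal sketch (the model and shapes here are hypothetical, purely for illustration):

```python
import torch
import torch.nn as nn

# A standard PyTorch model -- no Ampere-specific code is needed;
# the same script runs unmodified with Ampere Optimized PyTorch.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)
model.eval()

# Run inference exactly as on any other PyTorch install.
with torch.inference_mode():
    x = torch.randn(1, 8)
    out = model(x)

print(out.shape)  # torch.Size([1, 4])
```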


Ampere Optimized TensorFlow

Ampere’s inference acceleration engine is fully integrated with the TensorFlow framework. TensorFlow models and software written with the TensorFlow API run as-is, without any modifications.
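As with PyTorch, standard TensorFlow code runs unchanged. A minimal sketch using the stock Keras API (the model here is hypothetical, for illustration only):

```python
import numpy as np
import tensorflow as tf

# A plain Keras model -- no Ampere-specific changes;
# the same code runs unmodified with Ampere Optimized TensorFlow.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4),
])

# Inference via the usual predict() call.
x = np.random.rand(1, 8).astype(np.float32)
out = model.predict(x, verbose=0)
print(out.shape)  # (1, 4)
```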


Ampere Optimized ONNX Runtime

Ampere’s inference acceleration engine is fully integrated with the ONNX Runtime framework. ONNX models and software written with the ONNX Runtime API run as-is, without any modifications.


Ampere Computing AI Docker Images

Ampere® Processors, paired with high-performance Ampere Optimized Frameworks delivered as Docker images, offer best-in-class artificial intelligence inference performance for standard frameworks including TensorFlow, PyTorch, ONNX Runtime, and llama.cpp. Ampere-optimized containers come fully integrated with their respective frameworks.


Join the Alliance

Partner with us as we build an ecosystem of leading AI solutions powered by cloud native technologies.
