Categories

Solution Marketplace

The Alliance solutions marketplace lets our mutual customers easily discover and deploy solutions purpose-built for easy adoption and seamless use across the spectrum of AI use cases, from developing LLMs and generative AI to computer vision, human interaction, and autonomous devices at the edge.


View AI on HPE RL300 Gen11 Servers

by View
View AI provides an end-to-end solution that transforms raw data into AI-ready assets with built-in, deployable conversational experiences, helping enterprises gain valuable insights from their data while maintaining data sovereignty and compliance.

Ampere Altra Developer Rugged

by ADLINK
Ampere Altra Developer Rugged (AADR) is an Arm SystemReady SR-compliant 3-in-1 AI developer desktop, rackmount, and vehicle computer featuring the Ampere Altra SoC.

View AI on OCI

by View
View AI provides an end-to-end solution that transforms raw data into AI-ready assets with built-in, deployable conversational experiences, helping enterprises gain valuable insights from their data while maintaining data sovereignty and compliance.

Thelio Astra AI Developer Desktop

by System76
Thelio Astra is meticulously engineered for the demands of AI and autonomous driving developers: Arm-native development on Ampere, PCIe expansion for AI accelerators, preinstalled drivers, and lifetime in-house support from a team of highly rated experts.

Starling Ampere Server

by System76
The Starling server with Ampere processors is the ideal platform for high-performance Arm CI builds and AI inference workloads. Unleash scalable performance with up to 192 cores on AmpereOne.

AI Dev Kit

by ADLINK
The Ampere® Altra® Developer Kit (AADK) enables rapid prototyping of new edge AI and embedded solutions. It includes a COM-HPC Ampere Altra module and a carrier board; custom carriers with specialized I/O and very small footprints can also be developed.

AI Developer Desktop

by ADLINK
The Ampere® Altra® Developer Platform (AADP) is an arm64 developer desktop for AI and software development, especially in the automotive industry thanks to Arm-native parity with automotive ECUs.

Wallaroo AI inference platform on OCI

by Wallaroo
Wallaroo’s breakthrough platform facilitates the last mile of the machine learning journey - getting ML into your production environment and monitoring ongoing performance - with incredible speed, scale, and efficiency. Companies across all industries including retail, finance, manufacturing, and healthcare are turning to Wallaroo to easily deploy and manage ML models at scale.

Wallaroo AI inference platform on Azure

by Wallaroo
Wallaroo’s breakthrough platform facilitates the last mile of the machine learning journey - getting ML into your production environment and monitoring ongoing performance - with incredible speed, scale, and efficiency. Companies across all industries including retail, finance, manufacturing, and healthcare are turning to Wallaroo to easily deploy and manage ML models at scale.

AI accelerator for automotive and vision

by Untether AI
The new speedAI®240 Slim AI accelerator card features the next-generation speedAI240 IC for superior performance and enhanced accuracy in CNNs, attention networks, and recommendation systems. Its low-profile, 75-watt TDP PCIe design sets the standard as the most efficient edge computing solution, delivering optimal performance and reduced power consumption for various end applications.

Ampere® Altra® AI Inference Servers

by Supermicro
Supermicro offers customers access to the Ampere® Altra® and AmpereOne® families of processors in its MegaDC server series, supporting configurations from 32 to 192 cores per CPU socket.

Scalable Solutions for AI Inference

by Responsible Compute
Responsible Compute offers the Ampere® Altra® family of processors, giving customers access to high-performance computing capabilities for sustainable and secure AI/ML services.

Alpha3Kube Service Powered by Ampere

by Prov.net
Alpha3Kube is a flexible Kubernetes (K8s) service offering both managed and self-managed options, designed to modernize application infrastructure and streamline operations. Built on the Sidero Talos and Sidero Omni platforms, Alpha3Kube delivers a secure, scalable, and high-performance Kubernetes environment tailored to diverse business needs while integrating seamlessly with cloud environments.

Container-enabled AI/ML Service Integration

by opsZero
opsZero partners with cloud providers such as Alpha3/Prov.net to bring lower-power, more sustainable, and cost-effective infrastructure to enterprises, while preserving high performance for AI use cases ranging from traditional AI/ML to the most modern generative AI models.

Edge AI Portable Workstation and Short-Depth Systems

by NextComputing
Portable workstation and versatile short depth rack solutions that offer the smallest form factor with the highest performance. Additional AI hardware accelerators can be added in various form factors including M.2, U.2, and PCIe cards.

512 Core Petabyte Edge AI Carry-on

by NextComputing
The Nexus “Fly-Away Kit” from NextComputing is packed with multiple Ampere® Altra® servers for a total of 512 cores and a petabyte of storage. With multiple system configurations and a rolling operational hard case, the Nexus offers powerful professional computing in an unprecedented portable form factor.

NETINT Quadra Video Server – Ampere Edition + Whisper AI VTT

by NETINT
The Quadra Video Server – Ampere Edition was built in response to customers requesting a more powerful CPU to do more work on the same machine. This saves money and reduces the technical complexity of spreading a complicated video processing function across multiple servers while keeping streams synchronized.

AI software from Lampi.ai

by Lampi AI
AI software to perform your tasks and workflows with AI agents.

AI accelerator for genAI – ARA-2

by Kinara
Meet the Kinara Ara-2 AI processor, the leader in edge AI acceleration. This 40 TOPS powerhouse tackles the massive compute demands of Generative AI and transformer-based models with unmatched cost-effectiveness.

AI accelerator for computer vision – ARA-1

by Kinara
Kinara Ara-1 edge AI processors are the engines powering the next generation of smart edge devices. Built around a flexible and efficient dataflow architecture and supported by a comprehensive SDK, Ara-1 processors deliver the performance and responsiveness needed for real-time AI computing and decision-making.

Kamiwaza Enterprise on-prem

by Kamiwaza
Kamiwaza’s GenAI stack solution is built on two novel technologies that enable private enterprise AI anywhere: an inference mesh and a locality-aware distributed data engine. Together they provide locality-aware data for RAG, enabling inference where the data lives, whether on-prem, in the cloud, or at the edge.

Kamiwaza on Azure

by Kamiwaza
Kamiwaza’s GenAI stack solution is built on two novel technologies that enable private enterprise AI anywhere: an inference mesh and a locality-aware distributed data engine. Together they provide locality-aware data for RAG, enabling inference where the data lives, whether on-prem, in the cloud, or at the edge.

GIGABYTE AI Inference Servers

by Gigabyte
The new Arm64-based servers are purpose-built for the Ampere® Altra® family of processors, offering more platform choices beyond x86. These processors deliver greater performance and power efficiency and fit easily into GIGABYTE's server designs. This new evolution in chip design allows for greater AI performance at lower cost.

RNGD for high performance AI inference of LLMs

by Furiosa
RNGD running on Ampere delivers high-performance LLM and multimodal deployment capabilities while maintaining a very low power profile.

Ubuntu optimized for AI on Ampere

by Canonical
Simplify your enterprise AI journey with trusted open source. Discover Ubuntu optimized for AI on Ampere.

Edge AI Micro ATX Motherboards

by ASRock Rack
Small-form-factor Micro ATX motherboards by ASRock Rack for edge AI, CDNs, portable computers, high-end embedded, powerful workstations, and cost-optimized systems. Available with dual 10GbE or dual 25GbE SFP28.

Ampere AI Inference Servers

by ASA Computers
The new Ampere servers configured by ASA Computers feature Cloud Native Processors that offer industry-leading core density, server efficiency, and per-rack performance, with up to 192 cores providing the best performance/$ compute for AI inferencing.

NETINT Quadra Video Server – Ampere Edition

by NETINT
Consolidate your transcoding process, accelerate core functionality and integrate multiple production processes into this high-performance server.

Ampere Optimized PyTorch

by Ampere
Ampere's inference acceleration engine is fully integrated with the PyTorch framework. PyTorch models and software written with the PyTorch API run as-is, without any modifications.
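As a concrete illustration of the run-as-is claim, standard PyTorch inference code like the sketch below needs no Ampere-specific changes (the toy model and tensor shapes here are invented for illustration):

```python
import torch
import torch.nn as nn

# A toy model standing in for any real PyTorch model.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)
model.eval()

# Plain PyTorch inference -- the same code runs unchanged
# on the optimized build.
with torch.no_grad():
    x = torch.randn(1, 16)
    out = model(x)

print(tuple(out.shape))  # (1, 4)
```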

Ampere Optimized TensorFlow

by Ampere
Ampere's inference acceleration engine is fully integrated with the TensorFlow framework. TensorFlow models and software written with the TensorFlow API run as-is, without any modifications.

Ampere Optimized ONNX Runtime

by Ampere
Ampere's inference acceleration engine is fully integrated with the ONNX Runtime framework. ONNX models and software written with the ONNX Runtime API run as-is, without any modifications.

Ampere Computing AI Docker Images

by Ampere
Ampere® Processors, with high-performance Ampere Optimized Frameworks packaged in Docker images, offer best-in-class AI inference performance for standard frameworks including TensorFlow, PyTorch, ONNX Runtime, and llama.cpp. Ampere optimized containers come fully integrated with their respective frameworks.

Ampere Computing OCI Images

by Ampere
Ampere instances on OCI are some of the most cost-effective instances available in the cloud today. OCI Ampere A1 pioneered this with extremely high-performance shapes, and the OCI Flex Shapes feature allows provisioning at single-core granularity, making this infrastructure very efficient.

Ampere Computing Azure Images

by Ampere
Ampere-based Azure VMs are available in the B, D, and E series and offer some of the best price-performance for workloads on the Azure cloud. Ampere publishes three optimized frameworks on the Azure cloud marketplace for ease of access, tested and proven for AI inference with any model compatible with the framework.

Ampere Computing GCP Images

by Ampere
Ampere instances on Google Cloud (GCP) are available now in a variety of sizes and configurations and offer some of the best price-performance for workloads on the GCP cloud. Ampere publishes three optimized frameworks on the Google Cloud marketplace for ease of access, tested and proven for AI inference with any model compatible with the framework.

Join the Alliance

Partner with us as we build an ecosystem of leading AI solutions powered by industry-leading cloud native technologies.

"*" indicates required fields

This field is for validation purposes and should be left unchanged.