NVIDIA A30 Module 24GB HBM2 with ECC 3072-bit PCIe 4.0 x16

              Without VAT   With VAT
Base Price:   €5.185,51     €6.274,47
Total:        €5.185,51     €6.274,47
Part Number: TCSA30M-PB
Language: UK

Product Details

Introduction

Bring accelerated performance to every enterprise workload with NVIDIA A30 Tensor Core GPUs. With NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), it delivers speedups securely across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining fast memory bandwidth and low power consumption in a PCIe form factor optimized for mainstream servers, A30 enables an elastic data center and delivers maximum value for enterprises.
The NVIDIA A30 Tensor Core GPU delivers a versatile platform for mainstream enterprise workloads, like AI inference, training, and HPC. With TF32 and FP64 Tensor Core support, as well as an end-to-end software and hardware solution stack, A30 ensures that mainstream AI training and HPC applications can be rapidly addressed. Multi-Instance GPU (MIG) ensures quality of service (QoS) with secure, hardware-partitioned, right-sized GPUs across all of these workloads for diverse users, optimally utilizing GPU compute resources.
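
As a quick way to confirm what the card reports once installed, the device can be queried through NVIDIA's NVML Python bindings (the nvidia-ml-py / pynvml package). This is a minimal sketch under the assumption that the A30 is GPU index 0 in the server; it is not vendor-supplied code.

    # Minimal sketch: query an installed A30 via NVML (pip install nvidia-ml-py).
    # Assumes the A30 is GPU index 0; adjust for multi-GPU servers.
    import pynvml

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        name = pynvml.nvmlDeviceGetName(handle)       # "NVIDIA A30" (bytes on older bindings)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # total/free/used, in bytes
        print(name, f"{mem.total / 1e9:.1f} GB")      # expect roughly 24 GB of HBM2
    finally:
        pynvml.nvmlShutdown()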

Features

  • Deep learning inference: NVIDIA A30 leverages groundbreaking features to optimize inference workloads. It accelerates a full range of precisions, from FP64 to TF32 and INT4. Supporting up to four MIGs per GPU, A30 lets multiple networks operate simultaneously in secure hardware partitions with guaranteed quality of service (QoS). Structural sparsity support delivers up to 2x more performance on top of A30's other inference performance gains. NVIDIA's market-leading AI performance was demonstrated in MLPerf Inference. Combined with the NVIDIA Triton Inference Server, which easily deploys AI at scale, A30 brings this groundbreaking performance to every enterprise.
  • High-performance computing: NVIDIA A30 features FP64 NVIDIA Ampere architecture Tensor Cores that deliver the biggest leap in HPC performance since the introduction of GPUs. Combined with 24 gigabytes (GB) of GPU memory with a bandwidth of 933 gigabytes per second (GB/s), researchers can rapidly solve double-precision calculations. HPC applications can also leverage TF32 to achieve higher throughput for single-precision, dense matrix multiply operations. The combination of FP64 Tensor Cores and MIG empowers research institutions to securely partition the GPU to allow multiple researchers access to compute resources with guaranteed QoS and maximum GPU utilization. Enterprises deploying AI can use A30's inference capabilities during peak demand periods and then repurpose the same compute servers for HPC and AI training workloads during off-peak periods.
  • High-performance data analytics: Data scientists need to be able to analyze, visualize, and turn massive datasets into insights. But scale-out solutions are often bogged down by datasets scattered across multiple servers. Accelerated servers with A30 provide the needed compute power - along with large HBM2 memory, 933 GB/s of memory bandwidth, and scalability with NVLink - to tackle these workloads. Combined with NVIDIA InfiniBand, NVIDIA Magnum IO, and the RAPIDS suite of open-source libraries, including the RAPIDS Accelerator for Apache Spark, the NVIDIA data center platform accelerates these huge workloads at unprecedented levels of performance and efficiency (a Spark configuration sketch follows this list).
  • Enterprise-ready utilization: A30 with MIG maximizes the utilization of GPU-accelerated infrastructure. With MIG, an A30 GPU can be partitioned into as many as four independent instances, giving multiple users access to GPU acceleration. MIG works with Kubernetes, containers, and hypervisor-based server virtualization. MIG lets infrastructure managers offer a right-sized GPU with guaranteed QoS for every job, extending the reach of accelerated computing resources to every user (see the MIG sketch after this list).
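
Picking up the MIG items above, the sketch below uses the pynvml bindings to list whatever MIG instances are currently configured on the card. It assumes MIG mode has already been enabled by an administrator (for example with nvidia-smi) and that the A30 is GPU index 0; it illustrates the idea rather than NVIDIA's reference tooling.

    # Sketch: enumerate MIG instances on an A30 that already has MIG mode enabled.
    # Assumes pynvml (nvidia-ml-py) is installed and the A30 is GPU index 0.
    import pynvml

    pynvml.nvmlInit()
    try:
        parent = pynvml.nvmlDeviceGetHandleByIndex(0)
        slots = pynvml.nvmlDeviceGetMaxMigDeviceCount(parent)  # up to four on A30
        for i in range(slots):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(parent, i)
            except pynvml.NVMLError:
                continue  # this slot has no MIG device configured
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"MIG device {i}: {mem.total / 1e9:.1f} GB")
    finally:
        pynvml.nvmlShutdown()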
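
For the Apache Spark case in the data-analytics item, the RAPIDS Accelerator is switched on through Spark configuration rather than code changes. The PySpark sketch below is only illustrative: it assumes the rapids-4-spark plugin jar is already on the classpath and that the executor has an A30 assigned, and the exact settings vary by cluster.

    # Sketch: enable the RAPIDS Accelerator for Apache Spark from PySpark.
    # Assumes the rapids-4-spark plugin jar is on the driver/executor classpath.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("a30-rapids-sketch")
        .config("spark.plugins", "com.nvidia.spark.SQLPlugin")  # RAPIDS SQL plugin
        .config("spark.rapids.sql.enabled", "true")             # run supported SQL ops on the GPU
        .config("spark.executor.resource.gpu.amount", "1")      # one A30 per executor
        .getOrCreate()
    )

    # DataFrame/SQL work from here on is eligible for GPU execution where supported.
    df = spark.range(0, 10_000_000).selectExpr("id % 100 AS k", "id AS v")
    print(df.groupBy("k").count().limit(5).collect())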

Specifications


General
Device Type: GPU computing processor, fanless
Bus Type: PCI Express 4.0 x16
Graphics Engine: NVIDIA A30
CUDA Cores: 3804
Features: 5.2 TFLOPS peak double-precision floating-point performance, 10.3 TFLOPS peak single-precision floating-point performance, dual-slot width, NVIDIA Tensor Core, NVIDIA Ampere GPU technology, Multi-Instance GPU (MIG) technology

Memory
Size: 24 GB
Technology: HBM2
Bus Width: 3072-bit
Bandwidth: 933 GB/s

Miscellaneous
Power Consumption (Operational): 165 W
Included Accessories: 8-pin PCIe power cable
Depth: 26.77 cm
Height: 11.115 cm
Weight: 1.024 kg
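
The memory figures above are mutually consistent: peak bandwidth is approximately the 3072-bit bus width multiplied by the HBM2 per-pin data rate. The short check below assumes a data rate of about 2.43 Gb/s per pin (1215 MHz double data rate), which is not stated in the table.

    # Rough consistency check for the memory specification (not from the vendor page).
    # Assumption: A30 HBM2 effective data rate is about 2.43 Gb/s per pin.
    bus_width_bits = 3072
    data_rate_gb_per_pin = 2.43            # assumed, not listed above

    bandwidth_gb_s = bus_width_bits * data_rate_gb_per_pin / 8
    print(f"about {bandwidth_gb_s:.0f} GB/s")  # ~933 GB/s, matching the listed bandwidth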