Explore Lambda's A100 cloud instance specifications and benchmarks. Compare hardware configurations and performance metrics to optimize your AI and ML workloads.
Hardware Specifications
GPU Configuration | Value |
---|---|
GPU Type | A100 |
GPU Interconnect | SXM4 |
GPU Model Name | NVIDIA A100-SXM4-40GB |
Driver Version | 535.129.03 |
GPU VRAM (GB) | 40 |
Power Limit (W) | 400.00 |
GPU Temperature (°C) | 31 |
GPU Clock Speed (MHz) | 210 |
Memory Clock Speed (MHz) | 1215 |
Pstate | P0 |
CPU Configuration | Value |
---|---|
Model Name | AMD EPYC 7J13 64-Core Processor |
Vendor ID | AuthenticAMD |
CPUs | 30 |
CPU Clock Speed (MHz) | 4899.99 |
Threads Per Core | 1 |
Cores Per Socket | 1 |
Sockets | 30 |
Memory | Value |
---|---|
Total | 216 GB |
Disk Specifications
Storage | Value |
---|---|
Total | 512.00 GB |
Available Disks
Property | Value |
---|---|
Disk 1 | |
Model | vda |
Size | 512 GB |
Type | HDD |
Mount Point | Unmounted |
Software Specifications
Software | Value |
---|---|
OS | Ubuntu |
OS Version | 22.04.3 LTS (Jammy Jellyfish) |
CUDA Version | 12.2 |
Docker Version | 24.0.7 |
Python Version | 3.10.12 |
Benchmarks
Benchmark | Value |
---|---|
ffmpeg | 158 |
CoreMark (Iterations per sec) | 29310.471 |
Llama 2 Inference (Tokens per sec) | 44.25 |
TensorFlow MNIST Training | 2.007 |
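The Llama 2 inference figure above (44.25 tokens/sec) is a simple throughput metric. As a minimal sketch, the calculation is just tokens generated divided by wall-clock time; the token count and duration below are illustrative values chosen to reproduce the table entry, not measured data:

```python
def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Generation throughput: tokens produced divided by wall-clock seconds."""
    return n_tokens / elapsed_s

# Illustrative numbers only: 885 tokens over 20 s gives the table's 44.25 tokens/sec.
print(tokens_per_second(885, 20.0))  # → 44.25
```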
Nvidia-smi output
Nvidia-smi topo -m output
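The GPU fields in the tables above can be reproduced with `nvidia-smi --query-gpu=... --format=csv,noheader` (and the interconnect matrix with `nvidia-smi topo -m`). The sketch below parses one CSV line of that query output; since no live GPU is assumed here, the sample line is hard-coded to mirror the values listed above:

```python
# One line of output from:
#   nvidia-smi --query-gpu=name,driver_version,memory.total,power.limit --format=csv,noheader
# Sample is hard-coded to match the tables above (40 GB ≈ 40960 MiB).
sample = "NVIDIA A100-SXM4-40GB, 535.129.03, 40960 MiB, 400.00 W"

# The queried fields contain no embedded commas, so a plain split is sufficient.
name, driver, vram, power_limit = (field.strip() for field in sample.split(","))

print(name)         # NVIDIA A100-SXM4-40GB
print(driver)       # 535.129.03
print(vram)         # 40960 MiB
print(power_limit)  # 400.00 W
```

On a live instance, the same parsing applies to each line of the real command's output, one line per GPU.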