
Lambda

A100x8 | 320 GB VRAM | gpu_8x_a100

Explore Lambda's A100 cloud instance specifications and benchmarks. Compare hardware configurations and performance metrics to optimize your AI and ML workloads.

LLM Benchmark Comparison

Compare performance metrics between different language models

Hardware Specifications

GPU Configuration
GPU Type: A100
GPU Interconnect: PCIe
GPU Model Name: NVIDIA A100-SXM4-40GB
Driver Version: 550.127.05
GPU VRAM: 320 GB
Power Limit: 400.00 W
GPU Temperature: 33 °C
GPU Clock Speed: 210 MHz
Memory Clock Speed: 1215 MHz
Pstate: P0

CPU Configuration
Model Name: AMD EPYC 7542 32-Core Processor
Vendor ID: AuthenticAMD
CPUs: 124
CPU Clock Speed: 5800.00
Threads Per Core: 1
Cores Per Socket: 62
Sockets: 2

Memory
Total: 1.7 TB
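The aggregate figures above can be cross-checked with a little arithmetic: 320 GB of VRAM is eight 40 GB A100s (the instance is an A100x8 and the model name is A100-SXM4-40GB), and 124 vCPUs is two 62-core sockets with one thread per core. A quick sanity-check sketch:

```python
# Sanity-check the aggregate figures from the spec table above.

GPUS = 8                 # A100x8 instance
VRAM_PER_GPU_GB = 40     # NVIDIA A100-SXM4-40GB

SOCKETS = 2
CORES_PER_SOCKET = 62
THREADS_PER_CORE = 1

total_vram_gb = GPUS * VRAM_PER_GPU_GB
total_vcpus = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE

print(total_vram_gb)  # 320, matching the GPU VRAM row
print(total_vcpus)    # 124, matching the CPUs row
```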

Disk Specifications

Storage
Total: 6467.60 GB

Available Disks

Disk 1
Model: vda
Size: 5.9 TB
Type: HDD
Mount Point: Unmounted

Disk 2
Model: vdb
Size: 426K
Type: HDD
Mount Point: Unmounted
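The storage total (6467.60 GB, decimal) and the size reported for vda (5.9 TB) look inconsistent at first glance, but they reconcile if the per-disk figure is in binary units (TiB), as tools like lsblk report by default. A small conversion sketch, assuming that interpretation:

```python
# Reconcile the decimal-GB storage total with an lsblk-style binary size.
# Assumption: the "5.9 TB" shown for vda is actually 5.9 TiB (2**40 bytes).

total_bytes = 6467.60e9      # 6467.60 GB, decimal
tib = total_bytes / 2**40    # convert to binary terabytes (TiB)

print(round(tib, 1))  # 5.9 -- matches the size shown for vda
```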

Software Specifications

OS: Ubuntu
OS Version: 22.04.5 LTS (Jammy Jellyfish)
CUDA Driver: 12.4
Docker Version: 27.4.0
Python Version: 3.10.12

Benchmarks

ffmpeg: 251 ms
CoreMark (iterations per sec): 25342.119
Llama 2 Inference (tokens per sec): 42.72
TensorFlow MNIST Training: 2.557
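The Llama 2 inference figure can be turned into a rough latency budget: at 42.72 tokens per second, a 1,000-token completion takes around 23 seconds. A quick estimate:

```python
# Rough generation-time estimate from the benchmark's tokens/sec figure.

TOKENS_PER_SEC = 42.72       # Llama 2 inference benchmark above
completion_tokens = 1000     # example completion length

seconds = completion_tokens / TOKENS_PER_SEC
print(round(seconds, 1))  # ~23.4 s for a 1,000-token completion
```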

Launch instance

Cloud: Lambda
GPU Type: A100
Shadeform Instance Type: A100x8
Cloud Instance Type: gpu_8x_a100
Spin-Up Time: 5-10 mins
Hourly Price: $10.32
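Instances like this can also be launched programmatically through Lambda's Cloud API rather than the console button. The sketch below only assembles the launch payload and estimates monthly cost from the hourly price; the endpoint path, field names, and region value are assumptions based on Lambda's public API docs, so verify them before use:

```python
# Hedged sketch: build a launch request body for this instance type.
# The field names and endpoint mentioned below are assumptions -- check
# Lambda's current Cloud API documentation before relying on them.

HOURLY_PRICE_USD = 10.32

def build_launch_request(ssh_key: str, region: str = "us-east-1") -> dict:
    """Assemble the JSON body for a (hypothetical) launch call."""
    return {
        "region_name": region,                 # assumed field name
        "instance_type_name": "gpu_8x_a100",   # matches the table above
        "ssh_key_names": [ssh_key],            # assumed field name
    }

# This body would be POSTed to an instance-launch endpoint on
# cloud.lambdalabs.com (assumed) with an API key as a Bearer token.

req = build_launch_request("my-key")
print(req["instance_type_name"])             # gpu_8x_a100
print(round(HOURLY_PRICE_USD * 24 * 30, 2))  # ~7430.4 USD for a 30-day month
```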

