
CoreWeave

AI cloud with on-demand NVIDIA GPUs, fast storage, and orchestration, offering transparent per-hour rates for the latest accelerators and fleet scale for training and inference.
Starting Price: On demand from $42 per hour (GB200 NVL72); B200 from $68.80 per hour
Category: data
Setup Time: < 2 minutes
Difficulty: Beginner
Status: Active
Type: Web App

Try CoreWeave

What is CoreWeave?

Discover how CoreWeave can enhance your workflow

CoreWeave provides specialized GPU infrastructure across regions, with options ranging from A100 to B200 and GB200-class systems. The public pricing page lists hourly rates for select SKUs with on-demand access, while larger reservations and multi-node clusters are handled through sales. Users deploy via Kubernetes, scale jobs elastically, and attach high-performance storage. The platform targets model training, fine-tuning, and real-time inference, and is used by enterprises and labs that need capacity without building their own data centers. Billing is usage based, with per-hour GPU prices plus storage and networking charges. Documentation covers templates and best practices for throughput and reliability.

Key Capabilities

What makes CoreWeave powerful

On Demand GPUs

Launch B200- or GB200-class instances on demand with per-hour pricing, and scale capacity up or down for experiments or production.

Implementation Level: Professional

Kubernetes & Storage

Run jobs on managed Kubernetes with high-performance storage so data pipelines keep GPUs saturated.

Implementation Level: Professional
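As a rough illustration of the managed-Kubernetes workflow described above, a GPU training job can be expressed as a standard Kubernetes Job that requests GPUs and mounts storage. This is a minimal sketch: the job name, container image, and storage claim are hypothetical, and CoreWeave's documentation defines the actual node selectors, GPU resource names, and storage classes to use.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: finetune-llm                # hypothetical job name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: registry.example.com/trainer:latest  # hypothetical image
          resources:
            limits:
              nvidia.com/gpu: 8     # request 8 GPUs for this pod
          volumeMounts:
            - name: datasets
              mountPath: /data      # dataset mount for the training pipeline
      volumes:
        - name: datasets
          persistentVolumeClaim:
            claimName: datasets-pvc # hypothetical high-performance storage claim
```

Submitted with `kubectl apply -f job.yaml`, a manifest like this lets the cluster schedule the pod onto a GPU node and scale out by running additional workers.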

Right Sizing & Regions

Choose GPU, memory and region combinations that meet SLA and cost targets for each workload.

Implementation Level: Intermediate

Reservations & Support

Work with sales for long-term reservations, multi-node clusters, and support plans that match your roadmap and budget.

Implementation Level: Professional

Professional Integration

These capabilities work together to provide a comprehensive AI solution that integrates seamlessly into professional workflows. Each feature is designed with enterprise-grade reliability and performance.

Key Features

What makes CoreWeave stand out

  • On-demand NVIDIA fleets including B200- and GB200-class GPUs
  • Per-hour pricing published for select SKUs
  • Elastic Kubernetes orchestration and job scaling
  • High-performance block and object storage
  • Multi-region capacity for training and inference
  • Templates for LLM fine-tuning and serving
  • Private networking and security options
  • Support for reservations and larger clusters

Use Cases

How CoreWeave can help you

  • Spin up multi-GPU training clusters quickly
  • Serve low-latency inference on modern GPUs
  • Run fine-tuning and evaluation workflows
  • Burst capacity during peak experiment periods
  • Run disaster-recovery or secondary-region workloads
  • Benchmark new architectures on the latest silicon
  • Compare cost models across GPU SKUs
  • Build hybrid setups with on-prem capacity plus cloud overflow
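The cost-model comparison mentioned above can be sketched in a few lines of Python. The two rates below are the examples published on this page; any other SKUs would need rates filled in from the live pricing page, and the estimate covers compute only (storage and egress bill separately).

```python
# Rough per-hour cost comparison across GPU SKUs.
# Rates are the two examples quoted on this page; add others from the
# live pricing page as needed.
HOURLY_RATES = {
    "GB200 NVL72": 42.00,
    "B200": 68.80,
}

def training_cost(sku: str, nodes: int, hours: float) -> float:
    """Estimated compute cost: nodes x hourly rate x hours (excludes storage/egress)."""
    return nodes * HOURLY_RATES[sku] * hours

# Compare a 24-hour run on 4 nodes for each SKU.
for sku in HOURLY_RATES:
    print(f"{sku}: ${training_cost(sku, nodes=4, hours=24):,.2f}")
```

A sweep like this makes it easy to see how run length and node count interact with per-hour rates before committing to a reservation.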

Perfect For

ML teams, research labs, SaaS platforms, and enterprises needing reliable GPU capacity without building their own data centers

Pricing

Start using CoreWeave today

On demand from $42 per hour (GB200 NVL72); B200 from $68.80 per hour

Starting price

Get Started

Quick Information

Category: data
Pricing Model: Paid
Last Updated: 1/8/2026

Compare CoreWeave with Alternatives

See how CoreWeave stacks up against similar tools

Frequently Asked Questions

What is the starting price per hour?
Published examples show GB200 NVL72 from $42 per hour and B200 from $68.80 per hour; other SKUs vary by region and configuration.
Can I reserve capacity for months?
Yes, contact sales for committed reservations, dedicated clusters, and custom networking.
How do I run training jobs?
Use templates and Kubernetes, attach storage, and scale workers, then monitor throughput to keep GPUs busy.
Is there a free tier?
There is no free tier; billing is usage based for compute, storage, and egress.
Do you support multi-region redundancy?
Yes, customers deploy in multiple regions for resilience and traffic distribution.
Can I bring my own images and frameworks?
You can run common ML stacks or custom containers with your own dependencies.
How is security handled?
Private networking, isolation, and access controls are provided, along with enterprise agreements for regulated workloads.
Where do I see all GPU prices?
A live pricing page lists select SKUs; contact sales for configurations not shown.

Similar Tools to Explore

Discover other AI tools that might meet your needs


Akkio

data

No-code AI analytics for agencies and businesses to clean data, build predictive models, analyze performance, and automate reporting with team-friendly pricing.

Free trial / Starts $49 per month Learn More

Algolia

data

Hosted search and discovery with ultra fast indexing, typo tolerance, vector and keyword hybrid search, analytics and Rules for merchandising across web and apps.

Free / Usage based Learn More

Alteryx

data

Analytics automation platform that blends and preps data, builds code free and code friendly workflows, and deploys predictive models with governed sharing at scale.

Starts $250 per user per month Learn More

Baseten

specialized

Serve open-source and custom AI models with autoscaling, cold-start optimizations, and usage-based pricing that includes free credits, so teams can prototype and scale production inference fast.

Free credits, usage based pricing Learn More

BentoML

coding

Open-source toolkit and managed inference platform for packaging, deploying, and operating AI models and pipelines, with clean Python APIs, strong performance, and clear operations.

Free (OSS) / By quote Learn More

Cerebras

specialized

AI compute platform known for wafer-scale systems and cloud services, plus a developer offering with token allowances and code-completion access for builders.

Starts $50 per month (developer) / contact sales Learn More