
Novatech Apex MAX WS9995X — 96-Core Threadripper PRO + RTX PRO 6000 96GB — AI Training, Inference & 3D Rendering Workstation

Ships Within 7 Business Days

The Novatech Apex WS9995X is the pinnacle of workstation performance, engineered for enterprise professionals, researchers, and creators who demand uncompromising power. At its heart is the AMD Ryzen Threadripper PRO 9995WX, a 96-core, 192-thread CPU designed to tackle the heaviest workloads with unmatched efficiency.

Paired with the NVIDIA RTX PRO 6000 GPU with 96GB of ECC VRAM, the WS9995X delivers exceptional acceleration for AI training, machine learning, HPC, 3D rendering, engineering simulations, and scientific computing.

With a massive 512GB of DDR5 ECC memory and 10TB of blazing-fast NVMe Gen 5 SSD storage (2TB + 4TB + 4TB), this system ensures seamless multitasking, rapid data access, and the capacity to handle the largest datasets and projects.

Cooling is managed by an ASUS ProArt 420mm liquid cooling system, while a 1200W 80+ Gold PSU provides stable, efficient power delivery. The full-tower SilverStone SETA H2 chassis offers professional expandability and airflow for future growth.

Every Novatech workstation is assembled and stress-tested in the USA, backed by lifetime technical support and a 3-year limited hardware warranty.

Whether you’re driving AI innovation, running advanced simulations, producing 8K video, or visualizing complex 3D models, the Apex WS9995X stands as the ultimate platform for professionals who refuse to compromise.


[Extreme AI & Machine Learning Performance]
Built with AMD Ryzen Threadripper PRO CPUs (up to 96 cores) and NVIDIA RTX GPUs including the RTX PRO 6000, RTX 5090, and RTX 5080. Designed for deep learning, neural networks, and AI model training with CUDA acceleration.

[Scalable Memory & Storage]
Choose configurations with up to 512GB DDR5 ECC RAM and 10TB of ultra-fast NVMe Gen 5 SSD storage. Perfect for data science, predictive analytics, and large-scale simulations requiring maximum speed and stability.

[Professional 3D Rendering & Design]
Handle demanding CAD, architecture, 3D modeling, and real-time rendering with workstation-class GPU power. Optimized for creative professionals and design studios requiring high-fidelity visuals and reliable compute.

[Gaming & Content Creation Powerhouse]
Play the latest AAA titles in 4K with ultra settings while seamlessly handling 8K video editing, streaming, and digital content creation. Balanced for both professional workloads and elite gaming performance.

[Buy With Confidence – Assembled & Supported in the USA]
All Novatech workstations are assembled and stress-tested in the USA, backed by lifetime technical support and a 3-year limited hardware warranty for guaranteed peace of mind.

Processor: Threadripper PRO 9995WX
RAM: 512GB DDR5 4800MHz
Graphics Card: RTX PRO 6000
Storage: 10TB
Operating System: Windows/Linux
Power Supply: 1200W 80+ Gold
SAVE $1,500.00 (4% OFF)
Regular price $39,999.00
Sale price $38,499.00
3-Year Warranty
Free Expedited Shipping*
Free Returns

Specs

  • GPU: NVIDIA RTX PRO 6000 – 96GB GDDR7 ECC VRAM
  • Processor: AMD Ryzen Threadripper PRO 9995WX – 96 Cores, 192 Threads
  • RAM: 512GB DDR5 ECC Registered Memory (Expandable)
  • Storage: 10TB NVMe Gen 5 SSD (2TB + 4TB + 4TB)
  • PSU: 1200W ATX 3.1 – 80+ Gold, Fully Modular
  • Cooling: ASUS ProArt LC 420mm AIO Liquid Cooler
  • Connectivity: Wi-Fi 6E + Bluetooth (Integrated with ASUS WRX90E-SAGE SE)
  • Operating System: Windows 11 Pro Pre-Installed (TPM 2.0 Compliant)
  • Case: SilverStone SETA H2 – Full-Size Workstation Tower
  • Warranty: 3-Year Limited Hardware Warranty + Lifetime Technical Support

Shipping & Assembly

  • Assembly: Fully assembled, tested, and ready to use out of the box.

  • Plug & Play: Just connect peripherals and power, no setup required.

  • Shipping: Usually ships within 5-7 business days 

  • Free shipping across the USA (contiguous 48 states)

  • Packaging: Foam-protected internals, secure outer box 

Financing & Support

  • Payment Options: Credit/Debit, Amazon Pay, Shop Pay, PayPal 

  • Financing: Available via third-party payment partners (e.g., Shop Pay Installments, Affirm)
  • Support: U.S.-based technical assistance
  • Returns: 30-day return window

Performance & Capability

Enterprise-aligned estimates for model fit, inference throughput, diffusion performance, and cloud ROI framing.

Capability

Model Capability Overview

Model / Class | Quantization | Inference | Fine-tuning
Llama 3 70B | FP16 / 4-bit | ✅ | ✅
Mixtral 8x22B | FP16 / 4-bit | ✅ | ✅
13B–34B class | FP16 | 🚀 | 🚀
405B (sharded) | 4-bit | ⚠️ | ⚠️

Legend

✅ = Supported / recommended
🚀 = Excellent / best performance
⚠️ = Possible, but with constraints

Actual capability varies by framework, context length, batch size, and optimization stack.
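The model-fit picture above can be sanity-checked with simple weight-size arithmetic. A minimal sketch in Python, assuming weights dominate VRAM and a flat 20% overhead for KV cache and runtime buffers (both are rough assumptions, not measured values):

```python
# Rough VRAM-fit estimate for LLM inference (a sketch, not a benchmark).
# Assumption: weights dominate memory; ~20% overhead covers KV cache,
# activations, and runtime buffers. Real usage varies by context length.

BYTES_PER_PARAM = {"fp16": 2.0, "8bit": 1.0, "4bit": 0.5}

def vram_needed_gb(params_billion: float, precision: str, overhead: float = 1.2) -> float:
    """Estimated GB of VRAM to serve a model of the given size."""
    weight_gb = params_billion * BYTES_PER_PARAM[precision]
    return weight_gb * overhead

def fits(params_billion: float, precision: str, vram_gb: float = 96.0) -> bool:
    """Does the estimate fit in a single card's VRAM budget?"""
    return vram_needed_gb(params_billion, precision) <= vram_gb

# Example: Llama 3 70B on a 96GB card
print(round(vram_needed_gb(70, "fp16"), 1))  # 168.0 -> needs quantization or sharding
print(fits(70, "4bit"))                      # True  -> ~42 GB fits in 96 GB
```

At FP16 a 70B model needs roughly 168 GB, so it must be quantized or sharded; at 4-bit the same model fits comfortably on the single 96 GB card, which is why the 405B row carries a constraint flag.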

Throughput

Token Throughput Estimates

Model | Precision | Estimated tok/sec
7B | FP16 | 600–900
13B | FP16 | 350–600
34B | FP16 | 180–300
70B | FP16 | 120–200
70B | 4-bit | 220–350

Estimates assume an optimized inference stack (e.g., vLLM / TensorRT-LLM). Batch inference can materially increase throughput.
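The ranges above translate directly into latency and daily-capacity figures. A quick back-of-envelope helper, using the mid-range 4-bit 70B number (~280 tok/sec) as an illustrative assumption:

```python
# Back-of-envelope serving math from the throughput estimates above
# (a sketch; real tok/sec depends on batch size, context, and stack).

def seconds_for_tokens(tokens: int, tok_per_sec: float) -> float:
    """Wall-clock time to generate `tokens` at a sustained rate."""
    return tokens / tok_per_sec

def tokens_per_day(tok_per_sec: float, utilization: float = 0.5) -> int:
    """Daily token budget at a given average utilization (0-1)."""
    return int(tok_per_sec * 86_400 * utilization)

# 70B at 4-bit, mid-range estimate (~280 tok/sec, assumed):
print(round(seconds_for_tokens(1_000, 280), 1))  # 3.6 s per 1k-token response
print(tokens_per_day(280, utilization=0.5))      # 12_096_000 tokens/day at 50% duty
```

Even the conservative end of these ranges supports interactive chat latency for a single stream; batch inference multiplies the daily budget further.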

Generative

Image & Diffusion Performance

Workflow / Model | Estimated performance
Stable Diffusion XL | ~1–2 sec/image (optimized)
SD Turbo | Sub-second generations (pipeline dependent)
Video diffusion (optimized) | Near real-time, depending on resolution

Render time depends on resolution, steps, scheduler, and pipeline optimizations.

ROI

Cloud Cost Comparison

Scenario | Estimate
H100-class cloud instance | $90–$110/hr (varies by region/commitment)
8 hrs/day utilization | ~$20k–$26k/month
Directional break-even | Often ~4–6 months (utilization dependent)

Cloud rates vary by region, commitment, and availability. Use as directional procurement framing.
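The break-even framing reduces to two lines of arithmetic. A sketch using this system's sale price and a mid-range hourly rate (both treated as assumptions; at a sustained 8 hr/day the payback lands faster than the conservative 4–6 month range, which allows for lower real-world utilization):

```python
# Directional cloud-vs-owned break-even (a sketch; the rate, hours,
# and 30 billable days/month are assumptions, not quotes).

def monthly_cloud_cost(rate_per_hr: float, hrs_per_day: float, days: int = 30) -> float:
    """Monthly spend on an hourly cloud GPU instance."""
    return rate_per_hr * hrs_per_day * days

def breakeven_months(hardware_price: float, rate_per_hr: float, hrs_per_day: float) -> float:
    """Months of cloud spend that equal the one-time hardware price."""
    return hardware_price / monthly_cloud_cost(rate_per_hr, hrs_per_day)

# $38,499 sale price vs. a $100/hr H100-class instance at 8 hrs/day:
print(monthly_cloud_cost(100, 8))                  # 24000 (inside the ~$20k-26k row)
print(round(breakeven_months(38_499, 100, 8), 1))  # 1.6 months at that utilization
```

Swap in your own rate, hours, and hardware price; the crossover point is linear in all three.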

Private AI Infrastructure, Without Cloud Tradeoffs

Deploy an on-prem AI server or enterprise AI workstation designed for LLM training hardware, fine-tuning, and secure internal workflows—without giving up performance or control.

Max configuration highlight

Reference build for LLM training & inference

Designed for teams that want predictable throughput and private data workflows—scalable from a strong base config to a fully-loaded build.

Dual RTX Pro 6000 Blackwell · 512GB RAM (scalable) · 8TB NVMe storage · Threadripper platform · LLM training hardware & inference

Base configurations start smaller (e.g., 1× GPU / 128GB RAM / 1TB), then scale to match your model size, context, and concurrency requirements.

Use cases

Enterprise Workloads

Designed for teams that need private AI infrastructure—where governance, predictable throughput, and dedicated GPU capacity matter.

AI inference server for internal copilots

Host a private LLM for engineering, support, HR, or ops—behind your firewall.

  • Predictable throughput for daily usage
  • Integrates with SSO/VPN/VPC
  • Private AI infrastructure for sensitive prompts

On-prem fine-tuning & evaluation

Iterate on domain-specific models with controlled datasets and repeatable benchmarking.

  • LLM training hardware optimized for iteration speed
  • Reproducible runs for model governance
  • Keep weights and datasets local
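For scale, parameter-efficient fine-tuning (LoRA) trains only a sliver of a large model. A rough estimate in Python; the layer shapes assume a Llama-3-70B-like architecture with grouped-query attention and are illustrative, not published figures:

```python
# Rough LoRA trainable-parameter count (a sketch with assumed shapes).
# LoRA adds rank * (d_in + d_out) parameters per adapted weight matrix.

def lora_params(rank: int, shapes: list[tuple[int, int]], layers: int) -> int:
    """Total trainable parameters across all adapted matrices."""
    per_layer = sum(rank * (d_in + d_out) for d_in, d_out in shapes)
    return per_layer * layers

# Adapt q/k/v/o projections; q and o are 8192x8192, k and v are
# 8192x1024 under grouped-query attention (assumed dimensions).
attn_shapes = [(8192, 8192), (8192, 1024), (8192, 1024), (8192, 8192)]
trainable = lora_params(rank=16, shapes=attn_shapes, layers=80)
print(trainable)                   # 65536000 -> ~65.5M adapter params
print(f"{trainable / 70e9:.3%}")   # ~0.094% of a 70B base model
```

Because only these adapters (plus their optimizer state) need gradients, iterating on a quantized 70B base, QLoRA-style, becomes practical within this VRAM budget.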

Vision, video, & multimodal pipelines

Run GPU-heavy workflows for QA, inspection, security review, and media analytics.

  • High VRAM headroom for large vision models
  • Batchable inference workloads
  • Better control vs. per-minute cloud billing

Dual GPU AI server for concurrent teams

Serve multiple users or multiple models with headroom for peak usage.

  • Partition workloads across GPUs
  • Higher concurrency for chat + retrieval
  • Supports larger contexts and tools
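Concurrency headroom is largely a KV-cache budget question. A sizing sketch; the layer count, KV width, and FP16 cache precision are assumptions for a Llama-3-70B-class model served on 2 × 96 GB GPUs:

```python
# Rough concurrency sizing for a multi-user LLM server (a sketch; the
# model dimensions assume a Llama-3-70B-like architecture with GQA).

def kv_gb_per_session(ctx_tokens: int, layers: int = 80, kv_dim: int = 1024,
                      bytes_per_val: int = 2) -> float:
    """KV-cache memory (GB) for one session's full context (K and V)."""
    return 2 * layers * kv_dim * bytes_per_val * ctx_tokens / 1e9

def max_sessions(total_vram_gb: float, weights_gb: float, ctx_tokens: int,
                 reserve_gb: float = 10.0) -> int:
    """Worst-case concurrent full-context sessions in remaining VRAM."""
    free = total_vram_gb - weights_gb - reserve_gb
    return int(free // kv_gb_per_session(ctx_tokens))

# Dual RTX PRO 6000 (2 x 96 GB), 70B weights at 4-bit (~35 GB), 8k context:
print(round(kv_gb_per_session(8192), 2))  # 2.68 GB per session
print(max_sessions(192, 35, 8192))        # 54 concurrent 8k sessions
```

Real servers (vLLM and similar) use paged KV caches and continuous batching, so effective concurrency is usually better than this worst-case full-context estimate.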

Why on-prem?

Control, Security & Predictable Performance

If workloads are steady—or data can’t leave your environment—on-prem can be the simplest path to consistent performance and privacy.

  • Private AI infrastructure: keep prompts, documents, and weights inside your environment.
  • Predictable performance: dedicated GPU capacity for inference and fine-tuning.
  • Cost stability: avoid hourly cloud spikes when usage is steady or growing.
  • Compliance alignment: simpler governance for regulated data flows.

Need hybrid? Many teams keep sensitive retrieval and inference on-prem while bursting less sensitive training jobs to cloud.

Important

Who This Is NOT For

We’ll tell you if you’re better served by cloud or a smaller system. This is the right fit when you need dedicated GPU infrastructure and plan to use it.

  • Occasional AI usage only (a few hours per month)
  • Need instant 50+ GPU burst scaling for short spikes
  • Not ready to manage deployment and model ops (we can help, but it’s still a responsibility)
  • Looking for a “gaming PC” experience — this is enterprise AI server/workstation hardware

Contact

Talk to Sales

Get a quick recommendation and a quote based on your model, throughput needs, and on-prem requirements.

Ready for sizing & pricing?

Email us your model, expected users/concurrency, and whether data must stay on-prem.

Email Sales Copy email
Email sales@novatechsolutions.ai

Fast path: include your model name, expected users/concurrency, and whether your data must remain on-prem.

3D & Video Production Performance

Directional, production-aligned expectations for 3D rendering, simulation, and professional video workflows.

Works great with

CPU: 9995WX GPU: Dual RTX Pro 6000 Memory: Up to 512GB Storage: Up to 10TB NVMe

3D Rendering

Rendering & Simulation (Directional)

Software | Workload | Expected performance class
Blender (Cycles) | GPU path tracing / final-frame rendering | ✅ Designed to handle complex scenes; performance varies by scene fit and settings
Cinema 4D + Redshift | GPU rendering + lookdev | ✅ Strong RTX throughput; scaling depends on scene and renderer
OctaneRender | Photoreal path tracing | ✅ Built for heavy RTX workloads; multi-GPU behavior depends on your scenes
Unreal Engine 5 | Nanite + Lumen viewport | ✅ Designed for large projects; GPU and RAM headroom aid complex assets and editor responsiveness

Use these as directional expectations. Actual performance varies by renderer settings, scene fit in VRAM, and effects complexity.

Legend

🚀 = Excellent / best-in-class
✅ = Supported / recommended
⚠️ = Possible with constraints

Video Production

Editing, Color, Delivery (Directional)

Software | Workflow | Typical workflows supported
DaVinci Resolve | 4K–8K editing + grading (GPU accelerated) | ✅ Built for GPU-accelerated editing and effects; codec/effects dependent
Adobe Premiere Pro | Multi-stream 4K editing + GPU effects | ✅ Excellent workflow support; depends on codecs and effects stack
After Effects | Large comps + RAM-heavy projects | ✅ Designed to handle large comps with heavy caching (project dependent)

Results depend on codec, effects stack, GPU acceleration settings, and storage throughput.

Large Project Handling

Built for Heavy Scenes & Complex Timelines

Resource | Benefit
512GB DDR5 ECC RAM | Massive scenes, large After Effects comps
Up to 10TB NVMe storage | Fast project and asset loading
RTX PRO 6000 96GB VRAM | Large textures, heavy scenes
Threadripper PRO 9995WX | High viewport and CPU simulation performance

Designed to handle large textures, heavy scenes, and RAM-intensive comps—especially when projects stay on fast NVMe storage.

Render Speed Comparison

Quick Value Comparison (Directional)

System | Relative render speed
Typical RTX 5080 workstation | 1.0× (baseline)
Single RTX 5090 or RTX Pro 6000 workstation | ~1.2–1.4×
Dual high-end GPU workstation (same class) | ~2.0–2.6×

Directional comparison for GPU-rendered workloads. Actual results vary by renderer support, scene fit, and multi-GPU scaling behavior.

Note: All guidance is directional. Actual results vary by scene complexity, codecs, effects, renderer settings, and software versions.

NOVATECH Apex WS9995X: Enterprise Power. AI-Driven. Future-Ready.
Experience extreme performance with the AMD Ryzen Threadripper PRO 9995WX (96 cores, 192 threads) and NVIDIA RTX PRO 6000 with 96GB VRAM. Built for AI, HPC, data science, 3D rendering, simulation, and content creation, the Apex WS9995X is the ultimate workstation for professionals who demand it all.
AI & Data Science Optimized
Harness 96 cores and NVIDIA RTX PRO 6000 acceleration to power through AI model training, deep learning, and big data analytics without compromise.
Scalable Power
With 512GB DDR5 ECC RAM and 10TB of NVMe Gen 5 storage, this workstation is engineered for massive datasets, predictive modeling, and enterprise workloads.
Fast & Efficient
The Threadripper PRO 9995WX delivers industry-leading multithreaded performance, ensuring smooth multitasking, real-time rendering, and responsive workflows at every stage.
Cool & Reliable
Advanced ASUS ProArt 420mm liquid cooling and a 1200W 80+ Gold PSU keep the system quiet, efficient, and stable under sustained heavy loads.
Unleash Enterprise-Grade Productivity With Novatech Apex Workstations
Transform your company’s workflows with faster AI model training, real-time rendering, and reliable performance — designed for professionals who need maximum power without compromise.
Accelerate Innovation

Cut model training times and speed up simulations with the AMD Threadripper PRO 9995WX (96 cores, 192 threads) and NVIDIA RTX PRO 6000 with 96GB VRAM.

Reliable For Business Workloads

Run AI, HPC, data analytics, 3D rendering, and engineering simulations without bottlenecks — keeping teams productive and projects on schedule.

Scale With Confidence

With 512GB DDR5 ECC memory, 10TB of Gen 5 NVMe storage, and a future-ready WRX90E platform, your workstation grows with your company’s needs.

Why Choose the Novatech AI Workstation?
Built for AI professionals, researchers, and creators who need uncompromising speed, reliability, and future-ready performance.
Is it powerful enough for machine learning?
Yes — with up to 96 cores / 192 threads and the NVIDIA RTX PRO 6000 (96GB VRAM), the Apex WS9995X is purpose-built for AI training, deep learning, and neural networks at scale.
Can it handle big datasets and multitasking?
Absolutely. With up to 512GB of DDR5 ECC memory and 10TB of Gen 5 NVMe storage, it’s designed to process massive datasets while running multiple applications simultaneously.
Is it reliable for long workloads and 24/7 operation?
Yes — advanced liquid cooling, enterprise-grade ECC memory, and a 1200W 80+ Gold PSU ensure stability and reliability during continuous workloads and mission-critical tasks.
Will it stay relevant as technology advances?
Definitely. The ASUS WRX90E-SAGE SE motherboard supports PCIe 5.0, Wi-Fi 6E, and extensive expansion options, making it a future-ready platform for years to come.
Is this workstation only for AI?
Not at all. It’s also perfect for 3D rendering, video editing, CAD, simulations, and high-end content creation, making it versatile for any professional workflow.