Introduction
As generative AI and GPU-intensive computing reshape modern IT, traditional data centers are being forced to scale both compute and power delivery at unprecedented rates. Liquid cooling offers a path to managing the heat, but none of it matters without the electrical infrastructure to support it.
In this article, we cover:
- How to work with local utilities like Exelon/ComEd
- Typical power upgrade paths and approval processes
- Environmental stats comparing air-cooled vs. liquid-cooled power consumption
- Ideal rack and row design for AI environments
Why You Need More Power (A Lot More)
AI workloads powered by GPUs (like NVIDIA GB300/H100/H200) can consume 30kW to 60kW+ per rack, compared to just 8–12kW in a traditional air-cooled rack.
- Traditional Rack (Air-Cooled): 8–12kW
- AI Rack (Liquid-Cooled): 30–60kW
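To see what that jump means at the facility level, here is a minimal Python sketch that simply multiplies rack count by per-rack draw. The 40-rack deployment and the midpoint figures are illustrative assumptions, not a sizing study.

```python
# Minimal sketch: rough facility IT load for the same rack count under
# air-cooled vs. liquid-cooled densities. All inputs are illustrative
# assumptions, not a utility-grade load study.

def facility_it_load_kw(racks: int, kw_per_rack: float) -> float:
    """Total IT load in kW for a given rack count and per-rack draw."""
    return racks * kw_per_rack

# Hypothetical 40-rack deployment at the midpoint of each range above.
air_kw = facility_it_load_kw(racks=40, kw_per_rack=10)     # 8–12kW air-cooled
liquid_kw = facility_it_load_kw(racks=40, kw_per_rack=45)  # 30–60kW liquid-cooled

print(f"Air-cooled estimate:    {air_kw / 1000:.2f} MW")    # ~0.40 MW
print(f"Liquid-cooled estimate: {liquid_kw / 1000:.2f} MW")  # ~1.80 MW
```

That roughly four-fold jump in IT load is what drives the service-capacity upgrades below.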
To support these demands, your facility will likely need:
- Higher service capacity (1MW → 3MW+)
- New transformers or substations
- Redundant A/B power feeds and busways (a quick failover check is sketched after this list)
- Precision power distribution units (PDUs)
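For the A/B feed item above, a common sanity check is that either feed can carry the full critical load on its own if the other is lost. The sketch below applies that 2N rule; the feed rating, derate factor, and load are illustrative assumptions.

```python
# Minimal sketch of a 2N (A/B feed) failover check: either feed must be
# able to carry the full critical load alone. Rating, derate, and load
# below are illustrative assumptions.

def feed_survives_failover(critical_load_kw: float, feed_rating_kw: float,
                           derate: float = 0.8) -> bool:
    """True if a single feed, derated for headroom, covers the full load."""
    return critical_load_kw <= feed_rating_kw * derate

critical_load_kw = 1_800   # e.g. 40 liquid-cooled racks at ~45kW each
feed_rating_kw = 2_500     # hypothetical rating of each A/B feed

print(feed_survives_failover(critical_load_kw, feed_rating_kw))  # True (1800 <= 2000)
```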
Working with Exelon / ComEd in Chicago
Step 1: Load Forecast & Demand Planning
- Work with vLava and your MEP engineer to prepare a 24-month power demand forecast.
- Include density per rack, number of racks, and redundancy assumptions (a minimal forecast sketch follows this list).
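A minimal sketch of what such a forecast might look like in code, assuming a hypothetical ramp of 4 racks per month, 40kW racks, and a flat 1.2x allowance for redundancy and losses:

```python
# Minimal sketch of a 24-month demand forecast of the kind a capacity
# request is built on. Ramp rate, rack density, and redundancy factor
# are illustrative assumptions for a hypothetical site.

KW_PER_RACK = 40           # assumed liquid-cooled rack density
REDUNDANCY_FACTOR = 1.2    # assumed allowance for redundancy and losses
RACKS_ADDED_PER_MONTH = 4  # assumed deployment ramp

def monthly_demand_forecast_kw(months: int = 24) -> list[float]:
    """Projected utility demand (kW) at the end of each month."""
    forecast, racks = [], 0
    for _ in range(months):
        racks += RACKS_ADDED_PER_MONTH
        forecast.append(racks * KW_PER_RACK * REDUNDANCY_FACTOR)
    return forecast

demand = monthly_demand_forecast_kw()
print(f"Month 12: {demand[11]:,.0f} kW")  # 2,304 kW
print(f"Month 24: {demand[23]:,.0f} kW")  # 4,608 kW
```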
Step 2: Submit a Capacity Request to ComEd
- File a request for upgraded service or secondary feed via ComEd’s Business Customer portal.
- You may need site drawings and a certified engineer’s load analysis.
Step 3: Pre-Construction Coordination
- Coordinate trenching, conduit runs, pad mount locations, and meter relocation.
- Plan around permit lead times (4–12 weeks) and long transformer supply-chain lead times (a simple schedule roll-up is sketched below).
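As a rough planning aid, the sketch below rolls those lead times into an energization estimate. The durations are illustrative, and the sequencing (permits and transformer procurement in parallel, construction after) is a simplification; transformer lead times in particular vary widely.

```python
# Minimal sketch of a pre-construction lead-time roll-up. Durations are
# illustrative assumptions; the sequencing is a simplification.

LEAD_TIMES_WEEKS = {
    "permits": 12,               # upper end of the 4–12 week permit window
    "transformer_delivery": 40,  # assumed long-lead transformer supply chain
    "site_construction": 16,     # assumed trenching, conduit, and pad work
}

def weeks_to_energization(lt: dict[str, int]) -> int:
    """Critical path: the longer of permits vs. transformer, plus construction."""
    return max(lt["permits"], lt["transformer_delivery"]) + lt["site_construction"]

print(f"Estimated weeks to energization: {weeks_to_energization(LEAD_TIMES_WEEKS)}")  # 56
```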
Step 4: Commission & Verification
- Once infrastructure is in place, ComEd will test, inspect, and energize the new circuits.
- Prepare for possible peak shaving or demand response discussions (a rough demand-charge example is sketched below).
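If demand response comes up, the economics usually center on the monthly demand charge. The sketch below shows the basic arithmetic; the demand rate and kW figures are illustrative assumptions, not ComEd tariff values.

```python
# Minimal sketch of why peak shaving matters: lowering the billed monthly
# peak reduces the demand charge. Rate and loads are illustrative
# assumptions, not actual ComEd tariff values.

def monthly_demand_charge(peak_kw: float, rate_per_kw: float) -> float:
    """Demand charge = billed monthly peak (kW) x demand rate ($/kW)."""
    return peak_kw * rate_per_kw

baseline = monthly_demand_charge(peak_kw=3_000, rate_per_kw=15.0)  # $45,000
shaved = monthly_demand_charge(peak_kw=2_700, rate_per_kw=15.0)    # $40,500

print(f"Estimated monthly savings from 300kW of shaving: ${baseline - shaved:,.0f}")  # $4,500
```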
Environmental Impact: Air vs. Liquid Cooling
| Metric | Air-Cooled Rack | Liquid-Cooled Rack |
| --- | --- | --- |
| Avg. Power (per rack) | 8–12kW | 30–60kW |
| Cooling Efficiency (PUE) | 1.6–2.2 | 1.1–1.3 |
| Cooling Energy (annual, per rack) | ~70,000 kWh | ~25,000 kWh |
| Racks Supported (per MW) | ~80 racks | ~20 racks |
Liquid cooling not only cuts HVAC energy use but also lets you run far higher density in a smaller footprint, reducing real estate and operating cost per unit of compute.
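PUE (Power Usage Effectiveness) is total facility energy divided by IT energy, so the overhead spent on cooling and distribution is roughly PUE minus one. The sketch below applies that relation to the midpoints of the ranges in the table; treating all overhead as cooling is a simplification.

```python
# Minimal sketch of what the PUE ranges above imply: roughly (PUE - 1) kWh
# of overhead is burned for every kWh delivered to IT load. Treating all
# overhead as cooling is a simplification.

def overhead_kwh_per_it_kwh(pue: float) -> float:
    """Facility overhead energy per kWh of IT energy."""
    return pue - 1.0

air = overhead_kwh_per_it_kwh(1.9)     # midpoint of 1.6–2.2
liquid = overhead_kwh_per_it_kwh(1.2)  # midpoint of 1.1–1.3

print(f"Air-cooled:    {air:.1f} kWh overhead per kWh of IT load")     # 0.9
print(f"Liquid-cooled: {liquid:.1f} kWh overhead per kWh of IT load")  # 0.2
```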
Ideal Rack and Row Design for AI Environments
Typical Design for a Liquid-Cooled AI Cluster (a zone-level sizing sketch follows this list):
- Rack Power Rating: 30–45kW
- Rows Per Zone: 4–6 rows with 8–10 racks per row
- Cooling Topology: Rear Door Heat Exchangers or direct-to-chip CDU loop
- Power Setup: A/B feeds, metered PDUs, UPS + generator-backed
- Environmental Monitoring: Leak detection, flow sensors, cabinet-level temperature probes
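To put rough numbers on that layout, the sketch below sizes one zone at the midpoint of each stated range. Assuming nearly all electrical input must be rejected as heat is a common first-order estimate, not a CDU or rear-door selection.

```python
# Minimal sketch of zone-level sizing for the layout above, using the
# midpoint of each stated range. The heat-rejection figure assumes nearly
# all electrical input leaves the zone as heat (a first-order estimate).

ROWS_PER_ZONE = 5    # midpoint of 4–6 rows
RACKS_PER_ROW = 9    # midpoint of 8–10 racks
KW_PER_RACK = 38     # roughly the midpoint of 30–45kW

zone_racks = ROWS_PER_ZONE * RACKS_PER_ROW
zone_power_kw = zone_racks * KW_PER_RACK
zone_heat_kw = zone_power_kw  # heat load the CDU loop / rear doors must reject

print(f"Racks per zone:       {zone_racks}")        # 45
print(f"Zone electrical load: {zone_power_kw} kW")  # 1710 kW
print(f"Heat to reject:       {zone_heat_kw} kW")
```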
Final Thoughts
Transitioning to a liquid-cooled AI-ready environment is not just about compute—it’s about power. And in cities like Chicago, that means working hand-in-hand with Exelon/ComEd to ensure your facility can scale reliably and efficiently.
vLava Data has helped clients navigate utility engagement, MEP coordination, and AI workload deployment from edge to hyperscale.
Need help expanding your data center’s power and cooling profile?
📩 Email: power@vlavadata.com
🌐 www.vlavadata.com/blog