Data centers:
a power contract with a roof.
The asset class everyone wants exposure to and almost no individual investor can actually buy. Here's how data centers actually work, and where individual capital realistically fits.
TL;DR
A data center isn't a building. It's a power contract with a roof. The real estate is the wrapper around an entitlement to megawatts, and the megawatts are the entire investment.
AI demand pulled the industry into a supply crisis. Power queues run past 2030 in the major markets. Cap rates compressed. And almost none of this is actually accessible to individual investors except through REITs and a few DST wrappers. Here's what's real.
What's in here
- 1. What a data center actually is
- 2. Hyperscale vs colocation
- 3. The power constraint
- 4. Northern Virginia and Loudoun County
- 5. The AI demand surge
- 6. Tier classifications
- 7. PUE, MW, and the metrics that matter
- 8. Cap rates by tenancy and submarket
- 9. Lease structures (not normal NNN)
- 10. The 1031 fit
- 11. Common mistakes
- 12. If you're underwriting one right now
- 13. FAQ
1. What a data center actually is (and isn't)
A data center is a building purpose-built to house servers, networking equipment, and storage hardware, and to keep all of it powered, cooled, and connected to the public internet. That's the surface description.
The real description is: a data center is an entitled connection to the electrical grid, paired with a connection to long-haul fiber, wrapped in a building that provides cooling, redundancy, security, and physical access controls. The building is the cheapest part. The grid interconnect and the fiber are the expensive, slow parts.
The construction of the building takes 12-18 months. The power interconnect can take 5-7 years. So the question isn't "can we build a data center here?" — it's "do we already have power?"
This is why data center economics look more like utility economics than industrial real estate economics. The asset's value is the megawatts of contracted power, not the square footage. A 250,000 SF dry industrial box is worth maybe $30M. The same 250,000 SF wrapped around 50 MW of power capacity is worth $400-600M. The building hasn't changed. The entitlement has.
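The implied value of the entitlement can be backed out from those figures. A minimal sketch, using only the illustrative ranges quoted above (none of these numbers are market quotes):

```python
# Illustrative: implied value per MW of the power entitlement,
# using the dry-shell vs powered-building figures quoted above.

dry_shell_value = 30e6        # 250,000 SF industrial box, no interconnect
powered_value_low = 400e6     # same box wrapped around 50 MW of power
powered_value_high = 600e6
contracted_mw = 50

per_mw_low = (powered_value_low - dry_shell_value) / contracted_mw
per_mw_high = (powered_value_high - dry_shell_value) / contracted_mw

print(f"Implied entitlement value: ${per_mw_low/1e6:.1f}M-"
      f"${per_mw_high/1e6:.1f}M per MW")
# The building itself is roughly 5-8% of asset value;
# the power entitlement is essentially all the rest.
```

Note this back-of-envelope range lands close to the powered-shell pricing of $8-15M per MW cited later, which is the internal consistency you'd want to see in any broker package.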
2. Hyperscale vs colocation — two different businesses
There are two very different data center business models and you need to know which one you're talking about.
Hyperscale. A facility built for and leased entirely to one tenant. The tenants are AWS, Microsoft (Azure), Google Cloud, Meta, Oracle, Apple, and a handful of large enterprise users. These are 30-100+ MW facilities, sometimes campuses with multiple buildings. The lease is a 15-20 year triple net or modified net structure with the tenant providing the IT equipment and operating the inside. The landlord delivers the powered shell and the cooling infrastructure.
Hyperscale is functionally a sale-leaseback or build-to-suit business. The cap rate compression has been dramatic — institutional capital wants this exposure and there's almost no inventory.
Colocation. A multi-tenant facility where the operator (Equinix, Digital Realty, CyrusOne, CoreSite) provides power, cooling, and connectivity, and rents space to many enterprise customers. Customers might rent a single rack, a private cage, or a fully isolated suite. The operator runs the facility — security, network operations, hands-and-eyes service.
Colocation is an operating business with a real estate component, not just real estate. That's why Equinix and Digital Realty trade as operating REITs rather than pure landlord REITs. The operating margin matters as much as the rent roll.
3. The power constraint that drives everything
Power is the binding constraint. Always.
To build a hyperscale data center you need a substation interconnect from the regional transmission utility. In Northern Virginia that's Dominion Energy. In Texas that's Oncor or CenterPoint, with ERCOT as the grid operator. In Phoenix that's APS. In Chicago that's ComEd.
As of 2026, the active interconnect queues in the major data center markets look like this:
- Loudoun County, VA (Dominion): 5-7 years for new large-load interconnects. Some sites have been re-prioritized but the queue is real.
- Phoenix, AZ (APS): 4-6 years for new substations.
- Dallas / Fort Worth, TX (ERCOT region): 3-5 years, faster than ERCOT-coastal because of available transmission capacity.
- Atlanta, GA (Georgia Power): 3-5 years and tightening fast.
- Columbus, OH (AEP): 3-5 years, the new "second-tier" market everyone's racing into.
The consequence: a powered shell — a building that already has its substation connection — trades at a massive premium to an unpowered building. In some markets the powered shell is worth 3-5x what the same building would be worth without the interconnect. The price is the power, not the building.
4. Northern Virginia and why every data center is there
Loudoun County, Virginia is the largest data center market in the world. By a lot. Roughly 70% of global internet traffic touches a server in Northern Virginia at peak. The cluster sits along the Dulles Technology Corridor — Ashburn, Sterling, Leesburg.
Three converging things created this:
- 1990s fiber. AOL was in Reston. MAE-East, the original major internet exchange point, was in Tysons Corner. Long-haul fiber was already built out before the cloud existed.
- Dominion Energy. Historically had cheap power, willing to build substations for hyperscale customers, and a regulated rate structure that worked for the load profile.
- Loudoun County permitting. The county got friendly to data centers early, set up tax abatements, and built the political relationships. Once the cluster started, the network effects made it self-reinforcing.
Today the saturation is causing problems. Power constraints are real. Local pushback on land use is intensifying. Dominion is building out new transmission but it's slow. The growth is now spilling into secondary markets — Columbus, Atlanta, Phoenix, Reno, Salt Lake City. But Loudoun is still the price-setter for global hyperscale.
5. The AI demand surge and what it changed
From 2023 to 2025, AI training and inference workloads doubled and re-doubled the demand for compute. That demand translated directly into demand for power-dense data center capacity. The numbers got weird fast.
- Pre-AI typical rack: 5-10 kW.
- AI training rack (NVIDIA H100/H200): 30-80 kW.
- NVIDIA Blackwell GB200 reference rack: 120 kW+.
This 5-15x density increase broke the cooling model. Air cooling can't handle a 100 kW rack, so liquid cooling moved from "future" to "now." It also broke the construction model — facilities being built today are getting redesigned mid-construction to handle higher density.
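To see why the density jump broke the design model, compare how many racks a fixed IT load supports at each generation. A back-of-envelope sketch using midpoints of the rack figures above (illustrative, not a design calculation):

```python
# Back-of-envelope: rack count supported by 10 MW of critical IT load
# at the rack densities quoted above. Midpoints are illustrative.

it_load_kw = 10_000  # a 10 MW facility

densities_kw = {
    "pre-AI enterprise": 7.5,    # midpoint of 5-10 kW
    "H100/H200 training": 55,    # midpoint of 30-80 kW
    "GB200 reference": 120,
}

for label, kw_per_rack in densities_kw.items():
    racks = it_load_kw / kw_per_rack
    print(f"{label:>20}: ~{racks:,.0f} racks at {kw_per_rack} kW each")
# Same megawatts, roughly 16x fewer racks. The floor plan, the cooling
# loop, and the power distribution all get redesigned around that.
```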
The cap rate response was equally dramatic. Stabilized hyperscale cap rates compressed 50-75 bps between 2023 and 2026. Powered shell pricing went up. New market entrants (private equity, sovereign wealth, infrastructure funds) flooded in. The sector that was already capital-intensive became outright competitive.
6. Tier classifications and what they mean
The Uptime Institute's Tier classification (I-IV) is the standard taxonomy.
- Tier I: Single-path power and cooling, no redundancy. 99.671% expected uptime. Roughly 28 hours of downtime a year. Effectively obsolete for any institutional use.
- Tier II: Single-path with redundant components. 99.741% uptime. About 22 hours/year downtime. Small business or non-critical use.
- Tier III: Multiple power and cooling paths, only one active, but maintenance can happen without downtime. 99.982% uptime. About 1.6 hours/year. The standard for most enterprise colocation.
- Tier IV: Fully fault-tolerant, multiple active paths. 99.995% uptime. About 26 minutes/year. Required for mission-critical financial, government, and some healthcare workloads.
Most modern hyperscale builds are designed to Tier III standards but the customer (e.g., AWS) typically operates them to higher reliability through their own redundancy at the application layer. So you'll see "Tier III equivalent" or "Tier III+" in marketing materials.
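The downtime figures follow directly from the published uptime percentages. A quick check, assuming an 8,760-hour year:

```python
# Convert Uptime Institute tier availability targets (quoted above)
# into expected annual downtime.

HOURS_PER_YEAR = 8_760

tiers = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for tier, uptime_pct in tiers.items():
    downtime_hours = (1 - uptime_pct / 100) * HOURS_PER_YEAR
    print(f"{tier}: {downtime_hours:.1f} hours/year "
          f"({downtime_hours * 60:.0f} minutes)")
# Tier I  -> ~28.8 hours/year
# Tier IV -> ~26 minutes/year
```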
7. PUE, MW, and the metrics that matter
MW (megawatts). The total power capacity of the facility, measured in megawatts of IT load. A "10 MW data center" can support 10 MW of server power draw. This is the headline number.
Critical IT load vs total facility load. Critical IT load is what the servers actually use. Total facility load includes cooling, lighting, and security systems. The ratio between the two is the PUE.
PUE (Power Usage Effectiveness). Total facility power divided by IT equipment power. A perfect PUE is 1.0 — every watt entering the building is doing compute work. Industry average is around 1.55. Best-in-class hyperscale runs 1.10-1.25. Tropical or hot-climate facilities run higher because cooling is more expensive.
Density (kW per rack or per sq ft). How much power a single rack can draw, or how many watts per square foot the floor supports. Pre-AI: 100-150 W/SF was typical. AI-ready: 250-500+ W/SF.
Redundancy notation: N, N+1, 2N, 2N+1. N is the minimum capacity needed. N+1 means one extra. 2N means two completely independent systems. 2N+1 is paranoia-grade. More redundancy costs more capital and changes the cap rate.
WUE (Water Usage Effectiveness). Liters of water per kWh of IT load. Increasingly relevant in dry markets where local communities are pushing back on data center water use.
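PUE is a one-line calculation, but it flows straight into the power budget. A sketch using the PUE ranges above (the 10 MW IT load is a hypothetical):

```python
# PUE = total facility power / IT equipment power.
# Given a contracted IT load and a PUE, the required grid draw follows.

it_load_mw = 10.0  # hypothetical critical IT load

for label, pue in [("best-in-class hyperscale", 1.15),
                   ("industry average", 1.55)]:
    total_facility_mw = it_load_mw * pue
    overhead_mw = total_facility_mw - it_load_mw
    print(f"{label}: {total_facility_mw:.1f} MW from the grid, "
          f"{overhead_mw:.1f} MW of cooling and overhead")
# At a 1.55 PUE, a "10 MW" facility needs ~15.5 MW of interconnect
# capacity -- which is why PUE belongs in the power budget, not just
# the marketing deck.
```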
8. Cap rates by tenancy and submarket
2026 ranges:
- Stabilized hyperscale, investment-grade tenant (AWS/MSFT/GOOG), 15-20 year lease: 5.0-5.75%.
- Mid-size colocation, multi-tenant, established operator: 5.5-6.5%.
- Powered shell, pre-lease: Priced per MW of contracted power, $8-15M per MW.
- Edge / second-tier markets: 6.5-7.5%, depending on credit and lease term.
- Older Tier II facility, value-add play: 8.0-10%+, often priced at land value plus power-rights value.
The compression from pre-AI levels is real. To compare against other asset classes, see our cap rates guide. Data centers historically traded wider than core multifamily and now they trade tighter. That's the AI repricing.
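The repricing itself is simple division. A sketch of what 75 bps of compression does to value, holding income constant (the NOI figure is hypothetical; the cap rates are from the ranges above):

```python
# Value = NOI / cap rate. Hypothetical NOI; illustrative cap rates.

noi = 25e6  # hypothetical stabilized annual NOI, hyperscale shell

for label, cap_rate in [("pre-AI pricing", 0.0600),
                        ("2026 stabilized hyperscale", 0.0525)]:
    value = noi / cap_rate
    print(f"{label}: ${value/1e6:.0f}M at a {cap_rate:.2%} cap")
# 75 bps of compression on identical NOI is roughly a 14% value gain.
# That gain, with no change in the income, is the AI repricing.
```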
9. The lease structures (and why they're not normal NNN)
Hyperscale leases are not standard NNN leases even though brokers will sometimes call them that. The differences:
- Power pass-through. The tenant pays for the actual power they consume, often metered separately and reimbursed at cost. Some leases have a base rent plus power, some have a "wholesale colocation" structure with a fixed kW rate.
- Maintenance obligations split. The tenant typically maintains everything inside their cage or suite (servers, internal cabling, sometimes UPS at the rack level). The landlord maintains the building shell, the central cooling plant, the substation interconnect.
- Termination rights tied to power delivery. If the landlord can't deliver the contracted megawatts within an SLA, the tenant has remedies up to and including termination. This is real and it's been litigated.
- Renewal mechanics. Often staged at 5-year intervals with market-rate resets, or with options that depend on whether the tenant has expanded into adjacent capacity.
If you're looking at a deal labeled "data center NNN," read the lease. How the capital obligations split between landlord and tenant is the entire deal.
10. The 1031 fit
Data centers qualify for 1031 exchange, but the inventory problem is severe. Hyperscale facilities almost never trade individually — they move at the portfolio level between sponsors, or they're held long-term by REITs. Colocation facilities are held by operators who don't sell.
Realistic 1031 paths into data centers:
- DST with data center exposure. A handful of sponsors include data center holdings in industrial or specialty DSTs. See our DST guide — the fee structure matters and not all sponsors are equal.
- Powered land. Buying entitled industrial land in a data center submarket and either selling to a developer or executing a ground lease to a hyperscaler. Requires local knowledge and a lot of patience.
- Smaller edge or colocation facility. Sub-5 MW facilities sometimes trade individually, especially in secondary markets. Cap rates wider, operating risk higher.
What doesn't work: expecting a clean institutional-grade hyperscale shell to appear on the open market during your 45-day ID window.
11. Common mistakes
- Confusing powered shell with operating data center. A powered shell is real estate. An operating data center is a business. The cap rates and the diligence are different.
- Ignoring the power interconnect status. Always confirm whether the interconnect is contracted, energized, or merely "in queue." These are wildly different states.
- Underwriting at hyperscale cap rates for a colocation building. The credit and the operating risk are not comparable.
- Not accounting for cooling tech transitions. A pre-2023 air-cooled facility may need significant capital to support liquid cooling for AI workloads. That's a real reserve item.
- Treating the lease as standard NNN. It isn't. Read the actual document.
- Ignoring local political risk. Loudoun, Phoenix, and Atlanta have all seen data center pushback. Zoning rules can change. Tax abatements can expire.
12. If you're underwriting a data center right now
- Confirm the power. Get the substation interconnect agreement. Confirm energized capacity vs contracted capacity. Verify any expansion rights.
- Read the lease carefully. Power pass-through, maintenance split, termination rights, renewal mechanics. All of it.
- Get the PUE history. 24+ months. Trending up is bad news.
- Verify cooling capacity for AI workloads. Can the facility support 30+ kW per rack? 80? 120? This determines who can be the next tenant.
- Pull tenant financials. Even hyperscalers have credit nuances at the entity level. AWS the parent is one credit; the leasing subsidiary may be another.
- Check fiber providers. How many carriers are in the building? Diverse routes? This affects re-leasing risk.
- Confirm tax abatement status and expiration. Many data center markets have abatements that materially affect underwriting.
- Compare cap rates to the institutional comp set. If you're inside hyperscale pricing on a colo deal, you're probably overpaying.
13. FAQ
What's the difference between hyperscale and colocation data centers?
Hyperscale facilities are built for and leased entirely to one customer — usually AWS, Microsoft, Google, Meta, or Oracle. They run 30-100+ megawatts and the lease is essentially a power purchase agreement with a building attached. Colocation (colo) is multi-tenant — companies like Equinix and Digital Realty operate facilities where many enterprise customers rent rack space, cabinets, or cages. They're different businesses entirely, with different cap rates, different lease structures, and different buyer pools.
Why is Northern Virginia the data center capital of the world?
Three reasons converged. First, AOL and MAE-East built the original internet exchange there in the 1990s, so fiber routes were established early. Second, Dominion Energy historically had cheap, reliable power and was willing to build out for hyperscale. Third, Loudoun County's tax structure and permitting made it the path of least resistance. Today Loudoun handles roughly 70% of global internet traffic at peak. Power constraints are now the binding limit — Dominion's queue extends past 2030.
What does a data center cap rate look like in 2026?
Stabilized hyperscale shells with investment-grade credit tenants and 15-20 year leases trade 5.0-5.75%. Mid-size colocation facilities run 5.5-6.5%. Powered shells (entitled, built, but pre-lease) trade based on power capacity rather than cap rate — typically $8-15 million per megawatt of contracted power. The AI demand surge compressed institutional cap rates 50-75 bps from 2023 levels.
What is PUE and why does it matter?
PUE stands for Power Usage Effectiveness. It's the ratio of total facility power to IT equipment power. A perfect PUE is 1.0 (every watt goes to compute). Industry average is around 1.55. Best-in-class hyperscale runs 1.10-1.25. Lower PUE means more of the power your tenant pays for actually does compute work, which means lower effective cost per kilowatt and a more competitive facility. It's also the single most-quoted metric in any data center marketing deck.
Are data centers a good 1031 fit?
They qualify for 1031 like any commercial real estate, but the inventory problem is severe. Hyperscale assets trade at portfolio level between sponsors, not on the open market. Colocation facilities almost never trade individually. Realistic paths for a 1031 buyer are either a DST sponsor with data center exposure (a few exist, fees are not cheap), or a private sale-leaseback you source through a relationship. Don't plan to find one on LoopNet.
Should an individual investor own a data center?
Probably not directly. The capital scale is wrong (a single 30 MW hyperscale shell is $300M+), the operating expertise is specialized, and the deal flow is institutional. Individual investor exposure realistically comes through public REITs (Digital Realty, Equinix), private REITs from major sponsors, or DSTs with data center allocations. The exception is small powered land plays — buying entitled, powered industrial land in a data center submarket and selling to a developer. That's a legitimate strategy if you have local knowledge.
Subscriber-only · The Upleg Playbook
Want the full data center deep-dive — free?
Power interconnect queue map by market, hyperscale lease structure breakdown, current cap rate comp set, and the realistic playbook for individual capital exposure. Built for people who are tired of being told to "just buy DLR."
Free. Unsubscribe any time. We don't sell your email. One briefing per week, nothing else.
- Power interconnect queue lengths by market
- Hyperscale lease structure breakdown (line by line)
- Current cap rate comp set, hyperscale vs colo
- AI cooling capex framework (air vs liquid)
- Powered land valuation model ($/MW)
- DST sponsors with real data center exposure
- Submarket map: Loudoun, Phoenix, Columbus, ATL, DFW