
5 Largest Underwater Data Centers in the World
As demand for data continues to grow, some of the world’s most ambitious infrastructure projects are moving beneath the surface. Underwater and offshore data centers offer a new approach—placing servers in the ocean or on floating platforms to lower power use, cut cooling costs, and operate closer to coastal users.
This overview covers the leading efforts shaping this trend: Microsoft’s early subsea tests, China’s fully operational AI-focused cluster, Nautilus’s barge-based colocation sites, Subsea Cloud’s pressure-equalized pods, and Google’s now-retired floating concept. These projects differ in design and scale, but all share the goal of rethinking where and how data infrastructure can be deployed.
Key Takeaways
- Underwater data centers reduce energy use and cooling costs through natural ocean cooling.
- Microsoft’s Project Natick achieved high reliability with fewer server failures underwater.
- Nautilus Data Technologies operates barge-based data centers with efficient closed-loop cooling.
- China’s Hainan project provides large-scale underwater AI computing with minimal environmental impact.
- Subsea Cloud’s pressure-equalized pods offer simpler deployment, easier maintenance, and strong security.
What is an Underwater Data Center?
An underwater data center is a sealed computing facility placed beneath the surface of a body of water, such as an ocean or river. It hosts servers and related equipment and performs the same tasks as a land-based data center, while using the underwater environment to help manage cooling, power use, and space limitations.
Some of the key benefits of an underwater data center include:
- Cool ocean water removes heat naturally and reduces energy costs.
- Eliminating HVAC systems cuts power use and lowers carbon emissions.
- Underwater sites avoid the need for land in crowded urban areas.
- Prefabricated modules allow for fast installation and redeployment.
- Coastal locations benefit from low-latency edge computing.
- Natural cooling and renewable energy support sustainability goals.
- Isolated setups are well-suited for backup, testing, and recovery needs.
5 Largest Underwater Data Center Projects
Let’s look at the largest underwater data center projects—some have been built, some haven’t, some are ongoing, and others have ended. Here’s a closer look at each one:
1. Microsoft's Undersea Data Centers – Project Natick
Microsoft’s Project Natick was a groundbreaking research initiative that explored the feasibility and potential benefits of deploying data centers on the ocean floor. The project, which ran from 2015 to its conclusion in 2024, aimed to deliver more sustainable, cost-effective, and responsive cloud computing services by harnessing the natural advantages of a subsea environment.
The core idea behind Project Natick was to house servers in large, sealed, cylindrical containers and submerge them in the ocean. This innovative approach sought to address several key challenges faced by traditional land-based data centers, including cooling costs, energy consumption, and proximity to population centers.
Location & Deployment
Microsoft deployed two undersea data centers as part of Project Natick. Let’s take a closer look at each one:
Phase I – Leona Philpot Prototype (2015)
Microsoft launched Project Natick in late 2014, beginning with a planning session at its Redmond, Washington, campus.
The team deployed the first prototype, named Leona Philpot (after a character from the Xbox Halo series), on August 10, 2015, 30 feet underwater off the coast of California.
It operated for 105 days, and the team later retrieved it for inspection and testing. This experiment confirmed that underwater data centers could function in calm conditions.
The success of this trial led the team to move ahead with a second, more advanced phase. They planned to deploy a larger unit, expose it to harsher environmental conditions, and run it entirely on renewable energy.
Phase II – Northern Isles Deployment (2018–2020)
For Phase II, Microsoft invited marine technology firms to submit proposals. Naval Group, a French defense contractor known for its work with submarine systems, won the contract to lead the design and deployment.
On June 1, 2018, the team deployed the new underwater data center capsule off the coast of Orkney, Scotland, near the European Marine Energy Centre. It stayed 36 meters (117 feet) underwater for over two years.
The capsule drew power through a cable linked to the Orkney grid, which runs entirely on wind and solar energy. This setup proved that renewable sources could reliably power underwater operations without relying on fossil fuels.
The vessel remained in operation during the COVID-19 pandemic and contributed to efforts such as Folding@home, which processed workloads for vaccine research. In July 2020, the team brought the system back to the surface for analysis, which showed high reliability and minimal hardware degradation.
Cooling System Design
- The capsule was a sealed pressure vessel filled with dry nitrogen gas to prevent corrosion.
- Heat from the servers was transferred to the surrounding sea through external heat exchangers, based on submarine cooling technology.
- Cold ocean water absorbed the heat, allowing passive cooling without chillers or evaporative loss.
- The stable, low sea temperature allowed highly efficient operation, contributing to a PUE around 1.2—well below the average for land-based data centers (see the illustrative calculation below).
- Naval Group also engineered a reliable “umbilical” for power and fiber communication through the hull.
(Sources: datacenterdynamics.com, news.microsoft.com, naval-group.com)
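For readers unfamiliar with the metric, PUE (power usage effectiveness) is total facility energy divided by the energy that actually reaches the IT equipment, so 1.0 is the theoretical ideal. The short Python sketch below illustrates how a PUE around 1.2 compares with a more typical land-based figure; the energy numbers and the 1.6 land-based PUE are illustrative assumptions, not Project Natick measurements.

```python
# Illustrative PUE comparison (hypothetical numbers, not Natick measurements).
# PUE = total facility energy / IT equipment energy; 1.0 would be a perfect score.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total energy divided by IT energy."""
    return total_facility_kwh / it_equipment_kwh

it_load_kwh = 1_000                 # assumed IT energy over some period
subsea_total = it_load_kwh * 1.2    # seawater-cooled capsule, PUE ~1.2
land_total = it_load_kwh * 1.6      # assumed conventional facility, PUE ~1.6

print(f"Subsea PUE: {pue(subsea_total, it_load_kwh):.2f}")
print(f"Land PUE:   {pue(land_total, it_load_kwh):.2f}")
overhead_saved = (land_total - subsea_total) / land_total
print(f"Total energy saved for the same IT load: {overhead_saved:.0%}")
```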
Operational Performance (2018–2020)
- The Northern Isles capsule operated continuously for 25 months on the seafloor, running Azure cloud workloads with no human maintenance.
- It was retrieved in the summer of 2020 for analysis.
Hardware Reliability Results
The results showed exceptional reliability:
- Out of 855 servers, only 6 failed over two years.
- The land-based control group had 8 failures in just 135 servers.
- Microsoft attributed this to:
  - The oxygen-free nitrogen environment
  - Lack of human disturbance (no dust, no physical interventions)
  - Stable cold temperatures and no vibration
- When the capsule was opened, servers and cables showed minimal wear and were in near-pristine condition.
(Sources: datacenterdynamics.com, naval-group.com)
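Putting those two figures side by side makes the reliability gap concrete. The quick back-of-envelope calculation below uses only the server and failure counts reported above; the roughly 8x ratio is the only derived number.

```python
# Back-of-envelope failure-rate comparison using the figures reported above.
subsea_failures, subsea_servers = 6, 855
land_failures, land_servers = 8, 135

subsea_rate = subsea_failures / subsea_servers   # ~0.7% over two years
land_rate = land_failures / land_servers         # ~5.9% over two years

print(f"Subsea failure rate: {subsea_rate:.1%}")
print(f"Land failure rate:   {land_rate:.1%}")
print(f"Land servers failed roughly {land_rate / subsea_rate:.1f}x more often")
```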
Environmental Impact
Environmental monitoring during and after the deployment showed no harm to marine life. When raised, the capsule was covered in algae and barnacles—a sign the local ecosystem had accepted it—with no evidence of ecological disruption or leaks.
(Source: naval-group.com)
2. Google’s Underwater Data Center – The Barge Project
Google’s early exploration into offshore data centers was less about real-world deployment and more about shaping future possibilities for data center design. While it never brought a water-based facility into production, its 2008 patent and experimental barge constructions laid conceptual groundwork that continues to influence the field.
The central idea was to build modular, floating data centers on barges or ships. These would be positioned offshore near coastal cities, powered by renewable marine energy sources, and cooled directly by seawater.
Patent and Design Concept
In 2008, Google filed U.S. Patent No. 7,525,207 for a “water-based data center.” The design outlined the use of wave energy converters (such as Pelamis devices) to generate electricity. Cold seawater would serve as the primary cooling medium, circulating through heat exchangers to manage the temperature of servers onboard.
The envisioned system consisted of stacked shipping containers retrofitted as server enclosures. Each container would rest on a floating platform, allowing scalability and rapid deployment. Renewable energy generation and direct water cooling formed the core of Google’s sustainability pitch.
Google Barge Project (2011–2013)
Between 2011 and 2013, four large barges were built and docked at various U.S. ports, including San Francisco and Portland, Maine. These mysterious vessels—stacked with container-like structures—sparked speculation that Google had begun testing floating data centers in secret.
Each barge was outfitted with its own onboard power and HVAC systems. However, in 2013 Google confirmed that these were not computing facilities. Instead, they were intended as high-tech showrooms for products like Google Glass. The project was eventually scrapped due to rising costs and difficulties complying with maritime and building regulations.
Operational Status and Legacy
By 2014, Google had abandoned the barge project and sold off the vessels. The floating data center vision was put on indefinite hold. No offshore or underwater computing facilities have since been launched by the company.
Though it never materialized in practice, the 2008 concept helped spur industry-wide thinking about water-based computing infrastructure. Many later ventures—such as Subsea Cloud and Denv-R—cite Google’s early patent as inspiration. Ideas like wave energy harvesting, seawater cooling, and offshore colocation remain part of current deployments by other firms.
3. Nautilus Data Technologies – Floating Data Centers in Ports
Nautilus Data Technologies is a U.S.-based company advancing the idea of floating data centers positioned on barges, cooled using nearby natural water sources.
Rather than constructing land-based buildings, Nautilus deploys waterborne facilities that reduce energy consumption, eliminate evaporative water loss, and offer site flexibility by operating in ports or harbors. Its commercial success has helped push floating data centers from prototype into real-world deployment.
Initial Deployment: Stockton 1 (California)
In 2021, Nautilus launched its first operational floating data center at the Port of Stockton on the San Joaquin River. The unit, called Stockton 1, occupies a converted 90-meter barge and delivers up to 7 MW of IT capacity. The server infrastructure is located in modular enclosures on the upper deck, while power distribution and cooling systems are installed below deck.
The Stockton site became the first commercial proof of Nautilus’s water-cooled colocation model. Early tenants included government entities and cloud storage provider Backblaze, drawn by the sustainability, rapid deployment, and high-density rack support.
Cooling System Design
Stockton’s design is built around a closed-loop water cooling system called TRUE (Total Resource Usage Effectiveness). The process works as follows:
- Water is drawn from just below the river’s surface.
- It passes through external heat exchangers to cool a secondary fresh water loop.
- This fresh water then flows to rear-door heat exchangers on the server racks.
- The used river water is filtered and returned just a few degrees warmer.
No chillers or cooling towers are required. Because the system doesn’t evaporate water, consumption is nearly zero—a major benefit compared to traditional facilities. Environmental regulators approved the design, confirming it meets ecological safety standards. Stockton 1 operates with a PUE of ~1.15, competitive with large hyperscale cloud facilities.
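To see why a river can absorb this much heat without chillers, a rough heat-balance sketch helps: the flow rate needed to carry away 7 MW follows from Q = ṁ·c·ΔT. The ~3 °C temperature rise and the assumption of ideal heat exchange are illustrative simplifications, not Nautilus specifications.

```python
# Rough heat-balance estimate: water flow needed to absorb 7 MW of IT heat.
# Assumes an ~3 degC temperature rise and ideal heat exchange (illustrative only).
it_heat_w = 7_000_000        # 7 MW of server heat
cp_water = 4186              # specific heat of water, J/(kg*K)
delta_t = 3.0                # assumed temperature rise of the river water, K

mass_flow = it_heat_w / (cp_water * delta_t)   # kg/s
volume_flow = mass_flow / 1000                 # m^3/s (1000 kg per m^3)

print(f"Required flow: ~{mass_flow:.0f} kg/s (~{volume_flow:.2f} m^3/s)")
# Roughly 560 kg/s—modest for a river intake, and the water is returned, not consumed.
```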
Performance and Capabilities
The Stockton barge supports high-performance computing and AI use cases, with racks capable of densities up to 100 kW—far higher than the typical 5–10 kW for land-based data centers. It delivers Tier III-level reliability and connects to the local utility grid via power purchase agreements. The barge also connects to onshore fiber, ensuring robust data throughput.
One major selling point is flexibility. Any port with grid and fiber access can host one of Nautilus’s floating facilities. This removes the need for expensive real estate in crowded metro areas and allows fast site rollout.
Expansion Plans: Los Angeles and Marseille
Following the success of Stockton, Nautilus announced two new sites in late 2022: Los Angeles, California and Marseille, France. Both will feature custom-built barges, each supporting 7.5 MW of IT load.
- Los Angeles Site: Partnered with AltaSea, the barge will sit at the Port of L.A. with direct links to One Wilshire—one of the world’s major internet hubs. Undersea fiber cables will deliver low-latency connectivity.
- Marseille Site: The vessel will float near subsea cable landing stations, placing it in a strategic spot for Mediterranean data exchange.
Both sites have completed environmental reviews and secured power connections. Like Stockton, they will use closed-loop water cooling and target PUE values at or below 1.15. Nautilus has shifted to purpose-built barge platforms (rather than retrofitting old ships), improving deployment efficiency and long-term cost management.
Environmental and Operational Considerations
Nautilus’s floating data centers reduce land use and ease demand on local infrastructure. The company reports up to 30% energy savings by removing chillers, fans, and evaporative systems.
In drought-prone regions like California, water savings are critical. Traditional centers can use a swimming pool’s worth of water every two days. Nautilus returns all intake water with minimal heat impact.
Each barge, built on military-grade vessels, is designed to last 40–50 years. Maintenance and access match typical colocation standards, with gear housed in steel containers above deck.
As of 2025, Nautilus remains active in Stockton and reports strong interest in upcoming sites. Investors include Keppel Data Centres in Singapore.
4. Highlander – China’s Underwater Data Center Cluster (Hainan)
China’s most advanced underwater data center project has been developed by Beijing Highlander Digital Technology Co., in partnership with its subsidiary HiCloud and several state-backed organizations. Officially known as the Hainan Underwater Intelligent Computing Center, this facility represents the first fully operational and commercially scaled subsea data center cluster in the world.
Located off the southern coast of Hainan Province, the project demonstrates China’s rapid move from concept testing to production-ready, AI-focused infrastructure under the sea.
Location and Initial Deployment
The facility is situated in the South China Sea, near Lingshui County off Hainan Island. After two years of testing with pilot modules (2020–2021), the first commercial underwater capsule was deployed in March 2023. The structure—a 1,300-ton sealed pressure vessel—was lowered approximately 35 meters onto the seabed.
In February 2025, a second unit was installed, expanding total server capacity and signaling a shift from experimental to scaled operations. The new capsule, measuring 18 meters in length, is designed specifically for high-performance and AI workloads. Each unit connects to land via undersea fiber-optic and power cables, linking directly to the local terrestrial grid and backbone network infrastructure.
Engineering and Cooling Design
Each capsule uses a pressure-resistant steel hull, similar to a submarine, designed to prevent water ingress undersea. The design comes from Highlander and COOEC, drawing on China’s offshore engineering expertise.
Servers likely run in an inert gas environment to limit corrosion, though specifics remain undisclosed. Cooling is handled by passive seawater flow around the hull, acting as a large ocean-based heat exchanger without chillers or pumps.
This design uses the sea’s natural thermal properties for efficient cooling. The system achieves a PUE of 1.1—about 30% better than China’s average of 1.5—and meets the national requirement to stay below 1.4.
Capacity and Performance
As of 2025, the Hainan cluster consists of two capsules with a combined total of ~1,200 servers. The performance is substantial:
- 7,000 AI inference queries per second (via the in-house DeepSeek framework)
- 4 million high-resolution images processed in 30 seconds
- Computing power equivalent to 30,000 high-end gaming PCs
This positions the cluster as a de facto supercomputing hub. Use cases include large-scale AI model training, inference, game engine simulations, marine science analytics, and cloud services.
Clients include China Telecom, Tencent, SenseTime, and at least seven other enterprise users, all operating AI or data-intensive applications through the underwater modules.
Power Infrastructure and Energy Use
The cluster is powered by a dedicated onshore substation operated by the Lingshui Power Branch. The total power capacity for the first three phases is 24,000 kVA, enough to support up to ~24 MW in eventual deployment.
While current power comes from the mainland grid, project leaders plan to integrate offshore wind farms for direct renewable supply. This forms part of the cluster’s goal to meet national carbon neutrality targets. In terms of water use, the passive seawater cooling eliminates evaporative loss, preserving freshwater resources.
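The step from 24,000 kVA (apparent power) to "~24 MW" of usable capacity implicitly assumes a power factor close to 1. The small sketch below makes that conversion explicit; the power-factor values are assumptions for illustration, not published figures for the Lingshui substation.

```python
# Apparent power (kVA) vs. real power (MW): real power = kVA * power factor.
apparent_kva = 24_000
for power_factor in (1.0, 0.95, 0.9):
    real_mw = apparent_kva * power_factor / 1000
    print(f"Power factor {power_factor:.2f}: ~{real_mw:.1f} MW usable")
```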
Environmental Impact and Sustainability
Environmental monitoring confirms minimal disruption to surrounding marine ecosystems. Water used for cooling is returned to the ocean with only a slight temperature increase, and the system has no leaks or contamination risk. The capsules rest on the seafloor without foundations or excavation, reducing ecological footprint.
Projections suggest that the full buildout—up to 100 capsules across a 68,000 m² seabed—would save:
- 68,000 m² of land
- 122 million kilowatt-hours of electricity annually
- 105,000 tons of fresh water per year (vs. equivalent land-based operations)
Status and Future Plans
With two capsules in operation and more underway, the Hainan underwater data center now functions as a full production site. Highlander and HiCloud plan to grow the cluster to 100 capsules within five years, forming a large subsea campus near the coast.
The project has support from the Sanya government and provincial capital groups. Highlander, recognized as a national “little giant” for innovation, uses a modular approach—each capsule is built and tested on land, then deployed at sea within 90 days.
Its coastal location allows for low-latency service and potential links to marine renewable energy, positioning it as a key part of China’s green data push.
5. Subsea Cloud – Pressure-Equalized Pods for Underwater Colocation
Subsea Cloud is a U.S.-based startup founded in 2021 with a focused mission: to make underwater data centers commercially viable as a colocation service. Inspired by Microsoft's Project Natick but built with a distinct technical strategy, Subsea Cloud has designed a pod system that prioritizes modularity, cost efficiency, and deeper deployments. Instead of sealed high-pressure capsules, its systems use pressure-equalized containers, simplifying construction and enabling greater flexibility in depth and siting.
Location and Initial Deployment: “Jules Verne” Pod (Port Angeles, Washington)
Subsea Cloud’s first unit, Jules Verne, is a cylindrical pod roughly the size of a 20-foot shipping container, positioned 9 meters below the Pacific Ocean near Port Angeles, Washington.
It holds 16 server racks—about 800 servers—and delivers 1 MW of IT capacity. A 100 Gbps fiber optic line and a power cable link it to shore, supporting fast, low-latency connectivity.
Originally scheduled for late 2022, the pod reached hardware completion and testing by mid-2023. It serves as a demo and compliance unit, open to client and regulator inspection before handling commercial workloads.
Pod Design and Cooling System
Subsea Cloud takes a different approach from traditional undersea data centers. Instead of using a sealed pressure vessel, it relies on a pressure-equalized chamber. The interior matches external water pressure, removing structural stress and the need for thick walls.
Inside, servers sit in a dielectric immersion fluid that pulls heat from components. This fluid stays sealed and never contacts seawater. Heat moves from the internal coolant to the ocean through the container walls.
The system runs on natural convection, with no mechanical pumps. Temperature gradients keep fluid moving, making the setup passive and energy-efficient. This dual-immersion method offers stable thermal control, no risk of shorts, and extended hardware life.
Performance Expectations and Maintenance
Subsea Cloud advertises several key advantages:
- Up to 40% energy and CO₂ savings due to free cooling and elimination of HVAC equipment
- Up to 98% lower latency for nearby coastal users compared to inland data centers (see the distance-based sketch below)
- 90% reduction in deployment cost per MW compared to traditional land builds
The pods are factory-built, avoiding real estate acquisition and major construction. Maintenance involves surfacing the pod using winches or cranes, swapping components, and redeploying—all of which can be done in 4–16 hours. The sealed, dust-free, and oxygen-free environment is expected to deliver reliability improvements similar to Microsoft’s Project Natick.
There are no on-site personnel during operation. The system is remotely monitored, with telemetry and control managed from shore.
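To illustrate why coastal placement helps latency (the claim in the list above), the sketch below estimates one-way propagation delay from distance alone, using light at roughly two-thirds of its vacuum speed in optical fiber. The distances are hypothetical examples, not Subsea Cloud measurements, and real-world latency also includes routing and equipment delays.

```python
# Illustrative fiber latency vs. distance (hypothetical distances).
# Light travels at roughly 2/3 the speed of light in optical fiber.
SPEED_IN_FIBER_KM_S = 300_000 * 0.67   # ~200,000 km/s

def one_way_latency_ms(distance_km: float) -> float:
    return distance_km / SPEED_IN_FIBER_KM_S * 1000

for label, km in [("Coastal pod, 10 km offshore", 10),
                  ("Regional land DC, 300 km inland", 300),
                  ("Distant inland DC, 1,500 km away", 1_500)]:
    print(f"{label}: ~{one_way_latency_ms(km):.2f} ms one-way (propagation only)")
```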
Expansion Plans: Njord01 and Manannan
Following Jules Verne, Subsea Cloud has announced two additional deployments:
- Njord01 – Scheduled for the Gulf of Mexico, this unit will demonstrate deployment in warmer, deeper waters.
- Manannan – Planned for the North Sea, this pod will operate at depths of 200–300 meters. At such depths, the company expects high levels of physical security, as access would require specialized remotely operated vehicles (ROVs).
Environmental and Security Notes
The system avoids water waste entirely. The sealed dielectric cooling loop keeps hardware dry, and the surrounding ocean provides virtually unlimited heat rejection. There is no evaporative loss or contamination risk.
Security is also a selling point. Once submerged, physical access becomes extremely difficult. This feature has reportedly attracted attention from defense-related clients.
Underwater Data Centers Comparison Table
| Project / Company | Type | Location & Years | Cooling & Power | Performance & Impact |
|---|---|---|---|---|
| Microsoft – Project Natick | Undersea data centers | California (2015), Orkney (2018–2020) | Nitrogen-filled capsules with seawater heat exchangers, powered by renewable grid | PUE ~1.2, 25 months uptime, 6 of 855 server failures, no harm to marine life |
| Google – Barge Project | Floating barge concept | U.S. ports (2011–2013) | Concept: seawater cooling and wave energy | Never in production, barges used as showrooms, no operational impact |
| Nautilus Data Technologies | Floating data centers | Stockton (2021), LA & Marseille (planned) | Closed-loop river cooling, grid power | PUE ~1.15, high-density racks (100 kW), zero water loss, eco-approved |
| Highlander – Hainan Cluster | Undersea data center cluster | Hainan, China (2023–2025) | Passive seawater cooling, grid (future wind) | PUE ~1.1, ~1,200 servers, AI supercomputing scale, minimal marine disruption |
| Subsea Cloud | Pressure-equalized pods | Port Angeles (2023), Gulf & North Sea (planned) | Dielectric immersion cooling, shore power | 40% energy savings, lower latency, secure deep-water design |
Final Thoughts
Underwater data centers have moved from concept to reality, offering new ways to manage cooling, space, and energy use. These projects show promise in lowering costs and supporting sustainable operations.
Challenges like maintenance and long-term durability still exist, but early results are strong. With rising demand for computing power, the ocean now offers a practical option, not just an experiment, for future infrastructure.
FAQs
1. What exactly is an underwater data center?
An underwater data center houses servers in watertight, container-like pods submerged in the sea or another body of water. It uses the cold surrounding water for natural cooling, eliminating or reducing traditional air conditioning systems. Racks, switches, and other equipment sit inside sealed enclosures designed for long, unattended operation.
2. Why are companies building underwater data centers?
Natural cooling from seawater significantly cuts energy usage and freshwater consumption. It also allows placement near coastal users without needing expensive land, and monitoring at existing deployments suggests the small amount of heat released into surrounding waters has little effect on marine life.
3. Which was the first major project?
Microsoft's Project Natick, launched in 2015 near California and later deployed near Scotland, tested a sealed pod with 864 servers. It demonstrated a hardware failure rate roughly one-eighth that of a comparable land-based group, confirming the concept's viability. The vessels were designed to remain sealed for years without human access.
4. What is the largest commercial underwater data center today?
China has launched the world's first commercial-scale facility off Hainan Island. The planned full buildout covers ~68,000 m² of seabed—equivalent to ten soccer fields—with up to 100 modules and projected processing power comparable to six million PCs. Each pod sheds its heat into the surrounding seawater, keeping internal temperatures low.
5. Are there other major projects in the works?
Yes. A parallel initiative off Shanghai’s Lingang area will power a demonstration 2.3 MW pod with offshore wind and seawater cooling, later scaling to 24 MW with PUE below 1.15. Systems will include sensors to monitor sound emissions and detect anomalies. Saudi Arabia is also exploring a design powered by ocean currents.
6. How is power delivered to undersea data centers?
They’re typically connected to nearby renewable sources—like offshore wind—or ocean-current generators, and then linked to the grid via subsea cables. This short distance transfer helps minimize energy loss during transmission, especially near densely populated coasts.
7. What are the key benefits of underwater data centers compared to land-based centers?
- Energy efficiency: Natural cooling reduces power consumption by up to 40–60%
- Reliability: Submersion and stable conditions led to eight-fold reductions in hardware failures in Microsoft’s test.
- Land preservation: Ocean deployment frees up land for other uses. These systems can protect both terrestrial and marine environments by reducing heat output and land disruption.
8. What are the main challenges or risks?
- Environmental impact: Warm discharge could affect marine ecosystems and there are concerns about regulatory permits. Marine heat waves could also compromise the efficiency of seawater-based cooling systems.
- Scalability: Linking enough underwater pods to match large land-based data centers remains untested.
- Maintenance: Requires specialized retrieval and handling systems to swap or upgrade modules, and hardware is exposed to air and handling during surface-level repairs.
9. Can underwater data centers support AI workloads?
Absolutely. China’s Hailanyun/Sanya system reportedly hosts hundreds of servers and is capable of training large AI models within a day. These setups are now being optimized for machine learning and deep-learning tasks requiring high compute density and vast amounts of storage.
10. What does the future hold for underwater data centers?
The industry is scaling from proof-of-concepts to commercial deployments. With growing edge computing demands, we may see modular undersea pods near coastal cities, powered sustainably and managed via robotic servicing.
That said, careful environmental regulations, ecosystem studies, and infrastructure integration are critical before global adoption. International researchers are now conducting studies to monitor performance and ecosystem interaction during each phase of deployment.
11. What is the largest underwater data center by capacity?
By server count, China's Hainan cluster is now the largest in operation, with roughly 1,200 servers across two capsules and further expansion planned. Microsoft's Project Natick remains the largest completed trial, housing 864 servers and 27.6 petabytes of storage; its container, submerged off the Orkney Islands in Scotland, operated successfully for two years and showed that underwater centers can deliver scalable compute power.
12. How much energy efficiency improvement was recorded in Project Natick?
Project Natick demonstrated a power usage effectiveness (PUE) of 1.07, compared to an industry average of 1.67 for traditional data centers. This translates to approximately 36% greater energy efficiency, largely due to passive seawater cooling and a sealed, unmanned environment.
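The "approximately 36%" figure follows directly from the two PUE values: for the same IT load, total facility energy scales with PUE, so the relative saving is 1 − 1.07/1.67. A quick check:

```python
# How the ~36% figure follows from the two PUE values.
natick_pue, industry_pue = 1.07, 1.67
saving = 1 - natick_pue / industry_pue
print(f"Total facility energy saved for the same IT load: {saving:.0%}")  # ~36%
```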
13. What was the hardware failure rate in underwater data centers vs. land-based ones?
Microsoft reported a hardware failure rate of only 1/8th that of equivalent land-based servers during Project Natick’s trial. Over the two-year period, fewer than 1 in 100 servers experienced failure, compared to 8 per 100 in traditional facilities.
14. How deep are underwater data centers typically deployed?
Deployments vary, but Project Natick’s vessel was submerged at a depth of 117 feet (36 meters). Similar trials and designs propose depths between 30 to 200 meters, where temperatures are stable and external pressures are manageable for sealed infrastructure.
15. What is the potential scalability of underwater data centers globally?
According to Microsoft’s feasibility study, over 50% of the global population lives within 120 miles of the coast. If fully developed, underwater data centers could offer low-latency edge computing to billions of users, potentially matching or exceeding the global cloud capacity of current land-based systems in the coming decades.