Artificial intelligence and high-performance computing (HPC) are advancing at a breakneck pace, but their energy demands are spiraling toward unsustainable levels. Training a frontier AI model can consume gigawatt-hours of electricity, rivaling the energy needs of entire towns. Exascale computing systems capable of performing more than one quintillion operations per second have already arrived, but post-exascale ambitions will multiply energy requirements further. Incremental gains in transistor scaling and architectural optimization are insufficient to close this gap. What is needed are radical improvements: reductions in energy consumption per computation on the order of 1,000x to 1,000,000x. Erik Hosler, a strategist in semiconductor sustainability, highlights that extreme efficiency gains are not optional but essential for the future of computing. His perspective points to a fundamental reality: energy efficiency is not merely an engineering challenge but a strategic imperative.
Achieving such ambitious targets requires breakthroughs across the computing stack. From materials and devices to architectures and cooling systems, every layer must contribute. No single innovation can solve the energy problem; efficiency gains must emerge from coordinated advances across multiple domains. A challenge and opportunity of this scale demands investment, patience, and global collaboration. It is not just about sustaining Moore’s Law, but about rethinking how computing is powered in an era where demand grows exponentially.
The Scale of the Energy Challenge
Modern computing already strains energy infrastructure. Data centers are projected to consume nearly 10% of global electricity by the end of the decade. Training state-of-the-art AI models requires vast clusters of GPUs running for weeks, consuming power equivalent to thousands of households.
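To make the "thousands of households" comparison concrete, here is a back-of-envelope sketch. The cluster size, per-GPU power, training duration, and data-center overhead below are illustrative assumptions, not figures from any specific system.

```python
# Back-of-envelope training-cluster energy estimate.
# All parameters are assumptions chosen for illustration.
GPUS = 10_000            # assumed cluster size
WATTS_PER_GPU = 700      # assumed board power for an accelerator-class GPU
PUE = 1.2                # assumed power usage effectiveness (cooling etc.)
DAYS = 30                # assumed training duration

energy_kwh = GPUS * WATTS_PER_GPU * PUE * DAYS * 24 / 1000
energy_gwh = energy_kwh / 1e6

# A typical U.S. household uses roughly 10,800 kWh per year.
households_year = energy_kwh / 10_800

print(f"{energy_gwh:.2f} GWh ~ one year of power for {households_year:,.0f} households")
```

Under these assumptions a single month-long run lands in the multi-gigawatt-hour range, consistent with the scale described above; real deployments vary widely with cluster size and utilization.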
Exascale computing, once a moonshot, is now operational in facilities like Oak Ridge National Laboratory’s Frontier system. But post-exascale systems needed for simulations in climate modeling, defense, and advanced materials will require orders of magnitude more energy unless radical efficiency improvements are achieved.
The trajectory is unsustainable. Without breakthroughs, the expansion of AI and HPC risks colliding with grid limitations and climate goals. Efficiency targets of 1,000x to 1,000,000x are not aspirational; they are necessary.
Device-Level Innovation
Improvements in transistors and materials are critical at the device level. Conventional CMOS technology has slowed energy-per-switch scaling, forcing researchers to explore alternatives. Low-leakage transistors aim to reduce standby power consumption, a major contributor to inefficiency in modern chips.
Spintronics, which leverages electron spin rather than charge, promises lower-energy memory and logic operations. Superconductors, with their zero-resistance properties, could slash energy costs for both AI and quantum computing, though scalability remains a challenge.
Materials science also plays a central role. New two-dimensional materials such as graphene and transition metal dichalcogenides could enable ultra-low-power devices. High-k dielectrics and novel interconnects may further reduce energy leakage. Each breakthrough brings incremental gains, but achieving the 1,000x target will require stacking multiple advances together.
Architectural Shifts
Device-level improvements alone will not deliver the necessary efficiency. Architectural innovation is equally important. Neuromorphic computing, inspired by the brain, promises orders-of-magnitude improvements in energy efficiency for AI workloads. By mimicking neural structures and using event-driven computation, neuromorphic chips can process information far more efficiently than conventional architectures.
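The efficiency of event-driven computation comes from doing work only when a neuron actually fires. A minimal operation count illustrates the idea; the layer sizes and the 2% activity rate are assumptions for illustration, not measurements of any real neuromorphic chip.

```python
# Illustrative multiply-accumulate (MAC) counts for one layer.
# Sizes and activity rate are assumed, not measured.
n_in, n_out = 1000, 1000
activity = 0.02                      # assume 2% of inputs spike per timestep

# Clock-driven dense layer: every input-output pair is computed each step.
dense_macs = n_in * n_out

# Event-driven layer: only spiking inputs trigger downstream work.
active_inputs = int(activity * n_in)
event_macs = active_inputs * n_out

print(f"dense: {dense_macs:,} MACs, event-driven: {event_macs:,} MACs "
      f"({dense_macs // event_macs}x fewer)")
```

With sparse activity, the work scales with the number of events rather than the size of the layer, which is where the claimed orders-of-magnitude savings originate.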
In-memory computing addresses another major inefficiency: data movement. Today, much of the energy in computing is consumed not by computation itself but by shuttling data between memory and processors. Architectures that combine storage and logic within the same framework could dramatically reduce energy waste.
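The data-movement gap can be quantified with widely cited per-operation energy figures (order-of-magnitude values from Horowitz's ISSCC 2014 keynote; exact numbers vary by process node and are used here only as a sketch).

```python
# Approximate per-operation energies in picojoules (~45 nm figures,
# order-of-magnitude only; actual values depend on the process node).
PJ_FP32_ADD  = 0.9       # 32-bit floating-point add
PJ_FP32_MULT = 3.7       # 32-bit floating-point multiply
PJ_SRAM_READ = 5.0       # 32-bit read from small on-chip SRAM
PJ_DRAM_READ = 640.0     # 32-bit read from off-chip DRAM

# One multiply-accumulate with both operands fetched from DRAM,
# versus both operands already resident on-chip.
mac_from_dram = 2 * PJ_DRAM_READ + PJ_FP32_MULT + PJ_FP32_ADD
mac_on_chip   = 2 * PJ_SRAM_READ + PJ_FP32_MULT + PJ_FP32_ADD

print(f"DRAM-bound MAC: {mac_from_dram:.1f} pJ")
print(f"On-chip MAC:    {mac_on_chip:.1f} pJ")
print(f"Ratio: {mac_from_dram / mac_on_chip:.0f}x")
```

Even with rough numbers, fetching operands from off-chip memory dominates the arithmetic by nearly two orders of magnitude, which is the inefficiency in-memory architectures target.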
Domain-specific accelerators also play a role. AI chips optimized for specific workloads, from matrix multiplication to graph processing, can perform those tasks at far lower energy cost than general-purpose CPUs or GPUs. Architectural shifts will not replace CMOS overnight, but they can augment it, carving out domains where efficiency gains are maximized.
System-Level Strategies
System-level improvements provide another dimension of efficiency. Cooling, interconnects, and system design all shape the energy footprint of computing.
Cooling efficiency is one major challenge. As chips pack more transistors into smaller areas, heat dissipation becomes a limiting factor. Advanced cooling technologies, including liquid cooling and cryogenic systems, can reduce overhead and improve energy utilization.
Photonics offers another pathway. Optical interconnects transmit data with far less energy than electrical wiring, reducing one of the largest sources of inefficiency in data centers. When integrated with advanced packaging, photonics can enable faster, more efficient communication between chips.
Packaging innovations themselves are critical. Three-dimensional stacking reduces the physical distance between components, cutting data movement costs. Heterogeneous integration allows for combining specialized chips into a single package, improving overall efficiency. System-level strategies ensure that gains achieved at the device and architecture levels are not lost in integration.
The Strategic Imperative
The pursuit of extreme efficiency is not just a technical challenge but a national and global priority. Energy efficiency intersects with economic competitiveness, climate policy, and national security. Nations that lead in efficiency breakthroughs will define the trajectory of AI, HPC, and advanced computing.
Erik Hosler explains, “Finally, the solution to keeping Moore’s Law going may entail incorporating photonics, MEMS, and other new technologies into the toolkit.” His remark underscores the multi-sector nature of the challenge. No single device or architecture will deliver the 1,000x to 1,000,000x gains needed. It will require convergence: photonics for interconnects, MEMS for sensing and actuation, novel materials for devices, and system-level design for integration.
This strategic imperative also extends to allies. Shared investment and coordinated research across trusted networks can accelerate progress while reducing duplication. The stakes are too high for fragmented efforts.
Efficiency as Destiny
The future of computing will be determined not by raw speed alone but by energy efficiency. Without radical reductions in energy consumption, the growth of AI, HPC, and emerging compute paradigms will collide with physical and environmental limits. Targets of 1,000x to 1,000,000x efficiency gains are ambitious, but they are the only path to sustainable progress.
Meeting these goals will require breakthroughs at every level: materials, devices, architectures, and systems. It will also require patient capital, public-private partnerships, and global coordination. Efficiency is no longer just a performance metric, but the defining factor in competitiveness. Efficiency is destiny. By treating it as a strategic priority, the U.S. and its allies can ensure that computing continues to advance without overwhelming the energy systems that sustain it.