As AI continues to advance, its integration across industries has revealed significant infrastructure and sustainability challenges. These foundational issues will play a defining role in AI’s future, determining how far and how sustainably AI can scale. For Alexandr Wang, CEO of Scale AI and one of Silicon Valley’s youngest billionaires, addressing these challenges is critical for AI’s responsible growth. As Wang pointed out in his WebexOne interview, “the infrastructure to support AI at scale is still in its infancy,” underscoring that AI is only beginning to realize its transformative potential.
Editor's Note: This is part two in a three-part series. Read the first one here.
Data Limitations: The End of Public Data and the Rise of Frontier Data
One of the most immediate challenges facing AI is a data bottleneck. Publicly available data, which has powered AI’s growth so far, is reaching its limits. Wang explained that “we’ve exhausted much of the readily available public data,” meaning that AI systems are running out of new, valuable information to process. Without access to new data, AI’s capabilities will advance only as far as the data it’s fed.
Wang pointed to two emerging solutions to address this data ceiling:
- Frontier Data: This involves obtaining new, unique datasets that go beyond what’s available publicly. Frontier data can come from proprietary sources or highly specific areas not covered in traditional datasets, offering a more tailored and higher-quality source of information. Accessing this data can unlock AI’s potential in domains that require specialized knowledge or real-time updates, like healthcare diagnostics or financial forecasting.
- Private Data: Another solution lies in private or enterprise data that is securely sourced from within organizations. This type of data is particularly valuable because it is often highly specific and context-rich, enabling AI to deliver insights that are directly relevant to the business’s operations. However, working with private data raises critical ethical and privacy considerations, making responsible data management and compliance a top priority for any organization pursuing this route.
As Wang noted in an Index Ventures interview, “better data results in better AI.” Organizations that invest in quality data sources, whether frontier or private, position themselves to derive more accurate, meaningful insights from their AI systems.
Related Article: Technology's Limits Could Be a Barrier to AI's Advancement
Energy Consumption: The Double-Edged Sword of Powering AI
Energy consumption presents a second major challenge. AI models, especially large-scale models like language models and deep neural networks, demand enormous computational resources, which translate into substantial power requirements. Wang noted that, at the current pace, AI could soon require up to 20 gigawatts of power, an amount comparable to the energy needs of several large cities. Beyond financial costs, AI's energy demands carry significant environmental implications, particularly as companies weigh their sustainability goals.
An article in MIT Technology Review puts it plainly: "AI is an energy hog." Current projections suggest that electricity consumption from AI and other high-power applications could double in the coming years, potentially adding the energy equivalent of a new country to global demand. For executives who are also focused on climate responsibility, this presents a difficult choice: scale AI operations to improve business performance, or limit AI's growth to reduce environmental impact.
In response, some AI leaders are exploring alternative energy solutions, such as nuclear power, to offset AI’s energy footprint. However, Wang and others recognize that adopting large-scale energy solutions may still not fully reconcile AI’s rapid growth with environmental sustainability. Some companies are looking at ways to optimize algorithms and create more energy-efficient models to reduce power needs while maintaining high performance. For executives, the reality may be that the benefits of scaling AI will come with trade-offs, particularly when balancing sustainability goals with operational growth.
Related Article: Environmental Concerns May Push Companies to Rethink How They Use GenAI
Compute and Hardware Constraints: Scaling AI With a Resilient Infrastructure
As AI becomes more integral to business operations, the need for high-performance computing hardware grows, particularly in the form of advanced processing chips designed specifically for AI tasks. However, the production of these chips is highly concentrated geographically, with a significant portion manufactured in Taiwan. This concentration creates a supply chain vulnerability, as geopolitical tensions or disruptions in this region could threaten the supply of critical components, halting AI’s progress for many companies reliant on these specialized chips.
Addressing this challenge requires investing in more diversified production and exploring innovations in chip design that make AI hardware both resilient and efficient. Companies are also turning to cloud infrastructure and distributed computing, which can decentralize processing and reduce dependency on any one supplier or region. This approach enables scalable, flexible solutions to support AI workloads, particularly for companies that may lack the resources to build and maintain in-house infrastructure.
Agentic AI: Toward Autonomous Systems
A major future direction Wang emphasized is agentic AI: AI systems that can operate independently and handle complex workflows autonomously. Rather than performing isolated tasks, agentic AI systems act as "agents" within an organization, executing multi-step processes, making decisions and adapting to evolving conditions with minimal human input. In this way, agentic AI could be a leap forward from current AI capabilities, operating more like a collaborative partner than a static tool.
For example, agentic AI could be applied in sectors like customer service, where AI agents could independently resolve queries across multiple platforms, or in supply chain management, where AI could adaptively respond to disruptions or shifts in demand. However, agentic AI requires a strong foundation of high-quality data, reliable infrastructure and substantial computational power, making it essential to address the challenges outlined above to enable this next-generation AI capability.
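To make the idea concrete, the sketch below shows a minimal plan-act-observe loop of the kind agentic systems are typically built around. It is purely illustrative: the Agent class, its stubbed plan and act methods and the customer-service example are assumptions made for this article, not Wang's or Scale AI's implementation. In practice the planning step would be backed by a language model and the actions by real tools and APIs.

```python
# Illustrative sketch of an agentic plan-act-observe loop (hypothetical names,
# stubbed logic); not a production framework or any vendor's actual design.

from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # running record of steps taken

    def plan(self, observation: str) -> str:
        """Choose the next action from the goal, memory and latest observation.
        A real system would call a language model here; this is a stub."""
        if "resolved" in observation:
            return "stop"
        return f"next step toward: {self.goal}"

    def act(self, action: str) -> str:
        """Execute the chosen action against external tools or APIs (stubbed)."""
        return f"result of '{action}'"

    def run(self, initial_observation: str, max_steps: int = 5) -> list:
        """Multi-step loop: plan, act, observe and adapt with minimal human input."""
        observation = initial_observation
        for _ in range(max_steps):
            action = self.plan(observation)
            if action == "stop":
                break
            observation = self.act(action)
            self.memory.append((action, observation))
        return self.memory


# Example: a customer-service agent working a ticket without step-by-step prompts.
agent = Agent(goal="resolve a customer's duplicate-charge ticket")
print(agent.run("customer reports a duplicate charge"))
```

The point of the sketch is the structure, not the stubs: the system carries its own memory, decides its next step and keeps going until the goal is met, which is what distinguishes an agent from a single-shot model call.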
Related Article: Will Your Next Hire Be an AI Agent?
Preparing Your Business for AI’s Foundational Challenges
Understanding these foundational growth challenges is essential for executives as they consider how AI will fit into their organization’s strategy. By addressing data limitations through frontier and private data, meeting energy needs sustainably, ensuring resilient compute infrastructure, and preparing for the rise of agentic AI, businesses can develop a strong, sustainable foundation for AI innovation. Taking proactive steps in these areas will position organizations to harness AI’s transformative power responsibly and effectively.
Editor’s Note: Check back in tomorrow to read the final installment of this three-part series, looking at the implications of AI on global policy.