Editorial

Why AI's Economic Promise Depends on What We Build Around It

By Malvika Jethmalani
Technology reliably creates wealth, but it does not reliably create welfare. This explains why AI feels both exhilarating and destabilizing.

By almost any measure, we are living through one of the most technologically fertile periods in human history. AI is advancing at a pace few predicted, and entire categories of work, science and discovery are being reshaped in real time. And yet, confidence feels fragile.

At Davos this year, business leaders described a world “sitting on a precipice.” Inequality has emerged as the most interconnected long-term risk. AI’s adverse outcomes have vaulted from the margins to the center of global concern. Geo-economic fragmentation, polarization and institutional mistrust loom large.

This tension between extraordinary technological promise and deep social unease reflects a structural gap that economists Isabella Loaiza and Roberto Rigobon describe incisively in their paper From Wealth to Welfare: technology reliably creates wealth, but it does not reliably create welfare. That distinction is fast becoming the defining business and economic challenge of our time.

Wealth Is Automatic. Welfare Is Not.

General Purpose Technologies (GPTs) like electricity, automobiles, vaccines and now AI, share three traits: they improve over time, spread across sectors and reshape economies. But history shows that their social impact depends far less on the technology itself than on the institutions that surround it.

Electricity required grids. Cars required roads and financing systems. Public health breakthroughs required delivery infrastructure and social insurance. Where these complements existed, productivity gains translated into higher living standards. Where they did not, benefits concentrated and backlash followed. AI is no different, except that it is moving faster than any GPT before it.

The uncomfortable truth is that many of our economic and social institutions were designed for an earlier technological epoch. Labor standards, education systems, antitrust regimes, welfare mechanisms and norms around ownership and privacy have not kept pace. The result is a widening gap between value creation and value distribution. This is why AI feels simultaneously exhilarating and destabilizing.

Bill Gates’s Optimism (with Footnotes)

In his recent The Year Ahead letter, Bill Gates offered “optimism with footnotes” — a framing that mirrors this moment perfectly.

Gates remains bullish on innovation and points to irreversible breakthroughs in health, climate technology, education and AI-enabled productivity as reasons the long arc of progress still bends upward. Once a disease becomes preventable, or a cost curve collapses, humanity does not forget how to do those things. But the footnotes matter.

Progress, Gates argues, now hinges on three questions:

  1. Will growing wealth translate into greater generosity?
  2. Will innovations that improve equality be scaled rather than left to market forces alone?
  3. Will societies actively manage AI’s disruptions rather than react to them after the fact?

Each question is economic at its core, and each reveals where institutional design, not technological capacity, will determine outcomes.

Inequality Is Not a Side Effect. It’s a Signal.

The World Economic Forum’s Global Risks Report identifies inequality as the most interconnected global risk over the next decade. That framing is critical: inequality is not simply a moral concern; it is a systemic economic risk.

When productivity gains accrue primarily to capital rather than labor, consumer spending weakens (in the U.S., consumer spending accounts for 70% of GDP). When opportunity concentrates geographically or socially, political polarization intensifies. When trust erodes, coordination becomes harder, precisely when coordination is most needed to manage climate risk, AI governance and global shocks.

Loaiza and Rigobon argue that inequality is not caused by technology itself, but by institutional choices that govern adoption, pricing, access and labor standards. Markets are exceptional at generating efficiency; they are far less reliable at generating equity without scaffolding.

AI and the Missing Labor Compact

Few issues crystallize this gap more clearly than work. AI promises to dramatically increase output per worker. In theory, this creates room for higher wages, shorter workweeks and greater leisure. In practice, without new labor standards, it risks producing job displacement, wage pressure and weakened demand.

History offers a cautionary tale: industrialization initially produced grueling conditions and extreme inequality. It was only when productivity gains were paired with new norms such as the 40-hour workweek that economic growth translated into human welfare. Today, we are running 21st-century AI systems on 20th-century labor assumptions.

Gates acknowledges this openly. He notes that mathematically, AI-driven abundance can be shared, but doing so requires deliberate policy choices about how much we work, how gains are distributed, and where we draw boundaries around automation. 

Adoption Is an Institutional Choice

One of the least appreciated lessons from economic history is that affordability is not automatic. Technologies do not diffuse simply because they are productive but because societies design systems that make them accessible. Cars became ubiquitous not only because they were useful, but because they could be financed and insured. Refrigerators, by contrast, diffused more slowly in many regions because they were rarely treated as collateral. The difference was institutional and structural. AI faces a similar crossroads.

In healthcare, Gates argues that AI-powered medical advice could become universally available if governments lead implementation and ensure global access. Left purely to market dynamics, advanced systems will reach wealthy populations first, reinforcing disparities rather than narrowing them. The same logic applies to education, where AI-enabled personalization could either democratize learning or further stratify it depending on how it is deployed.

Education as the Long-Term Equalizer

Both Gates and the Wealth to Welfare paper converge on education as the most consequential long-term lever. But this is not a narrow “reskilling” argument. Loaiza and Rigobon emphasize that the most valuable complements to AI are human capabilities that machines cannot replicate: empathy, ethical judgment, creativity, presence and hope. These traits are unevenly cultivated at home, making public education the only scalable institution capable of developing them broadly. In other words, education is not just about employability; it is about preserving human comparative advantage in an AI-saturated economy. If we fail to modernize education systems now, we risk reinforcing inequality for generations.

Markets Need Moral Geometry

Technological progress expands what is possible. Institutions determine what is permissible. Values decide what is desirable. Gates grounds his optimism in two human capacities: foresight and care. The economists ground theirs in complementary investments. The World Economic Forum frames the risk as fragmentation and loss of cooperation. Different vocabularies, same diagnosis.

The AI era demands not just faster innovation, but better moral geometry: clearer boundaries, stronger norms and institutions that align private incentives with the public good. This is not an argument against markets or technology; it is an argument against institutional complacency.

For business leaders, the reminder is that AI risk is not primarily a technical problem. It is a governance and distribution problem. Companies that ignore this reality may enjoy short-term gains but face long-term instability including regulatory backlash, talent erosion, reputational damage and shrinking markets.


Those that engage proactively in shaping labor standards, adoption models and ethical norms will help define a more sustainable equilibrium. The next era of competitive advantage will be determined by who helps build the systems that allow AI to improve lives at scale.

From Optimism to Architecture

Innovation continues to push the frontier of what humanity can achieve. But AI optimism without institutions is fragile.

The central question of the AI economy is no longer whether we can create extraordinary wealth. It is whether we are prepared to build the complementary systems that convert that wealth into shared welfare. History suggests the answer is not automatic, but it is available to those willing to design for it.


About the Author
Malvika Jethmalani

Malvika Jethmalani is the Founder of Atvis Group, a human capital advisory firm driven by the core belief that to win in the marketplace, businesses must first win in the workplace. She is a seasoned executive and certified executive coach skilled in driving people and culture transformation, repositioning businesses for profitable growth, leading M&A activity, and developing strategies to attract and retain top talent in high-growth, PE-backed organizations.

Main image: adobe stock