The Next Great Transformation

Artificial intelligence has rapidly become a central arena of geopolitical competition. The United States government frames AI as a strategic asset on par with energy or defense and seeks to press its apparent lead in developing the technology. The European Union lags in platform power but seeks influence over AI through regulation, labor protections, and rule-setting. China is racing to catch up and to deploy AI at scale, combining heavy state investment with administrative control and surveillance.
Each of these rivals fears falling behind. Losing the AI race is widely understood to mean slower growth, military disadvantage, technological dependence, and diminished global influence. As a result, governments are pouring money into chips, data centers, and national AI champions, while tightening export controls and treating compute capacity as a strategic resource. But this familiar race narrative obscures a deeper danger. AI is not just another general-purpose technology. It is a force capable of reshaping the very meaning of work, income, and social status. The states that lose control of these social effects may find that technological leadership offers little geopolitical advantage.
History suggests that societies unable to absorb disruptive economic change become politically volatile, strategically erratic, and ultimately weaker competitors. The central question, then, is not only who builds the most powerful AI systems, but who can integrate them into society without triggering a societal backlash or an institutional breakdown.
Karl Polanyi’s The Great Transformation, published in 1944, explains why the capacity to “socially embed” new market forces determines national strength. By “embeddedness,” Polanyi meant that markets have historically been subordinate to social and political institutions, rather than governing them. The nineteenth-century idea of what he called a “self-regulating market” was historically novel precisely because it sought to “disembed” the economy from society and organize social life around price and competition rather than social obligation. As Polanyi put it in his most succinct formulation, “instead of economy being embedded in social relations, social relations are embedded in the economic system.”
Writing in the shadow of the Great Depression, Polanyi argued that the attempt in the nineteenth century to create a self-regulating market society that treated labor, land, and money as commodities generated social dislocation so severe that it provoked authoritarian backlash and geopolitical collapse. Stable orders, he insisted, required markets to be re-embedded in social and political institutions. Where they were not, societies sought protection by other means, which often translated into support for fascist or communist regimes that promised to tame the market. Today, it often means electing populist leaders who promise to break the entire existing order, both domestic and international.
Polanyi insisted that the idea of a “self-adjusting market implied a stark utopia” because such a system could not exist “for any length of time without annihilating the human and natural substance of society.” The interwar gold standard, for example, disciplined states in the name of efficiency, but it did so by transmitting economic shocks directly into social life. When democratic governments proved unable to shield their populations, they either abandoned the liberal economic order or turned authoritarian (or both).
In the early 1930s, Germany’s adherence to the gold standard transmitted the Great Depression directly into social life, as democratic governments chose deflation and austerity to preserve international credibility. When society could no longer bear those costs, Polanyi observed, political authority shifted away from democracy altogether, paving the way for Nazi rule and the collapse of the liberal order itself. By contrast, Western states in the postwar period gradually managed to re-embed the market in their societies through the creation of the modern welfare state. Social democracy sufficiently cushioned dislocation and redistributed the wealth and power generated by industrial capitalism throughout the society to allow for social peace.
Polanyi captured this dynamic of social protection in his classic formulation of the “double movement,” noting that the nineteenth century saw markets expand while simultaneously generating a countermovement of social protections: “Social history … was thus the result of a double movement …. Society protected itself against the perils inherent in a self-regulating market system.”
Polanyi’s essential insight is that economic orders that do not provide social protection against dislocation will not survive for very long. Technologically induced changes to the social order will generate counter-movements, sometimes progressive, often illiberal, that reshape both domestic politics and international alignments. The exact nature of the disruption is unpredictable, but the fact of disruption is a near certainty.
The Disembedding Next Time
Globalization, technological change, and immigration have already produced a nascent Polanyian “disembedding shock” in much of the world. Now AI appears set to radically deepen that challenge to current social structures. The ambition of AI’s champions is not simply to automate tasks, but to restructure the relationship between work, income, and social status.
Leading AI proponents such as Sam Altman and Erik Brynjolfsson argue that AI will decouple productivity from labor and perhaps “eliminate jobs in the way we think about them today.” AI thus threatens to commodify not only labor, but cognition, creativity, and decision-making itself. Jobs may disappear or fragment even as productivity rises. Skills may depreciate faster than institutions can adapt. Status hierarchies may collapse without clear replacements.
From a Polanyian perspective, this is precisely the kind of transformation that markets cannot manage on their own. “To allow the market mechanism to be the sole director of the fate of human beings and their natural environment,” Polanyi warned, “would result in the demolition of society.”
This type of disruption is already becoming visible. In January 2024 the International Monetary Fund estimated that around 60 percent of jobs in advanced economies are exposed to AI, with roughly half of those facing potential negative impacts through job loss, task erosion, or downward pressure on wages rather than productivity enhancement. AI, warned Kristalina Georgieva, the head of the IMF, is “hitting the labor market like a tsunami.” Unlike earlier waves of automation, exposure is concentrated not only in routine manual work but in professional, clerical, and service occupations that have traditionally provided stability and social status.
A concrete example can be found in professional services. Tasks that once anchored junior and mid-level careers in law, accounting, consulting, and finance, such as document review, contract drafting, due diligence, research, and basic analysis, are increasingly performed by AI systems. Large law firms and consultancies report deploying generative AI to automate work previously done by junior staff, cutting both time and headcount. The social effect is subtler and more disruptive than outright job loss: key rungs of the career ladder are being removed. Entry-level positions shrink, progression paths narrow, and the connection between education, effort, and advancement weakens, even where overall employment remains stable.
Early evidence suggests that AI adoption raises output without proportionate increases in employment, weakening the historical link between productivity and job creation. The OECD notes that a growing share of jobs in advanced economies are highly exposed to AI at the task level, particularly in white-collar services, raising the risk that the productivity gains from AI may not create broad-based labor demand.
The distributional effects of AI compound the problem. AI systems tend to concentrate value in firms and regions that control data, compute, and platforms, while shifting risk onto workers and contractors. As a result, gains accrue narrowly while adjustment costs are widely dispersed. This effect is already visible in the expansion of algorithmic management and task-based work. The social consequence is therefore not only material dislocation, but erosion of the role that work plays in structuring identity, dignity, and social belonging.
In Polanyian terms, AI is beginning to disembed economic activity from the social institutions that have absorbed change, creating precisely the conditions under which political counter-movements emerge. Such risks are already visible in rising populism, political volatility, and declining trust in institutions across advanced economies. AI may well accelerate the technological and economic trends that are already straining the social fabric.
The West has faced a similar challenge before. During the Cold War, the competition with the Soviet Union was not only military or technological; it was also a contest over social organization. Western success in that struggle owed far more to the welfare state than is often acknowledged. Postwar social democracy stabilized capitalist society by de-commodifying key risks, such as unemployment, disability, and old age. It legitimized markets by ensuring that growth translated into broadly shared prosperity, and it allowed Western governments to sustain long-term geopolitical competition without constant domestic crisis.
In this sense, the Marshall Plan, full employment policies, strong unions, public housing, and universal social insurance were not merely moral achievements; they were vital instruments of geopolitical power. They allowed Western societies to absorb industrial change, technological disruption, and global competition without succumbing to political extremism.
By contrast, the Soviet system relied on coercive integration. It could mobilize resources quickly, but it struggled to generate legitimacy or adapt socially. Over time, that lack of legitimacy created a societal brittleness that proved fatal.
The new geopolitics of social endurance
AI reopens this historical question of how to re-embed disruptive technologies under new conditions. The emerging geopolitical contest is not simply between national technological systems, but between (at least) three distinct models of social embedding.
The United States
The first is the American model of innovation without social insulation. The United States remains the global leader in frontier AI development. Its firms dominate model training, cloud infrastructure, and venture capital, but it has taken few steps to embed AI into existing social structures.
From a policy perspective, the United States approaches AI primarily as a strategic and industrial competition problem, with policy focused on innovation, production capacity, and technological leadership rather than social absorption. This is visible both in the scale of public investment and in the relative weakness of social protections designed to cushion disruption. The US faces disruption from an already unequal baseline, with income inequality well above that of most European countries. Existing social tensions over inequality make AI-driven disruption politically combustible, even when aggregate productivity gains are positive.
US policy responses have nonetheless concentrated overwhelmingly on the supply side. The CHIPS and Science Act allocates $52.7 billion to semiconductor manufacturing and R&D, administered by the National Institute of Standards and Technology. Similarly, the Inflation Reduction Act commits roughly $369 billion to clean energy and industrial incentives, framed explicitly by the US Treasury as a competitiveness and industrial-policy instrument rather than a social one.
At the same time, there has been little or no effort to regulate AI or to tame its effect on the labor market. Indeed, the Trump administration promulgated an executive order in December 2025 banning regulation of AI at the state level, while similar legislation is working its way through the US Congress. Meanwhile, workplace adoption is advancing faster than institutional adaptation. Reporting by the Associated Press in late 2025 found that around 12 percent of US workers already use AI daily, with nearly a quarter using it several times a week, highlighting the speed of diffusion relative to labor-market adjustment.
Taken together, the US model prioritizes speed and scale over social embedding. Weak labor protections and fragmented welfare provisions mean that AI productivity gains will likely translate into yet greater inequality and social dislocation. From a Polanyian perspective, this creates the familiar risk of innovation outpacing legitimacy. Without stronger social protection, the AI countermovement in the US will likely take ever more illiberal forms rather than constructive re-embedding.
Europe
The European Union represents a contrasting model: slower adoption and weaker firm-level dominance, but far denser systems of social protection and explicit attempts to govern AI through regulation and labor-market institutions. The EU’s approach to AI governance reflects a Polanyian instinct to re-embed markets through rules, rights, and social protections.
The scale of Europe’s social embedding is substantial. According to Eurostat, EU social protection expenditure reached €3.3 trillion in 2023, equivalent to 19.2 percent of GDP, covering unemployment insurance, pensions, health care, and family support. Europe has also invested directly in labor-market transition capacity. The European Social Fund Plus (ESF+) allocates €142.7 billion for 2021–2027, with a mandate that explicitly includes re-skilling, employment transitions, and social inclusion in response to digitalization and automation.
On governance, Europe has moved fastest to codify AI risks. The EU AI Act entered into force in August 2024, banning certain high-risk uses and imposing obligations on employers and deployers of AI systems, with key provisions applying from 2025 onward.
Europe’s model therefore offers greater social stability, but struggles to translate that stability into control over AI rents and platforms. It embeds the AI market more effectively than the US does, but captures less power from the technology it governs. Europe will therefore struggle to capture the economic rents generated by AI, if those rents become critical for geopolitical competition. Without sufficient investment, fiscal capacity, and market scale, protection risks becoming defensive rather than empowering. This problem is well diagnosed in the Draghi report and other European efforts to improve European economic competitiveness. But progress on this agenda has been fitful, and the European policy focus remains on taming technology rather than producing and deploying it.
Polanyi’s lesson here cuts both ways: Social protection without productive capacity preserves stability but not influence. Europe’s challenge is not to abandon re-embedding, but to pair it with greater AI-generated value.
China
China offers a third model of re-embedding AI. Rather than relying on markets tempered by welfare or regulation, China embeds AI through administrative control, surveillance infrastructure, and direct state management of social risk. But while coercion and surveillance are central features of the Chinese system, Beijing’s approach to AI is better understood as state-managed sequencing rather than simply laissez-faire automation buttressed by repression.
The scale of China’s social control infrastructure is unprecedented. Research by Georgetown University’s Center for Security and Emerging Technology estimated in 2021 that China already operated more than 200 million surveillance cameras, integrated into nationwide programs such as Sharp Eyes and Skynet. Reporting by the Associated Press suggests camera density may approach one camera for every two people. These systems are operationalized through data-fusion platforms. In Xinjiang, the Integrated Joint Operations Platform aggregates data from cameras, smartphones, financial transactions, and local officials to enable predictive policing and preemptive intervention. While that region is particularly extreme, the underlying architecture reflects a broader governance model.
China also enforces social compliance through mechanisms described as “social credit,” in which the government imposes travel restrictions, including blocked train and airline ticket purchases, as penalties against individuals who fail to comply with legal obligations. Official reports indicate millions of annual travel restrictions.
At the same time, China does not rely exclusively or even primarily on its coercive capacity to achieve social acceptance of AI disruption. In several key sectors, Chinese authorities have slowed or limited the deployment of AI technologies where rapid automation would generate large-scale social dislocation, as the case of autonomous driving demonstrates. Despite technological readiness in many urban systems, widespread rollout has been postponed in part because of its implications for millions of taxi and ride-hail drivers. This is not market hesitation. It is administrative choice. The state is explicitly weighing productivity gains against social stability.
Meanwhile, China is aggressively deploying AI in areas where automation substitutes for missing labor rather than displacing it. Manufacturing, logistics, elder care, and public administration are all targets of accelerated AI adoption. This reflects a clear demographic calculus. With a shrinking workforce and rapidly aging population, the Chinese government is gambling that automation can offset labor scarcity and sustain growth without generating a surplus population of economically redundant workers.
From a Polanyian perspective, this effort is a form of re-embedding. Markets are not left to determine the pace and distribution of technological change. The state actively manages where, when, and how AI is introduced, seeking to align automation with social absorptive capacity. This is a more sophisticated strategy than simple repression of the backlash, and it addresses directly the core problem Polanyi identified, which is that societies will not tolerate market transformations that undermine social stability.
That said, this model remains brittle. China’s re-embedding relies on administrative control rather than negotiated legitimacy. Social protection is delivered through discipline and direction rather than rights and consent. As long as growth continues and substitution succeeds, this may prove effective. But if automation fails to compensate for demographic decline, or if social expectations outpace state welfare provision, suppressed counter-movements could re-emerge in destabilizing ways. The Chinese economy’s current struggles, including rising youth unemployment and the weak consumption that flows from economic insecurity, suggest that China’s talented state managers are not fully on top of this problem.
China’s approach therefore highlights both the possibilities and limits of authoritarian re-embedding. It demonstrates that pacing technological change is a source of power, not a drag on it. But it also underscores Polanyi’s deeper warning and the lesson of the Soviet experience: Re-embedding that depends on control rather than social consent may stabilize markets in the short term, while storing up longer-term risks.
The Geopolitical Power of Social Embeddedness
Each of these models has strengths, but none is well equipped to absorb the social changes that AI is likely to impose. That incapacity implies not only that the US, Europe, and China will all struggle to find geopolitical leverage and social peace in the age of AI, but also that the outcome of their geopolitical competition will depend in no small measure on which model manages social embeddedness most effectively. States that strongly embed disruptive technologies can tolerate faster technological change, sustain industrial policy over decades, absorb economic shocks, and maintain public support for costly foreign policy commitments.
The prevailing rhetoric of an AI “race” misses this dimension entirely. For example, the US National Security Commission on Artificial Intelligence warned that “the pace of AI innovation is not flat; it is accelerating. If the United States does not act, it will likely lose its leadership position in AI to China in the next decade.” This frames AI primarily as a competition for innovation and strategic advantage rather than a transformation requiring social cohesion.
Polanyi would recognize this as a dangerous illusion. Speed without protection accelerates backlashes. Backlashes undermine political capacity. And weakened states lose geopolitical contests regardless of their technological prowess. Traditional measures of AI power such as compute, data, chips, and innovation ignore the deeper metric: the capacity of a society to integrate technological change without tearing itself apart.
The real competition is therefore not who builds the largest language models first. It is who can integrate AI into society while preserving legitimacy, social meaning, and political endurance. This is why social policy amid technological disruption is not a side issue but a core element of geopolitical competition.
The West has won this competition before, precisely because it learned to re-embed markets through welfare states, labor institutions, and public investment. It can again if it remembers the lessons of that victory.
It’s not your father’s industrial revolution
Of course, re-embedding markets in the AI age is not simply a matter of restoring the welfare state. There are two fundamental differences today that complicate the process of re-embedding. The first is the increased mobility of capital relative to labor, which makes it harder to socially embed markets at the national level. The second is the nature of work and employment in the twenty-first century.
Mobile Capital
Polanyi’s original account assumed that markets could be re-embedded largely at the national level. That assumption was plausible in the mid-twentieth century. Capital mobility was limited, firms were territorially rooted, labor markets were national, and states could tax and regulate economic activity without immediate fear of exit. Under those conditions, welfare states, labor law, and public investment could successfully reconnect markets to social obligations.
In principle, the closest contemporary analogue to that settlement would be international coordination, particularly on corporate taxation, profit allocation, data governance, and social standards, designed to reduce incentives for capital mobility and regulatory arbitrage. In practice, however, the feasibility of such coordination is limited. Despite modest progress on minimum corporate tax rates and information sharing, the global political economy remains highly competitive. States continue to use tax regimes, regulatory flexibility, and market openness as tools of attraction. For frontier technologies such as AI, where governments see innovation and leadership as strategic assets, the incentives to defect from coordinated restraint are particularly strong. Even among close allies, fears of losing investment, talent, or technological edge make deep coordination difficult to sustain.
This constraint matters because it creates a structural mismatch. Markets are global, but social protection remains largely national. Firms can shift profits, relocate assets, or threaten exit while maintaining access to consumer markets. Governments seeking to tax AI rents or impose social obligations face an immediate credibility problem. Even well-designed domestic re-embedding strategies risk erosion if firms can arbitrage jurisdictions.
This gap helps explain the political logic of contemporary populism. When international coordination proves elusive and national regulation appears ineffective, voters gravitate toward border-based instruments such as tariffs, trade restrictions, and migration controls not necessarily because these are optimal, but because they are enforceable. Borders remain one of the few places where states can still impose conditions with confidence.
From a Polanyian perspective, this turn toward border measures is not simply economic nationalism. It is a predictable response to failed re-embedding. When markets escape social control, societies seek protection through whatever levers remain operational. Those levers increasingly involve conditioning market access rather than regulating production directly.
For AI governance, this implies a sobering conclusion. If comprehensive international coordination on taxation and social standards remains politically constrained, then re-embedding efforts may not have the tax base they need to redistribute the gains and provide social protection. Success will depend on a mix of partial and fragile coordination among like-minded states, some bloc-level rule-setting, and access-based enforcement mechanisms that link participation in large markets to compliance with social obligations such as taxation.
Employment Today
The second reason it is harder to apply a Polanyian solution today is that the tools that successfully re-embedded markets in the mid-twentieth century were designed for a production-based economy, organized around large firms, stable employment, and collective bargaining. Many of those tools began to fail well before the arrival of AI as advanced economies shifted toward services, fragmented business models, and flexible work.
The current populist backlash is in part a consequence of this earlier failure. The transition from manufacturing to services weakened unions, eroded firm-based benefits, and hollowed out local economic ecosystems, much in the same way that nineteenth-century industrialization hollowed out the social protections of the agricultural economy. Indeed, gig-economy workers often operate under conditions that resemble nineteenth-century capitalism more than postwar social democracy. In that sense, AI is not the source of the current disembedding. It intensifies an existing one.
One of Polanyi’s core points was about the impossibility of treating labor as a normal commodity. “Labor,” he wrote, “is only another name for a human activity which goes with life itself.” When markets attempt to govern it as if it were an ordinary input, the result is not efficiency but social breakdown.
AI worsens this problem by threatening to decouple economic output from labor altogether. If productivity growth increasingly derives from capital-intensive models, data, and compute rather than human effort, then labor-centered embedding tools lose traction. Wage bargaining, employment protections, and firm-based benefits cannot anchor social integration if large shares of value creation no longer pass through employment relationships at all.
This creates a risk for Polanyian arguments that rely too heavily on restoring industrial-era institutions. Simply rebuilding unions or strengthening labor law, while necessary, will be insufficient. A Polanyian settlement for the AI age therefore requires a shift in the focus of embedding. The goal remains social integration and protection from market volatility, but the mechanisms must extend beyond labor markets alone. Embedding must increasingly target income, rather than jobs; firms and platforms, rather than individual workers; and status and participation, rather than employment per se.
This approach does not mean abandoning work as a source of meaning. It means recognizing that markets may no longer supply enough socially valued roles on their own. In such a context, public employment, health care, care work, education, and community-based activities become central pillars of social integration. Likewise, redistribution cannot rely solely on payroll-based systems if payrolls shrink relative to output. In a post-industrial, AI-intensive economy, re-embedding must be re-imagined away from the factory floor and toward the broader social organization of income, contribution, and recognition.
Toward a Polanyian AI settlement
So what does a Polanyian response to AI look like? The question is not whether states will shape AI-powered markets but what kind of intervention can best stabilize societies without undermining innovation or state capacity.
In the long term, only a progressive agenda of social protection can accomplish the goals of reconciling AI with a liberal state. Such an agenda would not attempt to halt technological change or return to the industrial economy of the mid-twentieth century, nor would it rely primarily on redistribution to compensate “losers” from AI-induced disruptions. Instead, it would focus on re-embedding AI-driven markets in institutions that shape how risks, gains, and status are distributed in the first place. In fact, the productivity gains that AI promises offer some opportunities to accomplish these goals without losing competitiveness.
Three principles are central to achieving such embeddedness. First, social protection must be automatic and structural, not discretionary or crisis-driven. Second, protection must preserve social participation, community values, and status, not merely income. Third, the gains from AI must visibly accrue to society as a whole, not only to a narrow set of firms and regions. Without these elements, protection will lack legitimacy and fail to stabilize politics.
These principles can be applied along several dimensions:
The Labor Market
The most immediate pressure point remains labor-market volatility. AI is likely to increase productivity while making employment less predictable, more polarized, and more task-based. A Polanyian response would therefore aim to partially decouple income security from continuous employment, while maintaining a strong link between social protection and contribution.
These goals can be achieved through robust income floors, delivered via refundable tax credits, negative income taxes, minimum wages, or minimum income guarantees that rise automatically during downturns or sectoral shocks. The response should also involve wage insurance and transition support for workers displaced into lower-paid jobs, reducing the political salience of downward mobility. Work-sharing and reduced working-time arrangements, without loss of income, will allow productivity gains to translate into fewer hours rather than fewer jobs. Finally, automatic stabilizers should be linked to technological disruption or sectoral shocks, so that support expands as displacement increases without requiring new political decisions each time. The objective is not to withdraw people from work, but to reduce the experience of market participation as an existential risk.
Beyond reducing inequality, Polanyi points us toward the need to avoid excessive power asymmetries in the labor market. AI risks dramatically widening those asymmetries through algorithmic management, opaque evaluation systems, and winner-takes-most labor markets. A Polanyian settlement therefore prioritizes collective voice and institutional counterweights in the labor market. These might include sectoral bargaining or wage-setting mechanisms that cover AI-exposed industries; rights for workers and their representatives to be consulted on the deployment of AI systems that affect hiring, pay, scheduling, or dismissal; transparency and contestability requirements for algorithmic decision-making in the workplace; and portable benefits systems that follow workers across firms and thus enhance labor mobility and bargaining power.
Shared Prosperity
The postwar settlement succeeded not only because it protected workers, but because it redirected some portion of productivity gains into shared institutions. If the hype is true, AI will generate substantial rents, driven by data concentration, scale effects, and public investment in research. A Polanyian approach insists that these rents cannot remain entirely private if political legitimacy is to be maintained.
This points toward targeted taxation of excess profits and monopoly rents in AI-intensive sectors, public stakes in critical AI infrastructure, and data governance regimes that treat large-scale data as a collectively generated resource, with returns flowing back to society. Some of these efforts will be difficult if firms can simply exit the domestic markets. But much of it can be realized by social or sovereign wealth funds that invest in AI and channel returns into public services, education, and transition support. The strategic logic is straightforward: Societies that visibly share in AI’s gains will tolerate its disruptions more readily.
Maintaining Meaning
One of the most serious risks of AI-driven disruption is not material deprivation but loss of status, purpose, and social recognition. Universal basic income proposals that provide cash without roles would fail on this dimension.
A Polanyian settlement therefore aims to guarantee access to socially valued forms of contribution where markets fall short. This might involve purposeful subsidization and expansion of the education, health care, care work, and climate-adaptation sectors as sources of stable, meaningful employment. It might also include recognition and remuneration of family care and child rearing as productive activities as well as support for local institutions, such as municipal services, cooperatives, and civic organizations that anchor work in place and community. This could all be supported by public or community-based employment guarantees in regions facing concentrated disruption.
This is not about inventing work for its own sake. It is about ensuring that technological progress does not erode the social basis of dignity and purpose, even as it maintains consumption.
Speed
Finally, a Polanyian approach accepts that markets sometimes need to be slowed down. The postwar order imposed limits on capital mobility and working hours and set standards for workplace safety. The point was and is not to reduce growth, but to make it socially sustainable. Applied to AI, this suggests phased deployment requirements in sensitive sectors such as health care, education, and justice, as in the Chinese model; human-in-the-loop standards where accountability and dignity are at stake; and strategic restraint where social costs clearly exceed near-term benefits.
Winning the AI war
Geopolitical competition in the AI age will not take place solely in clean rooms or data centers. It will also involve the less visible realm of social institutions: labor markets, communities, social protections, and political legitimacy. Polanyi teaches us that markets are powerful only when societies can bear them. When they cannot, markets provoke their own undoing, often in rather spectacular fashion.
The West’s success in the Cold War owed much to its ability to reconcile capitalism with social protection. If the AI age is another “great transformation,” the same lesson applies. Chips matter. Data matters. But the ultimate source of power may be the capacity to re-embed technological change in society without sacrificing cohesion.
That is not a liberal-progressive distraction from geopolitical competition. It is its hidden core.
Jeremy Shapiro is the Director of Research at the European Council on Foreign Relations. Previously, he has worked at the U.S. State Department, the Brookings Institution, and in libraries reading Karl Polanyi.