A Real Political Economy of Technology

A Rebuttal to Morozov

A discarded computer sits in a drought-affected wetland in Batuco, Chile, on January 17, 2002. © Matias Basualdo/ZUMA/Shutterstock

Two distinct technological revolutions are underway, and they are competing for our attention and limited resources—as well as for political priority. Which path is backed by sustained investment and which is left marginal will shape the futures that societies are able to build.

On the one hand, there is generative AI, which some describe as a general-purpose technology akin to electricity or the internet. Its advocates claim that by automating tasks in the service sector—where productivity growth has long lagged—AI can lift advanced economies out of stagnation, while reorganizing the cultural and even cognitive foundations of human life.

Yet the gap between promise and performance remains wide. Outside narrow applications, productivity gains have proven difficult to demonstrate. AI “agents”—systems meant to autonomously plan and carry out complex tasks—are repeatedly announced but remain undelivered, underscoring the unreliability of generative systems in general. And unlike engines of past technological revolutions, which drove costs down through economies of scale, generative AI is extraordinarily energy-hungry—which sets it on a collision course with the other technological transformation of the era: the green transition.

This revolution too carries the potential to reorganize life at a fundamental level—reshaping how energy is produced and distributed, how cities are built, how food is grown. But driven by major advances in solar, wind, and battery technologies, the green transition exhibits a defining characteristic of technological revolutions that AI lacks: rapid and sustained cost declines.

The world we live in is shaped by the technologies we adopt. Technologies are not mere tools; they reorganize social practices, reshape identities and aspirations, and expand the range of futures people can meaningfully envision. Choosing generative AI or green energy as the dominant technological pathway would profoundly alter how we live and work.

On technical grounds, the green transition appears the stronger candidate for a genuinely transformative revolution, but technical potential alone never guarantees realization. From an economic standpoint, what counts as promising is not passively discovered. It must be actively produced through investment decisions. Which systems are developed further, and hence which innovations are allowed to mature into sociotechnical systems, depends on how investment is organized and who controls it.

Under capitalism, that control is concentrated in a narrow set of hands—venture capitalists, corporate executives, and state bureaucracies—whose decisions are oriented toward profitability and geopolitical competition. This concentration sharply narrows the range of technological paths that can be pursued, while suppressing public contestation over the purposes technologies are made to serve. Breaking free from this constraint requires more than better regulation of individual technologies. It requires opening up investment decisions themselves to democratic participation, so that alternative futures can be articulated, debated, and pursued for reasons beyond profitability and geopolitical rivalry.

In two articles in the New Left Review, I outlined a framework for a multidimensional economy designed to free technological development from its capitalist fetters, allowing a wider range of people and social values to shape systems of production. Individuals would receive credits to use for consumption, and firms would transact in points to cover operating costs and purchase intermediate inputs. Leftover points could not be converted into personal income or retained to finance future investment. This digital dual-currency system would block the channels through which market success currently feeds back into rising personal wealth and expanded control over future investment. Instead, sectoral Investment Boards—composed of elected representatives of workers, consumers, the wider society, and technical experts—would allocate dedicated investment funds across competing, firm-led proposals. Investment would not earn a monetary rate of return. What would matter instead is how any given proposal could improve social outcomes, and at what cost. An open Data Matrix would make the trade-offs involved in choosing among alternative proposals more visible.

Evgeny Morozov has criticized this framework in The Ideas Letter. He argues that generative AI is a revolutionary technology that would break the model I propose. His claim turns on a particular understanding of technological “worldmaking.” In his view, AI is worldmaking in a stronger sense than, say, renewable energy—where the social imaginary is already in place and technological development largely involves innovating around known goals. We do not yet know what AI is for or what forms of life it will reorganize or invent; its uses and meanings will emerge, he argues, only through experimentation.

Indeterminacy is not unique to AI. Earlier technological revolutions—from railroads to electricity to the internet—also passed through periods in which applications were unclear and investment raced ahead of understanding; speculative booms were followed by crashes as competing visions were tested and most failed. Morozov’s claim is that AI belongs squarely to this class of technologies. Because its future cannot be specified in advance, he argues, it must be explored through open-ended experimentation rather than guided by collectively articulated purposes.

Morozov’s preferred response to indeterminacy is proliferation: the simultaneous pursuit of many AI projects across cities, cooperatives, and movements, each probing a different possibility. Any democratically organized investment system that asks in advance what technologies are for—and that allocates resources accordingly—would, in his view, foreclose the process through which a technology like AI could come to discover its purpose. Justifying before exploring would discipline innovation and cut off the experimental dynamism that worldmaking technologies require.

What this response quietly bypasses, however, is the central problem of economics: how to allocate scarce resources across competing uses. Worldmaking technologies do not simply expand the space of possibility; they also force choices about which possibilities will be pursued widely, and at the expense of what alternatives. Some of the AI projects Morozov celebrates—small firm-level experiments or community initiatives—require little additional funding. But others—municipal AI systems embedded in schools, clinics, or housing administrations—depend on extensive data infrastructures, specialized technical labor, and large amounts of compute and energy. These projects are unavoidably capital- and resource-intensive. They compete for limited investment capacity not only with one another but also with other urgent social priorities, such as the need to rapidly decarbonize energy systems.

Once this economic constraint is acknowledged, pluralism alone is no longer an adequate answer. Yet Morozov offers no account of who would allocate resources across competing technological pathways, nor of how such decisions would be made, evaluated, and revised over time.

This is where Morozov’s critique of my framework misfires. He treats local experimentation and collective control as opposing principles. To sustain this view, he recasts institutions I propose—which are designed to organize political-economic conflict over investment decisions—as administrative hurdles that would stifle technological dynamism. But these institutions are not meant to suppress experimentation; they are meant to organize it politically, by providing mechanisms to select, expand, revise, and abandon competing projects when not everything can be pursued at once. Experimentation and political selection are not alternatives. They are complementary moments in a single process.


In the framework I propose, firms submit proposals for large-scale technological or organizational change to sectoral Investment Boards, which allocate limited funds through democratic procedures. Morozov argues that evaluating such proposals along multiple dimensions—an effect of democratizing investment and severing it from profitability—would operate as an overbearing procedural constraint on innovation, especially in the case of what he calls worldmaking technologies like AI.

In reading my argument this way, Morozov assimilates it to a familiar mode of technology critique, one in which systems such as generative AI are faulted for bias, unsustainability, or degrading work quality, and then are subjected to calls for restraint. After a decade in which social media and smartphones have produced widely recognized harms that Silicon Valley has failed to address—amplifying distraction, worsening mental health, accelerating the spread of conspiracy theories, and deepening inequality—public resistance to any further disruption is entirely rational. Why should people be expected, once again, to submit to Silicon Valley’s experiments in sociotechnical engineering without any assurance that the resulting harms will be taken seriously?

Read through this lens, however, the framework I propose ends up appearing as a narrowing field of permission rather than an expanding space of possibility. Disruptive technologies almost always violate existing norms; under a regime in which each such violation must be resolved in advance, world-changing projects would be easy to block and hard to realize. It is this proceduralist image that leads Morozov to suggest that, in my framework, “woke” Investment Boards would domesticate technological development too early—forcing emergent possibilities to justify themselves before they have had the chance to generate new practices and new meanings through use.

This interpretation rests on a basic mischaracterization of how technological development unfolds in my account. I do not say that technological change arrives fully formed from research labs and then enters production subject to Investment Board review, as Morozov suggests. As in capitalism, most innovation in my framework is incremental and local, arising inside firms rather than outside them.

Under capitalism, the space of firm-level experimentation is narrow. Firms are permitted to innovate only insofar as they promise to either lower costs or raise revenues, and to do so in the most cost-effective way possible. Even when workers identify ways to improve sustainability or work quality, such possibilities are typically screened out as unprofitable before they can be seriously tested. Blitzscaling can briefly defer this constraint, but only to intensify it later: Once profitability becomes imperative, innovation is redirected toward monetization, degrading products and services in a process Cory Doctorow calls “enshittification.” The result is a dramatic underuse of the collective intelligence generated through workers’ shared participation in production: the tacit knowledge embedded in work routines, patterns of coordination across departments, and everyday problem-solving.

A multidimensional economy is designed to remove this bottleneck. Firms in good standing retain full autonomy over how they spend their operating budgets. They are free to reorganize workflows, experiment with new technologies, and contract with new suppliers through ordinary market transactions, which do not require prior approval from Investment Boards. Because future investment funding is not tied to expected profitability, experimentation can pursue a wider range of aims. When larger-scale investments are proposed, they need only make a substantive case that they would improve social outcomes in some concrete way, across any relevant social or ecological dimension.

Much experimentation, it should be emphasized, requires no investment at all—only a different use of time and attention within normal operations. Where modest costs are involved, these can be covered by ordinary operating budgets or through the direct allocations of investment funds that firms receive for regular upkeep and minor improvements. Firms are also embedded in a wider innovation ecosystem—reoriented toward multi-criterial forms of progress—including research institutes, consultancies, and community-based technology organizations that support creative problem-solving.

Large-scale investment proposals enter the picture only once possibilities have already been explored in practice and have been shown to warrant expansion. A school, for example, can freely experiment with existing AI tools to support teaching or improve learning outcomes, reallocating staff time or operating funds to explore possibilities whose value is not yet fully known. But proposing the development of a new AI application tailored to a district’s curriculum—along with the hardware and technical staff needed to deploy it system-wide—is a different matter. If such a project would cost 20–30 million points, and the education sector’s annual investment budget is 100 million points, funding it would displace other priorities.

This is where Morozov’s appeal to open-ended worldmaking runs up against an inescapable constraint. Even when the purposes and effects of a technology can only be discovered through use, committing resources to one pathway rather than another forecloses alternatives. A question cannot be avoided: What deserves funding, and to what extent? Democratic investment procedures exist precisely to make such trade-offs explicit, contestable, and collectively binding, rather than leaving them to be settled implicitly through profitability, lobbying power, or administrative fiat.


Morozov resists the idea of public direction-setting not only because he treats it as a brake on local experimentation, but because he understands it as the imposition of a single, democratically authorized conception of the good—fixed in advance and applied from above. On this reading, Investment Boards would first have to settle a determinate set of values and weights, then use them to screen and discipline innovation proposals, reducing public coordination to the administrative enforcement of a predefined evaluative framework. It is this image of ex ante value-balancing and procedural vetting that leads Morozov to see direction-setting as inherently hostile to pluralism and discovery—and to retreat instead toward dispersed local experimentation conducted with shared public resources.

But firm-level experimentation, indispensable as it is, is not sufficient. The problem is not only that large-scale investments involve scarce resources and therefore require choice. A further issue is that many of the transformations that matter most cannot be achieved by firms acting in isolation. They depend on patterns of investment that add up over time. Separate projects must align so as to build shared infrastructures, cumulative capacities, and interlocking technologies. Where investments align, cross-firm complementarities allow experimentation in one place to reinforce transformations elsewhere.

Capitalist firms are structurally ill-equipped to undertake such coordinated efforts. Large-scale transitions require coordination across firms and sectors that capitalist markets struggle to provide, even if large corporations sometimes achieve partial coordination within their own industries. This is why states have historically stepped in to fund and organize epoch-making technological innovations, from the internet to solar energy. Morozov’s suggestion that I neglect the role of the state in organizing technological change is puzzling, given how central publicly coordinated investment is to my account. The real question is not whether public coordination is necessary, but how systems of investment can be organized so as to set a direction across firms without collapsing into technocratic command-and-control hierarchies.

Taking complementarities between investments seriously means that Investment Boards cannot treat proposals as independent funding requests to be judged on their own merits. Boards must assess not only individual proposals, but how portfolios of investments add up into competing pathways of development. Even within a single sector, multiple futures will always be available, depending on how investments are combined, sequenced, and scaled over time. Agriculture might orient toward automated hydroponics and lab-grown meat or toward agroecology and convivial consumption; construction might favor new low-energy public housing or retrofitting existing buildings.

Direction-setting of this sort does not presuppose agreement. Instead, Investment Boards can remain sites of ongoing political contestation, in which competing factions—not the board as a unified actor—articulate different priorities and seek to orient innovation toward the futures they advocate. In this framework, firm-level experimentation is already oriented by such alternative horizons. Even where outcomes are uncertain, experimentation is rarely directionless: Firms explore possibilities in light of the futures they are trying to help bring about, drawing on sociotechnical imaginaries that link local problem-solving to broader transformations. Investment Boards intervene at the point where competing trajectories must be confronted politically and decisions must be made about which of those should be scaled.

For this reason, democratic control over investment cannot issue in or rely on consensus. It has to structure struggles among alternative trajectories of development—or “worlds,” in Morozov’s sense. Coordination is achieved amid conflict, precisely through the selection procedures Morozov dismisses.

However, even these firm and sectoral levels of coordination are insufficient to solve the most urgent problems contemporary societies face. The green transition, for example, demands simultaneous changes in energy systems, transport networks, housing, food systems, and consumption patterns. That is why my framework includes not only firm-level innovation and industry-level selection procedures, but also inter-industry coordinating committees. These bodies would take responsibility for large-scale social projects—such as greening the economy, shortening the working week, or repairing historical injustices—organized through renewable five-year mandates. Citizens’ assemblies would help decide on broad approaches. Coordinating committees would then deploy investment funds to lower the cost of complementary investments across sectors, functioning as a form of mission-oriented credit policy.

Yet cross-sectoral coordination cannot be a reason to bypass politics: All projects must be attached to specific firms and pass through the same contested process of proposal and selection. Conflicts over coordination are still moments of collective value formation, in which priorities are clarified and their salience updated through the surfacing, debating, and selection of alternative futures.


As part of his effort to force my argument into a false opposition between local experimentation and collective decision-making, Morozov mischaracterizes my framework as requiring society to determine its values in advance, assign them weights, and then apply them administratively to govern economic life. This misreading allows Morozov to present experimentation with generative AI as uniquely resistant to collective direction, on the grounds that experiments undertaken to discover new uses also transform the values and aspirations of those who engage in those experiments. But this is not a peculiarity of AI.

Many of the consequential choices people and societies make are transformative in precisely this sense. Having a child, migrating to another country, committing to a vocation—as well as integrating new technologies into society or the economy—all reshape the preferences, capacities, and self-understandings of those involved. Acting does not merely realize prior values; it revises them over time.

My framework neither denies nor seeks to escape this recursive relation between action and valuation; it institutionalizes it. Values are clarified, contested, and recomposed over time as societies act, confront consequences, and revise their commitments in light of what those actions reveal or transform. Being formed through practice in this sense in no way renders values politically inarticulable.

In an effort to make his objection stick, Morozov tries to assimilate my argument to well-known values-first approaches associated with thinkers such as Amartya Sen and Martha Nussbaum, as well as with formal decision-theoretic frameworks like multi-criteria decision analysis—approaches I draw on but also explicitly criticize. In these perspectives, the task is first to specify what matters as clearly as possible—capabilities or dimensions of well-being—and then to use those specifications to guide policy, as in projects like the UN Sustainable Development Goals and the OECD’s Beyond GDP indicators.

In capitalist societies, values-first frameworks remain largely ineffectual because they clash with the system’s underlying motor: profit-oriented investment. More fundamentally, however, those approaches mischaracterize how values operate. People rarely know in advance how many values they hold, how to rank them, or even what values such as sustainability or “good” work amount to in practice.

Values-first approaches rely on technocratic mediation to translate such priorities into action. Experts are tasked with specifying definitions and weights—based on surveys or deliberative procedures—and then operationalizing them as decision rules. Managerial judgment exercised within administrative constraints displaces political disagreement. This forecloses the process through which values are actually formed—acting on commitments, confronting consequences, and revising priorities in the light of experience.

In my framework, by contrast, the object of collective choice is not values as such but rival political and existential projects, each of which embodies a particular understanding of what values matter, how they are to be operationalized, and how they should be prioritized relative to one another. Competing ways of composing across values are articulated and contested through concrete proposals and the justifications offered for choosing among them. The broader social salience and the practical meaning of criteria then emerge retroactively, through decisions about which projects are funded and at what scale. This is how my framework makes space for politics—real politics—within the economy itself. Morozov invokes Gillian Rose’s “broken middle” to suggest that any institutional framework will falsely separate deciding from doing. But the institutions I propose are designed precisely to keep the two entwined.

The one exception might seem to be the Data Matrix, which would articulate criteria as indicators in order to analyze economic activity. But this does not mean that values would be settled in advance. The Data Matrix does not function as a planning algorithm or a decision-making authority, as Morozov assumes. Its role is explicitly informational and political, not executive. It would aggregate public data on production and consumption and link them to downstream ecological, social, and subjective impacts, so that the consequences of alternative investment choices could be forecast and debated.

Because it serves no executive function, the Data Matrix does not impose a single coherent evaluative framework; nor does it operate as a mandatory checkpoint in the decision process. Investment decisions are always made prospectively, on the basis of incomplete information and contested expectations. For this reason, the data gathered for the Data Matrix must itself remain open to contestation.

Citizen scientists, along with firms, associations, and political groups, would be able to petition for existing indicators to be modified or for new ones to be added, including alternative indicators that operationalize values in different ways. The aim of the Data Matrix is to use a wide base of public information to render rival interpretations of what matters intelligible, partially comparable, and politically consequential—without substituting measurement for judgment or postponing decision in the name of epistemic completeness.

The significance of this framework becomes clearest by contrast: Capitalism neutralizes the wider politics of values described here by rendering it economically irrelevant. Whatever critics or even governments may say, economic success continues to be measured in terms of a single evaluative dimension: economic efficiency gains, issuing in higher profits and greater GDP growth. All other values are relegated to the status of external constraints, limiting economic activity without directing it. In the alternative framework I propose, multiple values suffuse the choice of production plans so that progress unfolds along several dimensions at once—with complex implications for how people work and live.

Choosing among available paths may widen or contract the field of possibilities. Some futures are rendered easier to realize; others are effectively foreclosed; and people’s understandings of what matters shift as they move forward in time—much as in Morozov’s own account of technological worldmaking. The difference is that, in my framework, this process is understood from the outset as political because it unfolds under conditions of scarcity that force collective choice among rival futures.

What is puzzling about Morozov’s critique is that this political core disappears from view. A framework designed to organize conflict over investment—to make trade-offs visible, contestable, and binding when not everything can be pursued at once—is recast as a system of administrative closure that would prematurely discipline experimentation. His analysis is not simply a misunderstanding of my proposed framework but a reversal of its intent: It reads an account meant to politicize worldmaking as an attempt to suppress it.


This misreading does more than distort my argument; it disables Morozov’s ability to think through what a real alternative political economy would require. In rejecting market coordination, he evacuates the institutional space in which decentralized, firm-level experimentation can occur at all. In recoiling from political coordination, he is left with no strategy for organizing investment at scale, aligning complementary projects, or revising priorities over time under real resource constraints. What remains is not a political economy but a set of normative aspirations—gestures toward desirable features of a socialist future without any account of the institutions capable of sustaining local experimentation or coordinating collective transformation.

When Morozov has attempted to explain how a post-capitalist system might cohere, he has turned to research programs ill-suited to the task, precisely because they fail to endogenize the dynamic transformations of either social values or technological development. The critiques that Morozov directs at my framework thus apply with far greater force to the alternatives he himself advances.

For example, Daniel Saros’s proposal for a digital socialism, which Morozov praised in the New Left Review, distinguishes between credits and points, as my framework does, but embeds this distinction in a repeated auction that resets preferences and allocations at each cycle. It offers no theory of investment, no account of the political shaping of production, and no conception of worldmaking technological innovation. In fact, Saros abandoned the idea shortly after writing about it.

Morozov’s turn to cybernetics (borrowing from and then expanding on Eden Medina’s work) suffers from the same limitations. Cybernetics is a theory of systems that detect deviations and restore coherence in response to disturbance. Even in Stafford Beer’s most sophisticated formulations, the core concern is not innovation but viability: maintaining a system’s identity under changing conditions. Cybernetic management systems adapt to goals, but because those goals enter as exogenous policy assumptions, cybernetics cannot account for their dynamic transformation through either technological innovation or political conflict. As a research program, cybernetics has also long since been exhausted.

These limits matter because what Morozov treats as a challenge unique to AI—the endogenous formation of worlds through use—is in fact a general feature of periods of rapid technological change. Such moments do not simply expand the space of possibility; they force collective choices about which possibilities will be realized widely and which will remain marginal or unrealized. The problem, then, is not how to govern an exceptional technology, but how to organize production when technological dynamism collides with finite investment capacity and shifting social priorities.

Contrary to Morozov’s reading, addressing this challenge requires neither treating technology as neutral nor assuming that values must be settled in advance. Markets can be retained as spaces of decentralized experimentation, where possibilities are surfaced and explored. Political institutions, in turn, must take responsibility for investment coordination: deciding which trajectories are scaled, revised, or abandoned as their consequences unfold. In a post-capitalist future, Morozov may wish to push a generative-AI accelerationist path; I would prioritize a rapid green transition. Others will argue for slower, more cautious transformations, or for different forms of technological reorientation altogether.

These disagreements cannot be resolved by local experimentation alone, nor by consensus-oriented deliberation, nor by technocratic state control. They must instead be confronted politically and democratically, through institutions capable of reorganizing production over time in the face of sustained disagreement about which futures should be built—and which should be left behind.


Aaron Benanav is an assistant professor of Global Development at Cornell University and the author of Automation and the Future of Work (Verso, 2020). He is currently writing a book on multidimensional economics.
