Diagnosis
A clinical assessment of converging systemic crises — ecological, institutional, technological, and epistemic — drawn from institutional reports and the judgments of actors with every reason to say otherwise.
Opening
This site introduced you to the protocol and its vision — based on a reading of history that sheds a particular light on the future and suggests we must act and build. Before building, one must see the horizon clearly; and before acting collectively, one must agree on the facts — something that is becoming increasingly difficult.
What follows is a diagnosis in the clinical sense. Not a random list of problems, but the identification of a pathology whose multiple symptoms proceed from a single structural logic. It is better to see them as one crisis viewed from different angles than as separate crises. It is their interactions and potential consequences that make the present moment categorically different from previous periods of global turbulence.
The assessment draws on institutional reports — among them the WEF Global Risks Report 2026, the UNEP Emissions Gap Report 2025, the International AI Safety Report 2026 — but also on something harder to dismiss: the judgments of actors with every reason to say otherwise. Military intelligence agencies that plan for threats, not preferences. Insurance and reinsurance markets that price risk with their own capital. Agricultural banks. Commodity traders. These are not idealists with research grants. They are seeing the same picture.
That convergence — across competing interests, opposing methodologies, and actors with every incentive to minimize risk — is the basis on which this assessment proceeds: not because every source is perfect, but because the picture holds even when you remove the ones you distrust most.
I. The Earth
The Boundaries of the System
Seven of the nine boundaries that define Earth’s safe operating space have been breached — and all seven are worsening. The Planetary Health Check 2025 confirmed that ocean acidification has crossed the threshold for the first time, joining climate change, biosphere integrity, land system change, freshwater use, biogeochemical flows, and novel entities on the wrong side of the line. Only ozone depletion and atmospheric aerosol loading remain within safe limits.[1][2]
But the number seven is misleading, because the boundaries are not a checklist. They are a coupled system, and their interactions compound. A destabilized climate accelerates freshwater stress. Altered land systems accelerate biodiversity loss. Biodiversity loss weakens the biological carbon sinks that regulate the climate — which further destabilizes the climate. Ocean acidity has increased by approximately 30% since the industrial revolution — measured as hydrogen ion concentration — a change without precedent in millions of years of geological record.[2] A third of global fish stocks are overexploited, compounding pressure on marine ecosystems already weakened by warming and acidification. On land, disputes over freshwater produced 347 violent conflict events in 2023 — a 50% increase in a single year — as aquifers deplete beneath the North China Plain, the Punjab, and California’s Central Valley.[3] Forty percent of the world’s agricultural soil is degraded.[4] The failure of any one of these systems tightens the constraints on all the others.
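The ~30% figure follows directly from the logarithmic pH scale. A minimal sketch of the arithmetic, assuming the commonly cited drop in mean surface-ocean pH from roughly 8.2 before industrialization to about 8.1 today (illustrative values, not figures from the report above):

```python
# Hydrogen ion concentration follows [H+] = 10**(-pH), so a drop in pH
# translates into a multiplicative increase in [H+].
# Assumed illustrative values: pre-industrial pH ~8.2, present-day ~8.1.
ph_then, ph_now = 8.2, 8.1

ratio = 10 ** (ph_then - ph_now)     # concentration ratio, ~1.26
increase_pct = (ratio - 1) * 100
print(f"[H+] increase: ~{increase_pct:.0f}%")  # ~26%
```

A 0.1-unit drop yields a ~26% increase; the ~30% figure commonly cited corresponds to a slightly larger estimated decline. The point of the exercise stands either way: on a logarithmic scale, a small-looking shift is a large chemical change.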
Climate: Overshoot Confirmed
The 1.5°C global warming threshold will be exceeded within the current decade — most likely between 2028 and 2032. The UNEP Emissions Gap Report 2025 confirmed what atmospheric physics had already made inevitable.[5][6] Under full implementation of every national climate pledge currently on the table, end-of-century warming reaches 2.3–2.5°C. Under current policies, 2.8°C. To hold to a 1.5°C trajectory, global emissions would need to fall by 55% by 2035 relative to 2019 — a rate of reduction that no major economy has begun to approach.
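The scale of that 55% figure is clearer when expressed as a compound annual rate. A back-of-envelope sketch, under the simplifying assumption (mine, not the report's) that 2026 emissions sit roughly at their 2019 level and that reductions begin immediately:

```python
# What constant annual rate of decline r satisfies
# (1 - r) ** years == fraction of 2019 emissions remaining in 2035?
target_fraction = 1 - 0.55      # 45% of the 2019 level may remain
years = 2035 - 2026             # 9 years of reductions

annual_decline = 1 - target_fraction ** (1 / years)
print(f"required decline: ~{annual_decline:.1%} per year, every year")  # ~8.5%
```

For scale: the COVID-19 shutdowns of 2020 cut global emissions by roughly 5%, for a single year, at enormous economic cost — and emissions rebounded the following year.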
The report introduces a distinction that reframes the entire climate conversation: peak temperature versus return temperature. The question is no longer whether humanity crosses the 1.5°C line. It is how long we remain beyond it, and what breaks while we are there. Every fraction of a degree of overshoot, and every year of duration, increases the probability of triggering processes that do not reverse on human timescales. The Greenland ice sheet is losing mass at an accelerating rate; its full destabilization commits the planet to meters of sea-level rise over centuries. The Brazilian Amazon, already approximately 20% deforested, approaches the threshold beyond which reduced rainfall and increased fire transform rainforest into savanna — a self-reinforcing shift.[4] Permafrost across the Arctic contains twice the carbon currently in the atmosphere; as it thaws, it releases greenhouse gases without further human action. These are not forecasts. They are mechanisms already in motion, whose pace depends on how long the overshoot lasts.
Biodiversity: The Quiet Collapse
The planet is losing its biological complexity at a rate between 100 and 1,000 times the natural background. One million animal and plant species face extinction.[7][8] Pollinator populations — on which between $235 billion and $577 billion in annual agricultural production depend — are declining across virtually every monitored region.[4] Coral reefs experienced their most extensive bleaching on record in 2023–2024, across every ocean basin simultaneously.
The target is 30% of terrestrial and marine areas protected by 2030, set by the Kunming-Montreal Global Biodiversity Framework. Coverage today: 18% terrestrial, 8% marine. Five years separate the present from the deadline. The gap between ambition and capacity is not closing.
Two Failures and One Success
In August 2025, five years of negotiations on a global plastics treaty collapsed in Geneva. Over 100 nations demanded production caps. A bloc of petrostate producers insisted on limiting the scope to waste management. No agreement was reached.[9]
In January 2026, the Brazilian soy industry withdrew from the Amazon Soy Moratorium — a twenty-year zero-deforestation pact — after the state of Mato Grosso stripped tax incentives from participating companies. Models project a 30% surge in Amazon deforestation by 2045.[10]
In the same month, the High Seas Treaty entered into force — the first international legal framework for protecting ocean biodiversity beyond national jurisdiction, covering two-thirds of the world’s oceans. It took twenty years to negotiate.[11]
And the ozone layer continues to heal. Steadily, measurably, nearly four decades after the Montreal Protocol was adopted in 1987. It remains the only planetary-scale environmental crisis that humanity has successfully reversed. The conditions that made it possible were specific: an unambiguous scientific signal, a small number of identifiable industrial actors, the availability of viable chemical substitutes, and sufficient political will to produce a binding treaty with enforcement mechanisms. No current crisis fully reproduces those conditions — the actors are more numerous, the interests more entangled, the substitutes less obvious, the timescales more compressed. But the precedent stands. The obstacle is not an inherent incapacity of collective action. It is the absence, so far, of governance architectures adequate to what must now be governed.
Atmospheric CO₂, once emitted, persists for centuries. Ice sheet destabilization, once initiated, is irreversible on human timescales. The global emissions reduction required to hold 1.5°C demands a 55% cut in ten years. The institutional mechanism for achieving it convenes once a year.
II. The World Order
The institutions that govern the global order share a common origin: they were built from wreckage. The United Nations from the ashes of a world war that killed eighty million people. The Bretton Woods system from the ruin of a depression that brought democracies to their knees. The Nuclear Non-Proliferation Treaty from the terror of the Cuban Missile Crisis, when the species came within hours of ending itself. The pattern has held for over a century — catastrophe, then construction — and the construction has never once preceded the catastrophe.
What follows is what happens when those guardrails come down faster than new ones can be erected, while the forces they were designed to contain have grown beyond anything their architects could have conceived.
The 1945 Architecture
The system was designed for a world of two superpowers and fifty states. It now confronts 193 states, non-state actors with state-level reach, weapons that did not exist when its treaties were drafted, financial flows that outpace the regulators built to observe them, and an information environment that no government controls and no treaty addresses. The internet — the most consequential infrastructure since electrification — is fracturing into three incompatible regimes: one built on rights, one on sovereignty, one oscillating between market freedom and security restriction. No institution exists to govern the space where they collide.
The UN Security Council still reflects the power map of 1945 — five permanent members, each armed with a veto, in a world that bears no structural resemblance to the one that arrangement was designed to govern. The WTO’s dispute resolution body has been non-functional since 2019. Geoeconomic confrontation has displaced environmental risk as the number one global threat for 2026 — the first time a crisis of human coordination has overtaken the biophysical crises described in the previous section.[12]
The distance between the architecture and the world it is supposed to hold together is no longer a matter of reform. It is structural.
But structural mismatch is only half the diagnosis. The other half is behavioral. What distinguishes 2026 from previous periods of institutional strain is not merely that the architecture is inadequate — it is that the major powers have stopped pretending it matters. The United States imposes tariffs on allies and adversaries without distinction and conditions security commitments on transactional compliance. Europe rearms because the transatlantic guarantee it built its postwar order around is no longer credible. Russia wages a conventional interstate war of territorial conquest in Europe — the first such war on the continent since 1945. China conducts economic coercion openly, without diplomatic cover. No major power trusts any other. No alliance is unconditional. The diplomatic register has shifted from cooperation to positioning, from positioning to hostility, and on some fronts, from hostility to open violence. In earlier decades, states violated the rules while claiming to uphold them — and the hypocrisy itself was a form of tribute to the system.
That pretense has been abandoned.
What remains is what Hobbes described: the war of all against all — bellum omnium contra omnes. The post-1945 order was designed to prevent precisely this — by men who had seen where it leads.
Trade and Finance: From Rules to Weapons
The great wager of the late twentieth century was that economic interdependence would constrain conflict — that nations enmeshed in trade would find war too costly to contemplate. That wager has been lost. Interdependence did not prevent confrontation — it armed it.
What is underway is not a trade war. A war implies two sides contesting a single system. This is a fragmentation: the splintering of one rules-based order into competing blocs with incompatible standards, tariff structures, and enforcement regimes, each operating on the assumption that the others are adversaries. The effective U.S. tariff rate moved from 2.5% to 27% in three months — the steepest escalation in over a century — then back toward 9% after a Supreme Court ruling, still the highest since 1946. Replacement tariffs were announced within days.[13] The European Union has imposed a carbon tariff on the embedded emissions of imports. China has responded with export controls on critical minerals.[14] The only multilateral mechanism designed to arbitrate such disputes — the WTO Appellate Body — has been paralyzed for seven years. Each actor claims legitimacy. None can coordinate with the others. No one arbitrates.
The cost falls heaviest where it is least visible. Roughly 40% of African countries are in or approaching debt distress. Across the continent, debt servicing consumes 137% of what governments need for climate adaptation — they are paying the interest on yesterday’s borrowing while the biophysical conditions documented in the previous section accelerate beneath them. The Loss and Damage Fund, the international community’s answer to the most vulnerable nations’ climate costs, has received $700 million in pledges against an estimated annual need of $100–400 billion. Two to three orders of magnitude.
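The size of that gap can be computed directly from the figures above:

```python
pledged = 700e6                       # $700 million pledged to date
need_low, need_high = 100e9, 400e9    # $100–400 billion estimated annual need

for need in (need_low, need_high):
    factor = need / pledged
    print(f"shortfall: ~{factor:.0f}x the pledged amount")
```

Even at the low end of the estimate, the pledges cover under 1% of a single year's need; at the high end, under 0.2%.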
Democratic Recession and the Displacement Crisis
Political rights and civil liberties have deteriorated globally for nineteen consecutive years. Sixty countries registered decline in 2024. Thirty-four showed improvement.[15] The erosion reaches into established democracies. The mechanisms are consistent enough to constitute a pattern: election manipulation, constraints on press freedom, erosion of judicial independence, normalization of executive overreach. This is not a series of national failures. It is a structural condition — the guardrail of democratic self-correction weakening across the system at the moment it is most needed.
Where guardrails fail entirely, the consequences are measured in bodies and displacement. Ukraine enters its fourth year of full-scale war with no resolution pathway and over half a million casualties. Sudan faces the largest humanitarian crisis on the planet — famine, mass displacement, and a conflict the international community has been unable to stop or even slow. From the Sahel — where a cascade of coups has opened corridors for jihadist expansion — to Myanmar to Gaza, the pattern repeats: the architecture lacks the mandate, the capacity, or the consensus to act. The number of forcibly displaced persons surpassed 122 million in 2025 — nearly double what it was a decade ago.[16] Each number is a person who has lost the place where their life was built.
On February 5, 2026, the last nuclear arms control treaty between the United States and Russia expired — the final link in a chain stretching back to 1972.[17] For the first time in over fifty years, no binding limits exist on the two largest nuclear arsenals on Earth. No inspections. No data exchanges. No transparency architecture.
Into this vacuum, a third variable: China’s arsenal, historically a fraction of the American and Russian stockpiles, expanding at a pace that will bring it into strategic relevance within years. But the collapse of arms control is only the visible layer. Beneath it, the nature of nuclear risk itself has changed. Hypersonic missiles have compressed decision windows from thirty minutes to less than ten. AI-integrated early warning and targeting systems push the logic of deterrence toward automation — not because any doctrine demands it, but because human cognition is too slow for the weapons that now exist. The danger is no longer miscalculation between leaders. It is the structural pressure to remove leaders from the loop entirely.
The architecture has not merely been retired. It has been outrun. What emerges — whether a new protocol of collective security or the automated management of mutual destruction — remains unwritten.
Arms control treaties were negotiated over decades and dismantled in years. Trade regimes that took a generation to construct shift in months. Financial contagion crosses borders in hours. Hypersonic weapons close the distance in minutes. The forces that these guardrails were built to contain now operate on timescales that leave no room for the deliberation they were designed to enable.
III. The Great Transformation
Long before AI, and before it became enclosed, the internet was born decentralized, open, non-commercial — and each of these properties was stripped from it. Commercial use, forbidden on ARPANET, took hold without a vote or a debate. The Domain Name System — the cadastre of the network, the infrastructure that determines who exists online — was placed under the authority of the U.S. Department of Commerce for eighteen years. The vast majority of users never learned that a sovereignty war had been fought over this invisible layer.
Then came the walls — and the tunnels. China built the Great Firewall. Russia grafted controls onto a network that had initially been free. The West chose a third path, which Snowden revealed in 2013: not the wall, but penetration. PRISM gave the NSA real-time access to data held by Google, Apple, Facebook, and Microsoft — a structural colonization of commercial infrastructure by intelligence agencies. The “private” web, the space in which two billion people believed they were corresponding and thinking in safety, had never existed.
When Russia annexed Crimea, one of the first military operations was the seizure of internet exchange points: to control the network is to control the informational environment of a population. In parallel, a slower and deeper enclosure. Meta, Google, and Amazon accumulated power over data rivaling that of states. The EU’s GDPR in 2018, then the Court of Justice ruling in 2023 — merging data protection violations with abuse of dominant position — marked the first counteroffensives. Too late to reverse the concentration. From DNS routing to recommendation algorithms, every layer of digital infrastructure has been a battleground between openness and capture.
Artificial intelligence did not inaugurate this conflict. It changed its nature.
From Data to Language
For thirty years, billions of human beings wrote — emails, articles, forums, code, conversations, theses, creative work. Freely, on platforms that presented themselves as services. This output, the largest accumulation of human language in history, became the training corpus of large language models.
These models do not think. They are stochastic machines — probabilistic architectures trained on human language, capable of producing text, code, reasoning, and images with a fluidity that mimics cognition without reproducing it. Their power comes not from intelligence but from scale: enough parameters, enough data, and emergent properties appear that their own designers cannot always explain. It is this combination — statistical prediction, emergent behaviors, adaptability to an incalculable number of tasks — that makes the technology simultaneously so promising and so difficult to govern.
And yet what matters here may not be the nature of the mechanism, but the breadth of its surface. These models operate across every domain in which human activity is mediated by language: writing, reasoning, translation, diagnosis, teaching, law, programming. Not the totality of intelligence. But a territory so vast that entire professions, industries, and institutions find themselves within its reach.
The promises are commensurate with that surface. A language model can democratize access to knowledge, reduce informational inequalities that centuries of public education have failed to close, and amplify the capacity of every connected individual to act. For the first time, a powerful cognitive tool is potentially accessible to everyone, everywhere, in every language. Potentially. Because the structure in which these tools are deployed determines whether this power emancipates or subjugates. And that structure, today, is one of concentration.
The Acceleration and the Shock
Every previous tool in human history automated a function. The lever: force. The engine: speed. The computer: calculation. Humans remained necessary for one thing, and one thing only — judgment. Deciding, interpreting, grasping context, navigating ambiguity. This is precisely where large language models operate. Not with the subtlety of a human mind. With something else: a brute force of pattern recognition applied to a surface no one had ever automated — language, reasoning, synthesis, creation.
The question is no longer whether these machines “think.” The question is whether the distinction still matters, economically and politically, when the output is close enough.
There is a structural irony that official reports do not articulate. These models were not built from nothing. They were trained on us — on thirty years of human writing deposited freely on an internet that presented itself as a space of exchange. Forums, articles, theses, code, conversations, creative work. This corpus — the largest linguistic patrimony ever assembled — has become the raw material for tools that now compete directly with those who produced it.
The IMF estimates that 40% of global employment is exposed, rising to 60% in advanced economies.[18] The ILO documents that women are three times more vulnerable.[19] Behind the models, data annotators in Kenya, the Philippines, and India label the training data for less than two dollars an hour.[20] The producer fed the machine. The machine replaces the producer. The most precarious workers are the ones who make it possible.
On February 26, 2026, Jack Dorsey announced the elimination of more than 4,000 positions at Block — half the company’s workforce — not because the business was struggling but because it was thriving, and AI tools had made a smaller team more productive than the full one.[21] It is one case among many. Amazon cut 30,000 corporate roles between late 2025 and early 2026.[22] Salesforce reduced its customer support staff from 9,000 to 5,000. Baker McKenzie, one of the world’s largest law firms, announced the elimination of 600 to 1,000 business services positions — approximately 10% of its support workforce — citing AI-driven restructuring.[23] The common thread is not distress. It is optimization.
And everything is accelerating. In January 2025, DeepSeek published R1 — a reasoning model whose final training stage cost approximately $294,000, built on a base model (DeepSeek-V3) trained for under $6 million — achieving performance comparable to proprietary systems costing orders of magnitude more.[24] What this demonstrates is not merely that AI is getting cheaper. It is that it obeys the Jevons paradox: efficiency gains do not reduce total deployment — they accelerate it, by making the technology accessible to an exponentially wider base of actors.
In February 2026, researchers from ETH Zurich and Anthropic demonstrated that LLMs can deanonymize online accounts at scale — linking pseudonymous profiles to real identities with high precision, in minutes, for a few dollars per profile. Online pseudonymity, one of the last remaining refuges in the current architecture of the web, is no longer a technical guarantee. It is a wager against computational power.
Data centers already consume the energy equivalent of entire nations.[25] Autonomous weapons systems capable of selecting and engaging targets without meaningful human control are under development by multiple state actors, with no binding treaty regulating their deployment. These are not risks on the horizon. They are accomplished facts, compounding at a pace that institutions — designed for decision cycles of months or years — cannot process.
It is not that things are moving fast. It is that speed itself changes the nature of the problem. The technology may be sound. The velocity of its deployment, without institutional architecture capable of absorbing it, converts opportunity into systemic shock.
And the direction of this shock is not determined. Several futures are already in competition.
The Forces in Competition
What is at stake with artificial intelligence is not which country wins the race. It is which forces prevail within every country — and whether any institutional framework emerges capable of steering the outcome. Three forces are already shaping the transformation. They are not national models. They coexist, interact, and compete within the same systems.
The first is concentration. Computational power, training data, and distribution channels are accumulating in a shrinking number of hands. American companies dominate the top ten AI firms by market capitalization. NVIDIA alone exceeds $4.3 trillion as of early 2026 — more than the GDP of Germany.[26] According to Gartner, global AI spending across hardware, infrastructure, and services is projected to surpass $2.5 trillion in 2026. Concentration is a structural tendency of the technology, which rewards scale, capital intensity, and control of infrastructure — though recent breakthroughs like DeepSeek suggest this tendency is not absolute at every capability level. But wherever it goes unchecked, it produces the same gravitational pull: cognitive tools of unprecedented power, rented back from the entities that captured the data and compute required to build them. The risk is not theoretical. It is that the capacity to think, write, produce, and diagnose — functions previously exercised by autonomous humans — becomes a service delivered by a handful of providers, without democratic input into the terms of access.
The second is integration. States are incorporating AI into the apparatus of governance — not as a tool added to existing institutions, but as a layer that restructures what institutions can see, predict, and act upon. The structural logic is legibility: the conversion of complex, opaque human behavior into data streams that can be read, modeled, and anticipated by centralized systems. China has moved furthest in this direction, with an estimated several hundred million surveillance cameras, an explicit AI+ initiative targeting the modernization of “social governance,” and predictive systems designed to fuse data streams from cameras, sensors, social media, and grassroots informants into models that anticipate disorder before it manifests.
But the logic of legibility is not uniquely Chinese. Consider algorithmic risk assessment in criminal sentencing: a system deployed across dozens of U.S. states that converts arrest records, zip codes, employment history, and demographic proxies into a score that influences whether a human being awaits trial in jail or at home. The tool is presented as objective. The data encodes decades of discriminatory policing. The defendant often cannot see the model, challenge its inputs, or understand how the score was produced. This is what AI integration into governance looks like in practice — not necessarily as a grand surveillance architecture, but as the quiet replacement of human judgment by opaque computational processes at decision points that determine people’s lives. The scale varies enormously between democracies and authoritarian systems. The underlying impulse — to render populations more readable and administrable through computation — operates in both.
The third is openness. DeepSeek, Llama, Mistral — open-weight models demonstrate that computational power is not condemned to remain proprietary. Communities of developers, researchers, and public institutions are building alternatives that prove the technical viability of a distributed path. This is not marginal: DeepSeek’s R1 achieved frontier-level performance at a fraction of incumbent costs, and Mistral has established Europe’s most credible position in foundation model development.
The technical argument for openness has already been won. The institutional argument has not. Open models can be downloaded, fine-tuned, and deployed by anyone — including the most powerful actors, who possess the infrastructure to exploit them at a scale that communities cannot match. Without governance frameworks and redistribution mechanisms, openness redistributes capability without redistributing power. The technical path exists. The political architecture to make it count is the piece that remains unbuilt.
These three forces coexist within every country, every economy, every political system. The United States contains both the most concentrated AI oligopoly on Earth and the most vibrant open-source ecosystem. China produced DeepSeek and the Great Firewall. Europe drafted the AI Act but hosts no developer operating at the scale of the leading American or Chinese frontier labs.[27] No nation has resolved the tension between these forces. No institution currently operating has the mandate, the speed, or the design to do so.
That framework does not yet exist. Its absence is the central finding of this section — and the starting point of what this site proposes to build.
IV. The Minds
The three preceding sections traced a diagnosis through the biophysical substrate, the institutional architecture, and the technological transformation. Each described systems under structural stress. This section concerns the substrate on which all collective response depends: the capacity of human beings to perceive reality, form judgments, and act together on the basis of shared facts.
That capacity is degrading — not suddenly, and not because of any single technology. The informational environment in which it operates now reshapes itself at the speed of algorithmic optimization, while the deliberative capacity it is displacing — the habits of sustained attention, evidential reasoning, and institutional trust on which democratic governance depends — was built over centuries of educational practice, civic institution-building, and slow cultural accumulation. Of all the temporal mismatches documented in this assessment, this may be the most consequential: the fastest-moving force in the system is operating on the substrate that is slowest to repair.
The Long Erosion
The erosion did not begin with the smartphone. It did not begin with social media. Its earliest measurable signals appear decades before the first browser, in data that has nothing to do with screens.
The political scientist Robert Putnam documented, across five decades of longitudinal data, a sustained decline in civic association, social trust, and community participation beginning in the 1960s and accelerating through the 1980s and 1990s — well before the internet reached mass adoption.[28] The causes Putnam identified were multiple — suburban sprawl, the two-career household, generational value shifts — but the central finding was structural: the social infrastructure of democratic deliberation was contracting before a single platform existed to accelerate the process.
What the screens inherited was not a healthy deliberative culture disrupted by technology. It was a culture already losing the institutional habits — local association, sustained reading, face-to-face civic argument — through which collective judgment had been formed.
The evidence that the decline extends to cognition itself arrived later, and it is harder to dismiss. For most of the twentieth century, measured intelligence rose steadily across every country where testing was conducted — a phenomenon known as the Flynn effect, documented across 271 samples totaling four million participants.[29] Better nutrition, expanding education, increasing cognitive stimulation: the gains were real and consistent.
They have reversed. A 2018 study in the Proceedings of the National Academy of Sciences, using Norwegian military conscription data covering three decades of birth cohorts, identified the turning point: cohorts born after approximately 1975 show a sustained decline of 0.23 IQ points per year.[30] The finding is methodologically unusual — and unusually difficult to dismiss — because the decline appears within families. Brothers born later score lower than brothers born earlier. This rules out genetic drift. It rules out immigration effects. It rules out changes in who takes the test. Something in the shared environment shifted, and it shifted against cognitive development.
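A decline of 0.23 points per year sounds small; compounded across birth cohorts, it is not. A rough illustration, where the 30-year span and the 15-point standard deviation are standard conventions rather than figures from the study itself:

```python
rate = 0.23   # IQ points lost per successive birth-year cohort
span = 30     # e.g. comparing cohorts born in 1975 and 2005
sd = 15       # conventional standard deviation of IQ scores

loss = rate * span
print(f"{loss:.1f} points lost, ~{loss / sd:.2f} standard deviations")  # 6.9 points, ~0.46 SD
```

Half a standard deviation is roughly the gap the Flynn effect took a full generation to gain, running in reverse.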
The causes remain debated. The phenomenon is not. And the pattern of its appearance — a reversal emerging in the mid-1970s, deepening through the decades that followed — places it squarely within the same period that saw television displace reading as the dominant medium, civic participation decline, and the informational environment begin its long migration from the printed page to the screen.
The OECD’s Programme for International Student Assessment confirms the trajectory at the other end of the age spectrum. The 2022 PISA results documented a decline of approximately 10–11 points in reading (depending on country inclusion criteria for trend comparability) and approximately 15 points in mathematics across OECD countries — the largest single-cycle drops in the program’s history.31 Pre-pandemic trend analysis confirms the decline was underway before COVID-19; the pandemic accelerated it, but did not cause it.
These are not cultural impressions. They are measurements — conducted across millions of subjects, replicated across countries, published in leading peer-reviewed journals. They converge on a picture that no single study could establish alone: the cognitive and deliberative capacities on which complex democratic governance depends have been declining, measurably, for decades.
And the decline is not confined to the informational environment. Ninety-nine percent of the global population breathes air exceeding WHO guidelines. Lead exposure, fine particulate matter, and the proliferation of microplastics — now detected in human brain tissue, with concentrations rising measurably between 2016 and 2024 — all carry documented neurocognitive effects.32 The full causal picture is contested, but the convergence of informational, chemical, and nutritional pressures on cognitive development across the same decades is not. The erosion of the mind, like the erosion of the soil documented in Section I, has more than one source — and the sources compound.
The Architecture and Its Consequences
The long erosion documented above proceeded gradually, across decades, driven by diffuse forces. What happened next was not gradual. Beginning in the mid-2000s, the informational environment through which democratic societies form judgments was re-engineered — deliberately, at scale, and for a purpose that has nothing to do with the formation of judgment.
The business model is by now well understood, though its structural implications remain under-examined. The dominant platforms — Google, Meta, YouTube, TikTok, X — do not sell information to users. They sell users to advertisers. The commodity is not content but attention, measured in time spent, interactions generated, and behavioral data captured. This much has been reported so often that it has become a cliché, and clichés are where analytical danger lives — because familiarity replaces understanding.
The deeper structure is not the sale of attention. It is the engineering of attention. To sell attention at the scale and precision that advertisers require, platforms must learn what captures each individual user — not users in the aggregate, but this user, now, on this screen. The architecture is therefore built to model the beliefs, emotional triggers, preferences, and identity markers of each person who touches it, and to surface content calibrated to produce engagement from that specific psychological profile. The optimization function is not truth, relevance, or civic utility. It is the maximization of the time and emotional energy each user invests in the platform. Every scroll, every pause, every click is a data point fed back into a model whose sole purpose is to learn what holds you — and to give you more of it.33
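The feedback loop described above can be sketched in a few lines. This is a purely illustrative toy — the content categories, their "true" engagement rates, and the epsilon-greedy strategy are all invented for the sketch, not drawn from any platform's actual system — but it shows the structure: every reaction is fed back into a model whose sole objective is to learn what holds this particular user, and the optimizer converges on whatever does.

```python
import random

# Illustrative sketch only: categories and engagement rates are invented numbers.
CATEGORIES = ["neutral_news", "hobby", "outrage", "tribal"]
TRUE_ENGAGEMENT = {"neutral_news": 0.05, "hobby": 0.15, "outrage": 0.40, "tribal": 0.35}

def run_feed(steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy loop: mostly show the category with the best observed
    click rate for this user, occasionally explore the others."""
    rng = random.Random(seed)
    shown = {c: 0 for c in CATEGORIES}
    clicks = {c: 0 for c in CATEGORIES}
    for _ in range(steps):
        if rng.random() < epsilon:
            choice = rng.choice(CATEGORIES)                  # explore
        else:                                                # exploit best estimate
            choice = max(CATEGORIES,
                         key=lambda c: clicks[c] / shown[c] if shown[c] else 1.0)
        shown[choice] += 1
        clicks[choice] += rng.random() < TRUE_ENGAGEMENT[choice]  # simulated user reaction
    return shown
```

Run long enough, the loop allocates the overwhelming share of exposure to whichever categories the simulated user reacts to most — here, the high-arousal ones — without anyone ever deciding that this is what the feed should contain. The outcome is a property of the optimization target, not of any editorial intent.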
This is not a side effect. It is the product. What the Harvard Business School scholar Shoshana Zuboff identified as surveillance capitalism is not merely a business model that profits from data — it is an economic logic that requires the continuous extraction and operationalization of human experience as raw material for prediction and behavioral modification.33 The commodification of attention has a lineage stretching back to nineteenth-century newspaper advertising; but the personalization layer — the capacity to model and target each individual’s cognitive vulnerabilities at machine speed — is without precedent.34
The consequences follow from the architecture with something close to logical necessity.
Content that provokes outrage, fear, or tribal identification generates more engagement than content that informs. This is not a hypothesis — it is the central finding of the largest study of news diffusion ever conducted. In 2018, researchers at the MIT Media Lab analyzed the spread of approximately 126,000 verified true and false news stories on Twitter, shared by three million users over more than a decade. False news reached more people, penetrated deeper into networks, and spread faster than true news in every category examined — politics, science, finance, urban legends. The effect was not driven by bots. It was driven by humans, sharing content that was more novel and more emotionally arousing than accurate reporting.35
But the asymmetry is not random. It is personalized. The architecture does not simply amplify whatever is loudest. It learns what each user is most likely to engage with and surfaces that content preferentially. For a user whose prior engagement patterns indicate conservative political identity, the algorithm surfaces content that confirms and intensifies that identity. For a user whose patterns indicate progressive identity, the reverse. The content that maximizes engagement for each profile tends, measurably, to be more extreme, more emotionally charged, and less accurate than what a neutral editorial process would surface — because extremity and emotional charge are what the engagement function rewards.36
The result is not an echo chamber in the simple sense — users are not sealed off from opposing views. The finding is more troubling than that. In an experiment led by Duke University’s Christopher Bail and published in PNAS, Republican Twitter users who were paid to follow a bot retweeting liberal elected officials and opinion leaders became more conservative, not less. Democrats exposed to conservative content showed a slight increase in liberal attitudes — an effect that did not reach statistical significance.36 Exposure to opposing views, mediated through the architecture of the platform, increased polarization rather than reducing it — because the architecture framed the encounter not as informational exchange but as identity confrontation. The medium did not carry the message. The medium transformed the message into a provocation calibrated to deepen the division it purported to bridge.
People do not primarily believe falsehoods because they want to. They believe them because the informational environment in which they encounter them does not activate — and in many cases actively suppresses — the slower, more effortful cognitive processes through which accuracy is assessed.37 The architecture rewards speed: the quick share, the instinctive reaction, the emotional first response. It penalizes the pause. Every design choice — the infinite scroll, the notification pulse, the engagement metric made visible — is engineered to keep the user in the fast register and out of the slow one. The architecture does not merely distribute misinformation. It cultivates the cognitive conditions under which misinformation thrives.
Into this already degraded environment, generative AI has introduced something structurally new. Synthetic media — deepfake video, cloned audio, AI-generated text — has reached a sophistication that defeats casual inspection. The volume of deepfake content has surged from approximately 500,000 files in 2023 to a projected eight million in 2025; human detection rates for high-quality video deepfakes stand at roughly 24%.38 Between 96% and 98% of all deepfake content online consists of nonconsensual intimate imagery, and 99% of it targets women — a gendered weaponization that constitutes the dominant use case of the technology.38
But the epistemic damage extends beyond individual deception. The deeper consequence is what legal scholars have termed the liar’s dividend: when any image, video, or audio recording can plausibly be dismissed as synthetic, the evidentiary basis for public discourse collapses. A politician caught on tape can now claim the recording is fabricated. A war crime documented on video can be waved away. The very existence of convincing fakes poisons the well of evidence — not only when fakes are present, but whenever they are merely plausible. This is the structural inversion: the same technology that makes it trivially easy to fabricate evidence makes it trivially easy to deny it. The result is not a world flooded with lies. It is a world in which the distinction between evidence and fabrication becomes structurally unresolvable by ordinary citizens.
The cumulative outcome is not misinformation. It is something more structurally damaging: the emergence of populations inhabiting incompatible informational realities. The most comprehensive empirical study of this phenomenon — a systematic analysis of media consumption, sharing patterns, and information flows during the 2016 U.S. election by Harvard researchers — concluded that the American information ecosystem had bifurcated into structurally distinct and mutually unintelligible media environments, driven not by individual choice but by the architecture of distribution.39 This is not a metaphor. It is an empirically documented condition in which two groups of citizens, consuming information through the same platforms, arrive at factual pictures of the world so divergent that productive political exchange between them becomes impossible — not because they disagree about values, but because they disagree about what has happened.
The longitudinal data on institutional trust confirms the structural nature of the shift. Trust in media, government, business, and civil society has declined across 28 countries tracked by the Edelman Trust Barometer over more than two decades.40 The OECD’s cross-national government trust data shows parallel trajectories.41 These are not sentiment fluctuations. They are structural trends operating across diverse political systems, media environments, and cultural contexts — which suggests that the cause is not local politics but the informational architecture through which all these societies now operate.
The same conclusion was reached from a different starting point by Martin Gurri, a former CIA open-source analyst — not an academic or an activist, but a professional trained to identify structural patterns in public information environments.42 His observation is precise. The old architecture of institutional authority was imperfect, often captured, and frequently dishonest. But it performed a function: it produced a shared factual surface — contested, incomplete, biased, but common — on which democratic deliberation could occur. The new architecture produces engagement. It produces activation. It produces identity confirmation at scale. What it does not produce — what it is not designed to produce, what its optimization function actively works against — is a shared evidentiary basis for collective judgment.
Democratic institutions deliberate at the speed of hearings, publications, investigations, rulings. Disinformation propagates at the speed of algorithmic amplification — seconds.43 By the time a correction is published, the false claim has been seen by millions. By the time an investigation concludes, the narrative has been absorbed. The temporal mismatch between the production of falsehood and the production of correction is not a bug in the system. It is the system. And it operates on citizens whose capacity for the slower, more effortful cognition that accuracy requires has been declining — measurably, across countries — for decades.
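The temporal mismatch is an arithmetic fact, and a back-of-envelope calculation makes it concrete. The parameters below — seed audience, hourly growth rate, correction delay — are assumed for illustration, not measured values; the point is that any compounding spread outruns any fixed-delay correction by orders of magnitude.

```python
# Illustrative arithmetic only: all parameters are assumed, not measured.
def reach_before_correction(initial=100, growth_per_hour=1.25, correction_hour=48):
    """Cumulative views of a false claim before a correction appears,
    assuming compounding hourly spread and a fixed correction delay."""
    views, total = initial, 0.0
    for _ in range(correction_hour):
        total += views          # everyone reached this hour
        views *= growth_per_hour
    return int(total)
```

Under these assumptions, a claim seeded to a hundred people and growing 25% per hour accumulates tens of millions of views before a 48-hour fact-check is published — which is why the correction, however rigorous, arrives into an audience that has already absorbed the claim.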
Formation Under Distortion
The architecture described above was built by adults, for adults, to extract commercial value from adult attention. Its most consequential effects may be on people who had no part in building it and no capacity to consent to its terms.
A child born in 2010 arrived the same year as the iPad. By the time that child reached adolescence, recommendation algorithms had been refined through billions of feedback cycles, and the smartphone had become the primary mediating device between the developing mind and the external world. The informational environment this child inhabits is not a degraded version of an earlier one. It is a different environment — and the mind formed within it is being shaped differently.
The evidence is institutional, longitudinal, and cross-national. The World Health Organization’s Health Behaviour in School-aged Children survey — covering 280,000 young people across 44 countries — documents that the proportion of adolescents reporting problematic social media use rose from 7% in 2018 to 11% in 2022.44 These are not self-selected samples or activist surveys. They are the WHO’s own data, comparable in institutional weight to the IPBES biodiversity assessments or the UNEP emissions reports cited in Section I.
The methodological debate over effect sizes matters here, and the intellectually honest move is to confront it directly. A landmark study published in Nature Human Behaviour, using specification curve analysis across a dataset of 350,000 adolescents, found that the association between digital technology use and adolescent wellbeing is real — but small. The effect sizes were comparable to the negative association between wearing glasses and wellbeing, and substantially smaller than those of bullying, sleep deprivation, or smoking.45 This finding does not dismiss the problem. It specifies it: the damage is not dramatic and acute, as alarmist coverage suggests, but diffuse and structural — a low-grade pressure applied continuously, across an entire generation, through every waking hour of screen exposure. Small effects, compounded across hundreds of millions of adolescents over years of developmental formation, produce population-level consequences that no individual study can capture. The honest conclusion is not that screens are harmless. It is that the mechanism is chronic, not catastrophic — which is precisely what makes it difficult to mobilize against and easy to dismiss.
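The compounding argument above can be made quantitative with a standard statistical observation: a small shift in a population's mean moves a large absolute number of people across any fixed threshold in the distribution's tail. The numbers below — the effect size, the threshold, and the population — are assumed for illustration and are not figures from the Orben and Przybylski study.

```python
from statistics import NormalDist

# Illustrative only: effect size, threshold, and population are assumed numbers.
def extra_below_threshold(effect_sd=0.05, threshold_sd=-1.5, population=100_000_000):
    """Additional people falling below a fixed threshold when the whole
    distribution shifts down by a small fraction of a standard deviation.
    A downward mean shift of effect_sd is equivalent to raising the
    threshold by effect_sd in the shifted frame."""
    base = NormalDist().cdf(threshold_sd)
    shifted = NormalDist().cdf(threshold_sd + effect_sd)
    return round((shifted - base) * population)
```

With these assumed values, a mean shift of just 0.05 standard deviations — individually imperceptible — pushes on the order of 650,000 additional adolescents below a threshold set at 1.5 standard deviations, out of a population of 100 million. This is the sense in which small effects, applied uniformly at generational scale, produce population-level consequences.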
But beneath the behavioral data — time spent, self-reported wellbeing, problematic use — lies a deeper and less reversible change. The cognitive neuroscientist Maryanne Wolf has documented what she terms the erosion of the “reading brain”: the specific neurological circuit, built through years of sustained engagement with long-form text, that supports deep reading, inferential reasoning, critical analysis, and the capacity to hold complex arguments in working memory.46 This circuit is not innate. It must be constructed through practice — through the slow, effortful, repeated experience of following an extended argument from premise to conclusion, of holding ambiguity without resolution, of building mental models from text alone. The circuit is, in Wolf’s framing, one of the most significant cognitive achievements in human cultural history. And it is being exercised less and less.
The experimental evidence supports the neurological claim. A meta-analysis of 54 studies confirms that a text read on screen produces measurably lower comprehension and weaker inferential processing than the identical text read on paper.47 The difference is not explained by distraction alone. Screen reading activates different attentional patterns — more scanning, less linear sustained engagement — and these patterns, repeated thousands of times across developmental years, shape the neural pathways available for subsequent cognitive tasks. The concern is not that young people read less, though they do. It is that the mode of reading they practice most — fragmented, scroll-driven, optimized for speed — exercises a different cognitive circuit than the one required for the kind of reasoning on which democratic deliberation, scientific literacy, and institutional judgment depend.
The OECD’s PISA 2022 results confirm the trajectory at the population level. Mathematics scores across OECD countries fell by approximately 15 points — the steepest recorded drop in PISA’s history — and reading scores declined by approximately 10–11 points among countries with comparable trend data.48 Pre-pandemic trend analysis confirms the deterioration was underway before COVID-19; the pandemic accelerated a process already in motion. The students tested in 2022 were born between 2006 and 2007 — the first cohort to have spent their entire adolescence within the architecture described in the previous section.
The institutional response has been to place the architecture’s own products inside the classroom. Across OECD countries, tablet-based learning environments have proliferated — often funded or promoted by the technology companies whose platforms the evidence implicates. Sweden reversed its national digitization policy for primary schools in 2023 after its own education agency concluded that screen-based learning was contributing to declining literacy outcomes, and began reintroducing physical textbooks.49 It remains an exception. Most education systems continue to adopt digital tools on the assumption that the medium through which content is delivered is neutral with respect to the cognition it produces. The research reviewed above suggests that assumption is wrong.
Several jurisdictions have begun to act on the platform side. Australia legislated a minimum age of 16 for social media access in 2024. The European Union’s Digital Services Act imposes specific obligations on platform design for minors. The United Kingdom’s Online Safety Act 2023 establishes a regulatory framework addressing platform architecture directly. These are early, partial responses. They share a common assumption: that the damage is located in specific platforms or content categories rather than in the cognitive architecture the environment produces.
None of this would matter beyond the domain of educational policy if the capacities being shaped were merely academic. They are not. The capacity for sustained attention, tolerance of ambiguity, engagement with complexity, and assessment of evidence under conditions of uncertainty are not school skills. They are the cognitive prerequisites of democratic citizenship. They are what allows a population to evaluate competing claims, resist manipulation, hold institutions accountable, and deliberate on collective action in the face of incomplete information. Every crisis documented in this assessment — ecological, institutional, technological — requires precisely these capacities for its resolution. And every trend documented in this section — the declining test scores, the altered reading brain, the chronic low-grade pressure of the attention architecture — is eroding them in the generation that will inherit the consequences.
The epistemic crisis is not only a present-tense emergency. It is shaping the deliberative capacity of the next generation — the generation on whose judgment the response to every other crisis in this document will ultimately depend.
The Recursive Trap
Every crisis documented in this assessment admits, in principle, of a response. Emissions can be reduced. Institutions can be reformed. Technologies can be governed. The difficulties are immense, but they are difficulties of will, coordination, and design — not of the capacity to perceive and reason about the problem itself. The physics of the climate system is understood. The failures of the institutional architecture are visible. The trajectory of the technological transformation can be mapped.
Democratic governance requires a population capable of sustained attention, evidential reasoning, tolerance of complexity, and the formation of collective judgment under conditions of uncertainty. These are the capacities that the preceding three subsections have shown to be under structural erosion — not by a single cause, but by a convergence of forces operating across decades, across countries, and across every medium through which citizens encounter the world. The long cognitive decline documented in the first subsection is compounded by an informational architecture, documented in the second, that actively suppresses the reflective cognition accuracy requires. And the third subsection has shown that the generation inheriting these compounded pressures is being formed within them — its cognitive architecture shaped during the developmental years in which the capacity for deep reading, sustained argument, and evidential discrimination is either built or is not.
The crisis documented in this section is of a different order. It is a crisis in the capacity to respond to crises.
This is the recursive trap. The degradation of collective epistemic capacity is not one crisis among others. It is the meta-crisis — the degradation of the very faculty through which all other crises must be perceived, evaluated, and acted upon. A society that cannot sustain attention cannot follow a climate model. A polity that cannot distinguish evidence from fabrication cannot hold institutions accountable. A generation trained on fragments cannot build the structures that complexity demands.
The trap is recursive because any response to it must be enacted by the very capacities it has degraded. The institutions that would regulate the attention economy deliberate at a pace the attention economy has already outrun. The educational systems that would rebuild deep reading operate within the informational environment that undermines it. The democratic publics that would demand reform are the publics whose capacity for collective judgment is in question. There is no external vantage point from which to repair the system. The repair must come from within a system already compromised.
This is not a problem that more information solves. It is a problem in the conditions under which information becomes collective judgment — the conditions under which facts are perceived, weighed, debated, and converted into action. Those conditions were not given. They were constructed, over centuries, through educational practice, institutional design, civic habit, and the slow accumulation of deliberative norms. They can be reconstructed. But not by diagnosis alone.
The work of reconstruction is where this assessment ends and construction begins.
References

1. Stockholm Resilience Centre & Potsdam Institute for Climate Impact Research, Planetary Health Check 2025 (2025). planetaryhealthcheck.org
2. Richardson, K., et al., “Earth beyond six of nine planetary boundaries,” Science Advances 9, no. 37 (September 2023). doi:10.1126/sciadv.adh2458. Updated in Planetary Health Check 2025.
3. Water Peace and Security Partnership, Conflict Data 2023 — freshwater conflict events. waterpeacesecurity.org
4. UNEP, Global Environment Outlook — GEO-7: People and Planet in Peril (Nairobi: UNEP, 2024). unep.org/geo/
5. United Nations Environment Programme, Emissions Gap Report 2025: Coming in Hot (Nairobi: UNEP, November 2025). unep.org/resources/emissions-gap-report-2025
6. UNEP, Emissions Gap Report 2025 — supplementary technical analysis on NDC alignment and 1.5°C pathways.
7. IPBES, Global Assessment Report on Biodiversity and Ecosystem Services (Bonn: IPBES Secretariat, 2019). ipbes.net/global-assessment
8. UNEP, Global Environment Outlook — species threat assessment, reaffirming IPBES conclusions.
9. UNEP, “INC-5.2 on Plastic Pollution concludes without agreement,” press release, August 2025. unep.org
10. ABIOVE withdrawal from Amazon Soy Moratorium, January 2026. Reported in Mongabay, Reuters, and Carbon Brief.
11. United Nations, “Agreement under the United Nations Convention on the Law of the Sea on the Conservation and Sustainable Use of Marine Biological Diversity of Areas Beyond National Jurisdiction” (BBNJ Agreement), entered into force January 17, 2026.
12. World Economic Forum, Global Risks Report 2026, 21st ed. (Geneva: WEF, January 2026). weforum.org/publications/global-risks-report-2026
13. Yale Budget Lab, “U.S. Tariff Tracker: Effective Tariff Rates 2025–2026,” updated February 2026; Tax Foundation, “Tariff Analysis Post-IEEPA Ruling,” February 2026.
14. Center for Strategic and International Studies, “China’s Export Controls on Critical Minerals: Gallium, Germanium, and Antimony,” analysis, 2025. csis.org
15. Freedom House, Freedom in the World 2025 (Washington, DC: Freedom House, February 2025). freedomhouse.org/report/freedom-world/2025
16. United Nations High Commissioner for Refugees, Global Trends: Forced Displacement in 2024 (Geneva: UNHCR, 2025). unhcr.org/global-trends
17. Nuclear Threat Initiative, “New START Treaty Expiration: Implications and Next Steps,” February 2026. nti.org. See also Chatham House, Lowy Institute, and Brookings analyses.
18. International Monetary Fund, “Gen-AI: Artificial Intelligence and the Future of Work,” IMF Staff Discussion Note (January 2024). imf.org
19. International Labour Organization, “Generative AI and Jobs: A global analysis of potential effects on job quantity and quality,” ILO Working Paper (2025). ilo.org
20. CBS News, “The invisible workers behind AI” (2025); Brookings Institution, “Data Labelers and the Ethics of AI” (2025); Computer Weekly, “Kenya AI workers form Data Labelers Association” (January 2025).
21. Jack Dorsey announcement and reporting on Block (Square) workforce reduction of 4,000+ positions, February 26, 2026. Reported in Bloomberg, TechCrunch, and The Verge.
22. Amazon corporate role reductions totaling approximately 30,000 positions, late 2025–early 2026. Reported in The American Bazaar (January 28, 2026), CNBC, and Reuters.
23. Baker McKenzie workforce restructuring affecting 600–1,000 business services positions, citing AI-driven reorganization. Reported in RollOnFriday (February 5, 2026), Bloomberg Law, Legal Cheek, Above the Law, and The Global Legal Post.
24. DeepSeek, “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” technical report and Nature publication, January–September 2025. The $294,000 figure refers to the RL fine-tuning stage of R1; the ~$5.6–6M figure covers the full DeepSeek-V3 base model training (2,048 H800 GPUs, ~55 days). See The Register, 19 September 2025; SemiAnalysis cost analysis, January 2025.
25. International Energy Agency, Electricity 2024: Analysis and Forecast to 2026 (Paris: IEA, 2024). iea.org/reports/electricity-2024
26. NVIDIA market capitalization data: approximately $4.31–4.5 trillion as of February–March 2026. Sources: companiesmarketcap.com, stockanalysis.com, Trading Economics.
27. European Commission, “AI Act” (Regulation EU 2024/1689); Executive Order 14235, “Removing Barriers to American Leadership in Artificial Intelligence,” January 23, 2025; Paris AI Action Summit communiqué and non-signatory statements, February 2025.
28. Putnam, R.D., “Tuning In, Tuning Out: The Strange Disappearance of Social Capital in America,” PS: Political Science and Politics 28, no. 4 (1995): 664–683. See also Putnam, R.D., Bowling Alone: The Collapse and Revival of American Community (New York: Simon & Schuster, 2000).
29. Pietschnig, J. & Voracek, M. (2015). “One Century of Global IQ Gains: A Formal Meta-analysis of the Flynn Effect (1909–2013).” Perspectives on Psychological Science 10, no. 3: 282–306. doi:10.1177/1745691615577701.
30. Bratsberg, B. & Rogeberg, O. (2018). “Flynn effect and its reversal are both environmentally caused.” Proceedings of the National Academy of Sciences 115, no. 26: 6674–6678. doi:10.1073/pnas.1718793115.
31. OECD, PISA 2022 Results: Factsheets (Paris: OECD Publishing, 2023). Mathematics: OECD average declined from 489 (2018) to 472 (2022). Reading: declined approximately 10–11 points among countries with comparable trend data; precise figure depends on country inclusion criteria for trend comparability. oecd.org/pisa
32. On neurocognitive effects of environmental pollutants: Landrigan, P.J., et al., “The Lancet Commission on pollution and health,” The Lancet 391, no. 10119 (2018): 462–512. doi:10.1016/S0140-6736(17)32345-0. On microplastics in human brain tissue: Nihart, A.J., Garcia, M.A., El Hayek, E., et al. (2025). “Bioaccumulation of microplastics in decedent human brains.” Nature Medicine 31, no. 4: 1114–1119. doi:10.1038/s41591-024-03453-1. WHO air quality data cited in Section I of this document.
33. Zuboff, Shoshana, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019). See also Williams, James, Stand Out of Our Light: Freedom and Resistance in the Attention Economy (Cambridge: Cambridge University Press, 2018) — Williams is a former Google strategist turned Oxford philosopher; his account of how the attention economy degrades individual agency and deliberative capacity complements Zuboff’s structural analysis.
34. Wu, Tim, The Attention Merchants: The Epic Scramble to Get Inside Our Heads (New York: Knopf, 2016). Wu traces the commodification of attention from nineteenth-century newspaper advertising through radio, television, and the internet, establishing the current architecture as the latest — and most precise — stage of a century-long process.
35. Vosoughi, S., Roy, D., & Aral, S. (2018). “The spread of true and false news online.” Science 359, no. 6380: 1146–1151. doi:10.1126/science.aap9559.
36. Bail, C.A., et al. (2018). “Exposure to opposing views on social media can increase political polarization.” Proceedings of the National Academy of Sciences 115, no. 37: 9216–9221. doi:10.1073/pnas.1804840115.
37. Pennycook, G. & Rand, D.G. (2019). “Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning.” Cognition 188: 39–50. doi:10.1016/j.cognition.2018.06.011.
38. Deepfake volume, detection, and impact statistics: Keepnet Labs, “Deepfake Statistics & Trends 2026” (January 2026); Entrust Identity Fraud Report 2025; Pindrop 2025 Voice Intelligence + Security Report. Nonconsensual intimate imagery prevalence: UK Parliament, House of Lords briefing, 2024; Ceartas.io Global Deepfake Statistics & Impact Report, 2025. Note: figures are drawn primarily from commercial threat intelligence monitoring; independent governmental verification remains partial given the novelty of the phenomenon. Europol’s Internet Organised Crime Threat Assessment (IOCTA) 2024 corroborates the qualitative trend; precise volume figures remain sourced from commercial monitoring.
39. Benkler, Y., Faris, R., & Roberts, H., Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (New York: Oxford University Press, 2018).
40. Edelman, Trust Barometer (annual, 2001–2026). edelman.com/trust
41. OECD, Trust in Government (2021, updated 2024). oecd.org/gov/trust-in-government.htm
42. Gurri, Martin, The Revolt of the Public and the Crisis of Authority in the New Millennium (San Francisco: Stripe Press, 2018). Not peer-reviewed, but his central argument — that networked information demolishes institutional authority while offering no replacement — is analytically precise and widely cited in policy and intelligence communities.
43. Rosa, Hartmut, Social Acceleration: A New Theory of Modernity, trans. Jonathan Trejo-Mathys (New York: Columbia University Press, 2013).
44. World Health Organization Regional Office for Europe, Health Behaviour in School-aged Children (HBSC) International Report 2022 (Copenhagen: WHO Europe, 2024). Survey of 280,000 young people across 44 countries. who.int
45. Orben, A. & Przybylski, A.K. (2019). “The association between adolescent well-being and digital technology use.” Nature Human Behaviour 3: 173–182. doi:10.1038/s41562-018-0506-1. Dataset of 350,000 adolescents; specification curve analysis testing 20,004 model specifications.
46. Wolf, Maryanne, Reader, Come Home: The Reading Brain in a Digital World (New York: HarperCollins, 2018). Wolf is Professor-in-Residence and Director of the Center for Dyslexia, Diverse Learners, and Social Justice at the UCLA Graduate School of Education and Information Studies; formerly the John DiBiaggio Professor at Tufts University. See also Wolf, M., Proust and the Squid: The Story and Science of the Reading Brain (New York: HarperCollins, 2007) for the foundational neuroscientific account.
47. Delgado, P., Vargas, C., Ackerman, R., & Salmerón, L. (2018). “Don’t throw away your printed books: A meta-analysis on the effects of reading media on reading comprehension.” Educational Research Review 25: 23–38. doi:10.1016/j.edurev.2018.09.003. Meta-analysis of 54 studies confirming the screen-paper comprehension gap. See also Mangen, A., Walgermo, B.R., & Brønnick, K. (2013). “Reading linear texts on paper versus computer screen: Effects on reading comprehension.” International Journal of Educational Research 58: 61–68. doi:10.1016/j.ijer.2012.12.002.
48. OECD, PISA 2022 Results: Factsheets (Paris: OECD Publishing, 2023). Mathematics: OECD average declined from 489 (2018) to 472 (2022), approximately 15 points. Reading: declined approximately 10–11 points among countries with comparable trend data. oecd.org/pisa
49. Swedish National Agency for Education (Skolverket), review of digitization strategy in primary education, 2023. Minister for Schools Lotta Edholm announced the policy shift in May 2023, citing research linking screen-based learning to declining literacy outcomes. Reported in The Guardian, Politico, and Swedish national media.