
Peter Mandelson Invokes Press Harassment Protections To Dodge Questions About His Support Of Jeffrey Epstein

By: Mike Masnick
February 11, 2026 at 21:28

Peter Mandelson—the former UK cabinet minister who was just sacked as Britain’s ambassador to the United States over newly revealed emails with Jeffrey Epstein—has found a novel way to avoid answering questions about why he told a convicted sex offender “your friends stay with you and love you” and urged him to “fight for early release.” He got the UK press regulator to send a memo to all UK media essentially telling them to leave him alone.

The National published what they describe as the “secret notice” that went out:

CONFIDENTIAL – STRICTLY NOT FOR PUBLICATION: Ipso has asked us to circulate the following advisory:

Ipso has today been contacted by a representative acting on behalf of Peter Mandelson.

Mr Mandelson’s representatives state that he does not wish to speak to the media at this time. He requests that the press do not take photos or film, approach, or contact him via phone, email, or in-person. His representatives ask that any requests for his comment are directed to [REDACTED]

We are happy to make editors aware of his request. We note the terms of Clause 2 (Privacy) and 3 (Harassment) of the Editors’ Code, and in particular that Clause 3 states that journalists must not persist in questioning, telephoning, pursuing or photographing individuals once asked to desist, unless justified in the public interest.

Clauses 2 and 3 of the UK Editors’ Code—the privacy and harassment provisions—exist primarily to protect genuinely vulnerable people from press intrusion. Grieving families. Crime victims. People suffering genuine harassment.

Mandelson is invoking them to avoid answering questions about his documented friendship with one of history’s most notorious pedophiles—a friendship so extensive and problematic that it just cost him his job as ambassador to the United States, days before a presidential state visit.

According to Politico, the UK Foreign Office withdrew Mandelson “with immediate effect” after emails showed the relationship was far deeper than previously known:

In a statement the U.K. Foreign Office said Mandelson had been withdrawn as ambassador “with immediate effect” after emails showed “the depth and extent” of his relationship with Epstein was “materially different from that known at the time of his appointment.”

“In particular Peter Mandelson’s suggestion that Jeffrey Epstein’s first conviction was wrongful and should be challenged is new information,” the statement added.

So we have a senior political figure who just got fired over revelations that he told a convicted sex offender his prosecution was “wrongful” and should be challenged, who maintained this friendship for years longer than he’d admitted, and his response is to invoke press harassment protections?

The notice does include the important qualifier “unless justified in the public interest.” And it’s hard to imagine a clearer case of public interest: a senior diplomat, just sacked from his post, over previously undisclosed communications with a convicted pedophile, in which he expressed support for challenging that pedophile’s conviction. If that’s not public interest, the term has no meaning.

But the mere act of circulating this notice creates a chilling effect. It puts journalists on notice that pursuing this story could result in complaints to the regulator. It’s using the machinery of press regulation as a shield against legitimate accountability journalism.

Now, to be fair, one could imagine scenarios where even a disgraced public figure might legitimately invoke harassment protections—it wasn’t that long ago there was a whole scandal in the UK with journalists hacking the voicemails of famous people. But that’s not what’s happening here. Mandelson is invoking these provisions to avoid being asked questions at all. “Please don’t inquire about why I told a convicted pedophile his prosecution was wrongful” is not the kind of harm these rules were designed to prevent.

This is who Mandelson has always been: someone who sees regulatory and governmental machinery as tools to be deployed on behalf of whoever he’s serving at the moment. Back in 2009, we covered how he returned from a vacation with entertainment industry mogul David Geffen and almost immediately started pushing for aggressive new copyright enforcement measures, including kicking people off the internet for file sharing. As we wrote at the time, he had what we called a “sudden conversion” to Hollywood’s position on internet enforcement that happened to coincide suspiciously with his socializing with entertainment industry executives.

Back then, the machinery was deployed to serve entertainment executives who wanted harsher copyright enforcement. Now it’s being deployed to serve Mandelson himself.

There’s a broader pattern here that goes beyond one UK politician. The Epstein revelations have been remarkable not just for what they’ve revealed about who associated with him, but for how consistently the response from the powerful has been to deflect, deny, and deploy every available mechanism to avoid genuine accountability. Some have used their media platforms to try to reshape the narrative. Some have simply refused to comment.

Mandelson is trying to use the press regulatory system itself.

It’s worth noting that The National chose to publish the “confidential – strictly not for publication” memo anyway, explicitly citing the public interest. Good for them. Because if there’s one thing that absolutely serves the public interest, it’s shining a light on attempts by the powerful to use the systems meant to protect the vulnerable as shields against their own accountability.

Mandelson’s representatives say he “does not wish to speak to the media at this time.” That’s his right to request—but no media should have to agree to his terms. Weaponizing press regulation to create a cone of silence around questions of obvious public interest is something else entirely. It’s elite impunity dressed up in the language of press ethics.


The Policy Risk Of Closing Off New Paths To Value Too Early

February 11, 2026 at 23:26

Artificial intelligence promises to change not just how Americans work, but how societies decide which kinds of work are worthwhile in the first place. When technological change outpaces social judgment, a major capacity of a sophisticated society comes under pressure: the ability to sustain forms of work whose value is not obvious in advance and cannot be justified by necessity alone.

As AI systems diffuse rapidly across the economy, questions about how societies legitimate such work, and how these activities can serve as a supplement to market-based job creation, have taken on a policy relevance that deserves serious attention.

From Prayer to Platforms

That capacity for legitimating work has historically depended in part on how societies deploy economic surplus: the share of resources that can be devoted to activities not strictly required for material survival. In late medieval England, for example, many in the orbit of the church made at least part of their living performing spiritual labor such as saying prayers for the dead and requesting intercessions for patrons. In a society where salvation was a widely shared concern, such activities were broadly accepted as legitimate ways to make a living.

William Langland was one such prayer-sayer. He is known to history only because, unlike nearly all others who did similar work, he left behind a long allegorical religious poem, Piers Plowman, which he composed and repeatedly revised alongside the devotional labor that sustained him. It emerged from the same moral and institutional world in which paid prayer could legitimately absorb time, effort, and resources.

In 21st-century America, Jenny Nicholson earns a sizeable income sitting alone in front of a camera, producing long-form video essays on theme parks, films, and internet subcultures. Her audience supports this work willingly, and few doubt that it creates value of a kind. Where Langland’s livelihood depended on shared theological and moral authority emanating from a Church that was the dominant institution of its day, Nicholson’s depends on a different but equally real form of judgment expressed by individual market participants. And she is just one example of a broader class of creators—streamers, influencers, and professional gamers—whose work would have been unintelligible as a profession until recently.

What links Langland and Nicholson is not the substance of their work or any claim of moral equivalence, but the shared social judgment that certain activities are legitimate uses of economic surplus. Such judgments do more than reflect cultural taste. Historically, they have also shaped how societies adjust to technological change, by determining which forms of work can plausibly claim support when productivity rises faster than what is considered a “necessity” by society.

How Change Gets Absorbed

Technological change has long been understood to generate economic adjustment through familiar mechanisms: by creating new tasks within firms, expanding demand for improved goods and services, and recombining labor in complementary ways. Often, these mechanisms alone can explain how economies create new jobs when technology renders others obsolete. Their operation is well documented, and policies that reduce frictions in these processes—encouraging retraining or easing the entry of innovative firms—remain important in any period of change.

That said, there is no general law guaranteeing that new technologies will create more jobs than they destroy through these mechanisms alone. Alongside labor-market adjustment, societies have also adapted by legitimating new forms of value—activities like those undertaken by Langland and Nicholson—that came to be supported as worthwhile uses of the surplus generated by rising productivity.

This process has typically been examined not as a mechanism of economic adjustment, but through a critical or moralizing lens. From Thorstein Veblen’s account of conspicuous consumption, which treats surplus-supported activity primarily as a vehicle for status competition, to Max Weber’s analysis of how moral and religious worldviews legitimate economic behavior, scholars have often emphasized the symbolic and ideological dimensions of non-essential work. Herbert Marcuse pushed this line of thinking further, arguing that capitalist societies manufacture “false needs” to absorb surplus and assure the continuation of power imbalances. These perspectives offer real insight: uses of surplus are not morally neutral, and new forms of value can be entangled with power, hierarchy, and exclusion.

What these accounts often overlook, however, is that legitimating new forms of value can also allow societies to absorb technological change without requiring increases in productivity to be translated immediately into conventional employment or consumption. New and expanded ways of using surplus are, in this sense, a critical economic safety valve during periods of rapid change.

Skilled Labor Has Been Here Before

Fears that artificial intelligence is uniquely threatening simply because it reaches into professional or cognitive domains rest on a mistaken historical premise. Episodes of large-scale technological displacement have rarely spared skilled or high-paid forms of labor; often, such work has been among the first affected. The mechanization of craft production in the nineteenth century displaced skilled cobblers, coopers, and blacksmiths, replacing independent artisans with factory systems that required fewer skills, paid lower wages, and offered less autonomy even as new skilled jobs arose elsewhere. These changes were disruptive, but they were absorbed largely through falling prices, rising consumption, and new patterns of employment. They did not require societies to reconsider what kinds of activity were worthy uses of surplus: the same things were still produced, just at scale.

Other episodes are more revealing for present purposes. Sometimes, social change has unsettled not just particular occupations but entire regimes through which uses of surplus become legitimate. In medieval Europe, where the Church was one of the largest economic institutions just about everywhere, clerical and quasi-clerical roles like Langland’s offered recognized paths to education, security, status, and even wealth. When those shared beliefs fractured, the Church’s economic role contracted sharply—not because productivity gains ceased but because its claim on so large a share of surplus lost legitimacy.

To date, artificial intelligence has not produced large-scale job displacement, and the limited disruptions that have occurred have largely been absorbed through familiar adjustment mechanisms. But if AI systems begin to substitute for work whose value is justified less by necessity than by judgment or cultural recognition, the more relevant historical analogue may be less the mechanization of craft than the narrowing or collapse of earlier surplus regimes. The central question such technologies raise is not whether skilled labor can be displaced or whether large-scale displacement is possible—both have occurred repeatedly in the historical record—but how quickly societies can renegotiate which activities they are prepared to treat as legitimate uses of surplus when change arrives at unusual speed.

Time Compression and Its Stakes

In this respect, artificial intelligence does appear unusual. Generative AI tools such as ChatGPT have diffused through society at a pace far faster than most earlier general-purpose technologies. ChatGPT was widely reported to have reached roughly 100 million users within two months of its public release, and similar tools have shown comparably rapid uptake.

That compression matters. Much surplus has historically flowed through familiar institutions—universities, churches, museums, and other cultural bodies—that legitimate activities whose value lies in learning, spiritual rewards, or meaning rather than immediate output. Yet such institutions are not fixed. Periods of rapid technological change often place them under strain, as is evident for many of them today, exposing disagreements about purpose and authority. Under these conditions, experimentation with new forms of surplus becomes more important, not less. Most proposed new forms of value fail, and attempts to predict which will succeed have a poor historical record—from the South Sea Bubble to more recent efforts to anoint digital assets like NFTs as durable sources of wealth. Experimentation is not a guarantee of success; it is a hedge. Not all claims on surplus are benign, and waste is not harmless. But when technological change moves faster than institutional consensus, the greater danger often lies not in tolerating too many experiments, but in foreclosing them too quickly.

Artificial intelligence does not require discarding all existing theories of change. What sets modern times apart is the speed with which new capabilities become widespread, shortening the interval in which judgments about legitimate uses of surplus are formed. In this context, surplus that once supported meaningful, if unconventional, work may instead be captured by grifters, legally barred from legitimacy (by, say, outlawing a new art form), or funneled into bubbles. The risk is not waste alone, but the erosion of the cultural and institutional buffers that make adaptation possible.

The challenge for policymakers is not to pre-ordain which new forms of value deserve support but to protect the space in which judgment can evolve. They need to realize that they simply cannot make the world entirely safe, legible, and predictable: whether they fear technology overall or simply seek to shape it in the “right” way, they will not be able to predict the future. That means tolerating ambiguity and accepting that many experiments will fail with negative consequences. In this context, broader social barriers that prevent innovation in any field (professional licensing, limits on free expression, overly zealous IP laws, regulatory barriers to entry for small firms) deserve a great deal of scrutiny. Even if the particular barriers in question have nothing to do with AI itself, they may retard the development of the surplus sinks necessary to economic adjustment. In a period of compressed adjustment, the capacity to let surplus breathe and value be contested may well determine whether economies bend or break.

Eli Lehrer is the President of the R Street Institute.
