
Jeff Bezos Is Destroying What’s Left Of The Washington Post To Please Our Dim, Unpopular Autocrats

Jeff Bezos this week continued to dismantle what’s left of the Washington Post via another massive round of layoffs that left remaining staff stunned. Among the latest cuts is the elimination of the paper’s popular sports desk, scaling back of international and local news, the firing of an untold swath of journalists, and the ending of the paper’s book sections, among other major changes.

This comes on the heels of other decisions by Bezos to fire all of the paper’s black columnists, turn the op-ed section into pro-corporatist agitprop, censor cartoonists that criticize Jeff, and generally shift the paper’s journalistic tone in a more right wing, autocrat-friendly, corporatist direction. You know, like every other major corporate media outlet from CNN to CBS.

Of course, nobody actually wants this. The actual audience for extraction class agitprop is arguably very small and already quite well served. So it’s amusing to see WAPO leadership insist that these additional, brutal cuts are necessary because the paper has been losing subscribers and “wants to be competitive”:

“Murray acknowledged that the Post has struggled to reach ‘customers’ and talked about the competitive media marketplace. ‘Today, the Washington Post is taking a number of actions across the company to secure our future,’ he said, according to an audio recording of the meeting.”

Let’s be clear: billionaires like Jeff Bezos don’t want a functioning press. They want the lazy simulacrum of a functional press that caters to their ideology (more for me, less for you) and protects their interests. As with Larry Ellison’s acquisition of CBS and TikTok, and Elon Musk’s acquisition of Twitter, it’s best to view this as a global project to defang accountability for the planet’s richest, shittiest people and corporations.

Former Washington Post editor Marty Baron didn’t really mince words about what this means for a once-functional newspaper that, at this point, probably can’t be salvaged:

A staggering statement from former Washington Post editor Marty Baron: "This ranks among the darkest days in the history of one of the world's greatest news organizations."

Ben Mullin (@benmullin.bsky.social) 2026-02-04T14:34:22.001Z

WAPO management insists that it’s going to “narrow their focus on politics.” By this they mean more of the feckless, “both sides,” “view from nowhere” DC gossip reporting you see at other billionaire-owned outlets like Axios, Semafor, and Politico. Glad-handy journalism that’s less concerned with the truth than it is with appeasing ownership, protecting access, and keeping the ad money flowing.

The kind of wimpy, soft-knuckled cack that can be (and repeatedly is) exploited by authoritarian zealots who know these outlets lack the courage to call them out for what they really are. You see, if you’re honest about the extremist nature of our unpopular autocratic government, you might lose access, upset paper management, alienate Republican ad viewers, or piss off regulators eyeing your latest merger.

Bezos could fund functional journalism at the Washington Post for decades to come without making a dent in his finances, were that something of actual interest to him. This is a guy who just blew $75 million on a propaganda puff piece kissing the ass of the president’s wife. That kind of money could fund most independent newsrooms for the better part of the next decade.

Jeff wants to ensure the administration will pay him to launch his unreliable rockets into space, slather his fledgling LEO satellite network with subsidies, coddle his cloud computing empire, allow him to dominate every last aspect of modern retail, and generally be broadly exploitative in a way that undermines competition, consumers, and labor. He wants, and applauds, Trump’s destruction of the regulatory state.

Bezos still “wins” even if the Post doesn’t survive his “leadership.” At worst (for Jeff) the paper is converted into a sad, pseudo-journalistic simulacrum that exists largely to blow smoke up the ass of wealth and power. At best another major media institution is destroyed, eliminating yet another outlet that used to (admittedly with increasing inconsistency) hold billionaires and corporate power to account.

But it’s really something even worse than just rich people destroying journalism to coddle their delicate egos and protect their financial interests. All of this really is part of a broad, multi-generational effort by the extraction class to eliminate checks and balances and accountability, erode informed consensus, befuddle the electorate, and dismantle not just democratic norms, but democracy itself.

And, if you hadn’t noticed, it’s been a smashing success so far.

If there’s a plus side to this mess, it’s that Jeff and Elon and Larry’s clumsy efforts to dominate and destroy U.S. journalism create vast new opportunities for indie newsletter authors, worker-owned newsrooms, and independent outlets (like Techdirt) to serve a public that’s desperate for something tangible, courageous, and real in a sea of bullshit and clumsy artifice. Give them, and us, your time and money.

OpenAI’s New Scientific Writing And Collaboration Workspace ‘Prism’ Raises Fears Of Vibe-Coded Academic AI Slop

It is no secret that large language models (LLMs) are being used routinely to modify and even write scientific papers. That’s not necessarily a bad thing: LLMs can help produce clearer texts with stronger logic, not least when researchers are writing in a language that is not their mother tongue. More generally, a recent analysis in Nature magazine, reported by Science magazine, found that scientists embracing AI — of any kind — “consistently make the biggest professional strides”:

AI adopters have published three times more papers, received five times more citations, and reach leadership roles faster than their AI-free peers.

But there is also a downside:

Not only is AI-driven work prone to circling the same crowded problems, but it also leads to a less interconnected scientific literature, with fewer studies engaging with and building on one another.

Another issue with LLMs, that of “hallucinated citations,” or “HalluCitations,” is well known. More seriously, entire fake publications can be generated using AI, and sold by so-called “paper mills” to unscrupulous scientists who wish to bolster their list of publications to help their career. In the field of biomedical research alone, a recent study estimated that over 100,000 fake papers were published in 2023. Not all of those were generated using AI, but progress in LLMs has made the process of creating fake articles much simpler.

Fake publications generated using LLMs are often obvious because of their lack of sophistication and polish. But a new service from OpenAI, called Prism, is likely to eliminate such easy-to-spot signs, by adding AI support to every aspect of writing a scientific paper:

Prism is a free workspace for scientific writing and collaboration, with GPT‑5.2⁠—our most advanced model for mathematical and scientific reasoning—integrated directly into the workflow.

It brings drafting, revision, collaboration, and preparation for publication into a single, cloud-based, LaTeX-native workspace. Rather than operating as a separate tool alongside the writing process, GPT‑5.2 works within the project itself—with access to the structure of the paper, equations, references, and surrounding context.

It includes a number of features that make creating complex — and fake — papers extremely easy:

  • Search for and incorporate relevant literature (for example, from arXiv) in the context of the current manuscript, and revise text in light of newly identified related work
  • Create, refactor, and reason over equations, citations, and figures, with AI that understands how those elements relate across the paper
  • Turn whiteboard equations or diagrams directly into LaTeX, saving hours of time manipulating graphics pixel-by-pixel
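To illustrate that last feature: the promise is that a photo of a handwritten formula comes back as ready-to-compile LaTeX source. A hypothetical sketch of the kind of output described (not actual Prism output) might look like:

```latex
% Hypothetical example of whiteboard-to-LaTeX conversion output,
% of the sort OpenAI's feature list describes
\begin{equation}
  \int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}
  \label{eq:gaussian}
\end{equation}
```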

There is even voice-based editing, allowing simple changes to be made without the need to write anything. But scientists are already worried that the power of OpenAI’s Prism will make a deteriorating situation worse. As an article on Ars Technica explains:

[Prism] has drawn immediate skepticism from researchers who fear the tool will accelerate the already overwhelming flood of low-quality papers into scientific journals. The launch coincides with growing alarm among publishers about what many are calling “AI slop” in academic publishing.

One field that is already plagued by AI slop is AI itself. An FT article on the topic points to an interesting attempt by the International Conference on Learning Representations (ICLR), a major gathering of researchers in the world of machine learning, to tackle this problem with punitive measures against authors and reviewers who violate the ICLR’s policies on LLM-generated material. For example:

Papers that make extensive usage of LLMs and do not disclose this usage will be desk rejected [that is, without sending them out for external peer review]. Extensive and/or careless LLM usage often results in false claims, misrepresentations, or hallucinated content, including hallucinated references. As stated in our previous blog post: hallucinations of this kind would be considered a Code of Ethics violation on the part of the paper’s authors. We have been desk-rejecting, and will continue to desk-reject, any paper that includes such issues.

Similarly:

reviewers [of submitted papers] are responsible for the content they post. Therefore, if they use LLMs, they are responsible for any issues in their posted review. Very poor quality reviews that feature false claims, misrepresentations or hallucinated references are also a code of ethics violation as expressed in the previous blog post. As such, reviewers who posted such poor quality reviews will also face consequences, including the desk rejection of their [own] submitted papers.

It is clearly not possible to stop scientists from using AI tools to check and improve their papers, nor should this be necessary, provided authors flag up such usage, and no errors are introduced as a result. A policy of the kind adopted by the ICLR requiring transparency about the extent to which AI has been used seems a sensible approach in the face of increasingly sophisticated tools like OpenAI’s Prism.

Follow me @glynmoody on Bluesky and Mastodon.
