
Judge Accuses DOJ Of Telling Court To “Pound Sand,” In Case Over Venezuelans Sent To Salvadoran Concentration Camp

By: Mike Masnick
February 13, 2026 at 17:27

Judge Boasberg got his vindication in the frivolous “complaint” the DOJ filed against him, and now he’s calling out the DOJ’s bullshit in the long-running case that caused them to file the complaint against him in the first place: the JGG v. Trump case regarding the group of Venezuelans the US government shipped off to CECOT, the notorious Salvadoran concentration camp.

Boasberg, who until last year was generally seen as a fairly generic “law and order” type judge who was extremely deferential to any “national security” claims from the DOJ (John Roberts had him lead the FISA Court, for goodness’ sake!), has clearly had enough of this DOJ and the games they’ve been playing in his court.

In a short but quite incredible ruling, he calls out the DOJ for deciding to effectively ignore the case while telling the court to “pound sand.”

On December 22, 2025, this Court issued a Memorandum Opinion finding that the Government had denied due process to a class of Venezuelans it deported to El Salvador last March in defiance of this Court’s Order. See J.G.G. v. Trump, 2025 WL 3706685, at *19 (D.D.C. Dec. 22, 2025). The Court offered the Government the opportunity to propose steps that would facilitate hearings for the class members on their habeas corpus claims so that they could “challenge their designations under the [Alien Enemies Act] and the validity of the [President’s] Proclamation.” Id. Apparently not interested in participating in this process, the Government’s responses essentially told the Court to pound sand.

From a former FISC judge—someone who spent years giving national security claims every benefit of the doubt—”pound sand” is practically a primal scream.

As a result, he orders the government to work to “facilitate the return” of the people it illegally shipped to a foreign concentration camp (that is, assuming any of them actually want to come back).

Believing that other courses would be both more productive and in line with the Supreme Court’s requirements outlined in Noem v. Abrego Garcia, 145 S. Ct. 1017 (2025), the Court will now order the Government to facilitate the return from third countries of those Plaintiffs who so desire. It will also permit other Plaintiffs to file their habeas supplements from abroad.

Boasberg references the Donald Trump-led invasion of Venezuela and the unsettled situation there for many of the plaintiffs. He points out that the plaintiffs’ lawyers have been thoughtful and cautious in how they have approached this case, in contrast to the US government.

Plaintiffs’ prudent approach has not been replicated by their Government counterparts. Although the Supreme Court in Abrego Garcia upheld Judge Paula Xinis’s order directing the Government “to facilitate and effectuate the return of” that deportee, see 145 S. Ct. at 1018, Defendants at every turn have objected to Plaintiffs’ legitimate proposals without offering a single option for remedying the injury that they inflicted upon the deportees or fulfilling their duty as articulated by the Supreme Court.

Boasberg points to the Supreme Court’s ruling regarding Kilmar Abrego Garcia, saying that it’s ridiculous that the DOJ is pretending that case doesn’t exist or doesn’t say what it says. Then he points out that the DOJ keeps “flagrantly” disobeying courts.

Against this backdrop, and mindful of the flagrancy of the Government’s violations of the deportees’ due-process rights that landed Plaintiffs in this situation, the Court refuses to let them languish in the solution-less mire Defendants propose. The Court will thus order Defendants to take several discrete actions that will begin the remedial process for at least some Plaintiffs, as the Supreme Court has required in similar circumstances. It does so while treading lightly, as it must, in the area of foreign affairs. See Abrego Garcia, 145 S. Ct. at 1018 (recognizing “deference owed to the Executive Branch in the conduct of foreign affairs”)

Even given all this, the specific remedy is not one that many of the plaintiffs are likely to accept: he orders the US government to facilitate the return of any of those who want it, among those… not in Venezuela. But, since most of them were eventually released from CECOT into Venezuela, the ruling may not actually apply to many of the men. On top of that, Boasberg points out that anyone who does qualify and takes up the offer will likely be detained by immigration officials upon getting here. But, if they want to return, the US government has to pay for their flights back to the US. And, in theory, the plaintiffs should then be given the due process they were denied last year.

Plaintiffs also request that such boarding letter include Government payment of the cost of the air travel. Given that the Court has already found that their removal was unlawful — as opposed to the situation contemplated by the cited Directive, which notes that “[f]acilitating an alien’s return does not necessarily include funding the alien’s travel,” Directive 11061.1, ¶ 3.1 (emphasis added) — the Court deems that a reasonable request. It is unclear why Plaintiffs should bear the financial cost of their return in such an instance. See Ms. L. v. U.S. Immig. & Customs Enf’t (“ICE”), 2026 WL 313340, at *4 (S.D. Cal. Feb. 5, 2026) (requiring Government to “bear the expense of returning these family units to the United States” given that “[e]ach of the removals was unlawful, and absent the removals, these families would still be in the United States”). It is worth emphasizing that this situation would never have arisen had the Government simply afforded Plaintiffs their constitutional rights before initially deporting them.

I’m guessing not many are eager to re-enter the US and face deportation again. Of course, many of these people left Venezuela for the US in the first place for a reason, so perhaps some will take their chances on coming back. Even against a very vindictive US government.

The frustrating coda here is the lack of any real consequences for DOJ officials who treated this entire proceeding as a joke—declining to seriously participate and essentially daring the court to do something about it. Boasberg could have ordered sanctions. He didn’t. And that’s probably fine with this DOJ, which has learned that contempt for the courts carries no real cost.

Unfortunately, that may be the real story here. Judge gets fed up, once again, with a DOJ that thumbs its nose at the court, says extraordinary things in a ruling that calls out the DOJ’s behavior… but does little that will lead to actual accountability for those involved, beyond having them “lose” the case. We’ve seen a lot of this, and it’s only going to continue until judges figure out how to impose real consequences on DOJ lawyers for treating the court with literal contempt.

News Publishers Are Now Blocking The Internet Archive, And We May All Regret It

By: Mike Masnick
February 13, 2026 at 19:57

Last fall, I wrote about how the fear of AI was leading us to wall off the open internet in ways that would hurt everyone. At the time, I was worried about how companies were conflating legitimate concerns about bulk AI training with basic web accessibility. Not surprisingly, the situation has gotten worse. Now major news publishers are actively blocking the Internet Archive—one of the most important cultural preservation projects on the internet—because they’re worried AI companies might use it as a sneaky “backdoor” to access their content.

This is a mistake we’re going to regret for generations.

Nieman Lab reports that The Guardian, The New York Times, and others are now limiting what the Internet Archive can crawl and preserve:

When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive’s access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit’s repository of over one trillion webpage snapshots.

Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive’s APIs and filter out its article pages from the Wayback Machine’s URLs interface. The Guardian’s regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.

The Times has gone even further:

The New York Times confirmed to Nieman Lab that it’s actively “hard blocking” the Internet Archive’s crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.

“We believe in the value of The New York Times’s human-led journalism and always want to ensure that our IP is being accessed and used lawfully,” said a Times spokesperson. “We are blocking the Internet Archive’s bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization.”
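For those who haven’t poked at one, the robots.txt mechanism the Times is using here is nothing exotic: it’s a plain text file naming crawler user agents and the paths they may not fetch, and well-behaved crawlers (the Internet Archive’s included) check it before requesting anything. Here’s a minimal sketch of how such a rule gets interpreted, using Python’s standard urllib.robotparser. The file contents below are illustrative (only the archive.org_bot entry is reported above) and the URLs are made up:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt along the lines Nieman Lab describes; only the
# archive.org_bot entry is reported in the piece, the rest is assumption.
ROBOTS_TXT = """\
User-agent: archive.org_bot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A crawler that honors robots.txt checks this before fetching a page.
example_url = "https://www.example-newspaper.com/2026/02/13/some-article.html"
print(parser.can_fetch("archive.org_bot", example_url))  # False: the archival bot is shut out
print(parser.can_fetch("SomeOtherBot", example_url))     # True: everyone else is fine
```

The key point is that this is an honor system: it only stops crawlers that choose to respect it, which is exactly why it bites the Internet Archive while doing little about less scrupulous scrapers.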

I understand the concern here. I really do. News publishers are struggling, and watching AI companies hoover up their content to train models that might then, in some ways, compete with them for readers is genuinely frustrating. I run a publication myself, remember.

But blocking the Internet Archive isn’t going to stop AI training. What it will do is ensure that significant chunks of our journalistic record and historical cultural context simply… disappear.

And that’s bad.

The Internet Archive is the most famous nonprofit digital library, and has been operating for nearly three decades. It isn’t some fly-by-night operation looking to profit off publisher content. It’s trying to preserve the historical record of the internet—which is way more fragile than most people comprehend. When websites disappear—and they disappear constantly—the Wayback Machine is often the only place that content still exists. Researchers, historians, journalists, and ordinary citizens rely on it to understand what actually happened, what was actually said, what the world actually looked like at a given moment.

In a digital era when few things end up printed on paper, the Internet Archive’s efforts to permanently preserve our digital culture are essential infrastructure for anyone who cares about historical memory.

And now we’re telling them they can’t preserve the work of our most trusted publications.

Think about what this could mean in practice. Future historians trying to understand 2025 will have access to archived versions of random blogs, sketchy content farms, and conspiracy sites—but not The New York Times. Not The Guardian. Not the publications that we consider the most reliable record of what’s happening in the world. We’re creating a historical record that’s systematically biased against quality journalism.

Yes, I’m sure some will argue that the NY Times and The Guardian will never go away. Tell that to the readers of the Rocky Mountain News, which published for nearly 150 years before shutting down in 2009, or to the 2,100+ newspapers that have closed since 2004. Institutions—even big, prominent, established ones—don’t necessarily last.

As one computer scientist quoted in the Nieman piece put it:

“Common Crawl and Internet Archive are widely considered to be the ‘good guys’ and are used by ‘the bad guys’ like OpenAI,” said Michael Nelson, a computer scientist and professor at Old Dominion University. “In everyone’s aversion to not be controlled by LLMs, I think the good guys are collateral damage.”

That’s exactly right. In our rush to punish AI companies, we’re destroying public goods that serve everyone.

The most frustrating bit of all of this: The Guardian admits they haven’t actually documented AI companies scraping their content through the Wayback Machine. This is purely precautionary and theoretical. They’re breaking historical preservation based on a hypothetical threat:

The Guardian hasn’t documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it’s taking these measures proactively and is working directly with the Internet Archive to implement the changes.

And, of course, as one of the “good guys” of the internet, the Internet Archive is willing to do exactly what these publishers want. They’ve always been good about removing content or not scraping content that people don’t want in the archive. Sometimes to a fault. But you can never (legitimately) accuse them of malicious archiving (even if music labels and book publishers have).

Either way, we’re sacrificing the historical record not because of proven harm, but because publishers are worried about what might happen. That’s a hell of a tradeoff.

This isn’t even new, of course. Last year, Reddit announced it would block the Internet Archive from archiving its forums—decades of human conversation and cultural history—because Reddit wanted to monetize that content through AI licensing deals. The reasoning was the same: can’t let the Wayback Machine become a backdoor for AI companies to access content Reddit is now selling. But once you start going down that path, it leads to bad places.

The Nieman piece notes that, in the case of USA Today/Gannett, it appears that there was a company-wide decision to tell the Internet Archive to get lost:

In total, 241 news sites from nine countries explicitly disallow at least one out of the four Internet Archive crawling bots.

Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States formerly known as Gannett. (Gannett sites only make up 18% of Welsh’s original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: “archive.org_bot” and “ia_archiver-web.archive.org”. These bots were added to the robots.txt files of Gannett-owned publications in 2025.

Some Gannett sites have also taken stronger measures to guard their contents from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, “Sorry. This URL has been excluded from the Wayback Machine.”

A Gannett spokesperson told Nieman Lab that it was about “safeguarding our intellectual property,” but that’s nonsense. The whole point of libraries and archives is to preserve such content, and they’ve always preserved materials that were protected by copyright law. The claim that they have to be blocked to safeguard such content is both technologically and historically illiterate.
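If you want to see the practical effect of these exclusions yourself, the Internet Archive exposes a public availability endpoint that reports the closest Wayback Machine snapshot it can serve for a given URL. Here’s a quick sketch in Python; the URL queried is just an example, and note that an empty answer doesn’t distinguish “never crawled” from “excluded at the publisher’s request” (the “Sorry” message quoted above is what the web interface shows):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_snapshot(url: str):
    """Query the Wayback Machine's availability API for the closest snapshot of a URL.

    Returns the snapshot metadata dict, or None if nothing is reported, which can
    mean the page was never archived or that it has since been excluded.
    """
    query = urlencode({"url": url})
    with urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

# Example only: check whether any snapshot of a newspaper homepage is still served.
snap = closest_snapshot("https://www.desmoinesregister.com/")
if snap:
    print("Closest snapshot:", snap["url"], "captured", snap["timestamp"])
else:
    print("No snapshot available (never crawled, or excluded from the Wayback Machine)")
```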

And here’s the extra irony: blocking these crawlers may not even serve publishers’ long-term interests. As I noted in my earlier piece, as more search becomes AI-mediated (whether you like it or not), being absent from training datasets increasingly means being absent from results. It’s a bit crazy to think about how much effort publishers put into “search engine optimization” over the years, only to now block the crawlers that feed the systems a growing number of people are using for search. Publishers blocking archival crawlers aren’t just sacrificing the historical record—they may be making themselves invisible in the systems that increasingly determine how people discover content in the first place.

The Internet Archive’s founder, Brewster Kahle, has been trying to sound the alarm:

“If publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.”

But that warning doesn’t seem to be getting through. The panic about AI has become so intense that people are willing to sacrifice core internet infrastructure to address it.

What makes this particularly frustrating is that the internet’s openness was never supposed to have asterisks. The fundamental promise wasn’t “publish something and it’s accessible to all, except for technologies we decide we don’t like.” It was just… open. You put something on the public web, people can access it. That simplicity is what made the web transformative.

Now we’re carving out exceptions based on who might access content and what they might do with it. And once you start making those exceptions, where do they end? If the Internet Archive can be blocked because AI companies might use it, what about research databases? What about accessibility tools that help visually impaired users? What about the next technology we haven’t invented yet?

This is a real concern. People say “oh well, blocking machines is different from blocking humans,” but that’s exactly why I mention assistive tech for the visually impaired. Machines accessing content are frequently tools that help humans—including me. I use an AI tool to help fact check my articles, and part of that process involves feeding it the source links. But increasingly, the tool tells me it can’t access those articles to verify whether my coverage accurately reflects them.

I don’t have a clean answer here. Publishers genuinely need to find sustainable business models, and watching their work get ingested by AI systems without compensation is a legitimate grievance—especially when you see how much traffic some of these (usually less scrupulous) crawlers dump on sites. But the solution can’t be to break the historical record of the internet. It can’t be to ensure that our most trusted sources of information are the ones that disappear from archives while the least trustworthy ones remain.

We need to find ways to address AI training concerns that don’t require us to abandon the principle of an open, preservable web. Because right now, we’re building a future where historians, researchers, and citizens can’t access the journalism that documented our era. And that’s not a tradeoff any of us should be comfortable with.
