Backchannel is exclusive to WIRED subscribers. Join us before August 20 to keep receiving it every week and unlock a growing set of subscriber benefits. In this week's edition: Donald Trump's AI Action Plan is based on competing with China, and maybe acting like China. Also, the history of tech companies abandoning their free speech rights, and what happened to my original Mac.
On November 2, 2022, I attended a Google AI event in New York City. One of the themes was responsible AI. As I listened to executives talk about how they aligned their technology with human values, I realized that the malleability of AI models was a double-edged sword. Models could be tweaked to, say, minimize biases, but also to enforce a specific point of view. Governments could demand manipulation to censor unwelcome facts and promote propaganda. I envisioned this as something that an authoritarian regime like China might employ. In the United States, of course, the Constitution would prevent the government from messing with the outputs of AI models created by private companies.
This Wednesday, the Trump administration released its AI manifesto, a far-ranging action plan for one of the most vital issues facing the country—and even humanity. The plan generally focuses on besting China in the race for AI supremacy. But one part of it seems more in sync with China's playbook. In the name of truth, the US government now wants AI models to adhere to Donald Trump's definition of that word.
You won't find that intent plainly stated in the 28-page plan. Instead it says, "It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas."
That's all fine until the last sentence, which raises the question—truth according to whom? And what exactly is a "social engineering agenda"? We get a clue about this in the very next paragraph, which instructs the Department of Commerce to look at the Biden-era AI rules and "eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change." (Weird uppercase as written in the published plan.) Acknowledging climate change is social engineering? As for truth, in a fact sheet about the plan, the White House says, "LLMs shall be truthful and prioritize historical accuracy, scientific inquiry, and objectivity." Sounds good, but this comes from an administration that limits American history to "uplifting" interpretations, denies climate change, and regards Donald Trump's claims about being America's greatest president as objective truth. Meanwhile, just this week, Trump's Truth Social account reposted an AI video of Obama in jail.
In a speech touting the plan in Washington on Wednesday, Trump explained the logic behind the directive: "The American people do not want woke Marxist lunacy in the AI models," he said. Then he signed an executive order entitled "Preventing Woke AI in the Federal Government." While specifying that the "Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace," it declares that "in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas." Since all the big AI companies are courting government contracts, the order appears to be a backdoor effort to ensure that LLMs in general show fealty to the White House's interpretation of history, sexual identity, and other hot-button issues. In case there's any doubt about what the government regards as a violation, the order spends several paragraphs demonizing AI that supports diversity, calls out racial bias, or values gender equality. Pogo alert—Trump's executive order banning top-down ideological bias is a blatant exercise in top-down ideological bias.
It's up to the companies to determine how to handle these demands. I spoke this week to an OpenAI engineer working on model behavior who told me that the company already strives for neutrality. In a technical sense, they said, meeting government standards like being anti-woke shouldn't be a huge hurdle. But this isn't a technical dispute: It's a constitutional one. If companies like Anthropic, OpenAI, or Google decide to try minimizing racial bias in their LLMs, or make a conscious choice to ensure the models' responses reflect the dangers of climate change, the First Amendment presumably protects those decisions as exercising the "freedom of speech and expression" touted in the AI Action Plan. A government mandate denying government contracts to companies exercising that right is the essence of interference.
You might think that the companies building AI would fight back, citing their constitutional rights on this issue. But so far no Big Tech company has publicly objected to the Trump administration's plan. Google celebrated the White House's support of its pet issues, like boosting infrastructure. Anthropic published a positive blog post about the plan, though it complained about what seemed like the White House's sudden abandonment of strong export controls earlier this month. OpenAI says it is already close to achieving objectivity. Nothing about asserting their own freedom of expression.
The reticence is understandable because, overall, the AI Action Plan is a bonanza for AI companies. While the Biden administration mandated scrutiny of Big Tech, Trump's plan is a big fat green light for the industry, which it regards as a partner in the national struggle to beat China. It allows the AI powers to essentially blow past environmental objections when constructing massive data centers. It pledges support for AI research that will flow to the private sector. There's even a provision that limits some federal funds for states that try to regulate AI on their own. That's a consolation prize for a failed portion of the recent budget bill that would have banned state regulation for a decade.
For the rest of us, though, the "anti-woke" order is not so easily brushed off. AI is increasingly the medium by which we get our news and information. A founding principle of the United States has been the independence of such channels from government interference. We have seen how the current administration has cowed parent companies of media giants like CBS into apparently compromising their journalistic principles to favor corporate goals. If this "anti-woke" agenda is extended to AI models, it's not unreasonable to expect similar accommodations. Senator Edward Markey has written directly to the CEOs of Alphabet, Anthropic, OpenAI, Microsoft, and Meta urging them to fight the order. "The details and implementation plan for this executive order remain unclear," he writes, "but it will create significant financial incentives for the Big Tech companies … to ensure their AI chatbots do not produce speech that would upset the Trump administration." In a statement to me, he said, "Republicans want to use the power of the government to make ChatGPT sound like Fox & Friends."
As you might suspect, this view isn't shared by the White House team working on the AI plan. They believe their goal is true neutrality, and that taxpayers shouldn't have to pay for AI models that don't reflect unbiased truth. Indeed, the plan itself points a finger at China as an example of what happens when truth is manipulated. It instructs the government to examine frontier models from the People's Republic of China to determine "alignment with Chinese Communist Party talking points and censorship." Unless the corporate overlords of AI get some backbone, a future evaluation of American frontier models might well reveal lockstep alignment with White House talking points and censorship. But you might not find that out by querying an AI model. Too woke.
The AI industry's relative silence regarding the anti-woke executive order is reminiscent of Meta's behavior when conservative politicians accused it of "censorship" due to its content moderation policies in 2024. At the time, Meta didn't invoke its First Amendment rights to police content. Instead, CEO Mark Zuckerberg wrote a letter to Republicans with a tactical apology. He even used the word "censorship" to describe his company's content takedowns, though of course only government actions qualify as censorship. I wrote about this in August 2024.
What stood out to me, besides the letter's simpering tone, was how Zuckerberg used the word "censor." For years the right has been using that word to describe what it regards as Facebook's systematic suppression of conservative posts. Some state attorneys general have even used that trope to argue that the company's content should be regulated, and Florida and Texas have passed laws to do just that. Facebook has always contended that the First Amendment is about government suppression, and by definition its content decisions could not be characterized as such. Indeed, the Supreme Court dismissed the lawsuits and blocked the laws.
Now, by using that term to describe the removal of the Covid material, Zuckerberg seems to be backing down. After years of insisting that, right or wrong, a social media company's content decisions did not deprive people of First Amendment rights—and, in fact, that by making such decisions the company was invoking its own free speech rights—Zuckerberg is now handing his conservative critics just what they wanted.
Mike asks, "Do you still own a 1984 Macintosh?"
Thanks for the question, Mike. Yes, I do. What else would you expect from the guy who wrote a book about that computer? Somewhere in my basement, my original Mac rests, along with an Apple II, a Macintosh II, a Macintosh Plus, an iBook, a LaserWriter, and other devices in an unintentional museum of inert Apple products. The collectibility of my original Mac—which didn't boot the last time I tried it—took a dive when I leveled up from the original 128K of RAM to 512K. (That made it a "Fat Mac.") That upgrade cost $1,000 in 1984 and would later devalue the machine by untold hundreds of dollars in the collector's marketplace. Maybe one day I will exhume it and set up a MacAquarium.
Submit your questions in the comments below the article, or send an email to backchannel@wired.com. Write ASK LEVY in the subject line.
"You can't be expected to have a successful AI program when every single article, book, or anything else that you've read or studied, you're supposed to pay for. 'Gee. I read a book, I'm supposed to pay somebody.' And, you know, we appreciate that, but you just can't do it because it's not doable."—Donald Trump, on AI and intellectual property, July 23, 2025 |