AI safety meets the war machine
PLUS: Will age verification amplify censorship and privacy risks online?

In this week's Backchannel: Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract.

The future of warfare is AI—and the future of AI may be shaped by warfare. When Anthropic last year became the first major AI company cleared by the US government for classified use—including military applications—the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with ...