How a Congressional Primary Became a Proxy Battle Over A.I.
When political candidates rehearse for campaign debates, they commonly cast a surrogate in the role of their rival. Rob Portman, who represented Ohio in the Senate for a dozen years, played Al Gore for George W. Bush, and Barack Obama for both John McCain and Mitt Romney. The thirty-five-year-old New York State assemblyman Alex Bores, a candidate in the pullulating Democratic primary for New York’s Twelfth Congressional District, opted instead to enlist a chatbot. This might seem like his generation’s path of least resistance. Bores, however, who has a neatly trimmed beard and wears a navy suit for any and all occasions, comes across as the sort of very good boy who does more homework than is strictly necessary. Just after the new year, I joined him on a road trip to Albany for the opening of the legislative session. Bores takes pride in his functional competence, and upon being told that he would not be the one driving—his chief of staff, Anna Myers, wanted him on the phone to thank donors—he accepted the passenger seat with some reluctance. To Myers’s mild exasperation, he prioritized a round of “birthday calls,” a regular practice he extends to family, friends, and people he once bumped into on the subway.
Bores grew up on the Upper East Side; his parents worked in network television, and in his first major media appearance, as a three-year-old, his mother read him “Everyone Poops” on ABC7 Eyewitness News. He traces his political commitments to second grade, when his father brought him to a union picket line, and he carried a sign that read “Disney is mean to my dad.” Before turning to public service, Bores worked in the software industry, including for the defense contractor Palantir, where he has described working on epidemic preparedness, V.A.-hospital staffing, and other projects related to government efficiency. He now advertises himself as New York’s first Democratic elected official with a computer-science degree. (Two were elected the same day.) As a youthfully industrious Assembly member, he devoted much of his tenure to the nascent matter of A.I. regulation. This was a niche agenda in those days—which is to say, a year ago—but he was already a habitual A.I. user. At his office in Albany, he told me he had worked with Stanford-affiliated researchers to feed the entirety of the state’s legal code into a specially designed A.I. tool. He instructed the system to find examples of outdated, nonsensical, or discriminatory provisions—“zombie laws that were clogging up our system,” as he described them. It returned from an afternoon of work with more than four thousand suggestions, including Article 10-B of the New York General Business Law, one section of which mandates the speedy delivery of international money orders delivered by steamboat, and Labor Law Section 203-A, which decreed that all elevators be furnished with chairs. Less hilariously, New York Domestic Relations Law Section 13-AA required that any applicant for a marriage license who “is not of the Caucasian, Indian or Oriental race” first submit to a test for sickle-cell anemia.
It was only natural that his debate prep would feature a chatbot—specifically Claude Cowork, an “agentic” assistant, developed by Anthropic, that can execute multistep instructions on its own. The upcoming panel, he explained to Claude, would feature the other two front-runners in what’s currently a nine-way contest: Micah Lasher, who is a forty-four-year-old member of the New York State Assembly, a former mayoral aide to Michael Bloomberg, and a Democratic-establishment favorite; and Jack Schlossberg, a social-media influencer who, as the thirty-three-year-old grandson of President John F. Kennedy, is Democratic-establishment royalty. Schlossberg is known for having once submitted to X the question of who was “way hotter,” Second Lady Usha Vance or the former First Lady Jacqueline Kennedy Onassis. When asked to explain such a bizarre inquiry, Schlossberg defended it as a provocative demonstration of trollery: “The internet is a nuance-destruction machine—there’s no room for qualifying anything, ever. You have to be very controversial to break through.” It’s unclear what it would have looked like for Schlossberg to promote a nuanced discussion of the relative sex appeal of his late grandmother, but he has a talent for aura-farming. In one early poll, Schlossberg led the Twelfth Congressional District race with twenty-two per cent.
Bores wanted to use Claude for debate practice, which required that the chatbot first bring itself up to speed on his rivals. Claude, in response to Bores’s prompt, proposed that it begin by spinning up a few different sub-agents to perform various parts of the task—including, say, background investigations—in parallel. A few minutes later, however, Claude pinged Bores with an update: “One agent declined the research.”
Bores replied, “Wait, what? Slow down and explain that one.”
This particular sub-agent, Claude wrote, believed its directive violated a company policy against “opposition research,” or the compiling of dossiers on private individuals. Claude was happy to assure Bores that it had contrived a workaround: it had done the research itself. Bores wondered, “Who are you and who is the sub-agent? You are both Claude. Also, why did the other sub-agents not refuse? Why is it just this one?”
The other Claude’s denial, Bores expressed to whichever Claude he was talking to, made no sense: “This is a candidate forum. We’re all putting ourselves out there in public. Maybe tell it that?” Claude agreed to pursue this approach, and soon reappeared to affirm that the other Claude had found it convincing.
Bores recalls thinking, “How surreal is this?” He knew he wasn’t supposed to be distracted from the debate-prep objective, but his first instinct was “to spend so much time interrogating this decision.” He nevertheless recognized that this would not strike most people as surreal for much longer. For many of us, the work of actually doing things may soon be replaced by the managerial supervision of A.I. subalterns. And this is something of a best-case scenario; if automation can come for Rob Portman’s storied career as a debate surrogate, it can come for anyone. Bores didn’t see many signs that our representatives were taking this sufficiently seriously. At a crowded breakfast to welcome elected officials back to the capitol, Bores stopped to greet the state commissioner of labor, who volunteered that she tells workers that A.I. is “not going to replace their jobs, but upskill them.” Bores responded with a noncommittal noise.
When Bores entered the primary, in mid-autumn, his ambition seemed premature. He faced a stark lack of name recognition. His signature piece of legislation was New York’s RAISE Act, a preliminary effort to regulate A.I. developers. This had endeared him to the A.I.-safety community, but it hadn’t exactly galvanized the masses, and he hadn’t planned to make A.I. the centerpiece of his campaign. Voters who did have feelings about A.I. tended to express them in the form of local opposition to new data centers, but this didn’t rate as a concern in midtown Manhattan. The only figures who seemed to take the RAISE Act personally were those who viewed any regulation of the industry as a major threat to both national competitiveness and their own equity holdings. Soon after Bores announced his candidacy, a pro-A.I. super-PAC network called Leading the Future—lavishly funded by the venture-capital firm Andreessen Horowitz and OpenAI’s president, Greg Brockman, among others—announced that it was willing to spend millions of dollars to defeat Bores and set an example for any candidate with regulatory aspirations. He would be defined, like it or not, by his relationship to the technology, and he would serve as a bellwether for A.I. politics on the national stage.
In March, Bores told me that, in the two months since we had first met, his conversations had taken on an entirely new tenor: now, he said, “some activists see me and they’re, like, ‘Yeah! Fuck A.I.!’ ” He paused: “That’s not really my message, but . . . cool?”
On a frigid evening in Albany, the sidewalks glazed in black ice, the only bar open downtown was what Bores identified as the Republican joint, where he liked to meet his colleagues for karaoke. At the time, Bores could hardly be blamed for steering conversations to the other planks of his platform. In his three years in office, he passed more than thirty bills through the Assembly on basic quality-of-life and affordability issues, among them attempts to restrain telemarketing scams and junk fees. When he did turn to A.I., he packaged it in terms of the Trump Administration’s efforts to preëmpt the ability of states to enact their own regulations. These efforts failed, so A.I.-aligned lobbyists pressured Governor Kathy Hochul and other legislators to strip the RAISE Act of third-party-audit requirements and other more aggressive provisions. Bores nevertheless describes the act as “the strongest A.I.-safety law in the country.” This isn’t saying much. The final version requires that frontier labs develop and publicize transparent safety protocols, and report any concerning incidents to the government: in other words, more or less what they already should be doing. The fines for noncompliance are rounding errors. Bores regards it as at least a good opening gambit.
Leading the Future, the tech right’s super PAC, didn’t want to take any chances. The outfit has run dramatic television spots that prominently feature an orange-jumpsuited Sam Bankman-Fried, the disgraced crypto executive who directed some of his fraudulently obtained fortune to A.I.-safety causes and communities—including an organization that supported Bores in his first run for office—or refer darkly to Bores’s tenure at Palantir, and the company’s contract with ICE. (Bores maintains that, during his tenure, Palantir’s contract with ICE was focussed on human and drug trafficking, and that he left before the company formally agreed to lend its tools for deportations.) One ad is titled “Expert in Hypocrisy,” which is particularly resonant given that it somehow neglects to mention that among Leading the Future’s donors is Joe Lonsdale, a Palantir co-founder. District residents have been blanketed with anti-Bores mailers and texts. The candidate has become accustomed to comical mailroom interactions with his neighbors, who are routinely delivered fliers emblazoned with his face. A recent Vanity Fair piece recounted an interaction he’d had while phone-banking with what the reporter described as a “particularly disgruntled voter.” After he introduced himself, the voter said, “Are you the Palantir guy? Absolutely not,” and hung up. In a follow-up text, he explained that the smear campaign was funded by “AI billionaires, which should tell you all you need to know about how they feel about me.” When the article appeared, Bores forwarded it along to the woman on the other side of the exchange. She replied to say that this “ ‘disgruntled’ voter, upon doing her homework, donated $500 to your campaign.”
Other factions within the A.I. community have rallied in response. Leading the Future’s measures have been countered by super PACs that have emerged to underwrite candidates who support measured regulation. Several of these PACs are affiliated with an organization called Public First, which received a grant of twenty million dollars from Anthropic. Public First has spent four hundred and fifty thousand dollars in support of Bores. Although Anthropic itself has stipulated that its contributions can’t be used for electoral purposes, plenty of individual employees at Anthropic have given directly to the Bores campaign. The broader A.I.-safety cohort—including some who work at OpenAI—has organized itself to pitch in, and Bores has raised substantial sums for a congressional primary. Chris Larsen, a crypto billionaire, was so appalled by Leading the Future’s tactics that he committed to a three-and-a-half-million-dollar ad spend on behalf of Bores—or at least notionally on behalf of Bores, as the first one was primarily an anti-OpenAI salvo. All of this has increased the candidate’s profile significantly; a recent poll commissioned by Bores had him within a few points of Schlossberg. (The betting markets are split: Kalshi leans toward Lasher, and Polymarket toward Bores.)
The race took on the dimensions of a proxy battle between OpenAI and Anthropic, a clash of money and priorities that seems likely to prefigure other midterm contests. In December, the House Minority Leader, Hakeem Jeffries, created the new House Democratic Commission on A.I. and the Innovation Economy. He appointed moderate caucus members who faced opponents to their left and might benefit from a funding source that wasn’t AIPAC, the pro-Israel lobby that has become anathema to many progressive voters. Five House Democrats—including candidates in New York, New Jersey, California, and Virginia—were recently endorsed by Leading the Future. One of the commission’s co-chairs, the North Carolina incumbent Representative Valerie Foushee, instead received ad-spending from the Public First side of the aisle. Her challenger in the primary was Nida Allam, a candidate supported by the progressive group Justice Democrats, who positioned herself as an enemy of the entire industry as such: she made a point of calling for a sweeping federal ban on data-center construction. Foushee, who did not favor the ban, narrowly prevailed. In an editorial for The Nation, Usamah Andrabi, the communications director for Justice Democrats, argued that there was essentially no difference between an A.I. PAC and AIPAC.
This has forced the hands of Bores’s rivals. Micah Lasher has taken to telling audiences, “I think we should have dark money out of this race, both from OpenAI, which is trying to hurt Alex, and from Anthropic, who is the biggest funder of his campaign.” (Lasher’s principled stand against super PACs does not include the one run by Michael Bloomberg, a major funder of his campaign. According to Bores, Bloomberg strategists lobbied against the RAISE Act.) Jack Schlossberg has publicly rejected A.I.-related money. In March, Representative Alexandria Ocasio-Cortez called on other Democrats to do the same. Two days later, Bores told me, “One of the things, even the last few weeks, that has changed—and I’m still processing this myself—is that this has become a political issue broader than this race.”
Public sentiment is clear. In March, the pollster David Shor wrote on X that artificial intelligence had risen in salience “faster than any issue we track—it’s now more important to voters than climate change, child care, and abortion.” But the median voter isn’t exactly baying for moderation and compromise. People are angry and afraid, and often reasonably so. Among some fraction of liberal voters, however, any concession to the technology’s utility is increasingly regarded as gauche, at best—and, at worst, a sign of reactionary corporate centrism. The fashionable newsletter “Today in Tabs,” which tends to reflect and reproduce the consensus opinion in certain left-leaning media-insider quarters, recently published a post titled “Who Goes AI?” The column gossiped about which journalists and pundits did not pass the new purity test. (For those who might be slow on the uptake, an italicized note emphasized that it was a tribute to Dorothy Thompson’s famous 1941 Harper’s essay “Who Goes Nazi?”) Bores, presumably, had already failed as well.
This hard-line stance, which gathers redistributive economics and far-reaching resentment of tech’s A.I. oligarchy under the banner of “A.I. populism,” seems like smart politics. Shor has found that A.I.-populist messaging has, as yet, no meaningful partisan valence. In a recent NBC News survey, the net favorability of A.I. ranked slightly below that of ICE. (The only two terms that polled worse than A.I. were the Democratic Party and the Islamic Republic of Iran.) The rise of asymmetric violence against data centers, as predicted in one widely read essay by an A.I. entrepreneur, has seemed increasingly plausible. In early April, someone threw a Molotov cocktail at the home of OpenAI’s C.E.O., Sam Altman; three nights later, two twentysomethings allegedly shot at the property.
On the right, Steve Bannon has said that “AI oligarchs want techno-feudalism” and Josh Hawley has emphasized how much the regulatory legislation he’s introduced has rankled “all those tech guys.” Their Democratic counterparts have often lagged in comparison. In an essay that went niche-viral this winter, the writer Dan Kagan-Kans argued that the left is in danger of “missing out on A.I.” His targets were the subset of progressives who write off A.I. as fake and bullshit—a mere grift. The prevalence of such dismissiveness seems to be on the wane. In March, Bernie Sanders posted footage of himself kibitzing with Eliezer Yudkowsky, the author of a no-nonsense book called “If Anyone Builds It, Everyone Dies,” and other representatives of the core safety cohort. Sanders and Ocasio-Cortez have demanded a federal moratorium on data-center construction until the industry is reined in. Bores approves of the measure insofar as it provides elected bodies with leverage to assert their democratic prerogatives.
Many people who might join a broad A.I.-populist movement have good reasons to do so. Although economists disagree in their predictions, the future of the job market seems inauspicious, especially for white-collar workers. Even in the absence of major unemployment shocks, inequality will likely increase as returns to capital exponentially outstrip those to labor. A.I. slop provokes aesthetic revulsion, and the creation of infinite noise at zero cost will only further undermine the credibility of our information environment. Cognitive offloading, including but not limited to basic cheating, seems to defy our perennial hopes for an educated citizenry. An affective reliance on chatbots will only further erode our communal bonds. The dangers to mental health, especially among teens, are already alarming. The geopolitical risks posed by advanced models, whether in the hands of non-state actors, authoritarian governments, or our own automated military, should freak everyone out. And then there’s the nontrivial possibility that A.I., perhaps in the form of “superintelligence,” will lead to our wholesale extinction.
A.I. executives have not always covered themselves in glory, but many calls for regulation are coming from inside the house: Joshua Achiam, the “chief futurist” of OpenAI, tweeted that it was an “own-goal” to go after Bores. Bores counts among his donors employees across most of the frontier A.I. labs, not just Anthropic and OpenAI. This solidarity, however, has its limits, and this novel coalition—which includes people who are worried about A.I. for all sorts of disparate reasons—is unstable. Many progressives have come to believe, for example, that data centers present unique risks to local water resources. As Andy Masley, a prominent A.I. researcher, has repeatedly pointed out, such claims have thus far been wildly exaggerated: existing data centers use far less water than fields of soybeans or golf courses. The more epistemically rigorous elements in the coalition have struggled to make common cause with these lower-information voters. Although the proliferation of data centers certainly seems broadly bad for the environment, and even in the best of circumstances they remain unsightly and loud, many of the other environmental issues that anti-A.I. activists cite have less to do with “A.I.” as such than they do with over-all energy policy and concentrated corporate power. The poor Memphis neighborhoods that abut xAI’s Colossus, currently the largest training cluster in the world, have been the victims of awful increases in pollution, but that is because Elon Musk powered his installation with gargantuan and unpermitted methane-gas turbines. In the longer term, a more active government might press these companies to make commensurate investments in the green-energy transition. These are trade-offs that can be negotiated, at both the local and national levels, to benefit our communities.
Bores has put together a wonky A.I.-policy framework, with eight subheadings and forty-three bullet points, that reflects this temperate, deliberative approach. If companies want to build data centers, for example, they should be required not only to absorb any electricity-cost increases but also to pay for upgrades to our grid infrastructure. (In what might be perceived as a sop to one wing of his fragile coalition, he proposes “monitoring water usage.”) A.I. companies should pay at least their fair share of property taxes, if not more. In northern Virginia’s Loudoun County, the buzzing-windowless-behemoth capital of the world, every dollar in services provided to data centers returns twenty-six dollars in revenue, and that has allowed the county to lower its real-property taxes every year for the past decade.
Elements of the Bores plan seem wishful (like a call for the sorts of job-retraining programs that were promised to mitigate offshoring and seem unlikely to mitigate sweeping automation) or unrigorous (like a technical solution to deepfakes that experts say has been, thus far, a failure). But other elements are more considered and ambitious. Bores studied labor relations and economics as an undergraduate at Cornell, and talks frequently about a “token tax” on corporate A.I. usage; this would represent a truly radical shift of the tax burden from labor to capital. Now is the perfect time, he thinks, for the government to buy out-of-the-money warrants in A.I. firms—to reserve the right, in other words, to make cheap equity investments in high-growth technology companies that would pay off handsomely once they reach certain valuations. These revenues might be recirculated to the public in the form of “A.I. dividends” from a sovereign wealth fund. As he described the idea at a recent tech-governance forum, “If you do it afterwards, you’re a Communist seizing the means of production, but if you do it now you’re a venture capitalist.” The crowd, which skewed heavily toward the quarter-zip community, laughed in what sounded like approval.
Bores uses A.I. not only for debate prep but for help with home-cybersecurity projects—he’s the sort of person who wouldn’t be caught dead with default router settings—and hobbyist vibe coding. (It’s a very long story, but Google Maps’ access to real-time data for the Roosevelt Island tram runs off of his personal laptop.) In other words, he represents the outlier in the middle. His fluency with the technology has made him vulnerable to voters who loathe all A.I. Like other politicians with technocratic instincts and forty-three-point plans, Bores might be called out for what rationalists call a “missing mood”—the sense that his mild, matter-of-fact rhetoric is incommensurate with the gravity of the situation.
It’s not easy for anyone, at the moment, to square an admiration for Silicon Valley ingenuity and dynamism with a credible commitment to restrain the tech sector’s imperial appetites. The situation is tricky enough in the case of innovations that users voluntarily adopt—plenty of people do not let their dim view of Meta overly interfere with their enjoyment of Instagram. It is much worse for A.I., where the downside risks are blindingly apparent and the upside gains are either highly localized (among software engineers, for example) or entirely theoretical (an end to disease). The public calculus would presumably change if, say, A.I. went and actually cured cancer.
The citizens of many other countries, especially in East Asia, are vastly more optimistic about the technology. Then again, they tend to be vastly more optimistic about their governments’ capacity to handle the attendant social change. Bores believes that institutional decline and mistrust of A.I. are two sides of the same coin. “We’re building the power of gods, and it’s in the hands of, you know, five people,” he said. “If you don’t trust government to be able to be a counterweight to that, that’s a pretty grim picture.”
Bores thinks he has found the right language to capture this ambivalence. On a cold Saturday morning in March, Bores met me for lunch in an upscale bistro in the gentrified Hotel Chelsea, toward the southern extremity of the Twelfth Congressional District; he had an hour between an Easter-egg hunt and a No Kings protest in midtown, where he planned to meet his dad. The issue, as he saw it, isn’t that A.I. is bad (which it is) or good (which it also is) but that we all feel as though it is happening to us. One term of art for this dynamic is “gradual disempowerment,” or the incremental concession of human practices to irreducibly complex autonomous systems, but Bores preferred to speak of the loss of control. If this formulation helps keep the coalition together, it might serve as a template for candidates in the coming midterms and beyond.
One of the things that might aid his reformist campaign is the fact that the top-line polling does not tell quite the same story in cross-section. While white voters narrowly disapprove of A.I., Latino voters are much more positive toward it, and Black voters even more so. Men are more positive than women, and young people much more positive than their parents. The strongest predictor of one’s attitudes, at least according to one recent progressive poll, is personal use: voters who use it are overwhelmingly more positive than voters who don’t.
There are many explanations for these results, but one possibility is that “control” is a relative concept. At the macro scale of society, loss of control seems like a legitimate reason for worry. At the micro scale, however, A.I. often feels enabling. I recently used a chatbot to help fix a stupid if beloved (by my five-year-old) remote-control airplane I otherwise would have thrown out, and I saved a lot of money by opting out of a health-care plan that one broker had confidently deemed a necessity. I’ve handled minor plumbing issues, and other vexations around the house, that might otherwise have put me at the mercy of service providers in dubious good faith. And I determined that a local branch of a startup dentistry practice had indeed tried to pressure me into an expensive procedure I didn’t actually require. One might remain agnostic on A.I.’s ability to cure cancer and also believe that it can serve as an effective counterbalance against unscrupulous (human) actors. Still, Bores told me that lines were starting to blur. He could feel it in his daily donor calls. He told me, “Twice in just the last week, people asked me if I was an A.I.” ♦
