Can A.I. Be Pro-Worker?

As an example of how A.I. could be used in a pro-worker fashion, the report points to an Electrician’s Assistant (EA), developed by Schneider Electric, a French-based multinational company. When confronted by a tricky problem, the electrician feeds information and pictures into an assistant, which is a large language A.I. model. The assistant conducts a diagnosis and issues recommendations, in an iterative fashion, for how to fix the problem. It also helps the electrician file maintenance reports, and the paper cites evidence that the time spent on this task has been halved. “Tools akin to EA could be readily built to support many additional trade and modern craft workers, such as plumbers, building contractors, and health-care workers,” the report says.

This is an encouraging story, but how representative is it? For every example like the Electrician’s Assistant, there is one in which A.I. is already displacing jobs, or, at least, being used as an excuse for big layoffs. Last week, Block, a financial-services platform, announced that it was getting rid of four thousand workers, out of ten thousand in total, on the grounds that A.I. could do their jobs. Even when companies have deployed A.I. programs without engaging in mass layoffs, the technology has often been used to surveil and coerce workers rather than empower them. Two notorious instances: Amazon’s Associate Development and Performance Tracker program, which it employs in its warehouses, and the always-on cameras it deploys in its delivery vehicles. Last week, Burger King said that it’s testing new A.I.-powered headsets, which can be used, among other things, to check whether its customer-service employees say “please” and “thank you.”

The three M.I.T. economists don’t underestimate the scale of the challenge. “None of the big companies are pouring even a small fraction of their investment into developing A.I. as a pro-human, pro-worker tool,” Acemoglu said in his interview. To reorient things, he and his colleagues make a series of policy recommendations, including changing the tax laws, fostering competition in the A.I. sector, and giving workers a direct stake in A.I. One key proposal is for the government to use its financial power—both as a provider of research grants, and as a buyer and user of technology systems—to push the development of A.I. in a pro-worker direction. In the health and education sectors, for example, which together make up about twenty-five per cent of the nation’s G.D.P., government (at the federal and local level) is a major purchaser of tech products—a position it could use to demand the development of A.I. assistants that enhance workers’ capabilities. When I called up Autor last week, to ask him about the report, he cited the opportunity for A.I. assistants to help nurses carry out more demanding medical tasks, and to help teachers offer their students personalized support. “We pay for this stuff, we use it, the welfare of our children and grandchildren depends on it,” Autor said, referring to taxpayers. “I’m not saying the government should take over A.I., but it should use its power to shape its development.”

In theory, the tax code could also be used to reshape the incentives of A.I. developers and users. When firms make investment decisions, they often have a choice between buying new labor-saving equipment, such as a chatbot, and hiring new workers or retraining existing ones. The current tax code, with its low rates on capital income and accelerated depreciation schedules, pushes businesses in the first direction. It “favors capital to an enormous extent, while it is very burdensome toward workers,” Autor pointed out. One way to change this would be to raise taxes on capital and reduce taxes on labor, which would make the code more neutral. A more drastic and more politically challenging option, which Autor said is worth considering, would be to tax consumption rather than work.

Part of the report that particularly caught my eye is a section titled “Discouraging expertise theft.” Right now, A.I. companies “freely scrape content from websites, social media, YouTube, newspapers, Wikipedia, and blogs, then statistically recombine this material and sell access to the results,” the report notes. “Authors, journalists, visual artists, musicians, translators, and countless other creators find their work appropriated as training data, with no compensation or control.” A recently published book, “The Means of Prediction,” by an Oxford economist, Maximilian Kasy, likens this grab to the enclosure of common land by landlords during medieval times—a development which greatly benefitted the landlords but destroyed the livelihoods of many small farmers. “A lot of the internet is being enclosed and resold to us as private property,” Autor said. “This is a huge reallocation of property rights.” With some firms using the performance of their own employees as data to train A.I. models, the report argues that the appropriation issue goes well beyond the internet: “Few employees would willingly train an apprentice designed to replace them, and yet this is precisely what happens when companies use worker expertise to build automation systems.”
