Every year, politicians, lobbyists and advocacy groups congregate at the New York State Association of Black, Puerto Rican, Hispanic & Asian Legislators’ “Caucus Weekend” to network and cut deals.
The theme this year was “The AI Renaissance,” and the political elite were eager to embrace artificial intelligence as the vanguard of progress in public service. Undeterred by previous bungled attempts at innovation like the CityTime scandal or the controversial NYPD robot dogs, public officials throughout the conference preached a vision of an automated robot government of the future, paying only brief lip service to safeguards.
The city’s chief technology officer, Matthew Fraser, was thankfully more muted in his assessment, stating during a panel that the government needs to be careful with its AI investments.
However, his commitments to safety were vague and noncommittal. In Fraser’s words, “[we] don’t want to put too many safeguards in that would prevent technology from growing, and don’t want too few that would not protect your constituency.”
When it comes to how AI will affect city workers, Fraser was more concrete: The city wants to use AI in public benefits and communication right now, impacting District Council 37, Local 1549 members in agencies like the Human Resources Administration and 311 first.
The problems of AI implementation in government services are clear: AI tools like large language models (LLMs) are nowhere near reliable enough to be used for vital public information. Anyone who has tried to use a chatbot for their Spectrum cable service knows that artificial intelligence can be frustratingly unintelligent, but the use of LLMs creates additional hazards and risks for New Yorkers.
We depend on 311 operators for important information, from school closures during bad weather to polling locations on Election Day. The information that call center operators provide is what makes New York City run — our lives, our economy, our democracy. Trusting this vital service to algorithms and robots without human accountability means, at best, unhelpful and roundabout conversations with chatbots and, at worst, fictional laws and made-up policies.
Fraser stated that the city’s partnership with Microsoft and OpenAI “locked” the algorithm for the MyCity Chatbot so it couldn’t learn anything except what it was given. But recent news about Air Canada’s chatbot “hallucinating” policies that didn’t exist, even though it was trained on a similarly “locked” dataset, should immediately give policymakers pause if they wish to expand the use of these chatbots in other public services.
For city workers like myself, the city’s embrace of AI appears to be geared towards an unspoken goal: cutting costs by cutting jobs.
In the private sector, CEOs are honest enough to tell their shareholders that they want to use AI to cut employee costs, but public officials bend over backwards to say they want technology to help workers rather than replace them. To Fraser’s credit, he praised the work of 311 call center operators and cited their high customer satisfaction rates, but his rhetoric doesn’t hold up to reality when the entire existence of the AI industry depends on the proposition that AI can save money by cutting jobs like these.
Many were outraged when American workers were forced to train their own offshore replacements. We should feel that same outrage when the data we generate is being used to train algorithms that are meant to replace us. When the city can find millions to experiment with AI but not the funds to ensure we have heat and hot water in our office buildings, it seems like our leaders already think workers have been replaced by robots who don’t require dignity on the job.
AI is a boon for politicians who want to cut deals and cut corners because it’s a magic wand that lets them sidestep understaffing in city agencies, the need to fix Tier 6 pensions, and the fundamental issues that have been eating away at city workers for years.
On the job, conditions for city workers are at a breaking point. DC37 Local 1549 members endure daily harassment from managers in various agencies and face verbal and physical abuse from customers. Public officials and policymakers should be finding ways to fix these workplaces instead of firing workers. But when the power to implement technology rests in the hands of management and workers don’t have an equal say, it’s always workers who pay the price — with longer hours, continued understaffing and intensified micromanagement.
The way out is for city workers across city agencies to organize and to build power, so that we have an equal say in deciding how the city uses AI. City workers can start by talking to one another about issues in the workplace, showing up to union meetings and planning ways to take action.
At the political level, we can use our organized political power to help pass bills such as State Senator Kristen Gonzalez’s LOADinG Act, which would bar the use of discriminatory or biased algorithms in government agencies. We can’t depend on promises from politicians that government use of AI will have safeguards — we need to see bills signed into law.
At the bargaining table, city workers in DC37 and Local 1549 can organize to help bargain for rank-and-file members and the common good. Just as teachers demand smaller class sizes so that students can get a better education, public-sector workers can bargain to be paid for the valuable data that our work generates in addition to better working conditions so that the public can receive higher quality services.
When agencies like HRA aren’t continually facing staffing shortages, it means that New Yorkers can be better served by a city workforce that isn’t overburdened and overworked. When Tier 6 is fixed, the city is better able to recruit and retain talent in these positions.
And when AI technology is being implemented, city workers should not only be consulted by management but also be compensated and protected for the data that we produce so that AI can work for us — not the other way around.
Honda Wang works at the NYC Campaign Finance Board. He is a rank-and-file member of AFSCME Local 1549, DC37, which represents NYC clerical-administrative employees.