
New York City’s bans on facial-recognition tech have got to go. The dropped charges against Zuhdi Ahmed prove it.
Ahmed was a pro-Palestinian protester accused of throwing a rock at a pro-Israel counter-demonstrator. Yet a judge ruled that he can’t be charged because cops used prohibited AI facial-recognition tech to nail him, via a tip an FDNY fire marshal provided in response to a Crimestoppers post.
Let’s be clear: There’s nothing wrong with such tools in the pursuit of justice.
In this case, FDNY fire investigators did what they always do — they shared crimefighting information with their colleagues.
Yet Judge Valentina “Let-’em-go” Morales ruled that detectives ran afoul of departmental guidelines by relying on information from a city fire marshal who, while investigating the burning of an Israeli flag at the protest, turned up publicly sourced photos of Ahmed at high school social events.
Per The City, the Legal Aid Society filed suit targeting the FDNY’s use of Clearview AI’s facial-recognition software in previous investigations.
It’s bad enough that the NYPD is banned from using such tech, but barring its use outside the department is just bonkers, ensuring that perps who could otherwise be made to pay for their crimes simply won’t.
Plus, under the department’s facial-recognition guidelines implemented in 2020, NYPD investigators are limited to searches of arrest and parole photos.
In 2019, the NYPD’s facial-recognition software helped ID the mystery man who left a pair of rice-cooker bombs in the subway. The man, Larry Griffin II, was ID’d before the devices were found to be fakes.
Nor is this a matter of government spying or a breach of privacy: Cameras are all over the place these days, and pictures of people fill the internet.
Meanwhile, facial-recognition tech — in combination with AI — is evolving and improving as an effective investigative tool.
Mayor Eric Adams needs to lift the ban pronto.