August 13, 2019 New York Gov. Andrew Cuomo signed legislation in late July to create a temporary state commission that will examine how artificial intelligence impacts his state.
In doing so, New York joined Vermont and Washington in establishing an A.I. task force that will examine the cutting-edge technology and then make recommendations about how it should be regulated. The groups vary in their mission, but the general message is the same: companies pushing A.I., the brains behind innovation like robotics and facial recognition software, can’t necessarily be trusted to do what’s in the best interest of state residents.
Brandie Nonnecke, founding director of the University of California's Center for Information Technology Research in the Interest of Society Policy Lab, says that task forces could help keep state lawmakers up to date about the technology. The end result, she says, will be better-written bills that don’t get stuck in legislative purgatory.
"I think it's important that the states engage in these task forces," Nonnecke says. "It allows them better identify the needs and to gather expert feedback."
The task forces are typically filled by industry experts, politicians, and academics who periodically meet and create reports intended to educate lawmakers about A.I. policy.
In New York, the new A.I. task force must present a final report to Gov. Cuomo and other state leaders by the end of 2020 detailing A.I.’s impact on data privacy, how to regulate A.I., and the potential impact of regulation on the tech industry. Meanwhile, in Washington, the focus is very narrow: the impact of A.I. on employment in the state.
At the federal level, members of Congress have introduced eight A.I.-related bills since November 2018, according to Nonnecke. They include the Artificial Intelligence Initiative Act, which would increase funding for A.I. research, and the Commercial Facial Recognition Privacy Act, which would require certain organizations to get consent from users to scan their faces.
None have actually passed.
States, in contrast, are more likely to enact A.I.-related policies, Nonnecke wrote in May. Additionally, she told Fortune, state A.I. task forces could have more sway with lawmakers and are able to put the topic in front of them more often.
If California had an A.I. task force, Nonnecke said, it may have led to a better version of the state's law that prohibits bots (software that runs automated tasks) from influencing voters with false information during elections, among other things. That law, which she said includes several gaping holes, went into effect in July.
"The intent of the law is great—we shouldn't deceive people," Nonnecke says. But the bill lacks important details, she added, like who is supposed to monitor bots on social media services, where disinformation runs rampant, while also including a convoluted definition of bots.
One thing is certain: Expect lawmakers to introduce more A.I.-related bills, even if they lack nuance and specifics. Those rules will have a huge impact, good and bad, on all kinds of industries as well as on a public that must live with the data collection, tracking, and upheaval in employment that the technology will inevitably bring.
Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com