What happens when a machine begins to think for itself? Does it have feelings? Does it suffer? Does it need special attention? These philosophical questions, once the domain of science fiction novels, have become a pressing scientific concern that researchers say now demands answers.
As a result, experts in the field are urging companies like Google, Microsoft and others to investigate whether AI has already developed consciousness.
A group of philosophers and computer scientists recently published a study in Nature calling on AI developers—such as OpenAI, Google and Microsoft—to examine the "realistic possibility that some AI systems will be conscious and/or robustly agentic—and thus morally significant—in the near future." The study also calls on these companies to define how such systems should be treated if they are found to be "alive" and "thinking."
"Failing to recognize that an AI system has become conscious could lead people to neglect it, harming it or causing it to suffer," says Jonathan Mason, a mathematician based in Oxford.
"It wouldn’t be sensible to get society to invest so much in something and become so reliant on something that we knew so little about—that we didn’t even realize that it had perception," he adds.
Jeff Sebo, a philosopher at New York University and a co-author of the report, warned, "If we wrongly assume a system is conscious, welfare funding might be funneled towards its care, and therefore taken away from people or animals that need it, or it could lead you to constrain efforts to make AI safe or beneficial for humans."
The report describes this as a "transitional moment." One of its co-authors, Kyle Fish, was recently hired as an "AI-welfare researcher" by the AI firm Anthropic in California.
"There is a shift happening because there are now people at leading AI companies who take AI consciousness, agency, and moral significance seriously," Sebo says.