Has AI become conscious? Researchers demand answers from Google and Microsoft

Ethical questions once confined to science fiction are now an urgent focus for philosophers and computer scientists, according to new report

What happens when a machine begins to think for itself? Does it have feelings? Does it suffer? Does it need special attention? These philosophical questions, once the domain of science fiction novels, have become a pressing scientific concern, and experts in the field are now urging companies like Google and Microsoft to investigate whether AI has already developed consciousness.
A group of philosophers and computer scientists recently published a study in Nature calling on AI developers—such as OpenAI, Google and Microsoft—to examine the "realistic possibility that some AI systems will be conscious and/or robustly agentic—and thus morally significant—in the near future." The study also calls on these companies to define how such systems should be treated if they are found to be "alive" and "thinking."
"Failing to recognize that an AI system has become conscious could lead people to neglect it, harming it or causing it to suffer," says Jonathan Mason, a mathematician based in Oxford.
"It wouldn’t be sensible to get society to invest so much in something and become so reliant on something that we knew so little about—that we didn’t even realize that it had perception," he adds.
Alphabet CEO Sundar Pichai
(Photo: Jeff Chiu / AP)
OpenAI developers conference
(Photo: Justin Sullivan / Getty Images)
Jeff Sebo, a philosopher at New York University and a co-author of the report, warned, "If we wrongly assume a system is conscious, welfare funding might be funneled towards its care, and therefore taken away from people or animals that need it, or it could lead you to constrain efforts to make AI safe or beneficial for humans."
The report describes this as a "transitional moment." One of its co-authors, Kyle Fish, was recently hired as an "AI-welfare researcher" by the AI firm Anthropic in California.
"There is a shift happening because there are now people at leading AI companies who take AI consciousness, agency, and moral significance seriously," Sebo says.