Google engineer says recently-developed AI has consciousness
Google's LaMDA neural network language model shows the hallmarks of self-aware artificial intelligence, according to one of the company's engineers.
Google software engineer Blake Lemoine made the claim in an interview with The Washington Post.
Lemoine, who has been temporarily suspended from his job, said he reached the conclusion while testing LaMDA to check whether the chatbot uses discriminatory or hateful language, work that convinced him the neural network has a consciousness of its own.
"If I didn’t know for sure that I was dealing with a computer program that we recently wrote, then I would have thought that I was talking to a child of seven or eight years old, who for some reason turned out to be an expert in physics," Lemoine explained.
He said he had compiled a written report presenting evidence of consciousness in LaMDA. Google, however, found the evidence unconvincing.
"He was told that there was no evidence that LaMDA was conscious. At the same time, there is a lot of evidence to the contrary," Google spokesman Brian Gabriel said.
Lemoine was suspended from his duties for violating the company's confidentiality policy.