Berkeley professor talks about the sentience of AI

Could machines one day develop something like consciousness? How does artificial intelligence (AI) change when it not only analyzes data but also interacts with the physical world? These questions were the focus of the lecture "Will Embodied AI Become Sentient?" by Prof. Edward A. Lee, Emeritus Professor at the University of California, Berkeley. The event was jointly organized by TU Dortmund University, the Research Center Trustworthy Data Science and Security (RC Trust) of the University Alliance Ruhr (UA Ruhr) and the Lamarr Institute for Machine Learning and Artificial Intelligence as part of the "UA Ruhr Distinguished Lecture Series Trustworthy AI - TU Dortmund in Conversation Special".
"Artificial intelligence is a disruptive technology and will not only change our research, but every aspect of our daily lives," said Prof. Manfred Bayer, Rector of TU Dortmund University, in his welcoming address. However, the technology also brings challenges with it. Last but not least, the question remains as to which ethical rules AI should be subject to. "We shouldn't just leave this area to commercial providers," said Bayer. "This is where universities should come in." After a brief introduction by TU Professor and RC Trust Director Prof. Emmanuel Müller, who initiated the lecture, Edward A. Lee spoke to around 250 guests in lecture hall 6 of the lecture hall building on the South Campus. Around 130 other listeners were also connected digitally via Zoom.
Sentience cannot be observed objectively
Lee's lecture centered on the question of whether a machine - in this case an artificial intelligence - could become a sentient being. Sentience is reached by many animals and sits at a lower level than consciousness, the US computer scientist and electrical engineer emphasized. The challenge: a being's sentience cannot be observed objectively from the outside. Lee illustrated this by comparing a structure in the brains of mice with one in an AI language model: the two are similar to a certain extent, yet this permits no conclusions about sentience. How, then, can sentience be demonstrated at all? Lee's answer draws on the concept of the zero-knowledge proof, for which Silvio Micali and Shafi Goldwasser of the Massachusetts Institute of Technology (MIT) received the Turing Award, the highest distinction in computer science.
Lee explained the concept with a thought experiment in which two people, Shah and Mick, stand in front of a cave. Inside, the cave forms a circular tunnel, and at its far end a gate blocks the passage; it can only be opened with the correct password. Shah wants to convince Mick that she knows the password - but without giving him any further information, including the password itself, and without enabling him to convince anyone else that she knows it. An external observer must likewise be unable to draw any conclusions from the way the two communicate.
This is possible if Shah first goes into the cave, unseen by Mick, and stops on one side of the gate - A or B. Mick then steps to the entrance and calls out which side - again A or B - Shah should come out of. If she knows the password, she can always comply, opening the gate and crossing over if necessary; if not, she succeeds only when she happens to be standing on the called side. If this procedure is repeated many times, it becomes clear to Mick that Shah must know the password, since it would be extremely unlikely for her to have stood on the right side every time by chance. The trick is that Mick holds this knowledge only subjectively: an external observer outside the cave cannot objectively rule out that Mick and Shah coordinated their actions in advance.
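A small simulation makes the probabilistic core of the argument tangible. The following Python sketch is an illustrative assumption, not material from the lecture; the function `run_protocol` and its parameters are invented for this example:

```python
import random

def run_protocol(rounds: int, shah_knows_password: bool) -> bool:
    """Play the cave protocol; return True if Shah passes every challenge."""
    for _ in range(rounds):
        shah_side = random.choice("AB")   # Shah enters unseen and picks a side
        challenge = random.choice("AB")   # Mick calls out a side at random
        if shah_side != challenge:
            if shah_knows_password:
                shah_side = challenge     # she opens the gate and crosses over
            else:
                return False              # caught on the wrong side of the gate
    return True  # passing all rounds by pure luck has probability (1/2)**rounds

random.seed(0)
print(run_protocol(20, shah_knows_password=True))   # True: she can always comply
print(run_protocol(20, shah_knows_password=False))  # False, except with prob. 2**-20
```

After twenty rounds, a prover without the password survives only with a probability of about one in a million - which is why Mick, and only Mick, ends up convinced.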
"Cognitive function arises through interaction"
This thought experiment can be carried over to theoretical computer science, where the corresponding notion is known as bisimulation. The key point is that certain knowledge can be obtained, but only with access to the internal structure of the process. That structure cannot be observed from the outside and can only be grasped through one's own interaction - which shakes the scientific principle of objective observation.
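The distinction can be made concrete with a standard textbook example (added here for illustration, not taken from the lecture): two small transition systems that produce exactly the same observable behavior yet differ in internal structure, and are therefore not bisimilar.

```python
# States map to lists of (action, successor). M1 keeps the b/c choice open
# after the "a" step; M2 resolves it internally, before anything is observable.
M1 = {"p0": [("a", "p1")],
      "p1": [("b", "p2"), ("c", "p3")],
      "p2": [], "p3": []}
M2 = {"q0": [("a", "q1"), ("a", "q2")],
      "q1": [("b", "q3")], "q2": [("c", "q4")],
      "q3": [], "q4": []}

def traces(lts, s):
    """All complete action sequences observable from state s."""
    if not lts[s]:
        return {()}
    return {(a,) + t for a, s2 in lts[s] for t in traces(lts, s2)}

def bisimilar(s, t):
    """Naive bisimilarity check (terminates here because the systems are
    acyclic): every move of one state must be matched by an equally
    labelled move of the other into a state that is again bisimilar."""
    forward = all(any(a == b and bisimilar(s2, t2) for b, t2 in M2[t])
                  for a, s2 in M1[s])
    backward = all(any(a == b and bisimilar(s2, t2) for b, s2 in M1[s])
                   for a, t2 in M2[t])
    return forward and backward

print(traces(M1, "p0") == traces(M2, "q0"))  # True: outwardly indistinguishable
print(bisimilar("p0", "q0"))                 # False: internal branching differs
```

An observer who can only record action sequences sees no difference between the two systems; only access to the internal branching structure separates them - the formal analogue of Lee's point.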
What does this mean for the initial question of whether AI could develop sentience? According to Lee, this could only happen if an AI could also act in the physical world, for example by mechanically grasping an object and examining it with sensors. "Cognitive function does not arise in the brain, but through interaction with the world," Lee explained. His thesis: artificial intelligence will change once it can perceive the physical world and interact with it. In this way, it could well develop self-awareness and free will. The catch, however, is that we humans would never know this objectively, since - as in the thought experiment - such knowledge cannot be established from the outside.
After the lecture, Prof. Lee answered questions from the guests. This was followed by a panel discussion with social psychologist Prof. Nicole Krämer (University of Duisburg-Essen), Prof. Jens Gerken (Professor of Inclusive Human-Robot Interaction, TU Dortmund University) and Prof. Sergio Lucia (Professor of Process Automation Systems, TU Dortmund University), moderated by Prof. Jakob Rehof (Professor of Software Engineering, TU Dortmund University). The afternoon ended with drinks and snacks in the neighboring Rudolf Chaudoire Pavilion.
About the speaker
Prof. Edward A. Lee studied computer science and engineering at Yale University and conducted research at both MIT and the University of California, Berkeley. For many years, Lee has driven the development of innovative technologies, including the open-source projects Ptolemy and Lingua Franca. His work ranges from technical foundations in robotics and signal processing systems to the philosophical and social implications of technology.