We are announcing two PhD positions in robot ethics, one in Cognitive Science and one in Practical Philosophy, within the project “The imperfect creator creating the perfect: Ethics for autonomous systems/AI”.

Please contact Christian Balkenius (christian.balkenius@lucs.lu.se) for further information about the positions.

1. Non-Verbal Signals of Trust and In-group Identification in Humans and Robots

Robots are set to interact increasingly with humans as work partners. In the near future, however, autonomous machines will also carry out tasks around humans without necessarily interacting directly with them. Both of these settings require that humans perceive the robots in question as trustworthy. Trust from the perspective of humans towards autonomous systems is thus of prime importance. However, as machines are assigned responsibility without oversight and are expected to carry out their functions outside controlled environments, these systems also need the ability to reciprocate the trust relation. Such systems might also be required to withdraw trust and avoid particular interaction partners in order to maintain their integrity and carry out their assigned tasks. Examples of such systems include delivery robots subject to sabotage or vandalism, autonomous systems tasked with the maintenance of public spaces, and self-driving cars acting as robot taxis.

The goal of the PhD project is to investigate how trust between a human and a robot can develop as a result of their interaction. A particular focus is on nonverbal cues that reinforce or interfere with the development of trust. A related question is what aspects of a robot lead to it becoming part of the in-group, for example as part of the family or the work group. Of equal importance is what makes a robot be seen as untrustworthy, or as not one of us, i.e. as part of the out-group.

Nonverbal cues that play a role include appearance, proximity and posture, gaze behavior, facial expressions, pupil dilation, and movement, including mimicry. Depending on how these cues are modulated during an interaction, they influence what one agent thinks about another. This approach complements modelling of trust based on deliberate processing and focuses on personal trust between individual agents.
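To make the idea concrete, here is a minimal sketch, not a model developed in the project, of how weighted nonverbal cues might drive a running trust estimate between two agents. The cue names, weights, and update rule are illustrative assumptions only.

```python
# A minimal sketch (not the project's model) of how nonverbal cues might
# feed a running trust estimate. Cue names, weights, and the update rule
# are illustrative assumptions only.

CUE_WEIGHTS = {
    "mutual_gaze": 0.3,     # sustained eye contact tends to build trust
    "mimicry": 0.25,        # matching posture/movement signals affiliation
    "proximity": 0.15,      # appropriate interpersonal distance
    "smile": 0.2,           # positive facial expression
    "pupil_dilation": 0.1,  # subtle arousal/interest signal
}

def update_trust(trust: float, cues: dict[str, float], rate: float = 0.1) -> float:
    """Move the trust estimate toward the weighted cue evidence.

    trust: current estimate in [0, 1]
    cues:  observed cue strengths in [-1, 1]; negative values
           (e.g. gaze aversion, withdrawal) erode trust
    """
    evidence = sum(CUE_WEIGHTS[c] * v for c, v in cues.items() if c in CUE_WEIGHTS)
    target = min(1.0, max(0.0, 0.5 + 0.5 * evidence))  # map evidence to [0, 1]
    return trust + rate * (target - trust)             # leaky integration

trust = 0.5  # neutral prior toward a new interaction partner
for _ in range(20):  # repeated positive interactions reinforce trust
    trust = update_trust(trust, {"mutual_gaze": 0.8, "mimicry": 0.6, "smile": 0.5})
print(f"trust after cooperative signals: {trust:.2f}")
```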

Workplan

The thesis work uses an interdisciplinary approach and consists of four main parts: (1) An analysis of the psychological and cognitive theories of trust and in-group identification; (2) An analysis of current computational models of trust; (3) Computational modeling of the mechanisms behind trust and in-group identification in humanoid robots, based on cognitive and neuroscientific data; (4) Experimental studies of human-robot and robot-robot interaction where these mechanisms are implemented.

Depending on the preferences and skills of the PhD candidate, the thesis work can focus more on one or two of these tasks. The PhD candidate will have access to the Lund Cognitive Robotics Lab with its infrastructure of humanoid robots and computing equipment and work closely with the other members of the project. The candidate will be supervised by the senior project members with competences in cognitive science, philosophy and robotics.

The PhD candidate is expected to take part in the activities of the WASP-HS research school.

Apply here for the position in cognitive science with focus on AI and ethics

2. Ethics from theory to robot implementations

As robots are increasingly used in domestic, automotive, healthcare, and military settings, safety measures need to be put in place to make sure that robots are not dangerous to humans. Ideally, they should know when they do something wrong. One solution often suggested is something akin to Asimov’s robot laws, but these are problematic as a basis for ethical robots, since they require that the robot has a full understanding of the rules and their consequences, as well as perfect reasoning skills. A similar critique can potentially be put forward against other systems of ethical rules.

The goal of the PhD project is to investigate how different ethical rule sets can be used to control a robot in practical interaction with humans or other robots. The focus is on the consequences for the interaction within a small set of human or robotic agents. These consequences can potentially be misaligned with the consequences for society that motivate the rules in the first place, and the potential tension between individual and societal consequences is an important area to study. Of particular interest is what happens when agents using different ethical standards interact or collaborate. A related question is under what conditions an agent using non-ethical behavior can exploit the situation. This relates to the concept of evolutionarily stable strategies as studied in behavioural ecology (a toy illustration follows below).
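As a toy illustration of that concept, assumed purely for exposition and not project code, the sketch below uses replicator dynamics in a one-shot prisoner's dilemma to show how a small share of "unethical" defectors can invade a population of unconditionally cooperating agents, which is why a viable ethical strategy must be stable against exploitation.

```python
# Toy illustration (exposition only): in a one-shot prisoner's dilemma,
# unconditional "ethical" cooperation is not evolutionarily stable and
# can be invaded by an exploiting strategy.

# Payoffs for the row player, with the standard ordering T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def replicator_step(p_coop: float) -> float:
    """One step of replicator dynamics over a population mixing C and D."""
    fit_c = p_coop * PAYOFF[("C", "C")] + (1 - p_coop) * PAYOFF[("C", "D")]
    fit_d = p_coop * PAYOFF[("D", "C")] + (1 - p_coop) * PAYOFF[("D", "D")]
    mean_fit = p_coop * fit_c + (1 - p_coop) * fit_d
    return p_coop * fit_c / mean_fit  # strategies grow with relative fitness

p = 0.99  # population of cooperators with a 1% "unethical" invader
for generation in range(60):
    p = replicator_step(p)
print(f"cooperator share after 60 generations: {p:.3f}")  # collapses toward 0
```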

The PhD project combines theoretical work with practical experiments.

Workplan

The thesis work uses an interdisciplinary approach and consists of four main tasks: (1) Systematic analysis of classical ethical theories from an algorithmic perspective: can they be translated into code that can be run on a robot? (A minimal sketch of what such a translation might look like is given below.) (2) Analysis of the practical requirements on the robot's abilities to follow each of the different theories; (3) Computer implementation of each of the theories, as far as possible. These implementations should target the sample scenarios that will be developed in the project; (4) Experimental tests of the different ethical systems in human-robot and robot-robot interaction.
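The following is a minimal sketch of what task (1) might involve, assuming nothing about the project's actual scenarios: two classical ethical theories rendered as action-selection procedures for a robot. The action names, the single duty, and the utility numbers are illustrative placeholders, and the point is that the two theories can disagree even in a trivially small scenario.

```python
# A minimal sketch (illustrative placeholders, not project code) of two
# classical ethical theories rendered as action-selection procedures.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    harms_human: bool                              # does the action directly harm a human?
    utilities: dict = field(default_factory=dict)  # welfare effect per affected agent

def deontological_choice(actions: list[Action]) -> list[Action]:
    """Rule-based filter: forbid any action violating a duty
    (here the single duty 'do not harm'), regardless of outcomes."""
    return [a for a in actions if not a.harms_human]

def consequentialist_choice(actions: list[Action]) -> Action:
    """Outcome-based ranking: pick the action with the greatest
    aggregate welfare, whatever rules it breaks."""
    return max(actions, key=lambda a: sum(a.utilities.values()))

options = [
    Action("block_doorway", harms_human=False, utilities={"patient": -1, "visitor": -1}),
    Action("push_past_visitor", harms_human=True, utilities={"patient": 5, "visitor": -2}),
]
print([a.name for a in deontological_choice(options)])  # ['block_doorway']
print(consequentialist_choice(options).name)            # 'push_past_visitor'
```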

Depending on the preferences and skills of the PhD candidate, the thesis work can focus more on one or two of these tasks. The PhD candidate will have access to the Lund Cognitive Robotics Lab with its infrastructure of humanoid robots and computing equipment and work closely with the other members of the project.

The PhD candidate is expected to take part in the activities of the WASP-HS research school.

Apply here for the position in practical philosophy with focus on AI and ethics

WASP-HS

The PhD positions are part of the Wallenberg AI, Autonomous Systems and Software Program on Humanities and Society (WASP-HS), which aims to realize excellent research and develop competence on the consequences and challenges of artificial intelligence and autonomous systems for the individual person and for society. This 10-year program is initiated and generously funded by the Marianne and Marcus Wallenberg Foundation (MMW) with 660 million SEK. In addition, the program receives support from collaborating industry and from participating universities. Major goals are more than 10 new faculty positions and more than 70 new PhDs. For more information about the research and other activities conducted within WASP-HS, please visit http://wasp-hs.org/.

The WASP-HS graduate school provides foundations, perspectives, and state-of-the-art knowledge in the different disciplines, taught by leading researchers in the field. Through an ambitious program of research visits, partner universities, and visiting lecturers, the graduate school actively supports the formation of a strong multi-disciplinary and international professional network among PhD students, researchers, and practitioners in the field. It thus provides added value on top of the existing PhD programs at the partner universities, offering unique opportunities for students who are dedicated to achieving international research excellence with societal relevance.