(ORDO NEWS) — Researchers at the University of North Carolina have developed a system by which AI can make ethical choices and take moral responsibility for its decisions.
The system is designed primarily for robots and AI assistants that work in hospitals and help people. A robot that cannot make morally responsible decisions is unlikely to be of much help to a person.
Isaac Asimov once formulated the Three Laws of Robotics. Their main purpose is to constrain robots' behavior so that they cannot harm humans.
In Asimov's stories these laws are "hardwired" directly into a robot's "positronic brain." Yet Asimov's entire "I, Robot" cycle is built around the fact that the three laws do not always work.
The heroes of the stories constantly find themselves in situations where it is unclear what counts as "harm" and how to avoid causing it.
One way or another, a system of prohibitions and recommendations has to be formulated; otherwise, people will not be able to work with robots that are independent enough to make their own decisions rather than simply follow commands.
Researchers at the University of North Carolina set out to develop AI principles that could impose such limits on how robots work.
The project, which is being developed at the University of North Carolina, focuses on technologies in which people interact with artificial intelligence programs, such as virtual assistants in medical institutions.
"Robots for patient care are supposed to ensure the safety and comfort of patients. In practical terms, this means the technology will end up in situations where an ethical judgment has to be made," says Dublevich.
Dublevich gives this example: “Suppose a robot is in a situation where two people need medical attention.
One patient is unconscious but needs urgent help; the second patient also needs help, is conscious, and demands that the robot help him.
The robot must choose which patient to help first. More generally, should a robot help a patient who is unconscious and therefore unable to consent to treatment?"
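To make the dilemma concrete, here is a minimal, purely illustrative sketch of how a care robot might rank the two patients. The urgency-first rule, the numeric scale, and all names are assumptions for illustration, not part of the researchers' actual system:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    urgency: int        # 0 (stable) .. 10 (life-threatening) -- invented scale
    conscious: bool
    requests_help: bool

def triage(patients):
    """Toy rule: treat the most urgent case first; a conscious
    patient's explicit request only breaks ties."""
    return max(patients, key=lambda p: (p.urgency, p.requests_help))

# The two patients from the example above
unconscious = Patient("A", urgency=9, conscious=False, requests_help=False)
demanding   = Patient("B", urgency=4, conscious=True,  requests_help=True)

first = triage([unconscious, demanding])
print(first.name)  # -> A: the unconscious but urgent patient is helped first
```

Even this toy rule hides an ethical commitment: it ranks medical urgency above explicit consent and request, which is exactly the kind of judgment the researchers argue must be made explicit.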
The researchers identified two factors: intent and action. The first covers the purpose of an action and the character of the agent performing it; the second is the action itself.
Generally, people tend to view certain actions, such as lying, as inherently “bad.”
But, the researchers note, if the goal is right (helping a person) and it is difficult to achieve without a "bad" action such as lying, then the lie can be permissible.
For example, while a robot is busy saving a dying patient, another patient may be better off if the robot tells him a calming lie.
People do this quite often, but their sense of when a lie is permissible rests on very subtle judgments of appropriateness that will be hard to build into AI for a long time to come.
The researchers developed a decision-tree search for the AI. In a sense it resembles Asimov's laws of robotics, except that there are many such rules.
The model is called Agent, Deed, and Consequence (ADC). It is meant to capture how people make complex ethical decisions in the real world.
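The paper's code is not published, so the following is a hypothetical sketch of how the factors behind the ADC model (the agent's intent, the action itself, and its consequence) might be rated on a numeric scale and combined. The weights, threshold, and scoring rule are invented for illustration, not the researchers' actual model:

```python
def adc_score(agent_intent: float, deed: float, consequence: float,
              weights=(1.0, 1.0, 1.0)) -> float:
    """Combine three factor ratings, each in [-1, 1]
    (negative = morally bad, positive = morally good)."""
    wi, wd, wc = weights
    return wi * agent_intent + wd * deed + wc * consequence

def is_permissible(agent_intent: float, deed: float, consequence: float,
                   threshold: float = 0.0) -> bool:
    """Toy rule: an act is permissible when the weighted sum
    of its factor ratings exceeds the threshold."""
    return adc_score(agent_intent, deed, consequence) > threshold

# A calming lie: good intent (+0.8), bad deed (-0.5), good outcome (+0.6)
print(is_permissible(0.8, -0.5, 0.6))   # -> True: judged permissible

# A self-serving lie: bad intent (-0.7), bad deed (-0.5), neutral outcome (0.0)
print(is_permissible(-0.7, -0.5, 0.0))  # -> False
```

The point of such a structure is exactly what the article describes: an inherently "bad" deed (lying) can still come out permissible when the agent's intent and the consequence are good enough to outweigh it.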
A robot has no special intuition, but it can learn, search quickly, and optimize its solutions.
Whether high speed can replace a person’s ideas about morality and responsibility is not yet clear, but the first step has been taken.
Dublevich notes: “We don’t just say that these ethical frameworks will work well for AI, we present them in a language that allows them to be embedded in a computer program.”
This is one of the first collaborations between ethical philosophers and engineers, and the number of such interdisciplinary projects in robotics is likely to grow rapidly.