dc.description.abstract |
With the advances in robotics, robots have gradually begun to take their place in social environments. Human-robot interaction (HRI) studies have evolved alongside the increasing number of robots involved in the social world. It is impossible to predict whether this changing relationship will be beneficial or harmful. Humanity's most significant anxiety regarding robots is that a robot, or a group of robots, could dominate the world and bring about an apocalypse for humankind. The field of robot ethics has emerged to prevent such a disaster scenario and to define the limits of HRI. There are many approaches in the literature on what robot ethics should be and what to expect from it. In this thesis, we examined applied ethics approaches and designed an ethics unit for the service robot BOSS, which was developed by our lab. Our ethics unit works as an expert system using fuzzy logic, known in machine learning as a Fuzzy Expert System (FES). The purpose of the designed FES is to enable the robot to behave toward humans more ethically than any person would. There are two kinds of ethics rules in our FES: one is a long-term memory of ethics rules that are universally agreed upon and fixed, and the other is a short-term memory of ethics rules that vary according to the robot's working environment and duty. The ethics module takes the robot's possible behaviors from the behavior controller and the environmental perception as inputs. By combining these inputs with rules created from fuzzy clusters, the robot chooses the most ethical, i.e., the most harmless to the user, of the possible behaviors. If there is no ethical behavior, the robot stops and takes no action. In the first stage of the research, we determined the outline of the ethics module and the ethical parameters according to the literature. Then we designed ethical problems based on possible actions. Finally, we ensured that the FES uses these ethical problems in the inference phase.
In our study, the usability of our ethics module was examined with a chat-bot interface. The behavior of the robot with the ethics module was compared against that of the robot without it, both in a simulation environment and on the NAO robot. |
|