Archives and Documentation Center
Digital Archives

Modeling, learning, and utilizing trust in multiagent systems


dc.contributor Graduate Program in Systems and Control Engineering.
dc.contributor.advisor Yolum, Pınar.
dc.contributor.author Kafalı, Remzi Özgür.
dc.date.accessioned 2023-03-16T11:34:43Z
dc.date.available 2023-03-16T11:34:43Z
dc.date.issued 2007.
dc.identifier.other SCO 2007 K34
dc.identifier.uri http://digitalarchive.boun.edu.tr/handle/123456789/15632
dc.description.abstract In open multiagent systems composed of autonomous, heterogeneous agents, cooperation is necessary but difficult to achieve. Because each agent has limited and differing capabilities, agents must delegate some of their tasks to others to work efficiently, and for this they need to determine whom they can trust with those tasks. A key step in building trust is to model other agents based on previous interactions with them and to decide on future relations by consulting these models. However, building accurate models of others and updating them with recent findings is difficult, since autonomous agents do not always behave as expected. This thesis studies trust in the context of a service selection problem, where agents try to find the best service provider. Two learning algorithms that an agent can use to model its environment are considered; they differ in the models they generate as well as in how those models are updated after interactions. The algorithms are evaluated in the Agent Reputation and Trust (ART) Testbed simulation environment, chosen because it best fits the experiments conducted in the thesis. The simulation results compare the two algorithms in terms of the accuracy of their models, their effectiveness in finding trustworthy agents, and the effort needed to build accurate models. Further, the algorithms are compared in terms of their robustness when some agents cheat or respond erratically.
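The abstract describes agents that model others' trustworthiness from past interactions and delegate tasks to the provider they trust most. The following is a minimal sketch of that idea, assuming a simple exponential-moving-average update; the class, parameter names, and update rule are illustrative assumptions, not the thesis's actual learning algorithms.

```python
class TrustModel:
    """Keeps a per-provider trust estimate updated from interaction outcomes.

    A generic illustration of interaction-based trust modeling; the EMA
    update and its parameters are assumptions, not the thesis's method.
    """

    def __init__(self, learning_rate=0.3, initial_trust=0.5):
        self.learning_rate = learning_rate   # weight given to the newest outcome
        self.initial_trust = initial_trust   # prior before any interaction
        self.scores = {}                     # provider name -> trust estimate

    def update(self, provider, outcome):
        """Fold one interaction outcome (0.0 = bad, 1.0 = good) into the model."""
        old = self.scores.get(provider, self.initial_trust)
        self.scores[provider] = old + self.learning_rate * (outcome - old)

    def best_provider(self):
        """Delegate to the provider currently estimated most trustworthy."""
        return max(self.scores, key=self.scores.get) if self.scores else None


model = TrustModel()
for outcome in (1.0, 1.0, 0.0, 1.0):   # mostly good service from provider_a
    model.update("provider_a", outcome)
model.update("provider_b", 0.0)         # one bad interaction with provider_b
print(model.best_provider())            # prints: provider_a
```

Under this update rule, recent interactions dominate older ones, which is one common way to handle agents whose behavior drifts or turns erratic over time.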
dc.format.extent 30cm.
dc.publisher Thesis (M.S.)-Bogazici University. Institute for Graduate Studies in Science and Engineering, 2007.
dc.subject.lcsh Intelligent agents (Computer software)
dc.subject.lcsh Trust.
dc.title Modeling, learning, and utilizing trust in multiagent systems
dc.format.pages xii, 72 leaves;

