Naučno-tehničeskij Vestnik Informacionnyh Tehnologij, Mehaniki i Optiki (Mar 2014)
TRUST AND REPUTATION MODEL DESIGN FOR OBJECTS OF MULTI-AGENT ROBOTICS SYSTEMS WITH DECENTRALIZED CONTROL
Abstract
The problem of designing mechanisms to protect multi-agent robotic systems from attacks by saboteur robots is considered. The functioning of such systems with decentralized control is analyzed. Attention is focused on so-called soft attacks, in which saboteur robots intercept messages, form and transmit misinformation to the group of robots, and carry out other actions without exhibiting identifiable signs of intrusion. Existing information security models for such systems, based on trust levels computed in the course of agent interaction, are analyzed. An information security model is proposed in which robot agents assign trust levels to each other based on analysis of the situation arising at a given step of an iterative algorithm, using onboard sensor devices. Based on the computed trust levels, “saboteur” objects are recognized within the group of legitimate robot agents. To increase the measure of likeness (adjacency) between objects of the same category (“saboteur” or “legitimate agent”), an algorithm is proposed for computing agent reputation as a measure of collective opinion about the qualities of a given agent. Implementation alternatives for the saboteur detection algorithms are considered, using the basic target distribution algorithm for a group of robots as an example.
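The scheme described in the abstract (pairwise trust levels updated iteratively from sensor-based consistency checks, then aggregated into a reputation used to flag saboteurs) can be illustrated with a minimal sketch. All names, the update rule, the aggregation by averaging, and the detection threshold below are illustrative assumptions, not the paper's actual formulas.

```python
# Illustrative sketch (not the paper's model): trust levels are nudged
# up or down per iteration, and reputation is the group's mean trust
# toward an agent; low reputation marks a suspected saboteur.

def update_trust(trust, observer, target, consistent, delta=0.1):
    """Adjust observer's trust in target by +/- delta, clamped to [0, 1]."""
    t = trust[observer][target] + (delta if consistent else -delta)
    trust[observer][target] = min(1.0, max(0.0, t))

def reputation(trust, target, agents):
    """Reputation: mean trust toward `target` from all other agents."""
    others = [a for a in agents if a != target]
    return sum(trust[a][target] for a in others) / len(others)

def detect_saboteurs(trust, agents, threshold=0.5):
    """Flag agents whose reputation falls below the threshold."""
    return {a for a in agents if reputation(trust, a, agents) < threshold}

agents = ["r1", "r2", "r3", "r4"]
# Neutral initial trust between every pair of distinct agents.
trust = {a: {b: 0.5 for b in agents if b != a} for a in agents}

# Simulated iterations: r4's reported data conflicts with what the
# others observe via onboard sensors, so their trust in r4 decreases.
for _ in range(3):
    for obs in ("r1", "r2", "r3"):
        update_trust(trust, obs, "r4", consistent=False)
        for tgt in ("r1", "r2", "r3"):
            if tgt != obs:
                update_trust(trust, obs, tgt, consistent=True)

print(detect_saboteurs(trust, agents))  # → {'r4'}
```

Averaging trust from all peers is one simple choice of "public opinion" aggregate; the paper's reputation algorithm may weight or combine trust levels differently.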