Abstract
For decades, researchers from different fields have tried to understand what makes a self-interested individual help another individual who is a potential competitor in the struggle for survival. This interest in human cooperative and non-cooperative behaviour has produced a large and diverse set of game-theoretic strategies and models. Taking a game as the unit of social interaction, two kinds of iterated interaction setups have been used to study how different strategies behave in the presence of others over time, which strategies dominate, and how cooperation or defection evolves. The first is the traditional tournament, in which each player follows a strategy and every player interacts with every other player in an iteration called a round; a tournament may comprise many such rounds. The second is an evolutionary version, in which a population of players interact with each other at random over a number of encounters. The payoff each player accumulates from these interactions characterises its fitness, and after a number of interactions new players are introduced whose strategies depend on the strategies and fitness of the previous players.
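To make the first setup concrete, the following is a minimal, illustrative Prolog sketch of iterated pairwise play with payoff accumulation, using the standard Prisoner's Dilemma payoffs; a round-robin tournament simply repeats this for every pair of players. The predicate and strategy names (`payoff/3`, `move/3`, `play/4`) are hypothetical and are not the thesis framework itself.

```prolog
% Standard Prisoner's Dilemma payoffs for the row player.
payoff(cooperate, cooperate, 3).   % reward for mutual cooperation
payoff(cooperate, defect,    0).   % sucker's payoff
payoff(defect,    cooperate, 5).   % temptation to defect
payoff(defect,    defect,    1).   % punishment for mutual defection

% A strategy maps the opponent's previous move to the next move.
move(tit_for_tat,   none,      cooperate).
move(tit_for_tat,   LastOther, LastOther) :- LastOther \= none.
move(always_defect, _,         defect).

% Play N rounds between two Strategy-LastMove pairs, accumulating both scores.
play(0, _, _, 0-0).
play(N, S1-Last1, S2-Last2, Score1-Score2) :-
    N > 0,
    move(S1, Last2, M1),
    move(S2, Last1, M2),
    payoff(M1, M2, P1),
    payoff(M2, M1, P2),
    N1 is N - 1,
    play(N1, S1-M1, S2-M2, R1-R2),
    Score1 is P1 + R1,
    Score2 is P2 + R2.

% ?- play(10, tit_for_tat-none, always_defect-none, Scores).
% Scores = 9-14.
```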
This dilemma of cooperation has been implemented as computer programs in different ways. Some researchers implemented it as a single, complex program, others built dedicated platforms, and yet others adopted existing simulation methods such as Agent-Based Modeling and Simulation (ABMS). Although these efforts accelerated the research and provided useful insights into the dilemma of cooperation, the way such models are conceptualised and implemented makes them complex and hard to understand. Moreover, agent-based modeling toolkits typically support simple, lightweight, reactive agents that merely react to changes in the environment and have no internal state. As a result, the game-theoretic models built with these techniques lack interpretability and the cognitive richness required to express complex, human-like decision-making or strategy. Many cognitively rich frameworks do exist for expressing such decision-making, and among them the Belief-Desire-Intention (BDI) model is best known for its generality. Although BDI-based agent models are expressive enough to capture human-like behaviour, they are often resource-heavy, since BDI agents plan and re-plan from first principles to decide their actions. We believe that practical game-theoretic simulations, especially evolutionary models requiring populations of many agents, would be infeasible with such heavy cognitive models.
We believe that a simpler form of teleo-reactive rules combined with BDI-like cognitive concepts is a suitable basis for designing interpretable decision-making procedures for cognitive agents in game-theoretic simulations. In such simulations, agents use pre-programmed models to play in the environment, constantly revise these models in light of what they observe, and reason about the strategic decisions they need to make to maximise their preset objectives. The resulting representation is straightforward, robust and compact enough to encode very complex and adaptive behaviours in a goal-subgoal structure (plan libraries). To represent the simulation environment, the games it may contain and their players, our framework advocates the use of existing meta-programming techniques that allow simulation components to be manipulated as program structures which evolve in response to simulation events, as is customary in existing frameworks. Our framework is based on normal logic programs implemented in Prolog and uses the same data structures to represent programs and data. The novelty of our approach is that we combine meta-interpreters to represent simulation components, the Event Calculus (EC) to represent how these components evolve as events happen in them, and an EC extension to handle agent interaction within the resulting agent environment.
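The following Prolog sketch illustrates, under stated assumptions, how these two ingredients fit together: the core axioms of the simple Event Calculus track a fluent that records an opponent's last move, and a teleo-reactive style rule (first clause whose guard holds fires) decides the next action. The fluent, event and predicate names (`last_move/2`, `plays/2`, `decide/4`) are hypothetical examples, not the thesis framework's actual vocabulary.

```prolog
% Core axioms of the simple Event Calculus: a fluent holds at T if it was
% initiated by an earlier event and not clipped (terminated) in between.
holds_at(F, T) :-
    initiates(E, F, T0), happens(E, T0), T0 < T,
    \+ clipped(T0, F, T).

clipped(T0, F, T) :-
    terminates(E, F, T1), happens(E, T1),
    T0 < T1, T1 < T.

% Hypothetical domain: each encounter event records the opponent's last move.
initiates(plays(Opponent, Move), last_move(Opponent, Move), _).
terminates(plays(Opponent, _),   last_move(Opponent, _),    _).

% A small narrative of events.
happens(plays(bob, defect),    1).
happens(plays(bob, cooperate), 3).

% Teleo-reactive style decision: the first clause whose guard holds fires.
decide(_Me, Opponent, T, defect) :- holds_at(last_move(Opponent, defect), T), !.
decide(_Me, _Opponent, _T, cooperate).

% ?- decide(alice, bob, 2, A).   % A = defect
% ?- decide(alice, bob, 4, A).   % A = cooperate
```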
In this thesis we therefore propose a novel knowledge representation framework called \cs\ for tournament-based game-theoretic simulation experiments using cognitive agents. The framework allows an experimenter to evolve a population of such agents whose strategies are expressed teleo-reactively as logic programs. When agents encounter each other, events take place in the environment, caused either by agent actions or by environment processes. These events change the environment's internal state, the changes are observed by the agents, and the agents in turn decide on new actions that affect the environment; this loop continues until the terminating conditions of the simulation are met. Using this framework, we show how to repeat experiments from the literature based on Axelrod's tournament. We also introduce the concept of forgetting, whereby memory structures are removed from main memory to improve overall performance in terms of time and memory usage, and we evaluate how efficiently our platform supports large simulations in game-theoretic settings.
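As a minimal sketch of the forgetting idea only (not the thesis implementation), the snippet below retracts detailed event records older than a cut-off once they have been summarised into accumulated scores, which is one simple way to keep the dynamic database small. The predicate names `happens/2` and `forget_before/1` are illustrative assumptions.

```prolog
:- dynamic happens/2.

% Retract all event records older than Cutoff; accumulated scores are
% assumed to be kept in separate summary facts and are left untouched.
forget_before(Cutoff) :-
    findall(happens(E, T),
            ( happens(E, T), T < Cutoff ),
            OldEvents),
    maplist(retract, OldEvents).

% ?- forget_before(100).   % drop per-event history older than time 100
```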
We then extend our framework and propose~\ecs\ to support evolutionary simulation experiments using cognitive agents. It allows an experimenter to evolve a population of such agents across generations, governed by a set of rules, in order to study how specific agent behaviours evolve from generation to generation in a given simulation domain. Each generation is divided into rounds, which can be further divided into encounters according to model-specific requirements. To exemplify the framework, we show how to successfully repeat existing experiments from evolutionary simulations of agent cooperation.
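The sketch below illustrates, in the spirit of the generational process described above, one hypothetical generation step: the next population is sampled from Strategy-Fitness pairs with probability proportional to fitness (roulette-wheel selection). Predicate names such as `next_generation/3` and `pick/2` are illustrative, not the framework's API.

```prolog
% Build a new population of N strategies by fitness-proportional sampling.
next_generation(0, _, []) :- !.
next_generation(N, Scored, [Strategy | Rest]) :-
    N > 0,
    pick(Scored, Strategy),
    N1 is N - 1,
    next_generation(N1, Scored, Rest).

% Sum the fitness values of a list of Strategy-Fitness pairs.
total_fitness([], 0).
total_fitness([_-F | Rest], Total) :-
    total_fitness(Rest, T0),
    Total is T0 + F.

% Roulette-wheel selection: draw a point on [0, Total) and walk the list.
pick(Scored, Strategy) :-
    total_fitness(Scored, Total),
    R is random_float * Total,
    select_at(Scored, R, Strategy).

select_at([S-_], _, S) :- !.
select_at([S-F | _], R, S) :- R =< F, !.
select_at([_-F | Rest], R, S) :-
    R1 is R - F,
    select_at(Rest, R1, S).

% ?- next_generation(5, [tit_for_tat-30, always_defect-12], NewPopulation).
```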
We also propose a domain-specific language called {\em Sim-TR} to abstract away from the low-level details and capture the core concepts of our simulation framework, both for conducting a game-theoretic simulation and for generating explanations. We introduce the concept of explanation templates, which generate explanations for different kinds of questions by drawing on different components of knowledge, show how explanations are generated for different types of questions, and provide example dialogue scenarios.
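As a rough illustration of the template idea (and not the actual Sim-TR syntax), a stored decision record can be rendered into a textual answer to a "why" question by filling a template; `decision/4`, `template/2`, `explain/2` and the example fact are all hypothetical.

```prolog
% Hypothetical knowledge component: a recorded decision and its reason.
decision(alice, round(7), defect, because(last_move(bob, defect))).

% An explanation template for "why did agent X act this way?" questions.
template(why_action,
         "~w chose to ~w in ~w because its strategy reacted to ~w.").

% Fill the template from the recorded decision.
explain(why(Agent, Round), Explanation) :-
    decision(Agent, Round, Action, because(Reason)),
    template(why_action, Template),
    format(atom(Explanation), Template, [Agent, Action, Round, Reason]).

% ?- explain(why(alice, round(7)), E).
% E = 'alice chose to defect in round(7) because its strategy reacted to last_move(bob,defect).'
```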
In short, our contribution is a practical symbolic-AI representation framework and platform that identifies the important parts of tournament and evolutionary game-theoretic simulations. These parts are then used to develop game-theoretic interactions methodically, so that simulation experiments are easier to repeat, and the explanation framework generates explanations that support user debugging.
| Original language | English |
|---|---|
| Qualification | Ph.D. |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 1 Nov 2023 |
| Publication status | Unpublished - 2023 |
Keywords
- game-theoretic simulation
- cognitive agent simulation
- cognitive simulation platform
- explainable simulations
- game-theoretic agent simulations
- game-theoretic cognitive agent simulations