Besides information and an intelligent agent, the third fundamental notion in a theory that describes intelligent behaviour is a game.
"Game theory is the study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome." (Wikipedia)
This abstract specification can be made concrete with an example, such as chess. Chess is a game for two players who alternate turns: White, who moves first, and Black. The actions available to a player at each decision point are the legal piece moves in the current position on the board, according to the rules of the game. When playing over the board, there is also the manual pressing of the clock after a move is played; online, the playing platform does that automatically, but it also allows premoving, which is not available over the board. The information available to the players (and relevant for the game) is the board position together with its historical context, that is, castling and en passant possibilities, plus the clock state of both players. Admittedly, a player's facial expressions can sometimes also be relevant to the opponent, if the opponent can read them. The payoffs for each outcome (the conditions under which a particular outcome actually occurs involve the situation both on the board and on the clock, and are likewise defined by the rules of the game) are one point for a win, half a point for a draw, and zero points for a defeat. These points carry prospects of financial reward in the near term, at the end of the current competition. In addition, a certain number of rating points, determined by the rating system in use (Elo), is taken from the loser and given to the winner, which has longer-term financial consequences, as it affects future conditions and chances for competing. If the rating difference is big enough, a draw also transfers a smaller amount of rating points from the higher-rated to the lower-rated player.
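To make the rating part concrete, here is a minimal sketch of the standard Elo expected-score and update formulas in Python; the function name is mine, and the K-factor of 20 is just an assumed example value, since the actual factor depends on the federation and the player's rating.

```python
def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 20.0) -> tuple[float, float]:
    """Standard Elo update for player A against player B.
    score_a is 1.0 if A wins, 0.5 for a draw, 0.0 if A loses."""
    # Expected score of A, from the logistic rating-difference formula.
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    # Whatever A gains, B loses: the rating exchange itself is zero-sum.
    return rating_a + delta, rating_b - delta

# A decisive game between equals: the winner gains exactly what the loser gives up.
print(elo_update(2400, 2400, score_a=1.0))   # (2410.0, 2390.0)
# A draw against a lower-rated opponent still costs the higher-rated player points.
print(elo_update(2500, 2300, score_a=0.5))   # roughly (2494.8, 2305.2)
```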
In any case, chess is conceived as an adversarial game at its core, and its scoring system is obviously zero-sum (strictly speaking constant-sum, which is strategically equivalent). Still, players may sometimes switch to a cooperative mode when a draw suits them both, which is something spectators of the show don't appreciate. Now, although the rules of chess, which define how pieces are moved on and off the board and what the goal of the game is, are simple enough that small children can learn them in a couple of minutes, one can spend a whole lifetime learning to play the game well, owing to the enormous complexity of the strategies that arise from these simple rules; chess is not unique in that regard, as Go and Shogi (among others) share the same trait. Strategies are plans for a sequence of actions, which the player re-evaluates afresh at each decision point in order to achieve the goal of the game. In chess theory there is a distinction between strategy and tactics that is not present in abstract game theory: in short, strategic and tactical plans follow different recognizable patterns, the former being long-term, less concretely defined plans, the latter short-term, exact ones.
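As a small illustration of the scoring claim above, here is a sketch (the dictionary and its labels are mine, purely for illustration) showing that chess's 1 / ½ / 0 scoring is constant-sum and can be shifted into an equivalent zero-sum payoff table.

```python
# Chess outcomes as (White points, Black points).
chess_payoffs = {
    "white_wins": (1.0, 0.0),
    "draw":       (0.5, 0.5),
    "black_wins": (0.0, 1.0),
}

# Every outcome sums to the same constant (1.0), so the game is constant-sum.
assert all(sum(p) == 1.0 for p in chess_payoffs.values())

# Subtracting half that constant from each payoff yields an equivalent zero-sum game.
zero_sum = {o: (w - 0.5, b - 0.5) for o, (w, b) in chess_payoffs.items()}
print(zero_sum)  # {'white_wins': (0.5, -0.5), 'draw': (0.0, 0.0), 'black_wins': (-0.5, 0.5)}
```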
In our theory, the intelligent agent is a central notion that can play the role of a game player, if it satisfies the requirement of being able to make the rational decisions needed for a particular game. Hence, its relation to the two other fundamental notions can be described as:
information <--- percepts ---- Intelligent Agent (IA) ---- plays ---> game
That is, responsiveness to stimuli from an environment is a broader term, one that also covers interaction with non-intelligent agents, which cannot exert any strategic pressure on the IA that is the subject of our analysis, although they may impose serious constraints. Games are a serious matter, and vastly applicable: the same model applies to the interaction between a lion and the antelope it is hunting (in that game the antelope's main threat is to starve the lion to death by escaping, especially if the lion is alone, no longer fit, and unable to start a new hunt, while the lion's main threat is to catch, kill and eat the antelope; there is no chance of switching from adversarial to cooperative mode here), to countries at war, and to companies on a market. Not only individuals but organizations too may act as IAs; even in chess, one player may consist of multiple persons if they consult each other in order to produce a collective decision on how to act. There is also the notion of universal or general intelligence, which would be an attribute of an IA that can understand and successfully play each and every game it is presented with during its lifetime. In the field of producing an artificial IA of that quality, one person and his company stand out: Demis Hassabis and DeepMind.
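To pin that relation down a little, here is a minimal sketch of the agent-game interfaces implied by the diagram above; the class and method names are mine, not taken from any particular framework, and the loop assumes strictly alternating turns, as in chess.

```python
from abc import ABC, abstractmethod
from typing import Any, Iterable

class Game(ABC):
    """At each decision point a game exposes the information and legal actions
    available to the player to move, and the payoffs once it is over."""
    @abstractmethod
    def observation(self, player: int) -> Any: ...
    @abstractmethod
    def legal_actions(self, player: int) -> Iterable[Any]: ...
    @abstractmethod
    def apply(self, action: Any) -> None: ...
    @abstractmethod
    def is_over(self) -> bool: ...
    @abstractmethod
    def payoff(self, player: int) -> float: ...

class IntelligentAgent(ABC):
    """An IA receives percepts (the observation) and plays the game by choosing an action."""
    @abstractmethod
    def act(self, observation: Any, legal_actions: Iterable[Any]) -> Any: ...

def play(game: Game, agents: list[IntelligentAgent]) -> list[float]:
    """Alternate turns until the game ends, then return each player's payoff."""
    player = 0
    while not game.is_over():
        action = agents[player].act(game.observation(player), game.legal_actions(player))
        game.apply(action)
        player = (player + 1) % len(agents)
    return [game.payoff(p) for p in range(len(agents))]
```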
The question is what limitations exist on characterizing living entities as such IAs. For example, I already wrote in my previous blog post about the intelligence of single-cell organisms. Their intelligence must be very modest in comparison with what we perceive as great intelligence, to the extent that we might doubt they possess it at all, but that might be a biased view. For example, we might think they can't have it because they don't have neurons, or senses comparable to those of multicellular organisms, but that doesn't seem to be the key factor. The same question applies to the intelligence of plants. They definitely exhibit intelligence and strategic behaviour towards both other plants and animals, and although they too lack neural networks, they do have alternative information networks.
There is also the question of whether this is the right and complete characterization of intelligence: the ability to create and carry out strategies, to decide correctly when faced with a problem. Maybe understanding a problem (any problem) is sufficient, but if that understanding doesn't result in some action, it cannot be measured, or even observed at all.
Speaking of measurability and observability, some direct questions come to mind: how come there is no Constructor Theory of the Intelligent Agent, or Constructor Theory of the Game, while there is a Constructor Theory of Information, CT of Probability, CT of Thermodynamics and CT of Life? What did these subjects do to deserve such treatment? Is life somehow a more fundamental, less emergent notion than the IA or the game, and if so, how exactly? Maybe they are not interesting enough to be incorporated into the science of physics, or maybe they don't belong there? Or maybe the mathematical treatment they have received so far, elsewhere, was sufficient, so that there was no reason to deal with these subjects?