Abstract
In recent times, Large Language Models (LLMs) have proven successful at tasks previously believed to be very hard to achieve. Although language and reasoning are interlinked concepts, the reasoning capabilities of LLMs are not currently considered to be on par with their linguistic ones. In this work, we test how LLMs choose moves in the popular game of tic-tac-toe, in order to assess their reasoning capabilities when the information to reason about is embedded in a spatial context. To do this, we task a number of LLMs with playing matches of tic-tac-toe against the well-known minimax algorithm and compare the results. In this context, the task is non-trivial, as it involves recognizing combinations of text characters and a capacity resembling reasoning about their positions in a two-dimensional space. Moreover, we ask the LLMs to keep track of the state of the game by listing the sequences they could still use to win, so that we can assess whether this information is reflected in their choices. One of the necessary features of consciousness in an agent is the ability to build models of itself and of the external world and to act based on these models. While we do not argue that LLMs are conscious, we believe it is important to monitor whether features related to consciousness appear in these models; this monitoring is the final, not yet completed, objective of this research.
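For concreteness, a minimal sketch of a minimax opponent for tic-tac-toe is shown below. This is a generic illustration under our own naming (e.g., `minimax`, `winner`, `LINES`), not necessarily the exact implementation used in the experiments.

```python
# Illustrative minimax player for tic-tac-toe; names are ours, not the paper's.
from typing import List, Optional, Tuple

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board: List[str]) -> Optional[str]:
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board: List[str], player: str) -> Tuple[int, Optional[int]]:
    """Return (score, move) for `player`: +1 if 'X' wins under perfect
    play, -1 if 'O' wins, 0 for a draw. 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player                                  # try the move
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                                     # undo it
        if best_score is None \
                or (player == 'X' and score > best_score) \
                or (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# Example: with perfect play from both sides, tic-tac-toe is a draw,
# so the score of the empty board is 0.
print(minimax([' '] * 9, 'X'))
```

Because the state space of tic-tac-toe is tiny, this exhaustive search plays perfectly, which makes it a convenient fixed baseline against which to score the LLMs' moves.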