AI agents can learn to communicate effectively
On top is an illustration of the language Tsafiki, spoken by the Tsáchila people of Ecuador, which has six colour words. The image below shows an artificial language with the same number of colour words, created by the researchers' agents. The Tsáchila people and the artificial agents appear to divide the colour spectrum in similar ways; the study includes a quantitative comparison of the human and artificial languages.
A multi-disciplinary team of researchers from Chalmers and the University of Gothenburg has developed a framework to study how language evolves as an effective tool for describing mental concepts. In a new paper, they show that artificial agents can learn how to communicate in an artificial language similar to human language. The results have been published in the scientific journal PLOS ONE.
This research lies on the border between cognitive science and machine learning. There has been an influential proposal from cognitive scientists that all human languages can be viewed as having evolved as a means to communicate concepts in a near-optimal way in the sense of classical information theory. The Gothenburg researchers' method for training the artificial agents is based on reinforcement learning, which is an area of machine learning where agents gradually learn by interacting with an environment and getting feedback. In this case, the agents start without any linguistic knowledge and learn to communicate by getting feedback on how well they succeed in communicating a mental concept.
Reconstructing colours
“In our paper we have studied how agents learn to name mental concepts and communicate by playing several rounds of a referential game consisting of a sender and a listener. We have focused especially on the colour domain, which is well studied in cognitive science. The game works as follows: the sender sees a colour and describes it by uttering a word from a glossary to the listener, which then tries to reconstruct the colour. Both agents receive a shared reward based on how precise the listener’s reconstruction was. The words in the glossary have no meaning at the outset; it is up to the agents to agree on the meaning of the words during multiple rounds of the game. We see that the resulting artificial languages are near-optimal in an information-theoretic sense, with properties similar to those found in human languages”, says Mikael Kågebäck, researcher at Sleep Cycle, whose PhD dissertation at Chalmers contained some of the results presented in the paper.
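The referential game described above can be sketched in a few lines of code. This is a hedged, minimal toy version, not the paper's exact setup: it assumes a one-dimensional discretised colour space, a tabular softmax sender trained with a REINFORCE-style policy-gradient update, and a listener that nudges a per-word colour reconstruction toward the colours it hears. Glossary size, learning rates, and the reward shape are all illustrative choices.

```python
import math
import random

random.seed(0)

N_COLOURS = 16   # discretised colour stimuli on a line
N_WORDS = 4      # glossary size; words carry no meaning at the start

# Sender policy: a preference score per (colour, word) pair.
sender = [[0.0] * N_WORDS for _ in range(N_COLOURS)]
# Listener: the colour it reconstructs on hearing each word.
listener = [random.uniform(0, N_COLOURS - 1) for _ in range(N_WORDS)]

def softmax(scores):
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

baseline = 0.0   # running average reward, a simple REINFORCE baseline
rewards = []
for _ in range(30000):
    colour = random.randrange(N_COLOURS)             # sender sees a colour
    probs = softmax(sender[colour])
    word = random.choices(range(N_WORDS), probs)[0]  # utters a word
    guess = listener[word]                           # listener reconstructs
    reward = -(guess - colour) ** 2                  # shared reward: precision
    rewards.append(reward)

    # Listener nudges its reconstruction for this word toward the colour.
    listener[word] += 0.1 * (colour - guess)

    # Sender: REINFORCE update on the softmax policy.
    advantage = reward - baseline
    baseline += 0.05 * (reward - baseline)
    for w in range(N_WORDS):
        grad = (1.0 if w == word else 0.0) - probs[w]
        sender[colour][w] += 0.01 * advantage * grad

# After training, greedy play should carve the colour line into regions,
# one per word, with a small average reconstruction error.
err = sum(
    abs(listener[max(range(N_WORDS), key=lambda w: sender[c][w])] - c)
    for c in range(N_COLOURS)
) / N_COLOURS
early = sum(rewards[:1000]) / 1000
late = sum(rewards[-1000:]) / 1000
print(f"mean reconstruction error: {err:.2f}")
```

In a run like this, the agents typically settle on a partition of the colour line into contiguous regions, one per word, echoing the way both the agents and human languages in the study divide the spectrum.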
Together with Asad Sayeed, researcher in computational linguistics at the Centre for Linguistic Theory and Studies in Probability (CLASP) at the University of Gothenburg, and Devdatt Dubhashi, professor, and Emil Carlsson, PhD student, both in the Data Science and AI division at the Department of Computer Science and Engineering, he has now published the results.
“From a practical viewpoint, this research provides the fundamental principles for developing conversational agents, such as Siri and Alexa, that communicate using human language”, says Asad Sayeed.
The underlying idea of learning to communicate through reinforcement learning is also interesting for research in social and cultural fields, for example for the project GRIPES, which studies dogwhistle politics, led by Asad Sayeed.
Useful in future research studies
“Cognitive experiments are very time consuming, as you often need to carry out careful experiments with human volunteers. Our approach provides a powerful, flexible and inexpensive way to investigate these fundamental questions. The experiments are fully under our control, repeatable and completely reliable. Our computational framework thus provides a valuable tool for investigating fundamental questions in cognitive science, language and interaction. For computer scientists, it is a fertile area for exploring the effectiveness of various learning mechanisms”, says Devdatt Dubhashi.
“In the future, we want to investigate whether agents can develop communication similar to human language in other areas as well. One example is whether our agents are able to reconstruct the hierarchical structures we observe in human language”, says Emil Carlsson.
Long-standing question
The study stems from a long-standing central question in cognitive science and linguistics: whether, across the vast diversity of human languages, there are common universal principles. Classic work from the 20th century indicated that different languages share common properties in the words they use to describe colours. Are there underlying principles that account for these common properties?
A recent influential proposal from cognitive scientists is that such common universal principles do indeed emerge when languages are viewed through the lens of information theory, as a means of communicating mental concepts while making the most efficient use of resources.
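The efficiency idea above can be illustrated with a small, hedged sketch, not the paper's own measure: assuming a one-dimensional discretised colour space with a uniform prior, we can score a naming system by its complexity (the mutual information between colour and word, which for a deterministic naming equals the entropy of the word distribution) and its informativeness (average reconstruction error when the listener guesses each word's mean colour). Comparing a contiguous partition with a scrambled one using the same words shows why contiguous systems are more efficient: equal complexity, lower error.

```python
import math
import random

N = 16  # discretised colours, uniform prior

def score(assignment):
    """assignment[c] = word used for colour c.
    Returns (complexity in bits, mean reconstruction error)."""
    words = set(assignment)
    # Complexity: I(C; W) = H(W), since the naming is deterministic.
    p_w = {w: sum(1 for a in assignment if a == w) / N for w in words}
    complexity = -sum(p * math.log2(p) for p in p_w.values())
    # Informativeness: the listener reconstructs each word's mean colour.
    recon = {w: sum(c for c in range(N) if assignment[c] == w) / (p_w[w] * N)
             for w in words}
    error = sum(abs(c - recon[assignment[c]]) for c in range(N)) / N
    return complexity, error

contiguous = [c // 4 for c in range(N)]   # 4 contiguous colour regions
random.seed(1)
scrambled = contiguous[:]
random.shuffle(scrambled)                 # same words, scattered over colours

print(score(contiguous))
print(score(scrambled))
```

Both systems use the same four words equally often, so their complexity is identical; only the contiguous one lets the listener reconstruct colours accurately, which is the sense in which such systems make efficient use of resources.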
A series of talks given at CLASP by Ted Gibson from MIT back in 2016, in which he presented colour-naming data from different societies and cultures around the world, led to the question: what if the human subjects were substituted by artificial computer agents? Would they develop a language with similar universal properties?
Link to the article in PLOS ONE:
Asad Sayeed, researcher in computational linguistics, Department of Philosophy, Linguistics, Theory of Science, asad.sayeed@gu.se
Devdatt Dubhashi, professor, Data Science and AI division, Department of Computer Science and Engineering
Emil Carlsson, PhD student, Data Science and AI division, Department of Computer Science and Engineering
Mikael Kågebäck, Sleep Cycle AB