URANIA 2016

Deep Understanding and Reasoning: A challenge for Next-generation Intelligent Agents

Genova -- November 28th, 2016
Held in the context of the AI*IA 2016 conference.

Accepted Papers and Workshop Program

Workshop Program -- November 28th, 2016
9.15 -- 9.30 Welcome and Opening of the Workshop
9.30 -- 10.30 Invited Speaker: Dr. Claudia Schon -- Universität Koblenz-Landau
"Commonsense Reasoning meets Theorem Proving"
Abstract: First-order logic automated reasoning is a mature area of artificial intelligence. Starting from small examples containing only a few clauses at the beginning of research in this area, current state-of-the-art systems are able to answer queries over huge knowledge bases. Recently, there has been increasing interest in using automated theorem provers in areas like natural language question answering and commonsense reasoning, where the tasks for theorem provers are much more diverse. At first glance, the fact that reasoning tasks in commonsense reasoning rarely lead to complete proofs might suggest that automated theorem provers cannot be applied to these kinds of problems. However, it turns out that incomplete proofs or models provide insightful information, and therefore theorem provers constitute an enrichment for the area of commonsense reasoning.

Claudia Schon is a postdoctoral researcher at the Institute for Web Science and Technologies at the University of Koblenz-Landau. Before that, she worked in the Artificial Intelligence research group at the University of Koblenz-Landau, where she also received her doctoral degree. Her thesis was situated in the area of reasoning in description logic knowledge bases. In recent years, she has worked on various projects in the area of artificial intelligence. One of these was the RATIOLOG project, where she focused her research on commonsense reasoning and modelling human deduction. As a co-organizer of the workshop series on Bridging the Gap between Human and Automated Reasoning, she actively tries to bring together the communities of computational logic and cognitive science.
10.30 -- 11.00 Coffee break
11.00 -- 12.30

Session I

Improving Neural Abstractive Text Summarization with Prior Knowledge --- slides pdf
Gaetano Rossiello, Pierpaolo Basile, Giovanni Semeraro, Marco Di Ciano and Gaetano Grasso

Iterative Multi-document Neural Attention for Multiple Answer Prediction --- slides pdf
Claudio Greco, Alessandro Suglia, Pierpaolo Basile, Gaetano Rossiello and Giovanni Semeraro

Probabilistic Logic Programming for Natural Language Processing --- slides pdf
Fabrizio Riguzzi, Evelina Lamma, Marco Alberti, Elena Bellodi, Riccardo Zese and Giuseppe Cota

Time Out of Joint in Temporal Annotations of Texts: Challenges for AI
Rosella Gennari and Pierpaolo Vittorini

Discussion

12.30 -- 14.30 Lunch
14.30 -- 16.00

Session II

Structured Knowledge and Kernel-based Learning: the case of Grounded Spoken Language Learning in Interactive Robotics
Roberto Basili and Danilo Croce

Reasoning with Deep Learning: an Open Challenge --- slides pdf
Marco Lippi

Solving Mathematical Puzzles: a Deep Reasoning Challenge for Intelligent Agents --- slides pdf
Federico Chesani, Michela Milano and Paola Mello

Computational Accountability --- slides pdf
Matteo Baldoni, Cristina Baroglio, Katherine M. May, Roberto Micalizio and Stefano Tedeschi

Discussion

16.00 -- 16.30 Coffee break
16.30 -- 17.30 Discussion and Concluding Remarks (Luigina Carlucci Aiello)
Improving Neural Abstractive Text Summarization with Prior Knowledge
Authors
Gaetano Rossiello, Pierpaolo Basile, Giovanni Semeraro, Marco Di Ciano and Gaetano Grasso
Abstract
Abstractive text summarization is a complex task whose goal is to generate a concise version of a text without necessarily reusing the sentences from the original source, while still preserving its meaning and key contents. In this position paper we address this issue by modeling the problem as sequence-to-sequence learning and exploiting Recurrent Neural Networks (RNNs). Moreover, we discuss the idea of combining RNNs and probabilistic models in a unified way in order to incorporate prior knowledge, such as linguistic features. We believe that this approach can outperform state-of-the-art models in generating well-formed summaries.
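As a point of reference for readers, the sketch below shows the basic shape of the RNN encoder-decoder (sequence-to-sequence) setup the abstract builds on. It is a minimal illustration only: the GRU layers, toy dimensions, and random batch are assumptions made here and do not reproduce the authors' model or data.

    # Minimal sequence-to-sequence summarization sketch (PyTorch).
    # All hyperparameters and the toy batch are illustrative assumptions.
    import torch
    import torch.nn as nn

    class Seq2SeqSummarizer(nn.Module):
        def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, source_ids, target_ids):
            # Encode the source document into a final hidden state.
            _, h = self.encoder(self.embed(source_ids))
            # Decode the summary conditioned on that state (teacher forcing).
            dec_out, _ = self.decoder(self.embed(target_ids), h)
            return self.out(dec_out)  # per-step vocabulary logits

    # Toy usage: a batch of 2 "documents" of 10 tokens, summaries of 5 tokens.
    model = Seq2SeqSummarizer(vocab_size=1000)
    src = torch.randint(0, 1000, (2, 10))
    tgt = torch.randint(0, 1000, (2, 5))
    logits = model(src, tgt)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tgt.reshape(-1))

The prior knowledge discussed in the paper (e.g., linguistic features) would enter a pipeline like this through the input representation or the output distribution; the sketch deliberately leaves that part out.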
Iterative Multi-document Neural Attention for Multiple Answer Prediction
Authors
Claudio Greco, Alessandro Suglia, Pierpaolo Basile, Gaetano Rossiello and Giovanni Semeraro
Abstract
People have information needs of varying complexity, which can be met by an intelligent agent able to answer questions formulated in a proper way, possibly taking user context and preferences into account. In a scenario in which the user profile can be considered as a question, intelligent agents able to answer questions can be used to find the most relevant answers for a given user. In this work we propose a novel model based on Artificial Neural Networks to answer questions with multiple answers by exploiting multiple facts retrieved from a knowledge base. The model is evaluated on the factoid Question Answering and top-n recommendation tasks of the bAbI Movie Dialog dataset. After assessing the performance of the model on both tasks, we try to define the long-term goal of a conversational recommender system able to interact using natural language and to support users in their information seeking processes in a personalized way.
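For orientation, the following minimal sketch illustrates one attention "hop" over a set of retrieved fact encodings, in the spirit of the iterative multi-document attention the abstract describes. The dot-product scoring, the query-update rule, and the dimensions are assumptions chosen here for illustration, not the authors' architecture.

    # One attention step over multiple retrieved facts (PyTorch).
    import torch
    import torch.nn.functional as F

    def attention_step(query, fact_encodings):
        """query: (hidden,); fact_encodings: (num_facts, hidden).
        Returns an updated query that blends in the attended facts."""
        scores = fact_encodings @ query      # dot-product relevance scores
        weights = F.softmax(scores, dim=0)   # distribution over facts
        context = weights @ fact_encodings   # weighted sum of fact vectors
        return query + context, weights      # refined query for the next hop

    # Toy usage: iterate the attention a few times over 8 encoded facts.
    hidden = 64
    query = torch.randn(hidden)
    facts = torch.randn(8, hidden)
    for _ in range(3):                       # multiple "hops" of reasoning
        query, weights = attention_step(query, facts)

Iterating such steps lets the final attention weights rank candidate answers, which is how an attention-based reader can serve both question answering and top-n recommendation.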
Probabilistic Logic Programming for Natural Language Processing
Authors
Fabrizio Riguzzi, Evelina Lamma, Marco Alberti, Elena Bellodi, Riccardo Zese and Giuseppe Cota
Abstract
The ambition of Artificial Intelligence is to solve problems without human intervention. Often the problem description is given in human (natural) language, so it is crucial to find an automatic way to understand text written by humans. The research field concerned with the interactions between computers and natural languages is known as Natural Language Processing (NLP), one of the most studied fields of Artificial Intelligence. In this paper we show that Probabilistic Logic Programming (PLP) is a suitable approach to NLP in various scenarios. For this purpose we use cplint on SWISH, a web application for Probabilistic Logic Programming. cplint on SWISH allows users to perform inference and learning with the cplint framework using just a web browser, with the computation performed on the server.
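To give a feel for the semantics PLP systems such as cplint implement, the sketch below enumerates the "possible worlds" induced by independent probabilistic facts and sums the probability of those in which a query holds (the distribution semantics). The two-coin example and the Python formulation are illustrative assumptions made here; they are not cplint code and not taken from the paper.

    # Toy illustration of the distribution semantics behind PLP.
    from itertools import product

    # Independent probabilistic facts: P(heads(c1)) = 0.6, P(heads(c2)) = 0.5.
    prob_facts = {"heads(c1)": 0.6, "heads(c2)": 0.5}

    def query_win(world):
        # Deterministic rule: win :- heads(c1), heads(c2).
        return world["heads(c1)"] and world["heads(c2)"]

    p_win = 0.0
    for values in product([True, False], repeat=len(prob_facts)):
        world = dict(zip(prob_facts, values))
        # Probability of this world: product of fact/negation probabilities.
        p_world = 1.0
        for fact, true in world.items():
            p_world *= prob_facts[fact] if true else 1.0 - prob_facts[fact]
        if query_win(world):
            p_win += p_world

    print(p_win)  # 0.3 = 0.6 * 0.5

Real PLP systems avoid this exponential enumeration with knowledge compilation and other inference techniques; the sketch only shows what the computed probability means.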
Reasoning with Deep Learning: an Open Challenge
Authors
Marco Lippi
Abstract
Building machines capable of performing automated reasoning is one of the most complex but fascinating challenges in AI. In particular, providing an effective integration of learning and reasoning mechanisms is a long-standing research problem at the intersection of many different areas, such as machine learning, cognitive neuroscience, psychology, linguistics, and logic. The recent breakthroughs achieved by deep learning methods in a variety of AI-related domains have opened novel research lines attempting to solve this complex and challenging task.
Solving Mathematical Puzzles: a Deep Reasoning Challenge for Intelligent Agents
Authors
Federico Chesani, Michela Milano and Paola Mello
Abstract
In this position paper we present a challenge where methods and tools for deep reasoning are strongly needed to enable problem solving: we propose to solve mathematical puzzles by means of computers, starting from the text and diagrams describing them, without any human intervention. We are aware that the proposed challenge is hard and difficult to solve today (and in the foreseeable future), but we strongly believe that even studying and solving single parts of the problem would be an important step forward, and a source of inspiration for future Artificial Intelligence research and applications.
Time Out of Joint in Temporal Annotations of Texts: Challenges for AI
Authors
Rosella Gennari and Pierpaolo Vittorini
Abstract
Starting from the experience of the TERENCE European project, the paper presents challenges that require a combined effort of natural language processing, automated temporal reasoning and, finally, human computer interaction. The paper starts by introducing the problem of producing high-quality temporal annotations for texts, and argues for a combined automated temporal reasoning and natural language processing approach to tackle it. The paper then speculates that the approach would benefit from knowledge of the specific domain and of how humans interact with the annotation process, which triggers two further challenges explored in the remainder of the paper at the intersection of natural language processing, automated reasoning and human computer interaction.
Computational Accountability
Authors
Matteo Baldoni, Cristina Baroglio, Katherine M. May, Roberto Micalizio and Stefano Tedeschi
Abstract
Individual and organizational actions have social consequences that call for the implementation of recommendations of good conduct at multiple levels of granularity. The traceability, evaluation, and communication of values and good conduct is an open challenge that can be faced with the support of intelligent systems. On the one hand, specialized management systems are evolving to help organizations respect their commitments. On the other hand, the application of intelligent systems for sharing authority and delegating decisions introduces challenges and requirements for building systems that handle ethical issues, e.g. serving the orthogonal requirements of transparency, accountability, and privacy preservation.
Structured Knowledge and Kernel-based Learning: the case of Grounded Spoken Language Learning in Interactive Robotics
Authors
Roberto Basili and Danilo Croce
Abstract
Recent results achieved by statistical approaches involving Deep Neural Learning architectures suggest that semantic inference tasks can be solved by adopting complex neural architectures and advanced mathematical optimization techniques, even while simplifying the representation of the targeted phenomena. The idea that the representation of structured knowledge is essential to reliable and accurate semantic inference thus seems to be implicitly denied. However, the Neural Networks (NNs) underlying such methods rely on beneficial representational choices for the input to the network (e.g., in the so-called pre-training stages), and complex design choices regarding the NNs' inner structure are still required. While optimization is a strong mathematical tool that is always useful, in this work we ask whether representation is still crucial. In particular, we claim that representation is still a major issue, and discuss it in the light of the spoken language capabilities required by a robotic system in the domain of service robotics. The result is that adequate knowledge representation is quite central for learning machines in real applications. Moreover, learning mechanisms able to properly characterize it through expressive mathematical abstractions (e.g., trees, graphs or sets) constitute a core research direction towards robust, adaptive and increasingly autonomous AI systems.
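To make the idea of learning over structured representations concrete, the sketch below shows a toy convolution-style kernel that measures the similarity of two trees by counting the root-to-node label paths they share. The tree encoding, the path-counting kernel, and the spoken-command examples are illustrative assumptions made here; they are not the specific kernels or grammars used by the authors.

    # Toy structured kernel: count label paths shared by two trees.
    from collections import Counter

    def paths(tree, prefix=()):
        """Enumerate all root-to-node label paths of a (label, children...) tree."""
        label, *children = tree
        current = prefix + (label,)
        yield current
        for child in children:
            yield from paths(child, current)

    def path_kernel(t1, t2):
        """Shared-path count with multiplicity: a valid positive semi-definite kernel,
        since it is the inner product of the trees' path-count feature vectors."""
        c1, c2 = Counter(paths(t1)), Counter(paths(t2))
        return sum(c1[p] * c2[p] for p in c1.keys() & c2.keys())

    # Toy usage: two tiny "parse trees" for spoken robot commands.
    t1 = ("S", ("VP", ("V", ("take",)), ("NP", ("N", ("cup",)))))
    t2 = ("S", ("VP", ("V", ("bring",)), ("NP", ("N", ("cup",)))))
    print(path_kernel(t1, t2))  # similarity grows with shared structure

A kernel machine (e.g., an SVM) trained with such a similarity function never needs the trees flattened into fixed-size vectors, which is the sense in which kernel-based learning keeps structured knowledge explicit in the learning process.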