I cordially invite you to my public PhD defence, the preceding symposium and the subsequent reception on Tuesday, December 11, 2018. The defence will take place in the Auditorium (KAST 01.07) of the Arenberg castle (Kasteelpark Arenberg 1, 3001 Heverlee) starting at 17:00; the reception will take place in the Prinsenzaal of the Castle.
The defence is preceded by a symposium on "Deep learning for complex relational data" featuring the talks of Sebastian Riedel (University College London, UK), Mathias Niepert (NEC Labs Heidelberg, Germany) and Guy Van den Broeck (University of California, Los Angeles, USA). The symposium will start at 13:00 in the Department of Computer Science (Celestijnenlaan 200A, 3001 Heverlee), room 05.152 (Java).
If you plan to attend the symposium and/or the defence, please confirm your attendance by filling out this form. Please note that the reception after the PhD defence is by invitation only.
Symposium on Deep learning for complex relational data
Date & time: Tuesday, December 11, 2018 at 13:00
Address: Celestijnenlaan 200A, 3001 Heverlee
Location: 05.152 (Java)
[13:00] Guy Van den Broeck: Probabilistic and Logistic Circuits: A New Synthesis of Logic and Machine Learning (slides)
Abstract: This talk will discuss three lines of recent work at the intersection of logical reasoning and statistical machine learning. First, I describe tractable logical circuits and how they can be used to enforce constraints on the output of deep neural networks. Second, such circuits can be generalized to statistical classifiers, called logistic circuits, that achieve better classification accuracy on MNIST than neural networks that are an order of magnitude larger. Finally, I discuss how probabilistic circuits perform state-of-the-art probabilistic inference in factor graphs as well as imperative probabilistic programs.
Bio: Guy Van den Broeck is an Assistant Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning (Statistical Relational Learning, Tractable Learning), Knowledge Representation and Reasoning (Graphical Models, Lifted Probabilistic Inference, Knowledge Compilation), Applications of Probabilistic Reasoning and Learning (Probabilistic Programming, Probabilistic Databases), and Artificial Intelligence in general.
[14:00] Mathias Niepert: Neural Representation Learning for Graphs (slides)
Abstract: Graph-structured data is ubiquitous and occurs in numerous application domains. The talk will provide an overview of graph representation learning approaches such as graph convolutional networks. We show that these approaches can be understood from two different perspectives: as a special case of tensor factorizations and as instances of a class of algorithms that learn from local graph structures such as paths and neighborhoods. The talk will also discuss current work of our group, including applications of graph neural networks.
Bio: Mathias Niepert is a chief research scientist in the Systems and Machine Learning (SysML) group at NEC Labs Heidelberg. His research interests include representation learning for graph-structured data, unsupervised and semi-supervised learning, probabilistic graphical models, and statistical relational learning. He is a co-founder of several open-source digital humanities projects such as the Indiana Philosophy Ontology Project and the Linked Humanities Project. Before joining NEC Labs Heidelberg, he was a researcher at the University of Washington in Seattle and a member of the Data and Web Science Research Group at the University of Mannheim.
[15:00] Coffee break
[15:30] Sebastian Riedel: Reading and Reasoning with Neural Program Interpreters (slides)
Abstract: We are getting better at teaching end-to-end neural models how to answer questions about content in natural language text. However, progress has been mostly restricted to extracting answers that are directly stated in the text. In this talk, I will present our work towards teaching machines not only to read but also to reason with what was read and to do this in an interpretable and controlled fashion. Our main hypothesis is that this can be achieved by the development of neural abstract machines that follow the blueprint of program interpreters for real-world programming languages. We test this idea using two languages: an imperative (Forth) and a declarative (Prolog/Datalog) one. In both cases, we implement differentiable interpreters that can be used for learning reasoning patterns. Crucially, because they are based on interpretable host languages, the interpreters also allow users to easily inject prior knowledge and inspect the learnt patterns. Moreover, on tasks such as math word problems and relational reasoning, our approach compares favourably to state-of-the-art methods.
Bio: Sebastian Riedel is a reader in Natural Language Processing and Machine Learning at University College London (UCL), where he leads the Machine Reading lab. He is also the head of research at Bloomsbury AI and an Allen Distinguished Investigator. He works at the intersection of Natural Language Processing and Machine Learning, and focuses on teaching machines how to read and reason. He was educated in Hamburg-Harburg (Dipl. Ing) and Edinburgh (MSc., PhD), and worked at the University of Massachusetts Amherst and Tokyo University before joining UCL.
The symposium is kindly sponsored by the Arenberg Doctoral School through the Meet the Jury funding.
[17:00] Public PhD defence
Learning Symbolic Latent Representations for Relational Data (slides)
Date & time: Tuesday, December 11, 2018 at 17:00
Address: Kasteel van Arenberg, Kasteelpark Arenberg 1, 3001 Heverlee
Location: Auditorium (KAST 01.07)
Supervisor: Hendrik Blockeel
Examination Committee: Carlo Vandecasteele (chairman), Johan Suykens, Jesse Davis, David Poole, Sebastian Riedel, Mathias Niepert
The early 21st century has been largely shaped by the huge amounts of data generated by the widespread adoption of information technology. This rapid growth of stored data has created the need for automated tools capable of extracting useful bits from large amounts of data. In turn, such tools have changed our perspective on data from a mere record of an event to a carrier of useful information. Before insights can be drawn from data, it must first be brought into a suitable form by means of feature engineering, as it can rarely be processed raw. This step is usually time- and labour-intensive, and often requires extensive knowledge of the domain.
In this thesis, we tackle the problem of representation learning -- how to automate the process of feature construction. We focus on feature construction for rich relational data expressed in the form of networks, e.g., biological and traffic networks, as many real-life problems can easily be expressed in this format. To be able to express complex features over networks, the proposed methods rely on the expressive data representation language of first-order logic.
The first contribution of the thesis is a new versatile relational clustering framework that decouples various sources of relational similarity and combines them in a systematic manner. The second contribution is CUR2LED -- a relational representation learning framework that exploits approximate symmetries in relational data as features. The third contribution is Auto-encoding logic programs -- a relational generalisation of auto-encoders, one of the basic representation learning primitives. The fourth contribution is an experimental comparison of various relational representation learning methods that offers insights into the strengths and weaknesses of the existing approaches.
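To illustrate the auto-encoder primitive that Auto-encoding logic programs generalise to the relational setting, here is a minimal sketch of the classic 4-2-4 encoding problem in plain Python: an encoder compresses the input into a smaller latent code and a decoder reconstructs the input from that code. The encode/decode functions are hard-coded rather than learnt, purely to show the principle; they are not part of the thesis.

```python
# A minimal, hand-crafted illustration of the auto-encoder principle:
# compress a 4-dimensional one-hot vector into a 2-bit latent code,
# then reconstruct the original input from that code.

def encode(one_hot):
    """Map a 4-dimensional one-hot vector to a 2-bit latent code."""
    index = one_hot.index(1)           # position of the single 1
    return [index >> 1 & 1, index & 1] # binary encoding of that position

def decode(code):
    """Reconstruct the 4-dimensional one-hot vector from the 2-bit code."""
    index = (code[0] << 1) | code[1]
    return [1 if i == index else 0 for i in range(4)]

if __name__ == "__main__":
    for x in ([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]):
        assert decode(encode(x)) == x  # perfect reconstruction
    print("all inputs reconstructed")
```

A learnt auto-encoder finds such an encode/decode pair automatically by minimising reconstruction error; Auto-encoding logic programs lift this idea from fixed-length vectors to relational data expressed as logic programs.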