Collocated with ICLP 2023
London, UK, July 9-15, 2023
July 9th 2023

Time (CEST) | Event |
---|---|
Session: | Semantics |
13:30 - 14:10 | Invited Talk. Prof. Rafael Peñaloza: Bringing Statistics Back to Probabilistic Reasoning |
14:10 - 14:35 | Damiano Azzolini: A Brief Discussion about the Credal Semantics for Probabilistic Answer Set Programs |
Session: | Inference |
14:35 - 15:00 | Nicos Angelopoulos: Sampling and probabilistic inference in D/Slps |
15:00 - 15:25 | Bao Loi Quach and Felix Weitkämper: asymptoticplp: Approximating probabilistic logic programs on large domains |
Session: | Applications |
15:25 - 15:50 | Damiano Azzolini, Elisabetta Gentili and Fabrizio Riguzzi: Link Prediction in Knowledge Graphs with Probabilistic Logic Programming: Work in Progress |
16:00 - 16:30 | Break |
Session: | Causality |
16:30 - 17:10 | Invited Talk. Prof. Joost Vennekens: The CP-logic approach to combining causality and probabilistic logic programming |
17:10 - 17:35 | Kilian Rückschloß and Felix Weitkämper: On the Subtlety of Causal Reasoning in Probabilistic Logic Programming: A Bug Report about the Causal Interpretation of Annotated Disjunctions |
17:35 - 18:15 | Invited Talk. Dr. Devendra Singh Dhami: Causality and Graph Neural Networks in the World of Logic |
Probabilistic logic programming (PLP) approaches have received much attention in this century. They address the need to reason about relational domains under uncertainty arising in a variety of application domains, such as bioinformatics, the semantic web, robotics, and many more. Developments in PLP include new languages that combine logic programming with probability theory, as well as algorithms that operate over programs in these formalisms.
The workshop encompasses all aspects of combining logic, algorithms, programming and probability.
PLP is part of a wider current interest in probabilistic programming. By promoting probabilities as explicit programming constructs, inference, parameter estimation and learning algorithms can be run over programs which represent highly structured probability spaces. Due to logic programming's strong theoretical underpinnings, PLP is one of the more disciplined areas of probabilistic programming. It builds upon and benefits from the large body of existing work in logic programming, both in semantics and implementation, but also presents new challenges to the field. PLP reasoning often requires the evaluation of a large number of possible states before any answers can be produced, thus breaking the sequential search model of traditional logic programs.
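To give a flavour of what "probabilities as explicit programming constructs" means in practice, here is a minimal ProbLog-style sketch (our illustration, not taken from the call; the predicate names are made up). Two independent probabilistic facts induce four possible worlds, and the probability of a query is the total weight of the worlds in which it succeeds, which is exactly the kind of possible-state enumeration mentioned above.

```prolog
% Minimal ProbLog-style sketch (illustrative only).
% Each probabilistic fact is an independent Boolean random variable,
% so this program defines 2 * 2 = 4 possible worlds.
0.6::burglary.
0.2::earthquake.

% The alarm rings if either cause occurs.
alarm :- burglary.
alarm :- earthquake.

% Asking for P(alarm) sums the weights of the worlds where the query
% succeeds: P(alarm) = 1 - (1 - 0.6) * (1 - 0.2) = 0.68.
query(alarm).
```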
While PLP has already contributed a number of formalisms, systems, and well-understood and established results in parameter estimation, tabling, marginal probabilities and Bayesian learning, many questions remain open in this exciting, expanding field at the intersection of AI, machine learning and statistics. As is traditional in this series, the workshop is designed to foster exchange between the various communities relevant to probabilistic logic programming, including probabilistic programming and statistical relational artificial intelligence.
This workshop provides a forum for the exchange of ideas, presentation of results and preliminary work in all areas related to probabilistic logic programming.
Papers due: | |
Notification to authors: | |
Camera ready version due: | June 30th, 2023 |
Workshop date: | July 9th, 2023 |
(all deadlines are Anywhere on Earth (UTC-12))
asymptoticplp: Approximating probabilistic logic programs on large domains
Probabilistic logic programs are logic programs in which some of the clauses are annotated with probabilistic facts. The behaviour of relations in these clauses can be very complex, leading to scalability issues. Asymptotic representations, in which queries are completely independent of the domain size and which approximate a probabilistic logic program on large domains, allow us to gain an understanding of how a probabilistic logic program will behave for increasing domain sizes, and can be computed without actually having to execute the logic program. In particular, every probabilistic logic program under the distribution semantics is asymptotically equivalent to an acyclic probabilistic logic program consisting only of determinate clauses over probabilistic facts. We present asymptoticplp, a Prolog implementation of an algorithm which computes this asymptotically equivalent program. The transformation proceeds in several modular steps which are of independent interest. These steps include rewriting the probabilistic logic program to a formula of least fixed point logic and then applying asymptotic quantifier elimination on the formula. Quantifier-free first-order formulas are then rewritten as acyclic determinate stratified DATALOG formulas, which together with the original probabilistic facts form a (probabilistic) logic program.
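As a hedged illustration of the asymptotic behaviour the tool targets (our own toy example, not taken from the paper): in the ProbLog-style program below every ground instance of the probabilistic clause is an independent random variable, so the query probability is 1 - 0.9^n for a domain of n persons and converges to 1 as the domain grows; asymptotically the query therefore behaves like one over a much simpler, domain-independent program.

```prolog
% Illustrative only: a query whose probability depends on the domain size
% but converges as the domain grows.
person(p1). person(p2). person(p3).   % enlarge the domain by adding facts

% Each ground instance of this clause is an independent event of probability 0.1.
0.1::connected(a, X) :- person(X).

% influenced(a) holds if a is connected to at least one person, so
% P(influenced(a)) = 1 - 0.9^n, which tends to 1 on large domains.
influenced(a) :- person(X), connected(a, X).

query(influenced(a)).
```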
A Brief Discussion about the Credal Semantics for Probabilistic Answer Set Programs
Among the different logic-based programming languages, Answer Set Programming has emerged as an effective paradigm to solve complex combinatorial tasks. Since most real-world data are uncertain, several semantics have been proposed to extend Answer Set Programming to manage uncertainty, where rules are associated with a weight, or a probability, expressing a degree of belief about the truth value of certain atoms. In this paper, we focus on one of these semantics, the Credal Semantics, highlight some of the differences with other proposals, and discuss possible future work.
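As a small, hedged illustration of why a credal (interval-valued) semantics arises (our example, not from the paper): when a probabilistic fact is combined with a disjunctive answer set rule, the world in which the fact holds has more than one answer set, so a query is only bounded by lower and upper probabilities rather than assigned a single value.

```prolog
% Illustrative probabilistic answer set program; concrete syntax varies
% between systems, so treat this as a sketch.
0.4::bird(a).

% Disjunctive rule: in the world where bird(a) is chosen there are two
% answer sets, one containing fly(a) and one containing nofly(a).
fly(a) ; nofly(a) :- bird(a).

% Under the credal semantics the query fly(a) is assigned an interval:
% lower probability 0   (in no world does every answer set contain fly(a)),
% upper probability 0.4 (the weight of the worlds with some answer set
%                        containing fly(a)).
```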
Link Prediction in Knowledge Graphs with Probabilistic Logic Programming: Work in Progress
On the Subtlety of Causal Reasoning in Probabilistic Logic Programming: A Bug Report about the Causal Interpretation of Annotated Disjunctions
In this work in progress, we give an example of a logic program with annotated disjunctions where the do-operator does not behave as intended. In particular, we see that the mutual exclusivity of the heads in an annotated disjunction is not preserved after intervention.
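To make the object of study concrete, here is a generic annotated disjunction in LPAD/ProbLog-style syntax (our example, not the counterexample from the paper): the two heads are mutually exclusive by construction, and the reported bug is that this exclusivity can be lost once a do-intervention forces one of the heads.

```prolog
% LPAD/ProbLog-style annotated disjunction: whenever toss(C) holds,
% exactly one of the two heads is selected, so heads(C) and tails(C)
% are mutually exclusive in every possible world.
0.5::heads(C); 0.5::tails(C) :- toss(C).
toss(c1).

% Intended causal reading: do(heads(c1)) should force heads(c1) while
% keeping tails(c1) false. The paper reports that common causal
% translations of annotated disjunctions fail to preserve this
% exclusivity after the intervention.
```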
Sampling and probabilistic inference in D/Slps
Stochastic logic programming (Slp) and distributional logic programming (Dlp) are two closely related probabilistic logic programming formalisms that have previously been studied in the context of machine learning. The former is a restrictive form which has good parameter estimation results, while the latter is more expressive and has been used in the context of Bayesian machine learning. Although both formalisms have easily installed systems for performing the aforementioned machine learning tasks, these systems previously had no facilities for more general probabilistic inference over these probabilistic languages. Here we describe newly added facilities to both systems for sampling and probabilistic inference. We present unified ways to perform sampling, where SLD resolution is replaced by stochastic clause selection, and probabilistic inference, where the probability of goals is calculated using standard SLD resolution on augmented clauses. A number of simple and more complex examples are considered, and for sampling we demonstrate stochastic properties by taking advantage of a couple of auxiliary libraries that work well with the probabilistic packages considered here. Finally, limitations of the expressivity of Slps are highlighted by juxtaposing them with the intuitive uniform membership program of Dlps.
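For orientation, here is a sketch of the membership example mentioned at the end of the abstract (our rendering; clause-label syntax differs between systems such as Pepl). In an Slp the label of a clause is the probability of selecting it at an SLD choice point, which is why the naive membership program is not uniform over the elements of a list, whereas Dlps allow the selection distribution to depend on the arguments.

```prolog
% Illustrative Slp-style membership program; the labels are clause
% selection probabilities at each SLD choice point (exact syntax is
% system-specific).
0.5: umember(X, [X|_]).
0.5: umember(X, [_|T]) :- umember(X, T).

% Sampling umember(E, [a, b, c]) selects the element at position i with
% probability proportional to 0.5^i, a geometric rather than a uniform
% distribution over the members. Expressing a genuinely uniform choice
% is the kind of case where the extra expressivity of Dlps is needed.
```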
Causality and Graph Neural Networks in the World of Logic
With the meteoric rise of deep learning models, their lack of reasoning capabilities and their black-box nature have also come to the fore. Causality has been termed a critical missing ingredient for achieving human-level reasoning and understanding. Another critical factor in reasoning is the use of inherent connections and properties in the underlying data, making geometric deep learning models such as graph neural networks essential. In this talk, I will give an overview of how causality and graph neural networks can be approached from the world of logic to provide natural language explanations and a new class of GNNs, respectively. I will also show how causality and graph neural networks can be related to each other, thereby making both frameworks more powerful.
Bringing Statistics Back to Probabilistic Reasoning
One issue that tends to limit the applicability of probabilistic reasoning methods to real-world applications is the origin of the probabilistic values. When dealing with binomial properties and relationships, probabilistic methods implicitly assume complete knowledge of the population (classical methods) or that only a few instances are missing (imprecise probabilities). Yet real-world scenarios use statistical methods which provide less perfect information. In this talk I argue, based on a business process scenario, for a sounder use of statistics in probabilistic reasoning, and provide the first theoretical basis for it. I also show the difficulties which arise from this more general framework, along with some advanced reasoning problems that can be handled, depending on the available knowledge.
The CP-logic approach to combining causality and probabilistic logic programming
One of the advantages of probabilistic logic programming (PLP) over other ways of constructing probabilistic models is that it can do so in a way which is more easily intelligible to humans. As emphasised in particular by the recent work of Pearl, the concept of causality plays a key role in how humans understand the world. Integrating causal concepts into PLP might therefore make its models more intelligible. In this talk, I give an overview of how causality has been studied in the context of the PLP formalism of CP-logic, discussing topics such as interventions, counterfactuals, explanations and actual causation.