The M2F department seminar usually takes place on Tuesdays at 2pm. The seminar does not follow a regular schedule, since there are many other talks in the department (see the M2F department calendar).
Contact: Igor Walukiewicz
Measuring the informational content of real numbers has been a significant area of inquiry in algorithmic information theory. Finite-state compressibility (or finite-state dimension) of a real number is a value in [0, 1] which quantifies the amount of information/randomness in the real number as measured using finite-state automata. Finite-state dimension is the lower asymptotic ratio of compression achievable on an infinite string using information-lossless finite-state compressors. Interestingly, the finite-state dimension of a real number is also equal to the block Shannon entropy rate of the infinite sequence representing the expansion of the real number in a base b. A line of work, originating from Schnorr and Stimm (1972), has established that a number is Borel normal in base b if and only if its base b expansion has finite-state compressibility equal to 1, i.e., is incompressible. Hence, normal numbers are precisely the class of numbers that are incompressible using finite-state compressors.
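Stated informally (conventions on blocks vary slightly across papers, so this is only a sketch of the standard formulation): for a sequence $S$ over an alphabet of size $b$, let $\pi_{\ell,n}$ be the empirical distribution of the length-$\ell$ blocks occurring among the first $n$ blocks of $S$. The finite-state dimension then coincides with the normalized block entropy rate

$$\dim_{FS}(S) \;=\; \lim_{\ell \to \infty} \frac{H_\ell(S)}{\ell}, \qquad \text{where } H_\ell(S) \;=\; \liminf_{n \to \infty} H(\pi_{\ell,n})$$

and $H$ denotes Shannon entropy taken with base-$b$ logarithms.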
Most of the prior research on finite-state dimension has relied on combinatorial methods. In this talk we explore how tools from Fourier analysis can be employed to gain new insights into the compressibility of real numbers, including the resolution of an open question:
One of the most powerful classical tools for investigating normal numbers (numbers having finite-state dimension 1) is the 1916 Weyl criterion, which characterizes normality in terms of exponential sums. It was unknown whether the Weyl criterion could be generalized from characterizing numbers with finite-state dimension 1 to characterizing numbers with arbitrary finite-state dimensions in [0,1]. Such a generalization could make it a quantitative tool for studying data compression, prediction, etc. In this part of the talk, we generalize the Weyl criterion for normal numbers to characterize sequences having arbitrary finite-state dimension in [0,1]. We also demonstrate several applications of this formulation.
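For reference, the classical statement alluded to above (the standard formulation, not the generalization presented in the talk) combines the fact that a real number $x$ is normal in base $b$ if and only if the sequence $(b^n x)_{n \ge 0}$ is equidistributed modulo 1 with Weyl's criterion for equidistribution: this holds if and only if, for every nonzero integer $k$,

$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} e^{2\pi i k b^n x} \;=\; 0.$$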
Absolutely normal numbers, being finite-state incompressible in every base of expansion, are precisely those numbers which have finite-state dimension equal to 1 in every base. At the other extreme, for example, every rational number has finite-state dimension 0 in every base. Generalizing this, Lutz and Mayordomo asked the following question: does there exist an s strictly between 0 and 1 and a real number r such that r has finite-state dimension equal to s in every base? In this part of the talk, we use several tools involving exponential sums and techniques from Schmidt's work in 1960 to construct, for any given s in (0,1], a real number r having finite-state compressibility equal to s in every base. We thereby answer the open question affirmatively.
TBA
In this talk, we focus on unary temporal logic over words (UTL). It is known that languages definable in this logic have several equivalent characterizations based on different formalisms. Additionally, it is known that one can decide whether a regular language can be expressed in UTL.
First, I will present these various characterizations. Then, I will introduce a general approach that encompasses all known results and allows us to obtain new ones. This approach relies on the use of an "operator" whose repeated application yields a hierarchy of classes of regular languages. Finally, I will present some results about this new hierarchy, in particular its relationship with the so-called concatenation hierarchy. This is joint work with Thomas Place.
Probabilistic programs are used to describe randomized algorithms and security protocols, and can be used for Bayesian learning. Elementary questions are, however, “more undecidable” than for classical programs. We will present a deductive verification approach using weakest precondition reasoning and give some insights on techniques to automate some of the analysis.
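One standard flavour of such reasoning, sketched here only as an illustration (weakest pre-expectations in the style of McIver and Morgan, which may differ from the exact calculus presented in the talk), replaces predicates by expectations:

$$\mathrm{wp}(\{c_1\}\,[p]\,\{c_2\})(f) \;=\; p \cdot \mathrm{wp}(c_1)(f) + (1-p) \cdot \mathrm{wp}(c_2)(f), \qquad \mathrm{wp}(x := e)(f) \;=\; f[e/x].$$

For the program $(x := 0\ [1/2]\ x := 1);\ x := x + 1$ and post-expectation $x$, this yields $\tfrac{1}{2}(0+1) + \tfrac{1}{2}(1+1) = \tfrac{3}{2}$, the expected final value of $x$.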
It is often easy to write down the specification of a system as a relation between inputs and outputs. But implementing the system is a functional problem: to provide functions that produce outputs from inputs. The question we ask is if we can automatically synthesize such a function from the given relation? This question has generated a lot of interest in recent years, especially in the Boolean setting, where despite theoretical hardness results, many techniques and tools have been developed that now scale surprisingly well. In this talk, we shine a light on this problem from a Knowledge Representation perspective. We identify structural properties and develop normal forms for the specification that guarantee provably efficient synthesis. Further, we move towards a characterization of what makes Boolean functional synthesis easy and examine techniques to compile into such forms.
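In its standard formalization (spelled out here for concreteness; the talk may use a slightly different presentation), the problem is: given a relational specification $\varphi(\bar{x}, \bar{y})$ over input variables $\bar{x}$ and output variables $\bar{y}$, synthesize functions $\bar{F}(\bar{x})$ such that

$$\forall \bar{x}.\ \big(\exists \bar{y}.\ \varphi(\bar{x}, \bar{y})\big) \rightarrow \varphi\big(\bar{x}, \bar{F}(\bar{x})\big).$$

For instance, the specification $\varphi(x_1, x_2, y) = \big(y \leftrightarrow (x_1 \oplus x_2)\big)$ admits the Skolem function $F(x_1, x_2) = x_1 \oplus x_2$.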
We refine the complexity landscape for enumeration problems by introducing very low classes defined by using Boolean circuits as enumerators. We locate well-known enumeration problems, e.g., from graph theory, Gray code enumeration, and propositional satisfiability in our classes. In this way we obtain a framework to distinguish between the complexity of different problems known to be computable with polynomial delay, for which a formal way of comparison was not possible to this day. The approach offers an alternative measure of efficient enumeration to the one defined using constant delay in the context of enumeration for query answering. This is joint work with Nadia Creignou and Heribert Vollmer.
Security properties such as non-interference cannot be expressed as properties of individual traces of a system. The notion of hyperproperties has therefore been introduced to overcome this limitation: a hyperproperty can express properties about pairs or, more generally, sets of traces. An extension of LTL, called HyperLTL, has been proposed to define hyperproperties, and some positive results, such as decidability of the model-checking problem, have been proven.
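As a small illustration (a noninterference-style formula in the spirit of observational determinism, not necessarily the exact property used in the talk), with atomic propositions indexed by trace variables, the HyperLTL formula

$$\forall \pi.\ \forall \pi'.\ \Box\Big(\bigwedge_{i \in I_{\mathrm{low}}} i_\pi \leftrightarrow i_{\pi'}\Big) \rightarrow \Box\Big(\bigwedge_{o \in O_{\mathrm{low}}} o_\pi \leftrightarrow o_{\pi'}\Big)$$

states that any two traces that always agree on the low-security inputs must also always agree on the low-security outputs; such a property relates pairs of traces and is therefore not a trace property.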
Motivated by verifying quantitative security properties, we propose extensions of HyperLTL following the proposal of similar extensions of LTL by Almagor et al., namely LTL[F] for quantitative operators and LTL[D] for discounting temporal operators. Such extensions aim to quantify how much a formula is satisfied by a system. We will present some algorithms for model-checking those logics.
Weighted timed games are two-player zero-sum games played in a timed automaton equipped with integer weights. We consider optimal reachability objectives, in which one of the players, whom we call Min, wants to reach a target location while minimising the cumulated weight. While knowing whether Min has a (deterministic) strategy to guarantee a value lower than a given threshold is known to be undecidable (with two or more clocks), several conditions have been given to recover decidability, one of them being divergence, another being one-clock WTGs with only non-negative weights. We first extend this list by considering arbitrary weights, showing that the value function can be computed in exponential time (if weights are encoded in unary).
Next, in such weighted timed games (as in untimed weighted games in the presence of negative weights), Min may need finite memory to play (close to) optimally. It is thus tempting to try to emulate this finite memory with other strategic capabilities. In particular, we allow the players to use stochastic decisions, both in the choice of transitions and of timing delays. We give, for the first time, a definition of the expected value in weighted timed games, overcoming several theoretical challenges. We then show that, in divergent weighted timed games, the stochastic value is indeed equal to the classical (deterministic) value, thus proving that Min can guarantee the same value while only using stochastic choices and no memory.
Finally, even with stochastic strategies, almost-optimal strategies are not implementable, since they use infinite precision of clocks in the choice of the delay or in the knowledge about the configurations. Robustness is a known way to encode the imprecision of delays in strategies: robustness allows the Max player (the opponent of Min) to slightly modify the delay chosen by Min. In the literature, two robust semantics exist: the conservative semantics checks a guard after the perturbation, and the excessive semantics checks a guard before. As for deterministic strategies, knowing whether Min has a (robust) strategy to guarantee a value lower than a given threshold is undecidable. By adapting the value iteration process introduced by Alur, we compute the (robust) value in acyclic WTGs under the conservative and excessive semantics.
This is joint work with Benjamin Monmege and Pierre-Alain Reynier.
We study the task, for a given language L, of enumerating the (generally infinite) sequence of its words, without repetitions, while bounding the delay between two consecutive words. To allow for delay bounds that do not depend on the current word length, we assume a model where we produce each word by editing the preceding word with a small edit script, rather than writing out the word from scratch. In particular, this witnesses that the language is orderable, i.e., we can write its words as an infinite sequence such that the Levenshtein edit distance between any two consecutive words is bounded by a value that depends only on the language. For instance, (a+b)* is orderable (with a variant of the Gray code), but a*+b* is not.
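To make the Gray code remark concrete, here is a minimal Python sketch (an illustration only, not the construction from the talk): it lists the words of (a+b)* by increasing length, using the reflected Gray code within each length, so that consecutive words are at Levenshtein distance at most 2.

```python
# Illustration only: enumerate (a+b)* so that consecutive words are at
# Levenshtein distance at most 2.  Within a given length, the reflected Gray
# code changes exactly one position between consecutive words; between lengths,
# the last word b a^(n-1) and the first word a^(n+1) differ by one substitution
# plus one insertion.
from itertools import islice

def gray_words(n, alphabet=("a", "b")):
    """Length-n words over a binary alphabet, in reflected Gray code order."""
    if n == 0:
        yield ""
        return
    prev = list(gray_words(n - 1, alphabet))
    for w in prev:
        yield alphabet[0] + w
    for w in reversed(prev):
        yield alphabet[1] + w

def all_words():
    """All words of (a+b)*, by increasing length, Gray code within each length."""
    n = 0
    while True:
        yield from gray_words(n)
        n += 1

def levenshtein(u, v):
    """Textbook dynamic-programming edit distance, used here only to check the claim."""
    prev = list(range(len(v) + 1))
    for i, cu in enumerate(u, 1):
        cur = [i]
        for j, cv in enumerate(v, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cu != cv)))
        prev = cur
    return prev[-1]

if __name__ == "__main__":
    words = list(islice(all_words(), 20))
    gaps = [levenshtein(u, v) for u, v in zip(words, words[1:])]
    assert max(gaps) <= 2        # the delay, measured in edit distance, is bounded
    print(words)
    print(gaps)
```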
We characterize which regular languages are enumerable in this sense, and show that this can be decided in PTIME in an input deterministic finite automaton (DFA) for the language. In fact, we show that, given a DFA A, we can compute in PTIME automata A1,…,At such that L(A) is partitioned as L(A1)⊔…⊔L(At) and every L(Ai) is orderable in this sense. Further, we show that the value of t obtained is optimal, i.e., we cannot partition L(A) into fewer than t orderable languages.
In the case where L(A) is orderable (i.e., t=1), we show that the ordering can be produced by a bounded-delay algorithm: specifically, the algorithm runs in a suitable pointer machine model, and produces a sequence of bounded-length edit scripts to visit the words of L(A) without repetitions, with bounded delay -- exponential in |A| -- between each script. In fact, we show that we can achieve this while only allowing the edit operations push and pop at the beginning and end of the word, which implies that the word can in fact be maintained in a double-ended queue.
This talk presents a recently introduced framework for active learning (of Mealy machines and DFAs) in practice, when faced with noise and modifications of the system. Building on this, it outlines a research project that would use a generalization of this framework to create bridges between learning and verification, from theory to the design of industry-ready black-box tools that can be used to verify systems even in the absence of a suitable model of their behaviour.
Inductive Logic Programming (ILP) is the field that studies approaches to the machine learning of logic programs from examples and background knowledge. Meta-Interpretive Learning (MIL), the subject of this talk, is a new form of ILP capable of learning arbitrary logic programs with recursion and with invented predicates from very few examples and without the limitations of earlier approaches. What distinguishes MIL from other ILP approaches is its use of second-order background knowledge and SLD-Resolution as a proof procedure. In this talk I will go briefly over the short history of MIL and describe the different stages of its evolution in both theory and implementation. I will describe the emerging theoretical understanding of MIL as Second-Order SLD-Resolution and the theoretical and practical ramifications of this new understanding. I will sketch out a proof of the inductive soundness and completeness, and the efficiency, of Second-Order SLD-Resolution in MIL. I will discuss existing implementations of MIL and their ongoing application to practical problems such as generalised planning for robotics, and machine vision currently underway at the University of Surrey. Finally, I will examine potential future applications to other classical AI tasks such as formal methods, verification and model checking. The talk is designed to be accessible to computer scientists with a background in logic and general knowledge of logic programming and machine learning.
Inductive logic programming (ILP) is a form of program synthesis. The goal is to learn logic programs that generalise examples. ILP has attractive features such as strong data efficiency and high expressivity. The challenge lies in efficiently searching large hypothesis spaces. We present approaches to improve the learning performance of ILP systems, building on recent progress in constraint programming. We demonstrate the scalability of our approaches to complex problems involving noise or infinite numerical domains.
In this talk we show how to efficiently utilize structure for combinatorial problem solving. We mainly focus on the prominent measure treewidth and the answer set programming framework, thereby establishing tight runtime upper and lower bounds for treewidth, under reasonable assumptions in complexity theory. These bounds will be presented in the context of decomposition-guided reductions, which are polynomial-time reductions that are guided along a specific structural representation, i.e., a tree decomposition, of the instance. In the course of this talk we also show empirical results in order to underline the role of structure in solving computationally hard reasoning modes like counting.
Vector Addition Systems with States (VASS) are a long-studied model that are equivalent to Petri nets. The coverability problem asks whether there exists a run from a given initial configuration to a configuration that is at least a given target configuration. Coverability is in EXPSPACE (Rackoff '78) and is EXPSPACE-hard already under unary encodings (Lipton '76). Rackoff's upper bound is derived by considering the necessary length of runs that witness coverability. In this presentation, I will present an improved upper bound on the lengths of such runs. The run length bound can be used to obtain two algorithms: an optimal exponential space algorithm and a conditionally optimal double-exponential time algorithm. I aim to show the double-exponential time lower bound that is conditioned upon the Exponential Time Hypothesis (ETH).
In this talk, I propose an automated procedure for proving polyhedral abstractions for Petri nets. Polyhedral abstraction is a new type of state-space equivalence based on the use of linear integer constraints. The approach relies on an encoding into a set of SMT formulas whose satisfaction implies that the equivalence holds. The difficulty, in this context, arises from the fact that we need to handle infinite-state systems. For completeness, we exploit a connection with a class of Petri nets that have Presburger-definable reachability sets. This talk will also be an opportunity to present new results on the use of polyhedral abstraction for verifying reachability properties. In particular, I will introduce a new variable elimination procedure that can project a property, about an initial Petri net, into an equivalent formula that only refers to the reduced version of this net.
I shall introduce this prominent and exciting research direction in machine learning, and illustrate it by means of a recent joint work with Dmitry Chistikov and Matthias Englert. Familiarity with machine learning or neural networks will not be necessary for following my talk.
The Rabin tree theorem yields an algorithm to solve the satisfiability problem for monadic second-order logic over infinite trees. Recently, we solved the probabilistic variant of this problem. Namely, we showed how to compute the probability that a randomly chosen tree satisfies a given formula. We additionally showed that this probability is an algebraic number. This closes a line of research where similar results were shown for formalisms weaker than the full monadic second-order logic. The talk will be based on a joint paper with Damian Niwiński and Michał Skrzypczak (LICS 2023).
An important question in automata theory is to precisely understand natural classes of languages defined by restricting the common definitions of regular languages (such as regular expressions, automata, monadic second-order logic or finite monoids). Of course, « understanding a class » is not a precise objective. A typical approach is to show that the investigated class has decidable membership: given a regular language as input, decide if it belongs to the class. Rather than the procedure itself, the motivation is that obtaining such an algorithm requires a deep understanding of the class. In this talk, we shall also consider the more general separation problem: given two regular languages L1 and L2 as input, decide whether there exists a third language that belongs to the investigated class, includes L1 and is disjoint from L2. This problem has been getting a lot of attention recently. The motivation is twofold. First, it has been shown that separation is a key ingredient for solving some of the most difficult membership questions. Second, separation is actually more rewarding than membership (albeit more difficult) with respect to our main goal: « understanding classes ». Roughly, a membership algorithm for a class C can only detect the languages in C, while a separation algorithm provides insight on how arbitrary regular languages interact with C.
We are interested in the class of the group languages. Among the prominent classes of regular languages, this one is rather unique as it only admits « machine based » definitions. The first one is algebraic: the group languages are those recognized by a morphism into a finite group. The second one is based on automata: the group languages are those recognized by an automaton in which each letter induces a permutation on the set of states. On the other hand, no definition based on regular expressions or logic is known for the group languages. In the talk, we are mainly interested in the separation problem for this class. We shall explain why this question comes up naturally when looking at other prominent classes of regular languages. Moreover, we shall present a simple algorithm for this problem and explain why it is quite different from those existing for other classes.
An infinite set is orbit-finite if, up to permutations of the underlying structure of atoms, it has only finitely many elements. We study a generalisation of linear programming where constraints are expressed by an orbit-finite system of linear inequalities over an orbit-finite set of unknowns. As our principal contribution we provide a decision procedure for checking if such a system has a real solution, and for computing the minimal/maximal value of a linear objective function over the solution set. We also show undecidability of these problems in case when only integer solutions are considered. Therefore orbit-finite linear programming is decidable, while orbit-finite integer linear programming is not.
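For a flavour of the setting, here is a toy example (ours, not taken from the talk): take the unknowns to be the orbit-finite set $\{x_a : a \in \mathbb{A}\}$ indexed by the atoms, and the single-orbit system of constraints $x_a \ge 0$ and $x_a + x_b \ge 1$ for all pairs of distinct atoms $a \ne b$. The system is infinite, but finitely presented up to atom permutations, and it is feasible over the reals, e.g. by setting $x_a = 1/2$ for every atom $a$.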
Logics for Hyperproperties have received increasing attention in the last decade due to their importance e.g. for security analyses. Past approaches have focussed on synchronous properties, i.e. techniques in which different paths are explored lockstepwise. More recently automata models and logics supporting also asynchronous hyperproperties have been studied. In this talk I will survey our recent research on logics for asynchronous hyperproperties. This is joint work with Jens Gutsfeld and Christoph Ohrem.
In this talk, we will discuss the problem of automatically constructing computer programs from input-output examples, especially when the target language is domain-specific and defined using a context-free grammar. I will introduce a theoretical framework called distribution-based search, discuss its challenges, and present several search strategies based on learning the weights of a probabilistic context-free grammar (PCFG) and then using this PCFG to enumerate the most promising candidate programs efficiently. The presentation will be based on the following paper published at AAAI'2022: https://arxiv.org/abs/2110.12485. Joint work with Nathanaël Fijalkow, Théo Matricon, Kevin Ellis, Pierre Ohlmann, Akarsh Potta
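The following self-contained Python sketch illustrates the general idea of enumerating the programs of a probabilistic grammar from most to least likely (a generic best-first enumeration over a toy hard-coded PCFG, not the specific algorithms of the paper): partial programs are expanded leftmost-first, and the heap priority is an optimistic bound on the probability of any completion, so complete programs are emitted in non-increasing order of probability.

```python
# Illustration only: best-first enumeration of the most likely programs of a
# toy PCFG, in non-increasing order of probability.
import heapq
import itertools

# Toy PCFG with a single nonterminal "E": each rule is (probability, right-hand
# side), the right-hand side being a sequence of terminals and nonterminals.
GRAMMAR = {
    "E": [
        (0.40, ("x",)),
        (0.20, ("1",)),
        (0.25, ("(", "E", "+", "E", ")")),
        (0.15, ("(", "E", "*", "E", ")")),
    ],
}

def prod(xs):
    r = 1.0
    for x in xs:
        r *= x
    return r

def max_completion_prob(grammar, iterations=100):
    """Fixpoint iteration: best probability of any complete derivation from each nonterminal."""
    best = {nt: 0.0 for nt in grammar}
    for _ in range(iterations):
        for nt, rules in grammar.items():
            best[nt] = max(p * prod(best.get(s, 1.0) for s in rhs) for p, rhs in rules)
    return best

def enumerate_programs(grammar, start="E"):
    best = max_completion_prob(grammar)
    counter = itertools.count()                 # tie-breaker so the heap never compares symbol tuples
    heap = [(-best[start], next(counter), 1.0, (start,))]
    while heap:
        _, _, p, sent = heapq.heappop(heap)
        holes = [i for i, s in enumerate(sent) if s in grammar]
        if not holes:
            yield p, "".join(sent)              # complete program
            continue
        i = holes[0]                            # expand the leftmost nonterminal
        for rp, rhs in grammar[sent[i]]:
            new_sent = sent[:i] + rhs + sent[i + 1:]
            new_p = p * rp
            bound = new_p * prod(best[s] for s in new_sent if s in grammar)
            heapq.heappush(heap, (-bound, next(counter), new_p, new_sent))

if __name__ == "__main__":
    for prob, prog in itertools.islice(enumerate_programs(GRAMMAR), 8):
        print(f"{prob:.4f}  {prog}")
```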
Motivated by the increasing appeal of robots in information-gathering missions, we study multi-agent path planning problems in which the agents must remain interconnected. We model an area by a topological graph specifying the movement and the connectivity constraints of the agents. In the first part of the talk, we study the theoretical complexity of the reachability and the coverage problems of a fleet of connected agents. We also introduce a new class called sight-moveable graphs which admit efficient algorithms. In the second part, we discuss several algorithms to solve connected multi-agent path finding.
This talk presents reactive synthesis problems over infinite data domains. The goal of reactive synthesis is to automatically generate, from a specification of valid executions, a reactive system interacting with its environment, which enforces only valid executions, no matter what its environment does. There is an extensive literature on reactive synthesis from regular specifications, which semantically are regular languages of infinite words over a finite alphabet. This talk intends to give an overview of several extensions of synthesis to the more general setting of infinite alphabets, allowing to model data (e.g. integers), an aspect which is usually ignored by classical reactive synthesis methods. The presented work is based on a collaboration with Léo Exibard, Ayrat Khalimov and Pierre-Alain Reynier.
Data examples can be a useful tool when a formal specification must be synthesized or communicated. They sometimes provide a more convenient medium for communication than the formal specification itself. We all know this from daily life: think about teaching someone a card game. Simply reading out loud the rules of the game is not an effective way of doing this. Instead, what works best in practice is to give examples of valid and invalid game plays. In data management, data examples have been proposed and used in the context of the interactive design of schema mappings, as well as query synthesis, and query refinement and debugging. In knowledge representation, data examples arise, for instance, in the context of algorithms for learning concept expressions and ontologies. The talk will be about data examples for conjunctive queries, and we will cover some fundamental questions, such as: (a) when can a given query be uniquely characterized by a small number of data examples (where a data example is a database instance satisfying given integrity constraints), and (b) how to construct, from a given set of data examples, a "good" fitting query (for various notions of goodness). To answer these questions, we draw on, and refine, techniques from the literature on combinatorial graph theory and constraint satisfaction problems.
Automatic structures are infinite structures that are finitely represented by synchronized finite-state automata and enjoy pleasant algorithmic properties such as decidability of the first-order theory and effective closure under first-order interpretations. We investigate Ramsey quantifiers over automatic structures, which express the existence of an infinite clique. Interesting connections between Ramsey quantifiers and two problems in verification are firstly observed: (1) reachability with Büchi and generalized Büchi conditions in regular model checking can be seen as Ramsey quantification over transitive automatic graphs, (2) checking monadic decomposability (recognizability) of automatic relations can be viewed as Ramsey quantification over co-transitive automatic graphs. We provide a comprehensive complexity landscape of Ramsey quantifiers in these three cases (general, transitive, co-transitive). In turn, this yields a wealth of new results with precise complexity, e.g., verification of subtree/flat prefix rewriting, as well as monadic decomposability over tree-automatic relations.
This is joint work with Pascal Bergsträßer, Anthony W. Lin, and Georg Zetzsche.
We show that the problem of whether a query is equivalent to a query of tree-width k is decidable, for the class of Unions of Conjunctive Regular Path Queries with two-way navigation (UC2RPQs). A previous result by Barceló, Romero, and Vardi has shown decidability for the case k=1, and here we show that decidability in fact holds for any arbitrary k>1. The algorithm is in 2ExpSpace, but we show that the complexity drops to the second level of the polynomial hierarchy for a restricted but practically relevant case of queries obtained by only allowing simple regular expressions. We also investigate the related problem of approximating a UC2RPQ by queries of small tree-width. We exhibit an algorithm which, for any fixed number k, builds the maximal under-approximation of tree-width k of a UC2RPQ. The maximal under-approximation of tree-width k of a query q is a query q' of tree-width k which is contained in q in a maximal and unique way, that is, such that for every query q'' of tree-width k, if q'' is contained in q then q'' is also contained in q'. Joint work with Diego Figueira.
Distributed algorithms are central to many domains such as scientific computing, telecommunications and the blockchain. Even when they aim at performing simple tasks, their behaviour is hard to analyze, due to the presence of faults (crashes, message losses, etc.) and to the asynchrony between the processes. Parameterized verification techniques have been developed to prove correctness of distributed algorithms independently of the actual setup, i.e. the number of processes and the potential failures. In this talk, we present a CEGAR approach to checking safety and liveness properties for fault-tolerant distributed algorithms that use threshold conditions, typically on the number of received messages of a given type.
MSO transductions are binary relations between classes of structures which are defined using monadic second-order logic. For example, the binary relation {(G,T) | T is some spanning tree of G } is an MSO transduction from the class of graphs to the class of trees. MSO transductions form a category, since they are closed under composition. In my talk, I will discuss this category, and show its usefulness by explaining how many notions, such as tree decompositions or recognizability, can be defined by only using MSO transductions. Part of this is rooted in classical results of Courcelle and Engelfriet, but there are new results as well.
In this talk, I will give a broad overview of the field of infinite duration games: why do we care about them, and what are the main open questions. I will then discuss a new approach to making progress on these questions by reducing them to problems about (directed) graphs.
We consider two-player stochastic games played on a finite graph for infinitely many rounds. Stochastic games generalize both Markov decision processes (MDP) by adding an adversary player, and two-player deterministic games by adding stochasticity. The outcome of the game is a sequence of distributions over the states of the game graph. We consider synchronizing objectives, which require the probability mass to accumulate in a set of target states, either always, once, infinitely often, or always after some point in the outcome sequence; and the winning modes of sure winning (if the accumulated probability is equal to 1) and almost-sure winning (if the accumulated probability is arbitrarily close to 1).
We present algorithms to compute the set of winning distributions for each of these synchronizing modes, showing that the corresponding decision problem is PSPACE-complete for synchronizing once and infinitely often, and PTIME-complete for synchronizing always and always after some point. These bounds are remarkably in line with the special case of MDPs, while the algorithmic solution and proof technique are considerably more involved, even for deterministic games. This is because those games have a flavour of imperfect information; in particular, they are not determined and randomized strategies need to be considered, even if there are no stochastic transitions in the game graph. Moreover, in combination with stochasticity in the game graph, finite-memory strategies are not sufficient in general (for synchronizing infinitely often).
Directed model checking is a bug-finding technique that emerged in the late 1990s, primarily applied to finite-state systems and to infinite-state systems with finite quotient graphs such as timed automata. Recent progress in the areas of optimisation modulo theories and arithmetic abstractions of infinite-state systems makes it possible to apply this technique to efficiently (semi-)decide reachability in inherently infinite-state systems that may even have undecidable reachability problems. In this talk, I will give an introduction to the ideas underlying directed model checking and demonstrate how it can be used to semi-decide reachability problems in large-scale Petri nets. This talk is based on joint work with M. Blondin and Ph. Offtermatt (Sherbrooke, CA).
Existential rules are a well studied ontology-mediated query language for which the chase represents a generic computational approach for query answering. It is straightforward that existential rule queries exhibiting chase termination are decidable and can only recognize properties that are preserved under homomorphisms. In this paper, we show the converse: every decidable query that is closed under homomorphism can be expressed by an existential rule set for which the standard chase universally terminates. Membership in this fragment is not decidable, but we show via a diagonalisation argument that this is unavoidable.
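As a small illustration of the notions involved (our example, not taken from the paper): the existential rule $\mathrm{Person}(x) \rightarrow \exists y.\ \mathrm{parentOf}(y, x) \wedge \mathrm{Person}(y)$, chased on the instance $\{\mathrm{Person}(\mathrm{alice})\}$, keeps introducing fresh nulls $n_1, n_2, \dots$ and never terminates, whereas a rule without existential quantification, such as $\mathrm{parentOf}(x, y) \wedge \mathrm{parentOf}(y, z) \rightarrow \mathrm{grandparentOf}(x, z)$, always yields a terminating chase. In both cases, the queries defined by such rule sets are preserved under homomorphisms, which is the property characterized by the result above.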
Traditionally, most verification efforts have focused on the satisfaction of trace properties, such as that an assertion is satisfied at a particular program location or that the computation terminates eventually. Many policies from information-flow security, like observational determinism or noninterference, and many other system properties including promptness and knowledge can, however, not be expressed as trace properties, because these properties are hyperproperties, i.e., they relate multiple execution traces. In this talk, I will give an overview on recent efforts to develop specification logics and model checking algorithms for hyperproperties. The two principal ideas are the addition of variables for traces and paths in temporal logics, like LTL and CTL*, and the introduction of the equal-level predicate in first-order and second-order logics, like monadic first-order logic of order and MSO. Both extensions have a profound impact on the expressiveness of the logics, resulting in a hierarchy of hyperlogics that differs significantly from the classical hierarchy. Model checking remains decidable for a large part of the new hierarchy. Satisfiability is in general undecidable for most hyperlogics, but there are useful decidable fragments. I will report on first successes in translating these encouraging theoretical results into practical verification tools.
In this talk, we present logical formalisms in which reasoning about concrete domains is embedded in formulae at the atomic level. These mainly include temporal logics with concrete domains and description logics with concrete domains.
We present a categorical approach to learning automata over words, in the sense of the L* algorithm of Angluin. This yields a new generic L*-like algorithm which can be instantiated for learning deterministic automata, automata weighted over fields, as well as subsequential transducers. The generic nature of our algorithm is obtained by adopting an approach in which automata are simply functors from a particular category representing words to a computation category. We establish that the sufficient properties for yielding the existence of minimal automata (that were disclosed in a previous paper), in combination with some additional hypotheses relative to termination, ensure the correctness of our generic algorithm.
I will discuss vector spaces spanned by orbit-finite sets. These spaces are infinite-dimensional, but their sets of dimensions are so highly symmetric that the spaces have many properties enjoyed by finitely-dimensional spaces.
Given two finite automata A1, A2, recognising languages L1, L2, respectively, the state complexity of union (or intersection, or complement, etc.) is how many states (in terms of the number of states in A1, A2) may be needed in the worst case for an automaton that recognises L1 union L2 (or L1 intersect L2, or the complement of L1, etc.). The state complexity of complementing unambiguous finite automata has long been open, and as late as 2015 Colcombet asked whether unambiguous automata can be complemented with a polynomial blowup.
I will report on progress on this question in recent years, both in terms of lower and upper bounds. For the currently best lower bound, techniques from communication complexity play an essential role.
Regular languages can be actively learned with membership and equivalence queries in polynomial time. The learning algorithm, called the L* algorithm, iteratively constructs the right congruence relation of a given regular language L, and returns the minimal DFA recognizing L. The L* algorithm has been adapted to various types of automata: tree automata, weighted automata, nominal automata. However, an extension to infinite-word automata has been elusive.
In reactive synthesis, the goal is to automatically generate an implementation from a specification of the reactive and non-terminating input/output behaviours of a system. Specifications are usually modelled as logical formulas or automata over infinite sequences of signals (omega-words), while implementations are represented as transducers. In the classical setting, the set of signals is assumed to be finite.
A sorted list of query answers may be much larger than the size of the input database. If its use-case does not entail going over this list one-by-one, but rather it is needed in order to compute the median, a boxplot, or another task that requires jumping arbitrarily to answers by their indices, computing this entire list is unnecessary and inefficient. In this talk, we inspect the question of when a sorted array of query answers can be efficiently simulated. We call this task ranked direct access and focus on near-optimal time guarantees. We ask in which cases ranked direct access can be achieved with only logarithmic factors as overhead on top of the linear time required to determine whether there is an answer, and the constant time required per accessed answer. Thus, we ask which CQs and orders can be answered with quasilinear preprocessing and polylogarithmic access time. We show algorithms for lexicographic and sum-of-weight orders, and prove conditional lower bounds implying that (under some complexity assumptions) our algorithms capture all tractable cases for self-join-free CQs.
Learning techniques for deterministic finite automata (DFA) have been developed since the 1970s. The two main settings are the construction of DFA from examples (finite sets of words that should be accepted or rejected by the DFA), and from queries to an oracle. These problems are already well understood for DFA, and various learning algorithms for these two settings exist. Deterministic automata on infinite words define languages of infinite words, and are syntactically very similar to DFA. However, certain key properties of DFA that are used in learning algorithms do not hold for automata on infinite words. Therefore, only few results for learning automata over infinite words have been obtained up to now. In this talk, I present an algorithm for the construction of deterministic omega-automata from examples that is obtained by adapting an algorithm called RPNI from the setting of finite words.
We introduce Concurrent NetKAT (CNetKAT), an extension of the network programming language NetKAT with multiple packets and with operators to specify and reason about concurrency and state in a network. We provide a model of the language based on partially ordered multisets, well-established mathematical structures in the denotational semantics of concurrent languages. We prove that CNetKAT is a sound and complete axiomatization of this model, and we illustrate the use of CNetKAT through various examples. More generally, CNetKAT is an algebraic framework to reason about programs with both local and global state. In our model these are, respectively, the packets and the global variable store, but the scope of applications is much more general, including reasoning about hardware pipelines inside an SDN switch.
We study context-bounded verification of liveness properties of multi-threaded, shared-memory programs, where each thread can spawn additional threads. Our main result shows that context-bounded fair termination is decidable for the model; context-bounded implies that each spawned thread can be context switched a fixed constant number of times. Our proof is technical, since fair termination requires reasoning about the composition of unboundedly many threads each with unboundedly large stacks. In fact, techniques for related problems, which depend crucially on replacing the pushdown threads with finite-state threads, are not applicable. Instead, we introduce an extension of vector addition systems with states (VASS), called VASS with balloons (VASSB), as an intermediate model; it is an infinite-state model of independent interest. A VASSB allows tokens that are themselves markings (balloons). We show that context bounded fair termination reduces to fair termination for VASSB. We show the latter problem is decidable by showing a series of reductions: from fair termination to configuration reachability for VASSB and thence to the reachability problem for VASS. For a lower bound, fair termination is known to be non-elementary already in the special case where threads run to completion (no context switches).
I will present FO+, a restriction of first-order logic on words, where letter predicates are required to appear positively. The words considered here are over a powerset alphabet: predicates a(x) and b(x) can be true simultaneously. We will ask a syntax versus semantics question: FO+-definable languages are monotone in the letters (with respect to inclusion), but can every FO-definable monotone language be expressed in FO+? On general structures, Lyndon's theorem gives a positive answer to this question, but it is known to fail on finite structures. We will see that it also fails on finite words, by giving a simple counter-example language. This gives a new proof of the failure of Lyndon's theorem on finite structures, which is much more elementary than previous proofs. Finally, we will see that, surprisingly, FO+-definability is undecidable for regular languages.
We present a memory-efficient algorithm for multi-objective model checking problems on Markov decision processes (MDPs) with multiple cost structures. The key problem at hand is to check whether there exists a scheduler for a given MDP such that all objectives over cost vectors are fulfilled. We cover multi-objective reachability and expected cost objectives, and combinations thereof. An empirical evaluation using a prototypical implementation on top of the Storm model checker shows the scalability of our approach both in terms of memory consumption and runtime.
We adapt the L* algorithm to learn bimonoids recognising pomset languages. We then show how to convert between bimonoids and a class of pomset automata that accepts precisely the class of pomset languages recognised by bimonoids.
The complexity of the reachability problem in Vector Addition Systems (VASes) was a long-standing problem for a few decades. Very recently, two proofs of Ackermann-hardness were obtained independently
Using a combination of automata-theoretic and game-semantic techniques, we propose a method for analysing higher-order concurrent programs. Our language of choice is Finitary Idealised Concurrent Algol (FICA) due to its relatively simple fully abstract game model.
We consider the cyclotomic identity testing (CIT) problem: given a polynomial f(x_1,...,x_k),
In LTL synthesis, the task is to construct a reactive system producing an output stream ensuring a given formula of linear temporal logic is satisfied for any input stream. Recent results on translating LTL to automata open avenues to learning-based approaches and heuristics for such problems. In particular, the automata-theoretic approach to LTL synthesis can utilize not only topological information through standard graph algorithms, but can now also profit from semantic information through learning algorithms. As a result, in many cases an optimal solution can be obtained without any computation; in more general cases, the approach might yield more explainable controllers and scale better.
In this talk I will introduce a model of register automata over infinite trees with extrema constraints. Such an automaton can store elements of a linearly ordered domain in its registers, and can compare those values to the suprema and infima of register values in subtrees. We will see that the emptiness problem for these automata is decidable. As an application, I will outline how the satisfiability problem for two-variable logic with arbitrary predicates, two of them interpreted by linear orders, can be decided.
The use of shuffling together decks of cards as a metaphor for interleaving execution of processes is well-established in computer science, and has allowed for the transfer of concepts and tools from pure mathematics to areas such as the analysis of Race Conditions. This talk aims to demonstrate that, when we extend this intuition and related models to the infinitary case, we find further connections with many topics in theoretical computing. These range from logical models to cryptographic platforms and questions of complexity and computability.
The context-free language (CFL) reachability problem on graphs, as well as
We discuss the fine-grained complexity of enumerating the answers to a query over a relational database. With the ideal guarantees, linear time is required before the first answer to read the input and determine its existence, and then we need to print the answers one by one. Consequently, we wish to identify the queries that can be solved with linear preprocessing time and constant or logarithmic delay between answers. A known dichotomy classifies CQs into those that admit such enumeration and those that do not. The computationally expensive component of query answering is joining tables, which can be done efficiently if and only if the join query is acyclic. However, the join query usually does not appear in a vacuum; for example, it may be part of a larger query, or it may be applied to a database with dependencies. We inspect how the complexity changes in these settings and chart the borders of tractability within. In addition, we consider the task of enumerating query answers with a uniformly random order, and we propose to do so using an efficient random-access structure for representing the set of answers. We also prove conditional lower bounds showing that our algorithms capture all tractable queries in some cases. Among our results, we show that a union of tractable conjunctive queries may be intractable w.r.t. random access; on the other hand, a union of intractable conjunctive queries may be tractable w.r.t. enumeration.
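For example (standard examples of the dichotomy, not specific to the new results of the talk): the full join $Q(x, y, z) \leftarrow R(x, y), S(y, z)$ is acyclic (indeed free-connex) and its answers can be enumerated with linear preprocessing and constant delay, whereas for the triangle join $Q(x, y, z) \leftarrow R(x, y), S(y, z), T(z, x)$ no such algorithm is expected to exist under standard fine-grained complexity hypotheses.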
It is shown that if a pushdown system is bisimulation equivalent to a finite system, there is already such a finite system whose size is elementary in the description size of the pushdown system. As a consequence, it is elementarily decidable if a pushdown system is bisimulation-finite. This is joint work with Pawel Parys.
We consider reachability in dynamical systems with discrete linear updates, but with fixed decimal precision, i.e., such that values of the system are rounded at each step. Given a matrix M in Q^{d*d}, an initial vector x in Q^d, a granularity g in Q_+ and a rounding operation [·] projecting a vector of Q^d onto another vector whose every entry is a multiple of g, we are interested in the behaviour of the orbit O=
Answering queries using views is a classical and well-studied topic in database theory. Some key associated problems are (monotone) determinacy – can the query Q be expressed as a (monotone) function of the set of views? – and rewritability – can the query Q be rewritten in a certain language over the schema of the views?
In this talk, I will present a construction that takes as input a Muller automaton and transforms it into a parity automaton in an optimal way. More precisely, the resulting parity automaton has minimal size and uses a minimal number of priorities among those automata that admit a locally bijective morphism to the original Muller automaton. This transformation and the optimality result can also be applied to games and other types of transition systems.
In this talk, I will introduce a few computational models, known as (algebraic) circuits and formulas, and as branching programs (ABPs). Restricted classes of computation, such as monotone circuits, or non-commutative ABPs, have been investigated through the past decades. In the latter setting, a family of matrices (Nisan matrices) naturally appears, their ranks are related to the optimal size of a circuit computing the target polynomial. Recently, we investigated weak monotonicity (a relaxation of the monotonicity constraint), and found some explicit polynomials for which the weak-non-negative ranks don’t characterize the minimal size of an ABP computing them. The sum of ranks still gives a lower bound on minimal-size computation, and this can be used, for instance, to obtain a quadratic lower bound when computing the elementary symmetric polynomial. Joint work with Hervé Fournier, Guillaume Malod, and Sébastien Tavenas.
In January 2019 we started the DeepSynth project to understand the use of machine learning for program synthesis. Two years later, I will discuss our current understanding, the ideas we had, and some perspectives.
We consider the problem of solving random parity games. We prove that parity games exhibit a phase transition threshold, so that when the degree of the graph that defines the game is large enough, there exists a polynomial time algorithm that solves the game with high probability as the number of nodes goes to infinity. We further propose the SWCP (Self-Winning Cycles Propagation) algorithm and show that, when the degree is large enough, SWCP solves the game with high probability. Furthermore, the complexity of SWCP is polynomial. The design of SWCP is based on the threshold for the appearance of particular types of cycles in the players' respective subgraphs. We further show that non-sparse games can be solved in polynomial time with high probability. This is a joint work with Mickael Touati. More information at https://arxiv.org/abs/2007.08387
TLA+ is a language for formal specification of all kinds of computer systems. System designers use this language to specify concurrent, distributed, and fault-tolerant protocols, which are traditionally presented in pseudo-code. At Informal Systems, we are using TLA+ to specify and reason about the protocols that are implemented in the Tendermint blockchains and Cosmos ecosystem.
In this presentation I will talk about various counting problems that naturally arise in the context of query evaluation over incomplete databases. Incomplete databases are relational databases that can contain unknown values in the form of labeled nulls. We will assume that the domains of these unknown values are finite and, for a Boolean query $q$, we will consider the following two problems: given as input an incomplete database $D$, (a) return the number of completions of $D$ that satisfy $q$; or (b) return the number of valuations of the nulls of $D$ yielding a completion that satisfies $q$.
We consider the online computation of a strategy that aims at optimizing the expected average reward in a Markov decision process. The strategy is computed with a receding horizon and using Monte Carlo tree search (MCTS). We augment the MCTS algorithm with the notion of symbolic advice, and show that its classical theoretical guarantees are maintained. Symbolic advice are used to bias the selection and simulation strategies of MCTS. We describe how to use QBF and SAT solvers to implement symbolic advice in an efficient way. We illustrate our new algorithm using the popular game Pac-Man and show that the performances of our algorithm exceed those of plain MCTS as well as the performances of human players.
We formalize the problem of maximizing the mean-payoff value with high probability while satisfying a parity objective in a Markov decision process with unknown probabilistic transition function and unknown reward function. Assuming the support of the unknown transition function and a lower bound on the minimal transition probability are known in advance, we show that in single end components two combinations of guarantees on the parity and mean-payoff objectives can be achieved depending on how much memory one is willing to use.
We consider two natural subclasses of deterministic top-down tree-to-tree transducers, namely, linear and uniform-copying transducers. For both classes we show that it is decidable whether the translation of a transducer with look-ahead can be realized by a transducer from the same class without look-ahead.
Getting the deterministic finite-state automaton with the least possible number of states is an essential question in many applications such as text processing, image analysis and program verification and synthesis.
Ontologies have been used to describe knowledge in various domains, in particular, in those related to life sciences. Semi-automating the process of building an ontology has attracted researchers from various communities into a field called Ontology Learning. The process of building an ontology can be divided into two main tasks: finding the relevant vocabulary and the appropriate ontology language, and discovering how the vocabulary should be related using the logical constructs available in the chosen ontology language. In this presentation, I will provide a brief overview of five approaches from the literature which have been proposed to semi-automate the process of building an ontology formulated in description logic (DL), focusing on the second task. I will then present some results on the complexity of learning lightweight DL ontologies in the exact and probably approximately correct learning models from computational learning theory.
Proof theory is the study of proofs as mathematical objects in their own right. Infinite proofs (e.g. infinite descent proofs) are pervasive in mathematics. A formal way of characterizing such proofs is to look at fixed point logics (e.g. the mu calculus) through proof-theoretic lenses. Baelde et al. have proposed an infinitary sequent calculus (i.e. infinitely deep, finitely wide proofs) for linear logic with fixed points (muMALL). In this talk, following a brief history of proof theory and infinite proofs, I will introduce muMALL. However, the sequent calculus of muMALL turns out to be "too sequential". In order to achieve more liberal cut elimination, we have devised infinets, which are proof nets (à la Curien) for the multiplicative fragment (muMLL). The latter part of the talk will focus on infinets and ongoing work on these nets.
Query evaluation is the problem of checking if some input data
The class of automaton (semi)groups, that is, (semi)groups generated by functions defined using transducers, has been studied since the late 70s, as many very interesting (semi)groups arise from it. From a computer scientist's point of view it is also very nice because the underlying automaton structure allows one to apply known tools; for instance, one can decide whether an element represents the identity element using automaton minimization.
We study the expressive power of polynomial recursive sequences, a nonlinear extension of the well-known class of linear recursive sequences. These sequences arise naturally in the study of nonlinear extensions of weighted automata, where (non)expressiveness results translate to class separations. A typical example of a polynomial recursive sequence is b_n=n!. Our main result is that the sequence u_n=n^n is not polynomial recursive.
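Spelling out the example mentioned above: $b_n = n!$ is polynomial recursive because it can be defined jointly with an auxiliary sequence $c_n$ by the system

$$b_0 = 1,\quad c_0 = 1, \qquad b_{n+1} = b_n \cdot c_n,\quad c_{n+1} = c_n + 1,$$

so that $c_n = n + 1$ and $b_n = n!$; the right-hand sides are polynomials in the previous values, which is exactly what linear recurrences forbid. The main result states that no such system of polynomial recurrences defines $u_n = n^n$.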
Bertrand et al. (2017) introduced a model of parameterised systems, where each agent is represented by a finite state system, and studied the following control problem: for any number of agents, does there exist a controller able to bring all agents to a target state? They showed that the problem is decidable and EXPTIME-complete in the adversarial setting, and posed as an open problem the stochastic setting, where the agent is represented by a Markov decision process. In this paper, we show that the stochastic control problem is decidable. Our solution makes significant use of well quasi orders, of the max-flow min-cut theorem, and of the theory of regular cost functions.
The reachability problem is a central decision problem for formal verification based on vector addition systems with states (VASS), which are equivalent to Petri nets and form one of the most studied and applied models of concurrency. Reachability for VASS is also inter-reducible with a plethora of problems from a number of areas of computer science. In spite of recent progress, the complexity of the reachability problem remains unsettled, and it is closely related to the lengths of shortest VASS runs that witness reachability. We consider VASS of fixed dimension, and obtain three main results. For the first two, we assume that the integers in the input are given in unary, and that the control graph of the given VASS is flat (i.e., without nested cycles). We obtain a family of VASS in dimension 3 whose shortest reachability witnessing runs are exponential, and we show that the reachability problem is NP-hard in dimension 7. These results resolve negatively questions that had been posed by the works of Blondin et al. in LICS 2015 and Englert et al. in LICS 2016, and contribute a first construction that distinguishes 3-dimensional flat VASS from 2-dimensional VASS. Our third result, by means of a novel family of products of integer fractions, shows that 4-dimensional VASS can have doubly exponentially long shortest reachability witnessing runs. The smallest dimension for which this was previously known is 14. Joint work with Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Filip Mazowiecki. Paper available at https://arxiv.org/abs/2001.04327
Knowledge compilation aims to transform knowledge on a system, often
It is well-known that every sentence of first-order logic over
We prove MSO-transductions of graphs of bounded treewidth have decidable
Reverse mathematics consists of the study of the minimal axioms needed to prove a theorem. A phenomenon that appeared at the beginning of this study is that, among natural theorems, an enormous majority are equivalent to one out of five axiomatic systems: the Big Five. The exceptions to this are mainly theorems from combinatorics, the most famous being the Ramsey theorem for pairs. This makes combinatorics especially interesting in the context of reverse mathematics.
In this talk I will describe the use of strategic port graph rewriting as a basis for the implementation of visual modelling tools. The goal is to facilitate the specification and analysis of complex systems. A system is represented by an initial graph and a collection of graph rewrite rules, together with a user-defined strategy to control the application of rules. The traditional operators found in strategy languages for term rewriting have been adapted to deal with the more general setting of graph rewriting, and some new constructs have been included in the strategy language to deal with graph traversal and management of rewriting positions in the graph. We give a formal semantics for the language, examples of application in the areas of biochemistry, social networks and database design, and a brief description of its implementation: the graph transformation and visualisation tool PORGY.
The reachability problem is a central decision problem for formal verification based on vector addition systems with states (VASS), which are equivalent to Petri nets and form one of the most studied and applied models of concurrency. Reachability for VASS is also inter-reducible with a plethora of problems from a number of areas of computer science. In spite of recent progress, the complexity of the reachability problem remains unsettled, and it is closely related to the lengths of shortest VASS runs that witness reachability.
Work in collaboration with Thomas Colcombet.
Cyclic proofs are a class of formal proof systems that allow some kind of circular reasoning. Unlike classical proofs, represented by finite trees with axioms as leaves, cyclic proofs are represented by trees containing infinite branches. The Curry-Howard correspondence allows us to see these cyclic proofs as programs. We investigate the computational content of a cyclic proof system based on Kleene algebra, where we see expressions as data types. Different proofs of the same sequent e |- f can be interpreted as different programs mapping every input of type e to an output of type f. We show that depending on the particular rules allowed in the system, the computational content of proofs matches different known complexity classes: regular languages, LogSpace, primitive recursive functions, system T. Various tools are used to pinpoint these different expressive powers, including a newly introduced class of automata (Jumping Multihead Automata), and results from the field of reverse mathematics.
LZ'78 is a famous and very simple lossless data compression algorithm published by Abraham Lempel and Jacob Ziv in 1978. Although widely used in practice, we know little about its stability. The one-bit catastrophe question, introduced by Jack Lutz in the late 90s, asks whether an infinite word compressible by LZ'78 can become incompressible by adding a single bit in front of it. Our main result is to answer that question positively. We also give tight bounds on the maximal possible variation between the compression ratio of a finite word and its perturbation (when one bit is added in front of it), showing that to get a catastrophe, the initial word already needs to be close to the threshold of incompressibility.
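To fix intuitions about the algorithm, here is a minimal sketch of the greedy LZ'78 parsing in Python (our illustration; the function name and toy input are not from the talk). The compression ratio of a word is governed by the number of phrases in this parse.

def lz78_phrases(word: str) -> list[str]:
    # Greedy LZ'78 parsing: each new phrase extends a previously seen
    # phrase by exactly one symbol. Returns the list of phrases.
    seen = {""}                       # the empty phrase is always known
    phrases, current = [], ""
    for symbol in word:
        current += symbol
        if current not in seen:       # shortest prefix not seen yet: new phrase
            seen.add(current)
            phrases.append(current)
            current = ""
    if current:                       # leftover (already seen) final phrase
        phrases.append(current)
    return phrases

# A word parsed into p phrases is encoded in roughly p * log2(p) bits;
# prepending a single bit can change the parse, and hence the ratio, drastically.
print(lz78_phrases("0001101100"))     # ['0', '00', '1', '10', '11', '00']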
We consider algorithms to decide the existence of strategies in MDPs for Boolean combinations of objectives. These objectives are omega-regular properties that need to be enforced either surely (whatever happens), almost surely (with probability one), existentially (it can happen), or with non-zero probability. Such a combination of properties could be, e.g., that an agent reaches a target with high probability while guaranteeing it will not crash. We provide algorithms to solve the general case of Boolean combinations and we also investigate relevant subcases. We also report on complexity lower bounds for these problems.
In this talk, we introduce a condition number of stochastic mean payoff games. To do so, we interpret these games as feasibility problems over tropically convex cones. In this setting, the condition number is defined as the maximal radius of a ball in Hilbert's projective metric that is included in the (primal or dual) feasible set. We show that this conditioning controls the number of value iterations needed to decide whether a mean payoff game is winning. In particular, we obtain a pseudopolynomial bound for the complexity of value iteration provided that the number of random positions is fixed. We also discuss the implications of these results for convex optimization problems over nonarchimedean fields and present possible directions for future research.
Computing the state complexity of regular operations is usually a messy business. Every new operation needs to be carefully considered and the associated computations need to be tweaked. We present an attempt to generalize this process on a large class of rational operations, and use this framework to present new results.
Probabilities are extensively used in Computer Science. Algorithms use probabilistic choices for solving problems that are intractable deterministically or for improving efficiency. Recently, (Functional) Probabilistic Programming has been introduced for applications in Machine Learning and Artificial Intelligence. Probabilistic programs are used to describe statistical models and for developing probabilistic data analyses.
Markov models comprise states with probabilistic transitions.
In this talk I will present a solution to the following problem: given a set of strings, learn the underlying probabilistic context-free grammar generating these strings.
Monadic second order logic (MSO) is usually studied over specific kinds of structures, be it finite words, infinite words, finite or infinite trees, total orders of various shapes, etc. A monad is a rather abstract notion of a kind of structures that covers these and many other examples. One can formulate an abstract definition of MSO for a generic monad. I will explain how this is done, and I will describe some conditions that a monad should satisfy to ensure a basic sanity check: that every definable language is recognized by a finite algebra.
Asynchronous programming is a widespread technique offering some simple concurrent programming primitives that are restricted enough so that the resulting concurrent programs are, to some extent, deadlock-free. In this talk, I shall present the notion of monadic references, which allows for formally defining a model of asynchronous concurrent programming as an extension of the usual model of (say) sequential monadic programming.
Introduced by Clegg et al. (STOC'96) to capture Gröbner basis computations, Polynomial Calculus Resolution (PCR) is an algebraic proof system for certifying the unsatisfiability of CNF formulas. Impagliazzo et al. (CC'99) established that if an unsatisfiable k-CNF formula over n variables has a refutation of small size in PCR (that is, polynomial size), then this formula also has a refutation of small degree, i.e., O(sqrt(n log n)). A natural question is whether we can achieve both small size and small degree in the same refutation.
"Parcoursup" is the French national platform for admission to the first year of higher education, set up in 2018 following the adoption of the ORE law, replacing APB (Admission Post-Bac). The platform matches higher-education programmes (licences, BTS, IUT, écoles, prépas, etc.) with applicants to these programmes, nearly 900,000 of them in 2019.
The notion of graph covering, from which we get that of a universal covering (an infinite tree), is important in the theory of distributed computing.
We look at words which are mappings from a countable linear ordering to a finite alphabet. Finite words, omega-words, etc., satisfy this condition. In this talk, we study the languages (of words) definable in different logics. We consider monadic second order logic, first order logic, linear temporal logic, etc.
A simple stochastic game, SSG for short, is a two-player zero-sum game, a turn-based version of stochastic games. SSGs were introduced by Condon and provide a general framework for studying the algorithmic complexity issues underlying reachability objectives. The best algorithm so far for solving SSGs is Ludwig's randomized algorithm, which works in expected 2^O(sqrt(n)) time. We first give a simpler iterative variant of this algorithm, using Bland's rule from the simplex algorithm, which uses exponentially fewer random bits than Ludwig's version. Then, we show how to adapt this method to the algorithm of Gimbert and Horn, whose worst-case complexity is O(k!), where k is the number of random nodes. Our algorithm has an expected running time of 2^O(k), and works for general random nodes with arbitrary outdegree and probability distribution on outgoing arcs.
Path-feasibility is an important problem in the symbolic execution of
The talk will be a survey on some recent results about string transducers.
We will start with another view on alternating automata over finite
This talk is about the trade-offs between different acceptance
We consider families of symmetric linear programs (LPs) that decide a property of graphs in the sense that, for each size of graph, there is an LP defining a polyhedral lift that separates the integer points corresponding to graphs with the property from those corresponding to graphs without the property. We show that this is equivalent, with at most polynomial blow-up in size, to families of symmetric Boolean circuits with threshold gates. In particular, when we consider polynomial-size LPs, the model is equivalent to definability in a non-uniform version of fixed-point logic with counting. This is joint work with Albert Atserias and Anuj Dawar.
In this talk, I will present a joint work with Antoine Amarilli, published at ICALP'18, about what we call the constrained topological sorting problem (CTS): given a regular language K and a directed acyclic graph G with labeled vertices, determine if G has a topological sort that forms a word in K.
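As a small illustration of the problem statement (not of the techniques in the paper), here is a brute-force sketch in Python; the names and the membership test standing in for the regular language K are ours.

from itertools import permutations

def has_constrained_topsort(vertices, edges, label, in_K):
    # Exponential brute force: try every ordering, keep the topological ones,
    # and test whether the spelled word belongs to K (here: a predicate).
    def is_topsort(order):
        pos = {v: i for i, v in enumerate(order)}
        return all(pos[u] < pos[v] for (u, v) in edges)
    return any(is_topsort(order) and in_K("".join(label[v] for v in order))
               for order in permutations(vertices))

# Toy DAG a -> b, a -> c with labels x, y, y; asking for the word "xyy".
print(has_constrained_topsort(["a", "b", "c"], [("a", "b"), ("a", "c")],
                              {"a": "x", "b": "y", "c": "y"},
                              lambda w: w == "xyy"))   # True (the sort a, b, c)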
Higher-Order Model-Checking (HOMC) has recently emerged as an approach
The Schnorr-Stimm dichotomy theorem concerns finite-state gamblers that bet on infinite sequences of symbols taken from a finite alphabet. The theorem asserts that, for each such sequence S, the following two things are true. 1. If S is normal in the sense of Borel (meaning that any two strings of equal length appear with equal asymptotic frequency in S), then every finite-state gambler loses money at an exponential rate betting on S. 2. If S is not normal, then there is a finite-state gambler that wins money at an exponential rate betting on S.
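For reference, the frequency condition in item 1 can be written as follows (a standard formulation over a b-letter alphabet, added here for clarity and not quoted from the talk):

\lim_{n\to\infty} \frac{\#\{\, i < n \;:\; S[i..i+|w|-1] = w \,\}}{n} \;=\; b^{-|w|} \qquad \text{for every finite string } w.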
The synthesis problem asks, given a specification that relates possible inputs to allowed outputs, whether there is a program realizing the specification, and, if so, to construct one.
We prove that the reachability relation of two-counter machines with one zero-test and one reset is Presburger-definable and effectively computable. Our proof is based on the introduction of two classes of Presburger-definable relations effectively stable under transitive closure. This approach generalizes and simplifies the different existing proofs, and it solves an open problem raised by Finkel and Sutre in 2000.
We undertake an abstract study of certification in security protocols, concentrating on the logical properties and derivability of certificates. Specifically, we extend the Dolev-Yao model with a new class of objects called ‘assertions’, along with an associated algebra for deriving new assertions from old ones. We also provide a case study via the FOO e-voting protocol, and provide algorithms for the derivability problem and the active intruder problem for this system.
We study the computational complexity of solving mean payoff games. This class of games can be seen as an extension of parity games, and they have a similar complexity status: in both cases solving them is in NP and coNP and not known to be in P. In a breakthrough result, Calude, Jain, Khoussainov, Li, and Stephan constructed in 2017 a quasipolynomial time algorithm for solving parity games, which was quickly followed by two other algorithms with the same complexity. Our objective is to investigate how these techniques can be extended to the study of mean payoff games. We construct two new algorithms for solving mean payoff games. Our first algorithm depends on the largest weight N (in absolute value) appearing in the graph and runs in sublinear time in N, improving over the previously known linear dependence in N. Our second algorithm runs in polynomial time for a fixed number k of weights.
The ternary betweenness relation on a tree, B(x,y,z), indicates that y is on the unique path between x and z. This notion can be extended to order-theoretic trees, defined as partial orders such that the set of nodes greater than any node is linearly ordered. In such generalized trees, the unique path between two nodes can have infinitely many nodes. We axiomatize in first-order or monadic second-order logic several betweenness relations in order-theoretic trees.
We look at properties of graphs that can be expressed in first order (FO) logic. Given such a property A and a class G of random graphs, we are interested in the limiting probability that a graph in G satisfies A, when the number of vertices goes to infinity.
TBA
The computational model of programs over monoids, introduced by Barrington and Thérien in the late 1980s, gives a way to generalise the notion of (classical) recognition through morphisms into monoids in such a way that almost all open questions about the internal structure of the complexity class NC^1 can be reformulated as understanding what languages (and, in fact, even regular languages) can be program-recognised by monoids taken from some given variety of finite monoids. Unfortunately, for the moment, this finite-semigroup-theoretic approach has not helped to prove any new result about the internal structure of NC^1 and, even worse, every attempt to reprove well-known results about this internal structure (like the fact that the language of words over the binary alphabet containing a number of 1s not divisible by some fixed integer greater than 1 is not in AC^0) using techniques stemming from algebraic automata theory has failed.
The problem of ontology-mediated query answering (OMQA) has gained significant interest in recent years. One popular ontology language for OMQA is OWL 2 QL, a W3C standardized language based upon the DL-Lite description logic. This language has the desirable property that OMQA can be reduced to database query evaluation by means of query rewriting. In this talk, I will consider two fundamental questions about OMQA with OWL 2 QL ontologies: 1) How does the worst-case complexity of OMQA vary depending on the structure of the ontology-mediated query (OMQ)? In particular, under what conditions can we guarantee tractable query answering? 2) Is it possible to devise query rewriting algorithms that produce polynomial-size rewritings? More generally, how does the succinctness of rewritings depend on OMQ structure and the chosen format of the rewritings?
Karp and Miller's algorithm is a well-known decision procedure that solves the termination and boundedness problems for vector addition systems with states (VASS), or equivalently Petri nets. This procedure was later extended to a general class of models, well-structured transition systems, and, more recently, to pushdown VASS. In this paper, we extend pushdown VASS to higher-order pushdown VASS (called HOPVASS), and we investigate whether an approach à la Karp and Miller can still be used to solve termination and boundedness.
In this talk, we focus on the concept of rational behaviour in multi-player games on finite graphs, taking the point of view of a player that has access to the structure of the game but cannot make assumptions on the preferences of the other players. In the qualitative setting, admissible strategies have been shown to fit the rationality requirements, as they coincide with winning strategies when these exist, and enjoy the fundamental property that every strategy is either admissible or dominated by an admissible strategy. However, as soon as there are three or more payoffs, one finds that this fundamental property does not necessarily hold anymore: one may observe chains of strategies that are ordered by dominance and such that no admissible strategy dominates any of them. Thus, to recover a satisfactory rationality notion (still based on dominance), we depart from the single-strategy analysis approach and consider instead chains of strategies as families of behaviours. We establish a sufficient criterion for games to enjoy a similar fundamental property, i.e., all chains are below some maximal chain, and, as an illustration, we present a class of games where admissibility fails to capture some intuitively rational behaviours, while our chain-based analysis does. Based on a joint work with N. Basset, I. Jecker, A. Pauly and J.-F. Raskin, presented at CSL'18.
In the literature we find many computation models whose expressiveness goes beyond finite automata, however without attaining the full power of Turing machines. The common practice is to enrich finite automata with some internal memory (e.g. counters, clocks, stacks, etc.) that can be used to store, manipulate, and compare data from a potentially infinite domain. An intriguing model that results from this practice is the model of register automaton, which is essentially a finite automaton equipped with a finite number of registers. Register automata are used to recognize languages over infinite alphabets. The deterministic, unambiguous, and non-deterministic variants of these automata form a hierarchy of strictly increasing expressive power, where the bottom and top levels have, respectively, decidable and undecidable equivalence problems. Accordingly, the intermediate class of unambiguous register automata is an interesting object of study, since it is believed to be robust and algorithmically well-behaved.
The chase is a sound and complete (albeit non-terminating) algorithm for conjunctive query answering over ontologies of existential rules. On the theoretical side, we develop sufficient conditions to guarantee its termination (i.e., acyclicity notions), and study several restrictions that furthermore ensure its polynomiality. On the practical side, we empirically study the generality of these conditions and we extend the Datalog engine VLog to develop an efficient implementation of the chase. Furthermore, we conduct an extensive evaluation, and show that VLog can compete with the state of the art, regarding runtime, scalability, and memory efficiency.
The emptiness and containment problems for probabilistic automata are natural quantitative generalisations of the classical language emptiness and inclusion problems for Boolean automata. It is well known that both problems are undecidable. In this paper we provide a more refined view of these problems in terms of the degree of ambiguity of probabilistic automata. We show that a gap version of the emptiness problem (that is known to be undecidable in general) becomes decidable for automata of polynomial ambiguity. We complement this positive result by showing that the emptiness problem remains undecidable even when restricted to automata of linear ambiguity. We then turn to finitely ambiguous automata. Here we show decidability of containment in case one of the automata is assumed to be unambiguous while the other one is allowed to be finitely ambiguous. Our proof of this last result relies on the decidability of the theory of real exponentiation, which has been shown, subject to Schanuel's Conjecture, by Macintyre and Wilkie.
Programming by example is the problem of synthesising a program from a small set of input-output pairs. Despite having found applications in several areas, it is notoriously computationally expensive. Recent works have considered hybrid approaches combining ML- and PL-based techniques. These techniques require generating a training dataset, which leads to significant difficulties related to finding the most informative inputs to characterise a given program.
The rate of randomness (or dimension) of a binary string x is the ratio C(x)/|x| where C(x) is the Kolmogorov complexity of x. While it is known that a single computable transformation cannot increase the rate of randomness of all strings, Fortnow et al. showed that for any 0
Following Géraud Sénizergues' seminal results twenty years ago on the decidability of language equivalence of deterministic pushdown automata and of (weak) bisimulation equivalence of (epsilon-popping) pushdown automata, several works have attempted to provide complexity bounds for these problems. For instance, some significant simplifications over the original proofs were provided by Colin Stirling and Petr Jancar, using in particular the formalism of first-order grammars instead of pushdown automata, and resulting in Tower upper bounds for the language equivalence problem in deterministic systems. But no complexity bounds were known for the bisimulation equivalence problem.
Petri nets, also known as vector addition systems, are a long-established and widely used model of concurrent processes. The complexity of their reachability problem is one of the most prominent open questions in the theory of verification. That the reachability problem is decidable was established by Mayr in his seminal STOC 1981 work, and the currently best upper bound, cubic-Ackermannian and thus not primitive recursive, is due to Leroux and Schmitz (LICS 2015). We show that the reachability problem is not elementary. Until this work, the best known lower bound had been exponential space, due to Lipton in 1976.
Complex event processing (CEP) is emerging as a unified technology for efficiently processing data streams. Contrary to data stream management systems, CEP query languages model data streams as continuous sequences of events, and CEP queries define sets of events (complex events) that are of interest to the final user.
Message sequence charts (MSCs) naturally arise as executions of communicating finite-state machines (CFMs), in which finite-state processes exchange messages through unbounded FIFO channels. We study the first-order logic of MSCs, featuring Lamport's happened-before relation. We introduce a star-free version of propositional dynamic logic (PDL) with loop and converse. Our main results state that (i) every first-order sentence can be transformed into an equivalent star-free PDL sentence (and conversely), and (ii) every star-free PDL sentence can be translated into an equivalent CFM. This answers an open question and settles the exact relation between CFMs and fragments of monadic second-order logic. As a byproduct, we show that first-order logic over MSCs has the three-variable property.
We look at words which are mappings from a countable linear ordering to a finite alphabet. Finite words, omega-words, etc., satisfy this condition. We will also look at other kinds of words.
A standard approach to define k-ary word relations over a finite alphabet A is through k-tape finite state automata that recognize regular languages L over {1, ... , k} x A, where (i,a) is interpreted as reading letter a from tape i. Accordingly, a word w in L denotes the tuple (u_1, ... , u_k) of words over A in which u_i is the projection of w onto i-labelled letters. While this formalism defines the well-studied class of Rational relations, enforcing restrictions on the reading regime from the tapes, which we call synchronization, yields various sub-classes of relations. Such synchronization restrictions are imposed through regular properties on the projection of the language L onto {1, ... , k}. In this way, for each regular language C over the alphabet {1, ... , k}, one obtains a class Rel(C) of relations. Synchronous, Recognizable, and Length-preserving rational relations are all examples of classes that can be defined in this way.
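As a small illustration of this encoding (an example of ours, not taken from the abstract), here is how a word over {1, ..., k} x A is decoded into a k-tuple in Python:

def decode(word, k):
    # word is a list of pairs (i, a) with 1 <= i <= k and a a letter of A;
    # u_i is the projection of the word onto the letters tagged with tape i.
    return tuple("".join(a for (i, a) in word if i == tape)
                 for tape in range(1, k + 1))

# (1,a)(2,b)(1,a)(2,c) denotes the pair (aa, bc); its projection onto
# {1, 2} is 1212, so it lies in the synchronous (length-preserving) class.
print(decode([(1, "a"), (2, "b"), (1, "a"), (2, "c")], 2))   # ('aa', 'bc')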
Evaluating queries in the presence of background knowledge has been extensively studied in several communities. In database theory, it is known as query answering under integrity constraints: given a finite database instance and a set of constraints, determine answers to a query that are certain to hold over any extension of the given instance that satisfies the constraints. In the knowledge representation community, the database instance and the set of constraints are treated as a single object, called an ontology, but otherwise the problem remains the same, except that different kinds of constraints are interesting. While in database theory constraints are usually very simple, like functionality of relations, or inclusions between relations, in knowledge representation more expressive logics are used. I will focus on so called description logics, which are a family of extensions of modal logic. I will cover some basic techniques, a highly non-trivial result by Rudolph and Glimm (2010), as well as some recent results obtained with Tomek Gogacz (U Warsaw) and Yazmin Ibanez-Garcia (TU Wien).
In 2015, Jeandel and Rao showed by exhaustive computer search that every Wang tile set of cardinality
Semi-deterministic Büchi automata (sDBA) are useful, for example, in model checking of probabilistic systems or in termination analysis. While in probabilistic model checking sDBA represent the set of behaviours of interest, in termination analysis they represent terminating behaviours of programs and are often complemented to perform a language difference. In my talk, I first introduce the class of semi-deterministic Büchi automata (sDBA). Then I will explain how we can convert nondeterministic Büchi automata (NBA) into sDBA, followed by a discussion of how to efficiently convert generalized Büchi automata into sDBA. After we learn how to build sDBA, I will introduce a complementation algorithm for sDBA. The algorithm produces a complement automaton with at most 4^n states, while the best upper bound on complementation of NBA is O((0.76n)^n). Further, our algorithm produces automata with a very low degree of nondeterminism; indeed, the resulting automata are even unambiguous.
We prove that MSO on omega-words becomes undecidable if allowing to quantify over sets of positions that are ultimately periodic, i.e., sets X such that for some positive integer p, ultimately either both or none of positions x and x+p belong to X. We obtain it as a corollary of the undecidability of MSO on omega-words extended with the second-order predicate U1(X) which says that the distance between consecutive positions in a set X of naturals is unbounded. This is achieved by showing that adding U1 to MSO gives a logic with the same expressive power as MSO+U, a logic on omega-words with undecidable satisfiability.
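Spelled out (a formalization of ours, added for clarity), the quantified sets are those X ⊆ ℕ satisfying

\exists p \geq 1 \ \exists N \ \forall x \geq N:\quad x \in X \iff x + p \in X,

while U1(X) states that the gaps between consecutive elements of X are unbounded.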
In a mean-payoff parity game, one of the two players aims both to achieve a qualitative parity objective and to minimize a quantitative long-term average of payoffs (aka. mean payoff). The game is zero-sum and hence the aim of the other player is to either foil the parity objective or to maximize the mean payoff.
We consider asynchronous programs consisting of multiple recursive threads (modeled as pushdown systems) running in parallel. Each of the threads is equipped with a multi-set. The threads can create tasks and post them onto the multi-sets or read a task from their own. In addition, they can synchronize through a finite set of locks. We examine the decidability of the state reachability problem for this model. The problem is already known to be undecidable for a system consisting of two recursive threads (and no tasks) and we examine a decidable subclass.
Replicated data stores typically sacrifice strong consistency guarantees in favour of availability and partition tolerance. These data stores usually provide specific weaker consistency guarantees, such as eventual consistency, monotonic reads or causal consistency.
Since Muller and Schupp's result, it is well known that the finitely generated virtually free groups are precisely the context-free groups. The isomorphism problem for virtually free groups has been shown to be decidable by Krstic. In the special case that the input groups are either given as context-free grammars for their word problems or as so-called virtually free presentations, it is primitive recursive by the work of Sénizergues.
We give a syntactic correspondence between non-associative arithmetic circuits and acyclic weighted tree automata. We may then export results from automata theory to non-associative circuits and characterize the size of a minimal circuit for a given polynomial as the rank of a Hankel matrix. We will then show how this can be used to re-obtain Nisan's theorem on Algebraic Branching Programs as well as recent results on Unique Parse Tree circuits. Lastly, we will highlight a new way of obtaining lower bounds for general (associative) arithmetic circuits.
We study the complexity of languages of finite words using automata theory. To go beyond the class of regular languages, we consider infinite automata and the notion of state complexity defined by Karp. We look at alternating automata as introduced by Chandra, Kozen and Stockmeyer: such machines run independent computations on the word and gather their answers through boolean combinations.
I present results on rotating Q-automata, which are (memoryless) automata with weights in Q that can read the input tape from left to right several times. We show that the series realized by valid rotating Q-automata are Q-Hadamard series (which are the closure of Q-rational series by pointwise inverse), and that every Q-Hadamard series can be realized by such an automaton. We prove that, although validity of rotating Q-automata is undecidable, the equivalence problem is decidable on rotating Q-automata. Finally, we prove that every valid two-way Q-automaton admits an equivalent rotating Q-automaton. The conversion, which is effective, implies the decidability of equivalence of two-way Q-automata.
Joint seminar with Graphes et Optimisation
Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes and parametrized systems, and are also used as abstract models of programs in resource bound analysis. We study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time. First, I will present a characterization of VASS with linear asymptotic complexity. I will also show that if the complexity of a VASS is not linear, it is at least quadratic.
The reachability problem for vector addition systems is one of the most difficult and central problems in theoretical computer science. The problem is known to be decidable, but despite intense investigations during the last four decades, the exact complexity is still open. For some sub-classes, the complexity of the reachability problem is known. Structurally bounded vector addition systems, the class of vector addition systems with finite reachability sets from any initial configuration, is one of those classes. In fact, the reachability problem was shown to be polynomial-space complete for that class by Praveen and Lodaya in 2008. Surprisingly, extending this property to vector addition systems with states is open. In fact, there exist vector addition systems with states that are structurally bounded but with Ackermannian-large sets of reachable configurations. It follows that the reachability problem for that class is between exponential space and Ackermannian. In this paper we introduce the class of polynomial vector addition systems with states, defined as the class of vector addition systems with states whose reachable configurations have size bounded polynomially in the size of the initial ones. We prove that the reachability problem for polynomial vector addition systems is exponential-space complete. Additionally, we show that we can decide in polynomial time whether a vector addition system with states is polynomial. This characterization introduces the notion of iteration scheme, with potential applications to the reachability problem for general vector addition systems.
The Hydra game was introduced in 1982 by the mathematicians L. Kirby and J. Paris in their article: Accessible Independence Results for Peano Arithmetic.
A natural approach to defining binary word relations over a finite alphabet A is through two-tape finite state automata, which can be seen as regular languages L over the alphabet {1,2}xA, where (i,a) is interpreted as reading letter a from tape i. Thus, a word w of the language L denotes the pair (u_1,u_2) in A x A in which u_i is the projection of w onto i-labelled letters. While this formalism defines the well-studied class of Rational relations (a.k.a. non-deterministic finite state transducers), enforcing restrictions on the reading regime from the tapes, that we call synchronization, yields various sub-classes of relations. Such synchronization restrictions are imposed through regular properties on the projection of the language onto {1,2}. In this way, for each regular language C contained in {1,2}*, one obtains a class Rel(C) of relations, such as the classes of Regular, Recognizable, or length-preserving relations, as well as (infinitely) many other classes.
Unambiguous non-deterministic finite automata have intermediate
We discuss the complexity of decision problems on regular languages represented by morphisms to finite semigroups. There are two canonical ways of specifying the semigroup: giving its multiplication table or an implicit description as the subsemigroup of a transformation semigroup (generated by the images of the given morphism). For both representations, we will consider
We define a simple and sound mathematical framework for describing the semantics of temporal media programming languages, based on the various concepts offered by semigroup theory. As a result, a fairly general programming scheme can be defined in order to specify, compose and render both spatial media objects (e.g. 3D drawings) and timed media objects (e.g. animation or music). As an example, a simple monoid-based semantic model of the turtle command language of Logo is detailed and extended throughout.
We present three pumping lemmas for three classes of functions definable by fragments of weighted automata over the min-plus semiring and the semiring of natural numbers. As a corollary we show that the hierarchy of functions definable by unambiguous, finitely-ambiguous, polynomially-ambiguous weighted automata, and the full class of weighted automata is strict for the min-plus semiring.
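For reference, the behaviour of a weighted automaton A over the min-plus semiring (where semiring addition is min and semiring multiplication is +) assigns to a word w the value

[\![A]\!](w) \;=\; \min_{\rho \text{ accepting run of } A \text{ on } w} \ \sum_{t \in \rho} \mathrm{wt}(t),

a standard definition recalled here for convenience, not quoted from the abstract.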
We are interested in the exact simulation of continuous probability distributions over the reals, and more precisely in identifying which distributions can be simulated exactly using only finite memory. The natural model is a variant of probabilistic automata, but it is easy to see that various models are equivalent (at least in the probabilistic sense of almost surely).
Part II:
The aim of this series of talks is to present a result of distributed computing that is as elegant as it is surprising, namely the 3-coloring of n-cycles in time log*(n). We will prove the optimality of this result as well as its generalizations to the case of arbitrary graphs.
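To give a flavour of the technique (our sketch, presumably in the spirit of the classic Cole-Vishkin colour reduction; the talks may present it differently), here is one reduction round on an oriented cycle. Starting from a proper colouring with k colours (e.g. the unique node identifiers), one round shrinks it to O(log k) colours, and iterating log*(n) times leaves a constant number of colours:

def cv_round(my_colour: int, successor_colour: int) -> int:
    # Each node compares its colour with that of its successor, finds the
    # lowest bit position i where they differ, and outputs 2*i + (its own bit).
    i = 0
    while (my_colour >> i) & 1 == (successor_colour >> i) & 1:
        i += 1
    return 2 * i + ((my_colour >> i) & 1)

# Neighbours keep distinct colours: if two adjacent nodes produced the same
# value 2*i + b, both colours would have bit b at position i, contradicting
# the fact that the first node chose i as a position where the colours differ.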
The recent breakthrough paper by Calude et al. (a winner of STOC 2017 Best Paper Award) has given the first algorithm for solving parity games in quasi-polynomial time, where previously the best algorithms were mildly sub-exponential. We devise an alternative quasi-polynomial time algorithm based on progress measures, which allows us to reduce the space required from quasi-polynomial to nearly linear. Our key technical tools are a novel concept of ordered tree coding, and a succinct tree coding result that we prove using bounded adaptive multi-counters, both of which are interesting in their own right.
Model checking with interval temporal logics is emerging as a viable alternative to model checking with standard point-based temporal logics, such as LTL, CTL, CTL*, and the like. The behavior of the system is modelled by means of (finite) Kripke structures, as usual. However, while temporal logics which are interpreted point-wise describe how the system evolves state-by-state, and predicate properties of system states, those which are interpreted interval-wise express properties of computation stretches, spanning a sequence of states. A proposition letter is assumed to hold over a computation stretch (interval) if and only if it holds over each component state (homogeneity assumption). The most well-known interval temporal logic is Halpern and Shoham's modal logic of time intervals HS, which features one modality for each possible ordering relation between a pair of intervals, apart from equality. In the seminar, we provide an overview of the main results on model checking with HS and its fragments under the homogeneity assumption. In particular, we show that the problem turns out to be non-elementarily decidable and EXPSPACE-hard for full HS, but it is often computationally much better for its fragments. Then, we briefly compare the expressiveness of HS in model checking with that of LTL, CTL, and CTL*. We conclude by discussing a recent generalization of the proposed MC framework that allows one to use regular expressions to define the behavior of proposition letters over intervals in terms of the component states.
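Under the homogeneity assumption, the truth of a proposition letter p over an interval of a computation rho of the Kripke structure reduces to a pointwise condition (a standard rendering, added here for clarity):

\rho, [i, j] \models p \iff p \in L(\rho(k)) \ \text{for every } k \text{ with } i \leq k \leq j.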
Computational models of biological networks aim at reporting the indirect influences between the different molecular entities acting within the cell (genes, RNA, proteins, ...). In this talk, I will give an overview of methods for the formal assessment of the dynamics of biological networks by static analysis. After an introduction to Boolean networks and their relevance for modelling cell signalling and gene regulatory networks, I'll present an abstract interpretation of their trajectories based on a causal analysis. Then, I'll show how we can combine this abstraction with SAT approaches to address systems biology challenges, such as model identification and cell reprogramming.
Higher-order pushdown systems and ground tree rewriting systems can be seen as extensions of suffix word rewriting systems. Both classes generate infinite graphs with interesting logical properties. Indeed, the satisfaction of any formula written in monadic second order logic (respectively first order logic with reachability predicates) can be decided on such a graph.
Modal mu-calculus is one of the central logics for verification. In his seminal paper, Kozen proposed an axiomatization for this logic, which was proved to be complete, 13 years later, by Kaivola for the linear-time case and by Walukiewicz for the branching-time one. These proofs are based on complex, non-constructive arguments, yielding no reasonable algorithm to construct proofs for valid formulas. The issue of constructiveness becomes central when we consider proofs as certificates, supporting the answers of verification tools. We provide a new completeness argument for the linear-time mu-calculus which is constructive, i.e. it builds a proof for every valid formula. To achieve this, we decompose this difficult problem into several easier ones, taking advantage of the correspondence between the mu-calculus and automata theory. More precisely, we lift the well-known automata transformations (non-determinization for instance) to the logical level. To solve each of these smaller problems, we first perform a proof-search in a circular proof system, then we transform the obtained circular proofs into proofs in Kozen's axiomatization. This yields a constructive proof for the full linear-time mu-calculus.
We develop a general theory of timed domains and timed morphisms that aims at offering a versatile and sound mathematical framework for the study of timed denotational semantics of networks of timed programs. The proposed compositional semantic model accounts for the fact that every non-trivial computation step necessarily takes some non-zero time. This is achieved by defining timed domains as classical domains (directed complete posets) where time appears everywhere: every increase of knowledge necessarily refers to the passage of time. Timed morphisms are defined as functions between timed domains which uniformly act on the underlying time scales. The resulting category is a (bi)cartesian closed category with (mostly) internal henceforth timed least fixpoint operators. Moreover, by allowing (almost) arbitrary posets as time scales, the proposed framework also covers typical features of parallelism or concurrency theory such as parallel, independent or conflicting computations. In other words, timed domains and timed morphisms provide a fully featured mathematical framework for the study of computable spatio-temporal functions.
A graph database is a directed graph where each edge is additionally labeled with a symbol from a finite alphabet. Several data models, such as the ones occurring in the Semantic Web or semi-structured data, can be naturally captured via graph databases. In this context, one is not only interested in traditional queries, such as conjunctive queries, but also in navigational queries that take the topology of the data into account.
Many natural computational problems, such as satisfiability and systems of equations, can be expressed in a unified way as constraint satisfaction problems (CSPs). In this talk I will show that the usual reductions preserving the complexity of the constraint satisfaction problem preserve also its proof complexity. As an application, I will present two gap theorems, which say that CSPs that admit small size refutations in some classical proof systems are exactly the constraint satisfaction problems which can be solved by Datalog.
This talk is about transductions, which are binary relations on words. We are interested in various models computing transductions (i.e., transducers), namely two-way automata with outputs, streaming string transducers and string-to-string MSO transductions. We observe that each of these formalisms provides more than just a set of pairs of words. Indeed, one can also reconstruct origin information, which says how positions of the output string originate from positions of the input string. On the other hand, it is also possible to equip any pair of words in a relation with an origin mapping, indicating an origin input position for each output position, in a similar way. This defines a general object called an origin graph. We first show that the origin semantics is natural, corresponds to the intuition we have of the run of a transducer, and is stable under translation from one model to another. We then characterise the families of origin graphs which correspond to the semantics of streaming string transducers.
Continuation of his previous talk.
In this talk I will present joint work with Nils Timm (and students) from the University of Pretoria on how to make model-checking of concurrent systems more effective and more efficient. In the first part of the talk, which is based on our SBMF'16 paper, I show how bounded model-checking over a three-valued truth domain {T:true, F:false, U:unknown} can be translated into a classical Boolean satisfiability problem which can then be given to any classical SAT solver. In the second part of the talk, which is based on our recent FSEN'17 paper, I speak about efficiency-increasing heuristics which are based on the availability of structural knowledge about the original system to be model-checked. On the basis of such structural knowledge the SAT solver can be guided into 'promising' search paths, whereby the probability of unnecessarily exploring fruitless paths is considerably diminished. The SBMF'16 paper was acknowledged as the 2nd-best paper of the conference, and the FSEN'17 paper was nominated among the top three papers of the conference.
We give sound and complete axiomatizations for XPath with data tests by equality or inequality, and containing the single child axis. This data-aware logic is interpreted over data trees, which are tree-like structures whose every node contains a label from a finite alphabet and a data value from an infinite domain. The language allows us to compare data values of two nodes but cannot access the data values themselves (i.e., there is no comparison by constants).
Given an integer base b
We present the first linear lower bound for the number of bits required to be accessed in the worst case to increment an integer in an arbitrary space-optimal binary representation. The best previously known lower bound was logarithmic. It is known that a logarithmic number of read bits in the worst case is enough to increment some of the integer representations that use one bit of redundancy, therefore we show an exponential gap between space-optimal and redundant counters.
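As a point of comparison (our illustration; the talk's lower bound concerns arbitrary space-optimal representations, not just this one), the standard binary counter already accesses a linear number of bits in the worst case:

def increment(bits: list[int]) -> int:
    # Increment a little-endian binary counter in place and return how many
    # bits were accessed (read or written).
    accessed = 0
    for i in range(len(bits)):
        accessed += 1
        if bits[i] == 0:
            bits[i] = 1           # no carry: stop here
            return accessed
        bits[i] = 0               # carry propagates to the next bit
    return accessed               # overflow: every bit was accessed

print(increment([1, 1, 1, 0]))    # 4: the carry ripples through three 1s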
This talk will be based on the paper from POPL 2017:
We present the ideal framework [FG09a,BFM14], which was recently used to obtain new deep results on Petri nets and extensions. If time permits, we will present the proof of the famous but unknown Erdös-Tarski theorem. We argue that the theory of ideals prompts a renewal of the theory of WSTS by providing a way to define a new class of monotonic systems, the so-called Well Behaved Transition Systems, which properly contains WSTS, and for which coverability is still decidable by a forward algorithm.
Deterministic two-way transducers define the robust class of regular functions which is, among other good properties, closed under composition.
I will give a survey of various classes of automata that have
Directed complete partial orders (cpos) are used in denotational semantics for describing the way each value is incrementally computed, passing from a completely unknown value to a completely known value. Then, continuous functions between cpos propagate increase of knowledge on their inputs to increase of knowledge on their outputs.
Suppose we have a probabilistic algorithm given as a black box and we have access to an output of this algorithm. There are two - related - questions one could ask. (1) Is it possible to make a plausible guess as to which algorithm is in the box? (2) Can we use the output of this algorithm as a random number generator by extracting 'pure' randomness from it? We will look at these questions from the point of view of computability and algorithmic learning theory. [Based on joint work with S. Figueira, B. Monin, and A. Shen]
I will present some preliminary results towards a proof of a decomposition theorem for streaming string transducers (SSTs). Roughly, the conjectured decomposition theorem states that every SST that associates at most k outputs to each input can be effectively decomposed as a finite union of functional SSTs. Such a result would imply, among other things, the decidability of the equivalence problem for the considered class of transducers as well as a correspondence with the classical two-way transducers. I will present a proof of this decomposition theorem in the special case of SSTs with 1 register. The proof heavily relies on a combinatorial result by Kortelainen concerning word equations with iterated factors.
In functional programs, also called higher-order programs, functions may take functions themselves as arguments. As a result, their model-checking relies in most approaches on semantic or type-theoretic tools. In this talk, I will explain how an analysis, based on linear logic, of a 2009 model-checking result by Kobayashi and Ong led Melliès and me to the construction of a model for model-checking. This model is such that, when interpreting a term with recursion representing the tree of traces of a functional program, its denotation determines whether it satisfies an MSO property of interest. A related and similar model was obtained independently by Salvati and Walukiewicz.
Electronic money is a quite old problem in cryptology (Chaum, 1982), but recent discoveries have led to the birth of new types of digital currencies such as Bitcoin or Ethereum. Most of the new crypto-currencies are based on the concept of blockchain, which is used to maintain a trusted consensus in a distributed manner thanks to cryptographic primitives.
Zero automata are a probabilistic extension of parity automata on infinite trees.
The talk is based on joint work with Krishnendu Chatterjee, Amir Kafshdar Goharshady, Prateesh Goyal, and Andreas Pavlogiannis.
The liveness problem for timed automata asks if a given automaton has an infinite run visiting an accepting state infinitely often. In this talk, we will show that if P is not equal to NP, the liveness problem is more difficult than the reachability problem - more precisely, we will exhibit a family of automata for which reachability is in P whereas liveness is NP-hard. We will then present a new algorithm to solve the liveness problem, and compare it with existing solutions.