A formula φ' is said to be a uniform Σ-interpolant of φ if, for any formula ψ over the signature Σ, ψ is implied by φ' if and only if ψ is implied by φ. Uniform interpolation has been the subject of many recent research papers due to its potential use in applications. In this talk, we introduce uniform interpolation and its variants, and present a saturation-based inference system that computes a local uniform interpolant for a given formula and "keep" signature.
This paper addresses the verification of Bitcoin smart contracts using the interactive theorem prover Agda. It focuses on two standard smart contracts that govern the distribution of Bitcoins, Pay to Public Key Hash (P2PKH) and Pay to Multisig (P2MS). Both are written in Bitcoin’s low-level language Script, and provide the security property of access control to the distribution of Bitcoins. The paper introduces an operational semantics of the Script commands used in P2PKH and P2MS, and formalises it in the Agda proof assistant using Hoare triples. It advocates weakest preconditions in the context of Hoare triples as the appropriate notion for verifying access control. Two methodologies for obtaining human-readable weakest preconditions are discussed in order to close the validation gap between user requirements and the formal specification of smart contracts: (1) a step-by-step approach, which works backwards instruction by instruction through a script, sometimes stepping over several instructions in one go; (2) symbolic execution of the code and translation into a nested case distinction, which allows one to read off weakest preconditions as the disjunction of accepting paths. A syntax for equational reasoning with Hoare triples is defined in order to formalise these approaches in Agda.
Users of computer systems face a constant threat of cyberattacks by malware designed to cause harm or disruption to services, steal information, or hold the user to ransom. Cyberattacks are becoming increasingly prevalent on mobile devices such as those running Android. Attacks and countermeasures grow ever more sophisticated in an escalating arms race. A novel attack method is 'collusion', where an attack is hidden by distributing its steps across several malicious software actors.
We investigate the use of runtime verification to detect collusion attacks on the end-user's device. We have developed a novel algorithm called RRH, a variation of an existing algorithm by Grigore Roșu and Klaus Havelund. Our approach is computationally efficient enough to detect collusion in real time on the Android device and does not require prior knowledge of malware source code. Thus, it can detect future malware without modification to the detection system or the software under scrutiny.
Wireless communication is transforming every sector of the economy, enabling billions of people to keep in touch with work and family via their smart devices. However, current technology cannot meet the demands of the future (i.e., explosive growth in the number of devices, better coverage and connection rates). For future wireless communications, a key technology with the potential to enhance connectivity for billions of users is Cell-Free Massive Multiple-Input Multiple-Output (CF-mMIMO). One of the many challenges in CF-mMIMO is how to efficiently manage limited resources (spectrum, energy, and power) – the so-called resource allocation problem in wireless communications. This talk will give a survey of some of the problems arising in this context, as well as mathematical techniques that have been explored to tackle them.
Recent years have seen a radical transformation in railways. Among the new infrastructure, signalling software is a key technology for increasing rail capacity and automation. However, verifying the safety of this software is a challenge for the railway industry. Siemens Mobility and Swansea University have developed an effective automated verifier for railway interlocking programs, which our project sets out to improve. The purpose of our research is to deal efficiently with the false-positive problem, where verification fails although the property under discussion holds. This problem arises due to the specific algorithmic approach taken. We are investigating whether the false-positive problem can be addressed by Bradley’s IC3 algorithm and its parallel and symbolic variations. The overall goal of our research is to refine it into a solution that scales to the challenges of the rail industry.
In railways, the placement of equipment on a track plan is safety-critical and subject to many constraints (more than 300 are known in industry). For ETCS Level 2, our industrial partner has a design tool that lets engineers place equipment on track plans. The objective of our project is to visualise, in this tool, constraint violations discovered through SMT solving. An approach under development translates data from the design tool into a directed graph, which is then encoded in the SMT-LIB2 format, and finally checked against a set of design rules using the Z3 SMT solver. Our contribution is to take the unsatisfiable core of such a check and visualise it on the track plan in the design tool, by reversing the two model transformations that made the track data accessible to SMT solving. Our work provides a new instance of the challenge of translating from the language of a prover back into the language of the domain from which the problem originated.
eXplainable Artificial Intelligence (XAI) aims to provide intelligible explanations to users to support decision making. Current XAI methods, such as SHAP, LIME, and Scoped Rules, focus on calculating feature importance in predictions. Although XAI has attracted increasing attention, applying XAI techniques in healthcare to inform clinical decision making remains challenging. Our experimental results show that the studied XAI methods generate different top features under different circumstances. The identified features, including the shared top important features, demonstrate clinical significance for diagnosis under different scenarios. Our work shows that medical diagnostics is a promising domain for applying XAI technologies. We aim to further this coalition of important features and their respective impact on surrounding features by introducing peer influence as a means of identifying underlying relations.
This talk will introduce Stabbing Planes as a proof system, discuss its relevance, and indicate a few methods of finding lower bounds.
For a given dataset and machine learning (ML) task, various predictors with different internal reasoning can be constructed that all produce near-optimal test performance. However, due to this variance in reasoning, some predictors generalise while others perform unexpectedly on unseen data. The existence of multiple equally performing predictors exhibits underspecification of the ML pipeline used to produce them and presents challenges for credibility.
In this work, we propose identifying underspecification by estimating the variance of reasoning within a set of near-optimal predictors produced by a pipeline. We implement our approach by producing a set of well-performing predictors, computing their Shapley additive explanations and then measuring various distance and correlation metrics between them. We demonstrate our approach on multiple datasets drawn from the literature, and in a COVID-19 virus transmission case study.
In this project, we study the Nash equilibria of three-player non-cooperative games in which each player has two strategies. We present a program that outputs a full description of these equilibria even for degenerate games, which has not been done before. These games are the simplest games with more than two players. Here the players' best-response functions are non-linear, and a new formulation for these functions is proposed. The procedure for finding Nash equilibria is divided into two main algorithms: one to compute the partially mixed equilibria, and one to compute the completely mixed equilibria. Both algorithms are implemented in Python with a 3D representation of best-response surfaces. The program will be added to the Game Theory Explorer, a web application for analysing different types of games.
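The non-linearity of best responses in the three-player case can be seen in a small sketch (illustrative only, not the project's implementation; the payoff tensor A and the function names are hypothetical): player 1's expected payoffs contain products of the opponents' probabilities q and r, so the indifference boundary is a curve rather than a line, as it would be in a two-player game.

```python
# Illustrative sketch (not the project's code): best responses for player 1
# in a 2x2x2 game. A[i][j][k] is player 1's payoff when players 1, 2, 3
# play pure strategies i, j, k; q and r are the probabilities that
# players 2 and 3 play their first strategy.

def expected_payoffs(A, q, r):
    pj, pk = [q, 1 - q], [r, 1 - r]
    # Note the product pj[j] * pk[k]: payoffs are bilinear in (q, r),
    # which is what makes the best-response functions non-linear.
    return [sum(A[i][j][k] * pj[j] * pk[k]
                for j in range(2) for k in range(2))
            for i in range(2)]

def best_responses(A, q, r, eps=1e-9):
    u = expected_payoffs(A, q, r)
    return [i for i in range(2) if u[i] >= max(u) - eps]
```

Partially mixed equilibria fix one player's pure strategy and solve the resulting two-player subgame; completely mixed equilibria require all players to be indifferent simultaneously.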
A non-local game consists of a group of spatially separated players who are trying to convince an adversary (the referee) that they know the answers to a set of questions. Ever since the pioneering work of Bell in the 1960s this rather abstract framework has proven to be one of our most useful tools for understanding quantum advantage, that is, the ability of a quantum system to outperform a classical one. My PhD focuses on analysing non-local games by using techniques traditionally associated with the field of logic in computer science. I will present an overview of the questions in this area, and the progress made thus far in solving them.
Temporal graphs naturally model graphs whose underlying topology changes over time. Recently, the problems Temporal Vertex Cover (or TVC) and Sliding-Window Temporal Vertex Cover (or Δ-TVC, for time windows of a fixed length Δ) have been established as natural extensions of the classic Vertex Cover problem on static graphs, with connections to areas such as surveillance in sensor networks.
In this paper we initiate a systematic study of the complexity of TVC and Δ-TVC on sparse graphs. Our main result shows that for every Δ ≥ 2, Δ-TVC is NP-hard even when the underlying topology is described by a path or a cycle.
This resolves an open problem from the literature and shows a surprising contrast between Δ-TVC and TVC, for which we provide a polynomial-time algorithm in the same setting.
To circumvent this hardness, we present a number of exact and approximation algorithms for temporal graphs whose underlying topologies are given by a path, that have bounded vertex degree in every time step, or that admit a small-sized temporal vertex cover.
Feature attribution XAI algorithms enable their users to gain insight into the underlying patterns of large data sets through their feature importance calculations. Existing feature attribution algorithms treat all features in a data set homogeneously, which may lead to misinterpretation of the consequences of changing feature values. In this work, we consider partitioning features into controllable and uncontrollable features, and propose the Controllable fActor Feature Attribution (CAFA) approach to compute the relative importance of controllable features. CAFA differs from existing feature attribution algorithms mainly in the following aspects. First, as already mentioned, we distinguish between controllable and uncontrollable features instead of treating them homogeneously. Secondly, we compute the relative importance of controllable features through selective perturbation and global-for-local interpretation. We applied CAFA to two medical datasets, a lung cancer dataset and a breast cancer dataset. The experimental results show that with CAFA, although the model is built over all features, the explanations of controllable features do not interfere with the uncontrollable ones. We further studied CAFA on a COVID-19 non-pharmaceutical control measures dataset, which we collected ourselves. This study also showed that the controllable and uncontrollable features are relatively independent.
Financial Networks consist of banks (nodes) connected by debts (edges). We introduce the Interval Debt Model, in which debts are due at integer time intervals on a temporal graph with lifetime T. We define valid payment schedules as ones consistent with the obligations of the nodes in the input, and study the problem of finding a schedule which minimizes (or maximizes) the number of nodes which go bankrupt. We also study the problem of bailout minimization, in which the aim is to minimize the bailout payments made by a Financial Authority to make possible a schedule with no bankruptcies.
We find that bankruptcy minimization and maximization are NP-hard even on directed acyclic graphs of bounded degree; that bailout minimization is NP-hard on graphs of bounded degree; and that bankruptcy minimization remains weakly NP-hard on graphs with O(1) vertices.
Temporal graphs consist of an underlying graph G = (V, E) and an assignment t of timesteps to edges that specifies when each edge is active. This allows us to model time-sensitive spread through a network. We keep the set of timesteps the same and explore reordering them to optimise reachability. Previous work has mainly explored minimising spread, for applications such as epidemiology. Here, we look at the opposite problem of increasing movement through a graph. In particular, our goal is to reorder the timesteps assigned to the edges of our graph such that the minimum number of vertices reachable from any starting vertex is maximised. We will discuss which analogous structural and algorithmic results from the minimisation case hold in this setting. Maximising spread can be useful in situations where we would like information or resources to be shared efficiently, such as advertising or even vaccine rollout.
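To illustrate the objects involved (a sketch under assumptions, not the talk's algorithm; one common model is used here, in which the times along a temporal path must strictly increase), reachability from a single vertex can be computed in one pass over the edges sorted by time:

```python
# Illustrative sketch: single-source reachability in a temporal graph.
# `edges` is a list of (u, v, t) triples, meaning the undirected edge
# {u, v} is active at timestep t; a temporal path uses strictly
# increasing times along its edges.

def reachable(n, edges, source):
    INF = float("inf")
    arrival = [INF] * n          # earliest arrival time at each vertex
    arrival[source] = 0
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        # the edge is usable at time t from an endpoint reached earlier
        if arrival[u] < t and t < arrival[v]:
            arrival[v] = t
        elif arrival[v] < t and t < arrival[u]:
            arrival[u] = t
    return {v for v in range(n) if arrival[v] < INF}
```

Reordering the timesteps changes the sets this function returns; the optimisation problem above asks for the reordering that maximises the smallest such set over all sources.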
In the balanced allocations setting, we have to allocate m tasks (balls) sequentially into n servers (bins), so as to minimise the gap, i.e. the difference between the maximum load and the mean m/n. The simple process of allocating each ball to a random bin achieves w.h.p. a Θ(√((m/n)·log n)) gap when m ≫ n. A great improvement over this process is the Two-Choice process, where each ball queries the load of two randomly chosen bins and is then placed in the least loaded of the two. It achieves w.h.p. a log₂ log n + Θ(1) gap for m ≥ n.
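Both processes are easy to simulate; the following sketch (illustrative, not from the talk) reproduces the qualitative difference in gaps:

```python
import random

# Illustrative simulation of d-Choice allocation (d=1: One-Choice,
# d=2: Two-Choice). Returns the gap: maximum load minus the mean m/n.

def gap(n, m, d):
    loads = [0] * n
    for _ in range(m):
        # query d distinct random bins, place the ball in the least loaded
        i = min(random.sample(range(n), d), key=loads.__getitem__)
        loads[i] += 1
    return max(loads) - m / n
```

For, say, n = 100 and m = 10000, repeated runs typically show a markedly smaller gap for d = 2 than for d = 1, in line with the bounds above.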
In the first part of this talk we give some background on balanced allocation processes. In the second part, we investigate variants of the Two-Choice process where allocations are made under incomplete information, giving insights into this dramatic improvement of Two-Choice over One-Choice.
This is joint work with Thomas Sauerwald and John Sylvester.
In this talk we consider a variation of the three-dimensional Stable Roommates problem, which can also be thought of as a coalition formation game. Agents specify preference lists over their peers, and the task is to partition the agents into sets of size 3 based on these preferences. Instead of seeking a “stable” partition, our aim is to construct a partition in which no agent has “envy” for another; there is a close relation to stability. In this talk we assume that agents have additive preferences. For three possible definitions of envy-freeness, we show that it is NP-hard in general to find envy-free partitions, but we present new results on restrictions of each problem under which an envy-free partition can be found in polynomial time. These results help us explore the boundary between NP-hardness and polynomial-time solvability in coalition formation problems.
The internet can be viewed as a big game played on a graph, where each vertex makes forwarding decisions based on selfish preferences over the path the data takes. Changes in forwarding decisions by one vertex may cause others to change their decisions, causing a ripple effect through the graph, resulting in degraded performance.
Path formation games formalise and model this scenario along with other problems of interest. Although not every instance of path formation games has a solution, we prove that some special cases are guaranteed to have an efficiently computable solution.
We also show that the absence of a structure known as a dispute wheel, originally introduced for the Stable Paths Problem, is a sufficient condition for a game to have a solution.
This is joint work with Dr Arno Pauly.
Pebble games are a type of Spoiler-Duplicator game used in finite model theory to compare models and, as a result, examine the expressive power of logical systems. Recently, these games were shown to provide an equivalent intermediary between fragments of different logical languages and categorical structures named comonads. This realises a unified language that can be used to study combinatorics categorically and to introduce alternative consolidated proofs for results with a common general form. This talk will begin with an overview of this research project, originating from the seminal work of Abramsky, Dawar and Wang. The second part of the talk will focus on a new type of pebble game which utilises the idea of a hidden pebble and yields a novel homomorphism-counting theorem via a comonad whose coalgebras characterise pathwidth. The talk will conclude with remarks on possible research directions involving this comonad.
Logics of confluence, defined by generalisations of the axiom of confluence, cover a wide range of standard modal logics including modal logic K and single axiom extensions with the axioms T, B, 5, BAN, alt1, G0111, G and D, which are simple instances of a generalised pattern of axioms of confluence. Two kinds of tableau calculi for this class of logics using respectively structural rules and propagation rules are studied. Structural rules ensure that the constructed models satisfy the characteristic frame conditions of the logic, whereas propagation rules only construct pre-models which are sufficient to discover unsatisfiability and can be completed or adapted to give concrete models.
Linearly ordered colouring is a special type of hypergraph colouring. In normal hypergraph colouring, one must colour each vertex with one of K different colours such that no hyperedge is monochromatic. In linearly ordered colouring, one must instead colour the vertices so that every hyperedge has a unique vertex of maximum colour. In this talk we discuss several interesting results about such colourings -- both the tractability of certain cases and the hardness of others. In particular, we will discuss "promise" problems: rather than trying to find a linearly ordered colouring with K colours for arbitrary hypergraphs, we will assume that the hypergraph admits a linearly ordered colouring with fewer than K colours.
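Checking that a candidate colouring is linearly ordered is straightforward (a sketch with hypothetical names, not from the talk; colours are integers compared under the usual order):

```python
# Sketch: verify a linearly ordered colouring. `colour` maps vertices to
# integers; the colouring is valid iff every hyperedge has exactly one
# vertex of maximum colour.

def is_lo_colouring(hyperedges, colour):
    for e in hyperedges:
        cs = [colour[v] for v in e]
        if cs.count(max(cs)) != 1:
            return False
    return True
```

The hardness results concern finding such a colouring, not verifying one; verification is clearly polynomial.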
The growth of Artificial Intelligence (AI) in various aspects of human society has affected the landscape of human cognition, social order and political power. Machine Learning (ML) algorithms are increasingly becoming responsible for decision-making in many industry sectors owing to their efficiency in performing tasks. However, the growing worry is that ML algorithms are fraught with opacity and bias. Some legal scholars and data scientists have claimed that the right to explanation exists and is a persuasively appealing approach for challenging and providing redress to ML opacity. However, the existence or otherwise of such a right is at the centre of global debate. This paper investigates whether there is a legal right to explanation of ML algorithms in the context of the GDPR. The paper analyses the arguments of various scholars on both sides of the divide with a view to ascertaining whether the right actually exists.
We study a scheduling problem arising in demand response management in smart grids. Consumers send in power requests with a flexible feasible time interval during which their requests can be served. The grid controller, upon receiving power requests, schedules each request within the specified interval. The electricity cost is measured by a convex function of the load in each time slot. The objective is to schedule all requests with the minimum total electricity cost. Previous work has studied a special case where the power requirement and the duration a request demands are both unit-size. We extend this case by allowing jobs to have precedence constraints. We present a polynomial-time offline algorithm that gives an optimal solution when the feasible time intervals for the jobs in order of precedence are the same.
This is joint work with Wing-Kai Hon, Hsiang-Hsuan Liu, Deyu Sun, and Prudence W.H. Wong.
Parallelisation of SAT solving is an area that faces many challenges, including the sharing of information between threads, which restricts performance. In this work, we present steps towards massively parallelising SAT checking using new GPGPU architectures in a manner that is both loosely coupled and fully scalable. We demonstrate techniques for the successful parallelisation of SAT in these special environments and present early results in favour of their use.
It is useful to be able to reduce one set of boolean constraints to another, which is simpler relative to some complexity measure, in such a way that solutions can be translated back to the original problem.
One operation which achieves this and has been utilised in both theoretical and practical work on the propositional satisfiability problem is Davis-Putnam reduction, which eliminates a variable v from a clause-set F by replacing the clauses containing v with their resolvents (on v).
In general, while reducing the number of variables, DP-reduction may increase the number of clauses. However, the special case of singular DP-reduction, in which the eliminated variable occurs in one of its polarities only once, decreases the number of clauses as well as variables. My talk will cover some structural properties of singular DP-reduction on minimally unsatisfiable clause-sets.
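The operation can be sketched as follows (an illustrative implementation, not from the talk, with clauses represented as frozensets of non-zero integers: v for the positive literal, -v for the negative):

```python
# Sketch of DP-reduction: eliminate variable v from clause-set F by
# replacing all clauses containing v or -v with their non-tautological
# resolvents on v.

def dp_reduce(F, v):
    pos = [C for C in F if v in C]
    neg = [C for C in F if -v in C]
    rest = {C for C in F if v not in C and -v not in C}
    resolvents = set()
    for C in pos:
        for D in neg:
            R = (C | D) - {v, -v}
            if not any(-lit in R for lit in R):  # drop tautologies
                resolvents.add(frozenset(R))
    return rest | resolvents
```

In the singular case, say with v occurring positively in only one clause, the |pos| + |neg| original clauses are replaced by at most |neg| resolvents, so the number of clauses cannot increase.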
Imagine the following: we have a polygon-shaped platform P and a single static spotlight outside of P. Which direction should the spotlight face to light the largest part of the platform? More formally, we define and provide an algorithm for the problem of finding the maximum intersection between a polygon and a rotating field of view. Simply guessing the direction that yields the maximum intersection would require going through an infinite number of candidates. We instead provide an FPTAS that approximates the rotation yielding the maximum intersection.
Neuronal network computation and computation by avalanche-supporting networks are of interest to the fields of physics, computer science (computation theory as well as statistical or machine learning) and neuroscience. Here we show that computation of complex Boolean functions on inputs arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), computed by logic motifs. We also obtain lower bounds on function probabilities, and show that these results agree with those obtained by others.
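As a minimal illustration of the role of inhibition (a toy example, not the networks studied in the talk): with only non-negative weights a single threshold unit cannot compute XOR, but one inhibitory connection suffices in a three-unit network.

```python
# Toy example: a three-unit threshold network computing XOR, using one
# inhibitory (negative-weight) connection as the antagonism.

def unit(weights, threshold, inputs):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def xor(a, b):
    h = unit([1, 1], 2, [a, b])            # AND unit: fires iff a and b fire
    return unit([1, 1, -2], 1, [a, b, h])  # OR unit, inhibited by the AND unit
```

This AND-inhibits-OR pattern is one example of the kind of logic motif through which richer Boolean functions can emerge.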
Nondeterministic Good-for-MDP (GFM) automata are for MDP model checking and reinforcement learning what good-for-games automata are for synthesis: a more compact alternative to deterministic automata that displays nondeterminism, but only so much that it can be resolved locally, such that a syntactic product can be analysed. GFM has recently been introduced as a property for reinforcement learning, where the simpler Büchi acceptance conditions it allows one to use are key. However, while there are classic and novel techniques for obtaining automata that are GFM, there has not been a decision procedure for checking whether or not an automaton is GFM. We show that GFM-ness is decidable and provide an EXPTIME decision procedure as well as a PSPACE-hardness proof.