Group Overview
The Swansea Theory Group is internationally renowned for its research in Logic in Computer Science. The active research areas are: Computability Theory, Computational Complexity, Proof Theory, Type Theory, Game Theory, Algorithms, Formal Methods (Automated and Interactive Theorem Proving), Cyber Security, Blockchain Technology, Verification of Railway Control Systems, and Artificial Intelligence (Satisfiability Solving, Multi-agent Systems, Argumentation Theory, Trustable Machine Learning, AI and Law).
The research on Railway Control Systems has led to the formation of the Swansea Railway Verification Group, which has also delivered two Impact Case Studies. Another recently formed subgroup studies the Educational, Historical, and Philosophical Foundations of Computer Science.
Computability
The Computability Group in Swansea investigates the fundamental aspects of computation and their applications in computer science and mathematics.
We are a leading group in computable analysis, which is concerned with the theory of computation with objects such as real numbers and continuous functions. We explore the computational content of existence theorems from mathematics, primarily using Weihrauch complexity as the framework.
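To give a flavour of computation with real numbers, here is a minimal, self-contained Python sketch. The encoding is our own illustration, not the group's tooling: a computable real is modelled as a function that, given a precision parameter n, returns a rational within 2^-n of the number.

```python
from fractions import Fraction

# Illustrative encoding: a computable real is a function that, given a
# precision parameter n, returns a rational within 2**-n of the number.

def const(q):
    """Embed a rational number as a computable real."""
    return lambda n: Fraction(q)

def add(x, y):
    """Sum of two computable reals: ask each argument for one extra bit,
    so the two errors together stay below 2**-n."""
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2(n):
    """Approximate sqrt(2) to within 2**-n by interval bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

# Example: sqrt(2) + sqrt(2), correct to within 2**-20.
approx = add(sqrt2, sqrt2)(20)
```

Already at this toy level one sees the characteristic phenomenon of computable analysis: operations on reals must be defined via approximations, and the precision bookkeeping (here, the extra bit requested in `add`) is exactly what the general theory makes rigorous.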
Algorithms and Complexity
The Algorithms and Complexity group investigates the interplay between formal logic and algorithmic design through the lens of computational complexity, bridging theoretical and applied perspectives. We analyse foundational questions about computational limits and verification frameworks, while exploring concrete applications such as solver algorithms for combinatorial puzzles (e.g., Sudoku) and automated theorem proving systems. Theoretical work focuses on deriving upper and lower bounds for problems and characterising proof-system expressivity; practical efforts develop efficient solvers and verification tools. By unifying mathematical rigour with software implementation, we advance understanding of how logical structures constrain or enable efficient computation, studying proofs both as theoretical objects (length, complexity) and practical artifacts (generation, validation). This dual focus connects abstract complexity theory with real-world problem solving.
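As a toy illustration of the solver side, here is a minimal DPLL-style SAT solver sketch in Python. The encoding and names are illustrative, not drawn from any particular solver; production solvers add clause learning, watched literals, and heuristics.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver sketch (illustrative, not optimised).

    Clauses are lists of non-zero ints; a negative literal -v means "not v".
    Returns a satisfying assignment as a set of true literals, or None.
    """
    if assignment is None:
        assignment = set()

    # Unit propagation: repeatedly assign literals forced by unit clauses.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            rest = [lit for lit in clause if -lit not in assignment]
            if not rest:
                return None  # conflict: every literal falsified
            if len(rest) == 1:
                assignment.add(rest[0])
                changed = True
            else:
                simplified.append(rest)
        clauses = simplified

    if not clauses:
        return assignment
    # Branch on the first literal of the first remaining clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(clauses, assignment | {choice})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```

Encoding a Sudoku as such clauses (one variable per cell/digit pair, plus uniqueness constraints) reduces the puzzle to exactly this kind of search, which is the bridge between the practical and proof-complexity sides of our work.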
Semantics, Specification, and Verification
We study formal methods, e.g., in the context of establishing a system’s safety or its cyber security. To this end, we study and formulate the semantics of logics, specification languages, and programming languages. This allows us to devise concise system models and specify expected properties. Utilising semantics, we develop analysis methods of mathematical strength and prove them to be correct and (possibly) complete. We implement our analysis methods and apply them in industrial contexts, e.g., for safety analyses in railway signalling, in the cyber security of mobile computing, or in the access security of cryptocurrencies. We are also interested in the philosophical considerations of such methods, for example what is meant by trustworthiness or a convincing proof, and the historically contingent ways in which such ideals are contested.
Proof Theory, Type Theory, and Machine-Assisted Theorem Proving
The Proof Theory and Type Theory Group at Swansea focuses on the ordinal analysis of mathematical theories to quantify their consistency strength. A key area of research is Martin-Löf Type Theory (MLTT) and its extensions, such as the Mahlo Universe, induction-recursion, and induction-induction.
Our work includes extracting programs from proofs, encompassing not only functional but also concurrent and imperative programs. We develop proof-theoretically strong extensions of MLTT, which are predicatively justified as part of a revised Hilbert programme. We also verify the correctness of programs, including those for electronic voting and smart contracts, using interactive theorem provers like Coq, Agda, Minlog, Lean, and Prawf.
We explore dependently typed programming, focusing on coinduction, object-based programs, and verified programs with graphical user interfaces. Additionally, we develop proof checkers for automated theorem provers, such as SAT/SMT solvers, and investigate the foundations of coinduction to formalize infinite structures. Our research also delves into proof-theoretic semantics and the transformation of proofs. In addition, we use theorem provers to construct vote-counting algorithms and cryptographic protocols and prove them correct.
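To illustrate what a proof checker for a SAT solver does, here is a minimal sketch of RUP (reverse unit propagation) checking in Python. The function names and clause encoding are our own, and real checkers (e.g., for DRAT proofs) are considerably more involved: a proof clause is accepted if assuming its negation lets unit propagation derive a conflict from the clauses known so far.

```python
def unit_propagate_conflict(clauses, assignment):
    """Return True iff unit propagation from `assignment` hits a conflict."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            rest = [lit for lit in clause if -lit not in assignment]
            if not rest:
                return True  # every literal falsified: conflict
            if len(rest) == 1 and rest[0] not in assignment:
                assignment.add(rest[0])
                changed = True
    return False

def check_rup_proof(formula, proof):
    """Check a RUP refutation: each proof clause must follow from the
    formula (plus earlier proof clauses) by unit propagation on its
    negation, and the proof must end with the empty clause."""
    known = list(formula)
    for clause in proof:
        if not unit_propagate_conflict(known, {-lit for lit in clause}):
            return False
        known.append(clause)
    return bool(proof) and proof[-1] == []

# The formula below is unsatisfiable; [[1], []] is a valid RUP refutation.
formula = [[1, 2], [1, -2], [-1, 2], [-1, -2]]
```

The point of such checkers is that a small, independently verified program can certify the output of a large, highly optimised solver, which is central to trusting automated theorem provers.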
Trustworthy AI
Trustworthy AI often refers to generic policy concepts: fairness and inclusiveness, privacy and security, transparency, accountability, and reliability and safety. It may also relate to certification (verification and validation of systems). These concepts bear on the extent to which users trust a computational system to consistently and reliably serve as intended and behave within acceptable bounds.
Trustworthy AI is often seen from the perspective of the law as applied to the computational system; that is, can a legal professional argue that some system has abided by the law or not, and if not, which party to hold liable. For example, has some computational system breached an individual’s data privacy? The question is whether some policy has been violated in the course of executing the system. The impact in Computer Science might be that the developers or maintainers of the system are liable for violations of the policies, particularly with respect to reporting operations such that legal professionals can evaluate the system.
However, more relevant to Computer Science research is to build the policy concepts into the computational system: that is, the system in operation abides by the policy concepts or, in some instances, violates them. Some concepts are under development, e.g., security, privacy, transparency, reliability, safety, and certification. Other concepts, e.g., fairness and inclusivity, may relate more to data than to the system itself. More challenging is to address specifically legally defined behaviour, which underlies the concepts. For instance, in autonomous vehicles, what does it mean for the vehicle to abide by the Highway Code and, in some instances, to violate specific laws yet avoid penalty given mitigating circumstances?
For this, we require formal representations of the law which can be implemented and reasoned with. As such, it draws on and provides use cases for the work of other theory subgroups. For example, formal representation, formal methods (verification and validation), theorem proving, SAT, and other theoretical approaches may be brought to bear in order to augment the trustworthiness of the system. In this sense, Trustworthy AI provides an application area for theoretical work. By the same token, Trustworthy AI may identify problems as yet unaddressed in theoretical approaches, e.g., vagueness or open texture, non-monotonicity, violability, or others.
Current work explores the following:
- Modelling of the UK Highway Code in Prolog and tying the model to a multi-agent driving simulator.
- Formal theories to represent fine-grained preferences (values) which are used to reason about and prioritise action choice relative to the law.
- Methods to extract (rule-based or with LLMs) and represent (in Semantic Web languages such as LegalRuleML, OWL, SHACL-SPARQL, or some specialised annotation language) legal information from legislation and regulation.
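As a toy illustration of the prioritised, defeasible rules such models rely on, here is a short Python sketch. The scenario and rule names are invented for illustration and are not taken from the group's actual Highway Code model: a default permission is defeated by a red light, which is in turn overridden by a police officer's signal.

```python
# Toy defeasible rules: (priority, condition, conclusion).
# Higher-priority (more specific) rules defeat lower-priority ones.
# Scenario and names are illustrative inventions, not the group's model.
RULES = [
    (0, lambda facts: True, True),                    # default: may proceed
    (1, lambda facts: "red_light" in facts, False),   # red light forbids it
    (2, lambda facts: "police_waves_on" in facts, True),  # officer overrides
]

def permitted_to_proceed(facts):
    """Apply rules in order of priority; the last applicable rule wins."""
    verdict = None
    for priority, condition, conclusion in sorted(RULES):
        if condition(facts):
            verdict = conclusion
    return verdict

# Red light alone forbids; an officer waving the agent on overrides it.
permitted_to_proceed({"red_light"})
permitted_to_proceed({"red_light", "police_waves_on"})
```

This "last applicable rule wins" scheme is a crude stand-in for the argumentation-theoretic and preference-based machinery the research actually uses, but it conveys why violability and rule priorities must be first-class citizens in formal representations of law.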