Here you can find all information about the mini-workshop for theoretical computer science.

 

Mini-Workshop

 

Theoretical Computer Science in Dortmund

  

A twice-yearly mini-workshop for theoretical computer science at the University of Dortmund. Each session features two to three conference-length (25-minute) talks.

Next session

The 31st TCS mini-workshop will be hosted by Chair 1.
 
 Time:   16.15, July 24, 2017
 Place:   OH12-3.031
 Organizers:   Thomas Schwentick and Nils Vortmeier
  

Program:

  • Johannes Fischer: Lempel-Ziv Compression in a Sliding Window
    • Abstract: We present new algorithms for the sliding window Lempel-Ziv (LZ77) problem and the approximate rightmost LZ77 parsing problem. Our main result is a new and surprisingly simple algorithm that computes the sliding window LZ77 parse in O(w) space and either O(n) expected time or O(n log log w + z log log σ) deterministic time. Here, w is the window size, n is the size of the input string, z is the number of phrases in the parse, and σ is the size of the alphabet. This matches the space and time bounds of previous results while removing the restriction to constant-size alphabets. To achieve our result, we combine a simple modification and augmentation of the suffix tree with periodicity properties of sliding windows. We also apply this new technique to obtain an algorithm for the approximate rightmost LZ77 problem that uses O(n(log z + log log n)) time and O(n) space and produces a (1+ε)-approximation of the rightmost parsing (for any constant ε>0). While this does not improve the best known time-space trade-offs for exact rightmost parsing, our algorithm is significantly simpler and exposes a direct connection between sliding window parsing and the approximate rightmost matching problem.
  • Thomas Liebig: Dynamic Transfer Patterns for Fast Multi-modal Route Planning
    • Abstract: Route planning makes direct use of geographic data and provides beneficial recommendations to the public. In the real world, transit schedules are dynamic and delays occur. Incorporating these dynamic schedule changes into multi-modal route computation is difficult and requires considerable computational resources. Our approach extends the state of the art for static transit schedules, Transfer Patterns, to the dynamic case: we amend the patterns with additional edges that cover the dynamics. Our approach is implemented in the open-source routing framework OpenTripPlanner and compared to existing methods in the city of Warsaw. Our results are an order of magnitude faster than existing methods.
    • Joint work with Sebastian Peter, Maciej Grzenda and Konstanty Junosza-Szaniawski
  • Nils Vortmeier: A Strategy for Dynamic Programs: Start over and Muddle through
    • Abstract: A strategy for constructing dynamic programs is introduced that utilises periodic computation of auxiliary data from scratch and the ability to maintain a query for a limited number of change steps. It is established that if some program can maintain a query for log n change steps after an AC1-computable initialisation, it can be maintained by a first-order dynamic program as well, i.e., in DynFO. As an application, it is shown that decision and optimisation problems defined by monadic second-order (MSO) and guarded second-order logic (GSO) formulas are in DynFO, if only change sequences that produce graphs of bounded treewidth are allowed. To establish this result, Feferman–Vaught-type composition theorems for MSO and GSO are established that might be useful in their own right.
    • Joint work with Samir Datta, Anish Mukherjee, Thomas Schwentick and Thomas Zeume
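
To make the sliding-window LZ77 problem from Johannes Fischer's talk concrete, here is a quadratic brute-force parser. This is our own illustrative sketch (function and variable names are ours), not the algorithm of the talk, which replaces the inner scan by an augmented suffix tree to reach the stated bounds:

```python
def lz77_sliding(text, w):
    """Quadratic brute-force sliding-window LZ77 parse: at each position,
    copy the longest match starting within the last w characters, or emit
    a literal if no match exists."""
    phrases, i, n = [], 0, len(text)
    while i < n:
        best_len, best_off = 0, 0
        for j in range(max(0, i - w), i):     # candidate source positions
            l = 0
            while i + l < n and text[j + l] == text[i + l]:
                l += 1                         # copies may self-overlap
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len == 0:
            phrases.append((None, text[i]))    # literal phrase
            i += 1
        else:
            phrases.append((best_off, best_len))
            i += best_len
    return phrases
```

Decoding a phrase (off, len) simply copies len characters starting off positions back, which reconstructs the input exactly.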

Previous sessions

MW 30, January 19, 2017, Chair 1

  • Daan Apeldoorn: Exploiting Explicit Knowledge in the Context of Learning Agents
    • Abstract: According to psychological models, learned knowledge can be divided into implicit and explicit knowledge. The former can be exploited (e.g., to solve a task), but it cannot easily be verbalized (e.g., to explain it to another person). The latter is accessible in explicit form: it can comprise generalized, rule-based knowledge which can be verbalized and explained to others. During a learning process, the learned knowledge starts in implicit form and becomes explicit as the process progresses, and humans can benefit from exploiting such generalized, rule-based knowledge when learning. In this talk, the incorporation of implicit and explicit knowledge is investigated in the context of learning agents. It will be shown experimentally that learning agents are also able to benefit from explicit knowledge, even in early phases of a learning process.
    • Joint work with Gabriele Kern-Isberner
  • Henrik Björklund (Umeå Universitet, Sweden): Restricted Hyperedge Replacement Grammars and Parsing Complexity
    • Abstract: Hyperedge Replacement Grammars are a useful and expressive formalism for generating graph languages. Unfortunately, the uniform parsing problem for such grammars is NP-hard. We investigate restrictions which allow polynomial time parsing while still retaining enough expressive power to generate interesting languages. In particular, our search for suitable restrictions is guided by applications in natural language processing.
  • Dominik Köppl: Computing All Distinct Squares in Linear Time for Integer Alphabets
    • Abstract: Given a string over an integer alphabet, we present an algorithm that computes the set of all distinct squares occurring in this string in time linear in the string length. As an application, we show how to compute the tree topology of the minimal augmented suffix tree in linear time. Aside from that, we elaborate an algorithm computing the longest previous table in a succinct representation using compressed working space.
    • Joint work with Hideo Bannai and Shunsuke Inenaga
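
The object of study in Dominik Köppl's talk can be stated in a few lines of brute force. The following cubic sketch (ours, not the linear-time algorithm) enumerates distinct squares directly from the definition; recall that, by a classical bound of Fraenkel and Simpson, a string of length n contains fewer than 2n distinct squares:

```python
def distinct_squares(s):
    """Return the set of distinct squares (factors of the form uu)
    occurring in s -- cubic brute force."""
    n = len(s)
    found = set()
    for i in range(n):
        # try every half-length that still fits in the string
        for half in range(1, (n - i) // 2 + 1):
            if s[i:i + half] == s[i + half:i + 2 * half]:
                found.add(s[i:i + 2 * half])
    return found
```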

 

 MW 29, June 27, 2016, Chair 1

  • Andre Droschinsky: Faster Algorithms for the Maximum Common Subtree Isomorphism Problem
    • Abstract: The maximum common subtree isomorphism problem asks for the largest possible isomorphism between subtrees of two given input trees. This problem is a natural restriction of the maximum common subgraph problem, which is NP-hard in general graphs. Confining to trees renders polynomial time algorithms possible and is of fundamental importance for approaches on more general graph classes. Various variants of this problem in trees have been intensively studied. We consider the general case, where trees are neither rooted nor ordered and the isomorphism is maximum w.r.t. a weight function on the mapped vertices and edges. For trees of order n and maximum degree D our algorithm achieves a running time of O(n^2 D) by exploiting the structure of the matching instances arising as subproblems. Thus our algorithm outperforms the best previously known approaches. No faster algorithm is possible for trees of bounded degree and for trees of unbounded degree we show that a further reduction of the running time would directly improve the best known approach to the assignment problem. Combining a polynomial-delay algorithm for the enumeration of all maximum common subtree isomorphisms with central ideas of our new algorithm leads to an improvement of its running time from O(n^6 + T n^2) to O(n^3 + T n D), where n is the order of the larger tree, T is the number of different solutions, and D is the minimum of the maximum degrees of the input trees. Our theoretical results are supplemented by an experimental evaluation on synthetic and real-world instances.
    • Joint work with Nils M. Kriege and Petra Mutzel 
  • Anish Mukherjee (Chennai Mathematical Institute): Space-efficient Approximation Scheme for Maximum Matching in Sparse Graphs
    • Abstract: We present a Logspace Approximation Scheme (LSAS), i.e., an approximation algorithm for maximum matching in planar graphs (not necessarily bipartite) that achieves an approximation ratio arbitrarily close to one, using only logarithmic space. This deviates from Baker's well-known approach to approximation in planar graphs by avoiding the use of distance computation, which is not known to be in Logspace. Our algorithm actually works for any “recursively sparse” graph class that contains a linear-size matching, and also for certain other classes such as bounded-genus graphs.

      The scheme is based on an LSAS in bounded degree graphs which are not known to be amenable to Baker’s method. We solve the bounded degree case by parallel augmentation of short augmenting paths. Finding a large number of such disjoint paths can, in turn, be reduced to finding a large independent set in a bounded degree graph. The bounded degree assumption allows us to obtain a Logspace algorithm.
    • Joint work with Samir Datta and Raghav Kulkarni
  • Martin Schuster: Transducer-based Rewriting Games for Active XML
    • Abstract: Context-free games are two-player rewriting games that are played on nested strings representing XML documents with embedded function symbols. These games were introduced to model rewriting processes for intensional documents in the Active XML framework, where input documents are to be rewritten into a given target schema by calls to external services.

      This talk is based on a paper which studies the setting where dependencies between inputs and outputs of service calls are modelled by transducers. The paper defines transducer models operating on nested words and studies their properties, as well as the computational complexity of the winning problem for transducer-based context-free games in several scenarios. While the complexity of this problem is quite high in most settings (ranging from NP-complete to undecidable), some tractable restrictions are also identified.
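
The short-augmenting-path idea behind the bounded-degree case of Anish Mukherjee's talk can be illustrated in a sequential toy version (ignoring all space bounds; function names are ours). A maximal matching is a 1/2-approximation, and a matching admitting no augmenting path of length at most three is already a 2/3-approximation; ruling out ever longer paths pushes the ratio arbitrarily close to one:

```python
def greedy_matching(adj):
    """Greedy maximal matching over an adjacency dict; mate maps each
    matched vertex to its partner (a 1/2-approximation)."""
    mate = {}
    for u in adj:
        if u not in mate:
            for v in adj[u]:
                if v not in mate:
                    mate[u], mate[v] = v, u
                    break
    return mate

def augment_length3(adj, mate):
    """Augment a maximal matching along length-3 alternating paths
    u - v == w - x (== is a matched edge, u and x free)."""
    improved = True
    while improved:
        improved = False
        for u in [v for v in adj if v not in mate]:
            if u in mate:
                continue
            for v in adj[u]:
                w = mate.get(v)
                if w is None:
                    continue
                # look for a free third endpoint x != u
                x = next((y for y in adj[w]
                          if y != v and y != u and y not in mate), None)
                if x is not None:
                    mate[u], mate[v] = v, u      # flip the path
                    mate[w], mate[x] = x, w
                    improved = True
                    break
    return mate
```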

 

 

MW 28, December 10, 2015, Chair 1

  • Christian Eichhorn: Sceptical Inference Based on C-representations and its Characterization as a Constraint Satisfaction Problem
    • Abstract: The axiomatic system P is an important standard for plausible, nonmonotonic inferences that is, however, known to be too weak to solve benchmark problems like irrelevance or subclass inheritance (the so-called drowning problem). Spohn's ranking functions, which provide a semantic base for system P, have often been used to design stronger inference relations, like Pearl's system Z, or c-representations. While each c-representation shows excellent inference properties and handles particularly irrelevance and subclass inheritance properly, it is still an open problem which c-representation is the best. In this paper, we focus on the generic properties of c-representations and consider the sceptical inference relation (c-inference) that is obtained by taking all c-representations of a given knowledge base into account. In particular, we show that c-inference preserves the properties of solving irrelevance and subclass inheritance which are met by every single c-representation. Moreover, we characterize sceptical c-inference as a constraint satisfaction problem so that constraint solvers can be used for its implementation.
    • Joint work with Christoph Beierle and Gabriele Kern-Isberner
  • Joachim Biskup: Optimality and Complexity of Inference-Proof Data Filtering and CQE
    • Abstract: The ample literature on confidentiality-preserving data publishing - and controlled query evaluation (CQE) in particular - leaves several questions open. Are the greedy data-filtering algorithms adopted in the literature maximally cooperative? Can novel secure view formats or answer distortion methods improve security or cooperativeness? What is the inherent complexity of confidentiality-preserving data publishing under different constraints, such as cooperativeness and availability? Can the theoretical results on CQE be systematically extended to more general settings? In this paper we answer the above questions using a completely generic, abstract data filtering framework, independent from any syntactic details and data source encodings, and compatible with all possible distortion methods. Some of the main results are: Refusal-based filterings can be adopted as a normal form for all kinds of filterings; greedy refusal-based filterings are optimal; cooperativeness checks and some availability checks are coNP-hard in the simplest case.
    • Joint work with Piero A. Bonatti, Clemente Galdi, and Luigi Sauro
  • Dominik Köppl: Efficiently Finding All Maximal α-gapped Repeats
    • Abstract: For α≥1, an α-gapped repeat in a word w is a factor uvu of w such that |uv|≤α|u|; the two factors u in such a repeat are called arms, while the factor v is called gap. Such a repeat is called maximal if its arms cannot be extended simultaneously with the same symbol to the right or, respectively, to the left. In this paper we show that the number of maximal α-gapped repeats that may occur in a word is upper bounded by 18αn. This allows us to construct an algorithm finding all the maximal α-gapped repeats of a word in O(αn); this is optimal, in the worst case, as there are words that have Θ(αn) maximal α-gapped repeats. Our techniques can be extended to get comparable results in the case of α-gapped palindromes, i.e., factors uvuT with |uv|≤α|u|.
    • Joint work with Paweł Gawrychowski, Tomohiro I, Shunsuke Inenaga, and Florin Manea
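
For intuition about Dominik Köppl's talk, this O(n³) sketch (ours, not the optimal algorithm) enumerates all α-gapped repeats, maximal or not, straight from the definition |uv| ≤ α|u|:

```python
def alpha_gapped_repeats(word, alpha):
    """Enumerate all alpha-gapped repeats as triples (i, |u|, |v|):
    factors u v u starting at i with |uv| <= alpha * |u|."""
    n = len(word)
    reps = []
    for i in range(n):
        for lu in range(1, (n - i) // 2 + 1):
            # |uv| <= alpha*|u|  <=>  |v| <= (alpha - 1)*|u|
            max_lv = min(int(alpha * lu) - lu, n - i - 2 * lu)
            for lv in range(0, max_lv + 1):
                if word[i:i + lu] == word[i + lu + lv:i + 2 * lu + lv]:
                    reps.append((i, lu, lv))
    return reps
```

For α = 1 the gap must be empty, so the output is exactly the set of square occurrences.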

 

MW 27, June 10th, 2015, Chair 1

  • Claudio Moraga: Design of circuits for Reversible Computing
    • Abstract: Reversible digital circuits are characterized by lower heat dissipation compared with classical digital circuits. They are therefore attracting increasing interest as Moore's Law approaches saturation. Furthermore, in the formalism of quantum mechanics all quantum computing operations must be reversible: elementary circuits ("gates") realize unitary matrices. Advances in the design of reversible computing circuits are therefore also important from the perspective of quantum computing, in the hope that a "quantum technology" will some day be available. Since it has been shown that deciding whether a reversible circuit with a bounded number of auxiliary lines is realizable with a given number of gates is NP-complete, heuristics for the design are being developed. Particular aspects of the design of reversible digital circuits will be covered in the talk.
  • Johannes Fischer: Computing and Approximating the Lempel-Ziv-77 Factorization in Small Space
    • Abstract: The Lempel-Ziv-77 algorithm greedily factorizes a text of length n into z maximal substrings that have previous occurrences, which is particularly useful for text compression. We review two recent algorithms for this task:
        1. A linear-time algorithm using essentially only one integer array of length n in addition to the text.
        2. An even more space-conscious algorithm using O(z) space, computing a 2-approximation of the LZ77 parse in O(n lg n) time w.h.p.
  • Gaetano Geck: Parallel-Correctness and Transferability for Conjunctive Queries
    • Abstract: A dominant cost for query evaluation in modern massively distributed systems is the number of communication rounds. For this reason, there is a growing interest in single-round multiway join algorithms where data is first reshuffled over many servers and then evaluated in a parallel but communication-free way. The reshuffling itself is specified as a distribution policy. We introduce a correctness condition, called parallel-correctness, for the evaluation of queries w.r.t. a distribution policy. We study the complexity of parallel-correctness for conjunctive queries as well as transferability of parallel-correctness between queries. We also investigate the complexity of transferability for certain families of distribution policies, including, for instance, the Hypercube distribution.
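
The Hypercube distribution mentioned in Gaetano Geck's talk can be sketched for the triangle query Q(x,y,z) :- R(x,y), S(y,z), T(z,x). The code below is an illustrative toy (server grid, hash choice, and names are ours, not the paper's framework): each tuple is replicated to every server whose coordinates agree with the hashes of the variables it binds, after which each server evaluates the join locally with no further communication:

```python
import itertools

def hypercube_triangles(R, S, T, p1, p2, p3):
    """Hypercube reshuffling of R(x,y), S(y,z), T(z,x) over a
    p1 x p2 x p3 grid of servers, then communication-free local join."""
    h = lambda v, m: hash(v) % m
    servers = {s: ([], [], []) for s in
               itertools.product(range(p1), range(p2), range(p3))}
    for (x, y) in R:                       # z unbound: replicate along axis 3
        for k in range(p3):
            servers[(h(x, p1), h(y, p2), k)][0].append((x, y))
    for (y, z) in S:                       # x unbound: replicate along axis 1
        for i in range(p1):
            servers[(i, h(y, p2), h(z, p3))][1].append((y, z))
    for (z, x) in T:                       # y unbound: replicate along axis 2
        for j in range(p2):
            servers[(h(x, p1), j, h(z, p3))][2].append((z, x))
    out = set()
    for (Rl, Sl, Tl) in servers.values():  # local evaluation on each server
        for (x, y) in Rl:
            for (y2, z) in Sl:
                if y2 == y and (z, x) in Tl:
                    out.add((x, y, z))
    return out
```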

MW 26, December 8th, 2014, Chair 1

  • Christian Eichhorn: LEG networks for ranking functions
    (joint work with Gabriele Kern-Isberner)
    • Abstract: When using representations of plausibility for semantical frameworks, the storage needed is usually exponential in the number of variables. Therefore, network-based approaches that decompose the semantical space have proven fruitful in environments with probabilistic information. For applications where qualitative information is preferable to quantitative information, ordinal conditional functions (OCF) offer a convenient methodology. Here, Bayesian-like networks, so-called OCF-networks, have been proposed for ranking functions. These networks not only suffer from problems similar to those of Bayesian networks, in particular allowing only restricted classes of conditional relationships; it has also been found recently that problems with admissibility may arise. In this paper we propose LEG networks for ranking functions, again carrying over an idea from probabilistic networks. OCF-LEG networks can be built for any conditional knowledge base and filled with local OCFs that can be found by inductive reasoning. A global OCF is set up from the local ones, and it is shown that the global OCF is admissible with respect to the underlying knowledge base.
  • Henrik Björklund: Abstract Meaning Representations and Graph Grammars
    • Abstract: Abstract Meaning Representation (AMR) is a formalism for describing the semantics of natural language sentences as directed acyclic graphs. We discuss various types of graph grammars for generating AMRs and the computational complexity of their corresponding parsing problems.
  • Thomas Zeume: Maintaining Reachability in Dynamically Changing Graphs using First-Order Logic
    (joint work with Samir Datta, Raghav Kulkarni, Anish Mukherjee and Thomas Schwentick)
    • Abstract: In this talk we study the dynamic complexity of Reachability. As elementary change operations we allow insertion and deletion of edges of a graph. We show that Reachability is in DynFO, where DynFO allows first-order definable updates of a polynomial-size auxiliary data structure. This result confirms a two decade old conjecture of Immerman and Patnaik (1997).
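
For the insertion-only fragment of the problem in Thomas Zeume's talk, the first-order update rule is easy to state: after inserting the edge (u,v), x reaches y iff it did before, or x reached u and v reached y before. A minimal sketch of that rule follows (names are ours; deletions, the actual difficulty of the DynFO result, are not handled here):

```python
class DynReach:
    """Maintain the reachability relation of a digraph on n vertices
    under edge *insertions* with a single first-order-style update."""
    def __init__(self, n):
        self.n = n
        self.reach = [[i == j for j in range(n)] for i in range(n)]

    def insert(self, u, v):
        r = self.reach
        to_u = [r[x][u] for x in range(self.n)]   # old:  x ->* u
        from_v = r[v][:]                          # old:  v ->* y
        for x in range(self.n):
            for y in range(self.n):
                if to_u[x] and from_v[y]:         # new path x ->* u -> v ->* y
                    r[x][y] = True
```

Each update touches the whole auxiliary relation but uses only first-order-expressible conditions, which is the flavour of definition used in DynFO.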

 

MW 25, June 30th, 2014, Chair 1

Wim Martens: Efficient Separability of Regular Languages by Subsequences and Suffixes 
(joint work with Wojciech Czerwiński and Tomáš Masopust)

  • Abstract: When can two regular word languages K and L be separated by a simple language? We investigate this question and consider separation by piecewise- and suffix-testable languages and variants thereof. We give characterizations of when two languages can be separated and present an overview of when these problems can be decided in polynomial time if K and L are given by nondeterministic automata.

Samir Datta: Dynamic Complexity of Reachability and Matching
(joint work with William Hesse and Raghav Kulkarni)

  • Abstract: We study some well-known graph problems such as reachability and matching in the context of dynamic complexity. In particular, we show the maintainability of:

(a) maximum matching in non-uniform DynTC^0,
(b) digraph reachability in non-uniform DynAC^0[2], and,
(c) embedded planar digraph reachability in DynFO.
For (a) we show that the technique from [Hesse] for maintaining directed reachability in DynTC^0 can in fact be generalized, using the characterization of determinants by [MahajanVinay], to maintain the determinant of a matrix in DynTC^0.
For (b) we extend this technique with the help of two more ingredients, namely isolation (cf. [AllenderReinhardtZhou]) and truncated approximation using rational polynomials, to achieve a DynAC^0[2] bound.
For (c) we use the duality between cuts and cycles in embedded planar graphs to maintain the number of crossings between carefully chosen primal and dual paths.

Sven Rahmann:  Non-Negative Matrix Factorization: A Need for Theory and Exact Algorithms

  • Abstract:  see here

MW 24, November 18th, 2013, Chair 1

  1. Matthias Niewerth: Reasoning about XML Constraints based on XML-to-relational mappings
    (joint work with Thomas Schwentick)
    1. Abstract: We introduce a simple framework for the specification of constraints for XML documents in which constraints are specified by (1) a mapping that extracts a relation from every XML document and (2) a relational constraint on the resulting relation. The mapping has to be generic with respect to the actual data values and the relational constraints can be of any kind. Besides giving a general undecidability result for first-order definable mappings and a general decidability result for MSO definable mappings for restricted functional dependencies, we also study the complexity of the implication problem for XML constraints that are specified by tree pattern queries and functional dependencies. Furthermore, we highlight how other specification languages for XML constraints can be formulated in the framework.
  2. Jakob Rehof: Subtype entailment
    1. Abstract: The subtype entailment problem, which has been open since 1996, is the question of decidability of the following decision problem: Given a finite set C of subtyping constraints (partial order constraints between simple type expressions) and a subtyping constraint c, does the set C entail the constraint c (in the sense that every valuation satisfying all constraints in C must also satisfy c)? Entailment is considered over a particular partial order (subtyping relation) on trees, and valuations map type variables to trees. In the talk we will focus on a reduction of the entailment problem to the universality problem for a special pushdown automaton model.
  3. Kristian Kersting: Towards Spectral Color Refinement (joint work with Martin Mladenov and Martin Grohe)
    1. Abstract: Colour refinement is a basic algorithmic routine for graph isomorphism testing that can be used as a subroutine in almost all practical isomorphism solvers. It partitions the vertices of a graph into "colour classes" in such a way that all vertices in the same colour class have the same number of neighbours in every colour class. Recently it has been shown to also have major implications for inference within probabilistic models. For instance, applied to loopy belief propagation (LBP), it partitions the random variables of the graphical model in such a way that all variables in the same colour class have the same computation trees. Consequently, one can construct a lifted model, where each node represents a set of random variables that all pass the same messages during LBP. Running LBP on this network can be significantly faster.
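
Colour refinement, the routine at the heart of Kristian Kersting's talk, fits in a dozen lines. This is a straightforward, unoptimized implementation (refined partition-refinement implementations run in quasilinear time):

```python
def color_refinement(adj):
    """Coarsest stable colouring: vertices get the same colour iff they
    have the same number of neighbours in every colour class (1-dim WL)."""
    colour = {v: 0 for v in adj}
    while True:
        # signature = own colour plus multiset of neighbour colours
        sig = {v: (colour[v], tuple(sorted(colour[w] for w in adj[v])))
               for v in adj}
        # canonically rename signatures to 0..k-1
        names = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: names[sig[v]] for v in adj}
        if new == colour:        # partition stable: done
            return colour
        colour = new
```

On a path a-b-c-d, for example, the two endpoints end up in one class and the two inner vertices in another.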

MW 23, June 24th, 2013, Chair 1

  1. Ahmet Kara: Dynamic Communicating Automata and Branching High-Level MSCs
    (joint work with Benedikt Bollig, Aiswarya Cyriac, Loïc Hélouët and Thomas Schwentick)
    1. Abstract: We study dynamic communicating automata (DCA), an extension of classical communicating finite-state machines that allows for dynamic creation of processes. The behavior of a DCA can be described as a set of message sequence charts (MSCs). While DCA serve as a model of an implementation, we propose branching high-level MSCs (bHMSCs) on the specification side. Our focus is on the implementability problem: given a bHMSC, can one construct an equivalent DCA? As this problem is undecidable, we introduce the notion of executability, a decidable necessary criterion for implementability. We show that executability of bHMSCs is EXPTIME-complete. We then identify a class of bHMSCs for which executability effectively implies implementability.
  2. Marc Gillé: OBDD-Based Representation of Interval Graphs
    1. Abstract: Symbolic/implicit OBDD-based graph algorithms deal with the characteristic function χ_E of the edge set of a graph G = (V, E), given as an OBDD, to obtain a (hopefully) compact representation and to solve graph optimization problems mainly by functional operations. While the representation size cannot be small in general, the idea is that it can be provably small for special graph classes, which then also leads to fast algorithms. In this paper, we show that the OBDD size of unit interval graphs is O(|V|/log |V|) and the OBDD size of interval graphs is O(|V| log |V|), which both improve a known result from Nunkesser and Woelfel [23]. Furthermore, we can show that using our variable order and node labeling for interval graphs the worst-case OBDD size is Ω(|V| log |V|). We use the structure of the adjacency matrices to prove these bounds. This method may be of independent interest and can be applied to other graph classes. We also develop a maximum matching algorithm on unit interval graphs using O(log |V|) operations and evaluate this algorithm empirically.
  3. Marco Wilhelm: Probabilistic Knowledge Representation Using Gröbner Basis Theory
    (joint work with Gabriele Kern-Isberner and Christoph Beierle)
    1. Abstract: An often used methodology for reasoning with probabilistic conditional knowledge bases is provided by the principle of maximum entropy (so-called MaxEnt principle) that realises an idea of informational economy. We exploit the fact that MaxEnt distributions can be computed by solving nonlinear equation systems that reflect the conditional logical structure of these distributions. Further, we apply the theory of Gröbner bases that is well known from computational algebra to the polynomial system which is associated with a MaxEnt distribution, in order to obtain results for reasoning with maximum entropy. A necessary condition for knowledge bases to be consistent, as well as approaches to answering MaxEnt queries, can be presented. Finally, we discuss computational methods to establish general MaxEnt inference rules.

MW 22, February 20th, 2013, Chair 1

  1. Melanie Schmidt: Coresets, k-means Clustering and Projective Clustering
    1. Abstract: Coresets are a technique to compress huge amounts of data in a way that still allows us to solve a specific optimization problem. The k-means clustering problem is a geometric clustering problem, where we are given a set of points P in Euclidean space and ask for k centers that minimize the sum of the squared distances of every point to its closest center. Here, a coreset is a (small) weighted set of points that has approximately the same cost as the original input for every set of centers. This implies that solving the problem on the coreset yields an approximation for the original problem. The coreset presented in this talk is of constant size; more specifically, the number of points in the coreset depends only on k and the precision parameter.
  2. Moritz Martens: NP-Completeness of Intersection Type Matching
    1. Abstract: Type matching problems occur in a number of contexts, including library search, component composition, and inhabitation. The intersection type matching problem under the standard notion of subtyping for intersection types will be considered: Given types tau and sigma, where sigma is a constant type, does there exist a type substitution S such that S(tau) is a subtype of sigma? We show that the matching problem is NP-complete. NP-hardness holds already for the restriction to atomic substitutions. Membership in NP is challenging, and we provide an algorithm engineered for efficiency. Our algorithm is based on a nondeterministic polynomial time normalization procedure for subtype constraint systems with intersection types.
  3. Thomas Zeume: On the quantifier-free dynamic complexity of Reachability
    (joint work with Thomas Schwentick)
    1. Abstract: The dynamic complexity of the reachability query is studied in the dynamic complexity framework of Patnaik and Immerman, restricted to quantifier-free update formulas. We show that, with this restriction, the reachability query cannot be dynamically maintained, neither with binary auxiliary relations nor with unary auxiliary functions, and that ternary auxiliary relations are more powerful with respect to graph queries than binary auxiliary relations. We also establish that Reachability cannot be maintained with auxiliary relations or functions of arbitrary arity in the scenario, where update sequences are applied to a non-empty graph and the initialization of the auxiliary relations is permutation-invariant. Furthermore, the inexpressibility results are transferred to some other queries and some normal forms are established.
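
The coreset property from Melanie Schmidt's talk, approximately the same cost as the full input for every candidate set of centers, can be made concrete with a weighted cost function. The "coreset" below is only a uniform sample used as a toy stand-in (genuine constant-size coresets need importance sampling or the constructions from the talk); all names are ours:

```python
import random

def kmeans_cost(points, centers, weights=None):
    """Weighted k-means cost: sum of w * squared distance from each
    point to its nearest center."""
    if weights is None:
        weights = [1.0] * len(points)
    total = 0.0
    for p, w in zip(points, weights):
        total += w * min(sum((a - b) ** 2 for a, b in zip(p, c))
                         for c in centers)
    return total

def uniform_sample_summary(points, m, seed=0):
    """Toy stand-in for a coreset: m uniform samples, each weighted
    n/m so that expected cost matches the full input."""
    rnd = random.Random(seed)
    n = len(points)
    sample = rnd.sample(points, m)
    return sample, [n / m] * m
```

A real coreset would make the two costs provably close for *every* center set, not just in expectation.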

MW 21, November 26th, 2012, Chair 1

  1. Joachim Biskup: Signature-Based Inference-Usability Confinement for Relational Databases under Functional and Join Dependencies
    1. Abstract: Inference control of queries for relational databases confines the information content and thus the usability of data returned to a client, aiming to keep some pieces of information confidential as specified in a policy, in particular for the sake of privacy. In general, there is a tradeoff between the following factors: on the one hand, the expressiveness offered to administrators to declare a schema, a confidentiality policy and assumptions about a client’s a priori knowledge; on the other hand, the computational complexity of a provably confidentiality preserving enforcement mechanism. We propose and investigate a new balanced solution for a widely applicable situation: we admit relational schemas with functional and join dependencies, which are also treated as a priori knowledge, and select-project sentences for policies and queries; we design an efficient signature-based enforcement mechanism that we implement for an Oracle/SQL-system. At declaration time, the inference signatures are compiled from an analysis of all possible crucial inferences, and at run time they are employed like in the field of intrusion detection.
  2. Martin Schuster: On optimum left-to-right strategies for context-free games
    1. Abstract: Active context-free games are two-player games on strings over finite alphabets with one player trying to rewrite the input string to match a target specification. These games have been investigated in the context of exchanging Active XML (AXML) data. While it was known that the rewriting problem is undecidable in general, it is shown here that it is EXPSPACE-complete to decide for a given context-free game, whether all safely rewritable strings can be safely rewritten in a left-to-right manner, a problem that was previously considered by Abiteboul et al. Furthermore, it is shown that the corresponding problem for games with finite replacement languages is EXPTIME-complete.
  3. Ingo Battenfeld: Observationally induced effects in cartesian closed categories
    1. Abstract: Alex Simpson has suggested an observationally-induced approach towards obtaining monads for computational effects in denotational semantics. The underlying idea of this approach is to use a single observation algebra as computational prototype and to obtain a computational monad as a free algebra construction derived from this prototype. Recently, it has been shown that free observationally-induced algebras exist in the category of continuous maps between topological spaces for arbitrary pre-chosen computational prototypes. In this work we transfer these results to cartesian closed categories. In particular, we show that, provided the category under consideration satisfies suitable completeness conditions, it supports a free observationally-induced algebra construction for arbitrary computational prototypes. We also show that the free algebras are obtained as certain subobjects of double exponentials involving the computational prototype as result type. Finally, we apply these results to show that in topological domain theory an observationally-induced lower powerspace construction over a QCB-space X is given by the space of nonempty closed subsets of X topologised suitably.

MW 20, June 20th, 2012, Chair 1

  1. Gabriele Kern-Isberner: A Constructive Approach to Independent and Evidence Retaining Belief Revision by General Information Sets
    (joint work with Patrick Krümpelmann)
    1. Abstract: Recent years have seen a lot of work towards extending the established AGM belief revision theory with respect to iterating revision, preserving conditional beliefs, and handling sets of propositions as new information. In particular, novel postulates like independence and evidence retainment have been brought forth as new standards for revising epistemic states by (sets of) propositional information. In this paper, we propose a constructive approach for revising epistemic states by sets of (propositional and conditional) beliefs that combines ideas from nonmonotonic reasoning with conditional belief revision. We also propose a novel principle called enforcement that covers both independence and evidence retainment, and we show our revision operator to comply with major postulates from the literature. Moreover, we point out the relevance of our approach for default reasoning.
  2. Boris Düdder: Bounded Combinatory Logic
    (joint work with Moritz Martens, Jakob Rehof, and Paweł Urzyczyn)
    1. Abstract: In combinatory logic one usually assumes a fixed set of basic combinators (axiom schemes), usually K and S. In this setting the set of provable formulas (inhabited types) is PSPACE-complete in simple types and undecidable in intersection types. When arbitrary sets of axiom schemes are considered, the inhabitation problem is undecidable even in simple types (this is known as the Linial-Post theorem). Bounded combinatory logic (BCL_k) arises from combinatory logic by imposing the bound k on the depth of types (formulae) which may be substituted for type variables in axiom schemes. We consider the inhabitation (provability) problem for BCL_k: given an arbitrary set of typed combinators and a type, is there a combinatory term of that type in k-bounded combinatory logic? Our main result is that the problem is (k+2)-EXPTIME-complete for BCL_k with intersection types, for every fixed k (and hence non-elementary when k is a parameter). We also show that the problem is EXPTIME-complete for simple types, for all k. Theoretically, our results give new insight into the expressive power of intersection types. From an application perspective, our results are useful as a foundation for composition synthesis based on combinatory logic.
  3. Marianna D'Addario: Designing q-Unique DNA Sequences with Integer Linear Programs and Euler Tours in De Bruijn Graphs
    1. Abstract: DNA nanoarchitectures require carefully designed oligonucleotides with certain non-hybridization guarantees, which can be formalized as the q-uniqueness property on the sequence level. We study the optimization problem of finding a longest q-unique DNA sequence. We first present a convenient formulation as an integer linear program on the underlying De Bruijn graph that allows a variety of constraints to be incorporated flexibly; solution times for practically relevant values of q are short. We then provide additional insights into the problem structure using the quotient graph of the De Bruijn graph with respect to the equivalence relation of reverse complementarity. Specifically, for odd q the quotient graph is Eulerian, and finding a longest q-unique sequence is equivalent to finding an Euler tour, hence solved in linear time (with respect to the output string length). For even q, self-complementary edges complicate the problem, and the graph has to be Eulerized by deleting a minimum number of edges. Two sub-cases arise, for one of which we present a complete solution, while the other one remains open.
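
The Euler-tour connection can be illustrated on a plain De Bruijn graph. The talk's construction works on the quotient graph under reverse complementarity; the sketch below (with illustrative names) only shows the standard ingredient, Hierholzer's linear-time Euler tour, and how a tour spells a sequence containing every q-mer exactly once:

```python
from collections import defaultdict

def euler_tour(edges):
    """Hierholzer's algorithm: a directed Eulerian circuit, as a node list."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    stack, tour = [edges[0][0]], []
    while stack:
        u = stack[-1]
        if adj[u]:                   # unused out-edge left: follow it
            stack.append(adj[u].pop())
        else:                        # dead end: put the node onto the tour
            tour.append(stack.pop())
    return tour[::-1]                # visits every edge exactly once

# De Bruijn graph of order q = 2 over {0, 1}: nodes are (q-1)-mers,
# one edge per q-mer; an Euler tour spells a sequence that contains
# every q-mer exactly once.
edges = [(w[:-1], w[1:]) for w in ("00", "01", "10", "11")]
tour = euler_tour(edges)
sequence = tour[0] + "".join(v[-1] for v in tour[1:])
```

The function assumes the input graph is Eulerian (connected with balanced in- and out-degrees), which is exactly the situation the abstract describes for odd q; the spelled sequence has length |edges| + q - 1.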

MW 19, February 22nd, 2012, Chair 1

  1. Chris Schwiegelshohn: Solving the minimum string cover problem
    1. Abstract: A string cover C of a set of strings S is a set of substrings from S such that every string in S can be written as a concatenation of the strings in C. Given costs assigned to each substring from S, the Minimum String Cover (MSC) problem asks for a cover of minimum total cost. This NP-hard problem has so far only been approached from a purely theoretical perspective. We propose a flow-based ILP formulation, whose structure is particularly favorable for a Lagrangian relaxation approach. By making use of the strong bounds obtained through a repeated shortest path computation in a branch-and-bound manner, we show for the first time that non-trivial MSC instances can be solved to provable optimality in reasonable time.
  2. Sven Rahmann: What challenges does genome analysis pose for theoretical computer science?
    1. Abstract: We present current problems from genome informatics and discuss which fundamental theoretical questions might lie behind them.
  3. Ahmet Kara: Decidability Results for Infinite State Systems
    1. Abstract: The positive achievements in the field of automatic verification of finite-state systems have motivated research on algorithmic methods for infinite-state systems.

      In this talk we will be concerned with general conditions on infinite state systems which are sufficient for the termination of several algorithmic procedures. We will introduce well-structured transition systems (WSTSs) as defined by Finkel and Schnoebelen. These are infinite state systems with a well-quasi ordering on the set of states which is compatible with the transition relation. For WSTSs it can be shown that algorithmic problems like reachability, i.e. the question whether a certain set of states is reachable from an initial state, are decidable. Many transition systems like Petri nets and lossy channel systems from various fields of computer science can be seen as WSTSs. We will give some examples from current research where the concept of WSTSs helps to get decidability results.
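
The generic decision procedure behind these decidability results can be sketched for the Petri-net instance of WSTSs: markings ordered componentwise form a well-quasi-ordering (Dickson's lemma), and backward reachability saturates a finite basis of the upward-closed set of markings from which the target can be covered. A minimal sketch, assuming an illustrative encoding of transitions as (input, output) token vectors; all names are my own:

```python
def leq(a, b):
    """Componentwise order on markings: a well-quasi-ordering by Dickson's lemma."""
    return all(x <= y for x, y in zip(a, b))

def pre_basis(m, transitions):
    """Minimal markings from which a single firing covers marking m."""
    for inp, out in transitions:
        # firing (inp, out) from x reaches x - inp + out, so covering m
        # needs x >= inp and x >= m + inp - out, componentwise
        yield tuple(max(i, x + i - o) for x, i, o in zip(m, inp, out))

def coverable(init, target, transitions):
    """Backward reachability: saturate a finite basis of Pre*(up(target))."""
    basis = {tuple(target)}
    while True:
        new = {p for m in basis for p in pre_basis(m, transitions)
               if not any(leq(b, p) for b in basis)}
        if not new:  # fixpoint reached; termination follows from Dickson's lemma
            return any(leq(b, tuple(init)) for b in basis)
        merged = basis | new
        # keep only the minimal elements as the basis of the upward-closed set
        basis = {m for m in merged
                 if not any(n != m and leq(n, m) for n in merged)}
```

For instance, with a single transition moving a token from place 0 to place 1, marking (0, 1) is coverable from (1, 0), while (0, 2) is not, since the net conserves the token count.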

MW 18, August 9th, 2011, Chair 1

  1. Gabriele Kern-Isberner: A Default Logical Semantics for Defeasible Argumentation
    (joint work with Guillermo Simari)

    1. Abstract: Defeasible argumentation and default reasoning are usually perceived as two similar, but distinct approaches to commonsense reasoning. In this paper, we combine these two fields by viewing (defeasible resp. default) rules as a common crucial part in both areas. We will make use of possible worlds semantics from default reasoning to provide examples for arguments, and carry over the notion of plausibility to the argumentative framework. Moreover, we base a priority relation between arguments on the tolerance partitioning of system Z and obtain a criterion phrased in system Z terms that ensures warrant in defeasible argumentation.

  2. Melanie Schmidt: Adaptive Sampling for Clustering Problems

    1. Abstract: Clustering is the problem of partitioning a given set of objects into subsets called clusters such that objects in the same cluster are similar and objects in different clusters are dissimilar. The k-means problem is a well-known clustering problem where similarity is measured by squared Euclidean distance. The k-means algorithm is a very popular heuristic for this problem which unfortunately does not provide any quality guarantee. In this talk, we review recent improvements of the k-means algorithm by Arthur and Vassilvitskii and then consider possible directions for further research. In particular, we are interested in applications to a generalization of the k-means problem.
  3. Peter Padawitz: Algebraic, relational and order-based co/induction

    1. Abstract: Algebraic induction is a proof rule for showing properties of initial models. Its soundness follows from the fact that initial models do not have proper substructures. Algebraic coinduction is a proof rule for showing equations in final models. Its soundness follows from the fact that final models do not have proper quotients. A proof by algebraic co/induction can be carried out by relational co/induction, which is a proof rule for showing properties of least resp. greatest relations (not only unary or binary ones) in arbitrary models. Relational co/induction specializes order-based co/induction to the lattice of interpretations of the predicates of the underlying signature. Order-based co/induction works in any complete lattice or co/chain-complete poset. As an example of the use of algebraic coinduction, we show how it simplifies proofs of language equivalence, in particular of conjectures stating that a given context-free grammar generates a given language.
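
The seeding step of the Arthur-Vassilvitskii improvement reviewed in Melanie Schmidt's talk above admits a compact sketch. After a uniform first center, each further center is drawn with probability proportional to its squared distance to the nearest center chosen so far (D² sampling); the function name and the use of Python's random module are illustrative, not from the talk:

```python
import random

def kmeanspp_seed(points, k, rng=None):
    """k-means++ seeding by D^2 (adaptive) sampling."""
    rng = rng or random.Random(42)

    def d2(p, q):
        # squared Euclidean distance, the k-means similarity measure
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centers = [rng.choice(points)]           # first center: uniform
    while len(centers) < k:
        # weight of each point = squared distance to its nearest center
        weights = [min(d2(p, c) for c in centers) for p in points]
        centers.append(rng.choices(points, weights=weights)[0])
    return centers
```

Because an already-chosen point has weight zero, the k centers are distinct as long as the input contains at least k distinct points; this seeding gives the O(log k)-approximation guarantee that the plain k-means heuristic lacks.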

MW 17, May 4th, 2011, Chair 1

  1. Thomas Schwentick: Two-variable logic and Key Constraints on Data Words

    1. Abstract: We introduce key constraints for data words and show that it is decidable whether, for a given two-variable sentence \(\varphi\) that can refer to the successor relation on positions and a set K of key constraints, there is a data string w that satisfies \(\varphi\) and respects K. Here, the formula is allowed to refer to the successor relation but not to the linear order on the positions of the word. As a byproduct, a self-contained exposition of an algorithm that decides satisfiability of such formulas (without key constraints) in 2-NEXPTIME is given.

  2. Johannes Köster: Protein Hypernetworks

    1. Abstract: Protein-protein interactions are the fundamental building blocks of biochemical reaction systems underlying cellular functions. Importantly, the complexity and functionality of such systems emerge not from the protein interactions themselves but from the dependencies between these interactions. However, a comprehensive approach for large scale integration of such dependencies is still rather lacking.

      We present an approach for endowing protein networks with interaction dependencies using propositional logic, thereby obtaining protein hypernetworks. We illustrate their potential by demonstrating that they improve protein complex prediction in yeast. By modeling the effects of protein perturbations in hypernetworks we derive a perturbation impact score, a novel measure of a protein's functional importance.

  3. Patrick Krümpelmann: On Influence and Contractions in Defeasible Logic Programming

    1. Abstract: We investigate the problem of contraction in Defeasible Logic Programming (DeLP), a logic-based approach for defeasible argumentation. We develop different notions of contraction based on both the different forms of entailment implicitly present in argumentation-based formalisms and the influence literals exert in the reasoning process. We give translations of widely accepted rationality postulates for belief contraction to our framework. Moreover, we discuss the applicability of contraction for defeasible argumentation and the role of influence in this matter.


MW 16, January 19th, 2011, Chair 1

Speaker | Subject
Thomas Zeume, LS 1 | Temporal Logics on Words with Multiple Data Values
Henrik Björklund | Recognizing Shuffled Languages
Peter Padawitz, LS 1 | How to benefit from algebra when reasoning about languages

MW 15, July 14th, 2010, Chair 1

Speaker | Subject
Wim Martens, LS 1 | Simplifying XML Schema: Single-Type Approximations of Regular Tree Languages
Jan-Hendrik Lochner, LS 6 | Efficient Inference Control for Open Relational Database Queries
Matthias Thimm, LS 1 | Classification and Strategical Issues of Argumentation Games on Structured Argumentation Frameworks

MW 14, July 14th, 2010, Chair 1

Speaker | Subject
Frank Hellweg, LS 2 | Testing Euclidean Spanners
Thomas Schwentick, LS 1 | Two-Variable Logic with Two Order Relations
Christoph Schubert, LS 10 | Expressivity of modal logics over general measurable spaces

MW 13, February 23rd, 2010, Chair 1:

Speaker | Subject
Matthias Thimm, LS 1 | Relational Probabilistic Conditionals and Maximum Entropy
Tobias Marschall, LS 11 | Exact Analysis of Horspool's and Sunday's Pattern Matching Algorithms with Probabilistic Arithmetic Automata
Wim Martens, LS 1 | Schema Design for XML Repositories: Complexity and Tractability

MW 12, December 17th, 2008, Chair 1:

Speaker | Subject
Sven Rahmann, LS 11 | Efficient exhaustive motif discovery
Chunlai Zhou, Peking | Finite approximation of labelled Markov processes through filtration
Patrick Krümpelmann, LS 6 | Towards a general framework for updates in logic programming

MW 11, July 9th, 2008, Chair 1:

Speaker | Subject
Lena Wiese, LS 6 | preCQE: Preprocessing for controlled query evaluation. Inference control in predicate-logic databases
Ernst-Erich Doberkat, LS 11 | Bisimilarity of distributionally equivalent Markov transition systems
Marcel Marquardt, LS 1 | Dynamic complexity of formal languages

MW 10, May 7th, 2008, Chair 1:

Speaker | Subject
Felix Klaedtke, ETH Zürich | On the automata size for linear arithmetics (ca. 1-hour talk)
R. Ramanujam, IMSC Chennai | Verifying that an e-election is "Free and Fair"

MW 9, April 9th, 2008, Chair 1:

Speaker | Subject
Ingo Battenfeld | Computational effects in topological domain theory
Peter Padawitz | Hyperdocuments are coalgebraic
Matthias Thimm | On the relationship of defeasible argumentation and answer set programming

MW 8, November 21st, 2007, Chair 1:

Speaker | Subject
Manuela Ritterskamp | Using preference fusion in default reasoning
Wim Martens | Conjunctive query containment over trees

MW 7, June 27th, 2007, Chair 2:

Speaker | Subject
Christoph Schubert | Bisimilarity, behavioral and logical equivalence for stochastic right coalgebras
Torben Weibert | Confidentiality policies for controlled query evaluation
Henrik Björklund | Bounded depth data trees

MW 6, February 28th, 2007, Chair 1:

Speaker | Subject
Horst Wedde | Concurrency in distributed systems under autonomous and enforced actions
Ernst-Erich Doberkat | A remarkable generating function
Wim Martens | Conjunctive query containment over trees

MW 5, December 20th, 2006, Chair 10

Speaker | Subject
Peter Buchholz | Bisimulation for weighted automata
Gabriele Kern-Isberner | Probabilistic abduction without priors
Henrik Björklund | Towards regular data languages

MW 4, September 4th, 2006, mathematics department, chair of Prof. Skutella:

Speaker | Subject
Lena Wiese | On finding an inference-proof complete database for controlled query evaluation
Volker Weber | Bounded-variable fragments of hybrid logics
Martin Skutella | Solving evacuation problems efficiently: earliest arrival flows with multiple sources

MW 3, July 10th, 2006, Chair 1:

The workshop took place directly after a guest lecture by Dr. Thomas Eiter (TU Wien), a guest of Prof. Petra Mutzel. Title: Data integration and answer set programming.

Speaker | Subject
Carsten Witt | Runtime analyses of randomized search heuristics for the partition problem
Peter Padawitz | Di-algebraic picture generation: A case study in multi-level data abstraction

MW 2, May 22nd, 2006, Chair 11:

Speaker | Subject
E.-E. Doberkat | Randomized morphisms for action-labelled Markov transition systems
Martin Sauerhoff | Approximate counting and frequency moments of long data streams
Gabriele Kern-Isberner | A note on comparing semantics for conditionals

MW 1, March 20th, 2006, Chair 1:

Speaker | Subject
Joachim Biskup | Controlled query evaluation with open queries for a decidable relational submodel
Markus Chimani | Non-planar core reduction
Thomas Schwentick | Pebble automata on trees