Publications

2025
Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations
Fuchs, Johannes, Frings, Alexander, Heinle, Maria Viktoria, Keim, Daniel A., Di Bartolomeo, Sara
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 31; Issue: 1; Pages: 1039-1049
Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both: the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the possibility to reorder nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric with a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout, together with an edge order based on node indices and length. Our research contributes a first building block for many promising future research directions, which we also share and discuss. A free copy of this paper and all supplemental materials are available at https://osf.io/9mt8r/?view_only=b70dfbe550e3404f83059afdc60184c6.
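To make the reported best combination more tangible, here is a minimal, purely illustrative sketch (not the paper's implementation) of an edge order based on node indices and length: each edge is keyed by the row index of its topmost endpoint, with ties broken by its vertical span.

```python
def order_edges(edges, node_index):
    """Order edges by (topmost endpoint's row, edge length).

    edges: iterable of (u, v) pairs; node_index: dict node -> row index.
    This particular key is an assumption for illustration only.
    """
    def key(edge):
        u, v = edge
        top, bottom = sorted((node_index[u], node_index[v]))
        return (top, bottom - top)  # (topmost row, vertical span)
    return sorted(edges, key=key)

nodes = {"a": 0, "b": 1, "c": 2, "d": 3}
print(order_edges([("c", "d"), ("a", "d"), ("a", "b")], nodes))
# [('a', 'b'), ('a', 'd'), ('c', 'd')]
```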


Large and Parallel Human Sorting Networks
Szeider, Stefan
Type: Inproceedings; In: Creative Mathematical Sciences Communication; Vol: 15229; Pages: 194-204
This paper presents two innovative extensions of the classic Human Sorting Network (HSN) activity from the CS Unplugged program. First, we describe the implementation of a large-scale HSN with 50 input nodes, realized with high school students in Vienna, Austria. We detail the logistical challenges and solutions for creating an HSN of this magnitude, including location selection, network layout, and participant coordination. Second, we report on using parallel 6-input HSNs, which introduce a competitive element and enhance engagement. This parallel setup allows for races between teams and can be adapted for various age groups and knowledge levels. Both extensions aim to increase the educational impact and enjoyment of the HSN activity. We provide comprehensive insights into our experiences, enabling other educators and researchers to replicate or further develop these HSN variants.
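For readers unfamiliar with the underlying construct, a sorting network is a fixed sequence of comparators applied to a row of inputs. As a minimal sketch (not material from the paper), the classic odd-even transposition network for 6 inputs sorts in 6 parallel rounds and can be verified exhaustively via the zero-one principle:

```python
from itertools import product

def odd_even_transposition_network(n):
    """Comparators of the n-round odd-even transposition sorting network."""
    comparators = []
    for r in range(n):
        comparators += [(i, i + 1) for i in range(r % 2, n - 1, 2)]
    return comparators

def apply_network(comparators, values):
    """Run the comparator sequence over a list of values."""
    values = list(values)
    for i, j in comparators:
        if values[i] > values[j]:
            values[i], values[j] = values[j], values[i]
    return values

# Zero-one principle: a network sorts all inputs iff it sorts all 0/1 inputs.
net = odd_even_transposition_network(6)
assert all(apply_network(net, bits) == sorted(bits)
           for bits in product([0, 1], repeat=6))
print(len(net), "comparators")  # 15 comparators
```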


UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization
Piccolotto, Nikolaus, Wallinger, Markus, Miksch, Silvia, Bögl, Markus
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 31; Issue: 1; Pages: 196-206
In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to methods such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial methods, like PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also well received by visualization experts. Our benchmarks show that UT projections and its heuristics are appropriate.


A Biased Random Key Genetic Algorithm for Solving the Longest Common Square Subsequence Problem
Reixach, Jaume, Blum, Christian, Djukanovic, Marko, Raidl, Günther R.
Type: Article; In: IEEE Transactions on Evolutionary Computation; Vol: Early Access
This paper considers the longest common square subsequence (LCSqS) problem, a variant of the longest common subsequence (LCS) problem in which solutions must be square strings. A square string can be expressed as the concatenation of a string with itself. The LCSqS problem has applications in bioinformatics, for discovering internal similarities between molecular structures. We propose a metaheuristic approach, a biased random key genetic algorithm (BRKGA) hybridized with a beam search from the literature. Our approach is based on reducing the LCSqS problem to a set of promising LCS problems. This is achieved by cutting each input string into two parts first and then evaluating such a transformed instance by solving the LCS problem for the obtained overall set of strings. The task of the BRKGA is, hereby, to find a set of good cut points for the input strings. For this purpose, the search is carefully biased by problem-specific greedy information. For each cut point vector, the resulting LCS problem is approximately solved by the existing beam search approach. The proposed algorithm is evaluated against a previously proposed state-of-the-art variable neighborhood search (VNS) on random uniform instances from the literature, new non-uniform instances, and a real-world instance set consisting of DNA strings. The results underscore the importance of our work, as our novel approach outperforms former state-of-the-art with statistical significance. Particularly, they evidence the limitations of the VNS when solving non-uniform instances, for which our method shows superior performance.
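To make the central definition concrete: a square string is the concatenation of some string with itself. The sketch below (hypothetical helper names; the LCS solver is assumed to be external, e.g. the beam search from the literature) checks squareness and illustrates the cut-point reduction described above:

```python
def is_square(s):
    """A square string is w + w for some string w."""
    half, rem = divmod(len(s), 2)
    return rem == 0 and s[:half] == s[half:]

def square_from_cuts(strings, cuts, lcs):
    """One cut point per input string splits it into two parts; a common
    subsequence w of all 2m parts yields the square w + w as a common
    square subsequence. `lcs` is an assumed external multi-string solver.
    """
    parts = [s[:p] for s, p in zip(strings, cuts)]
    parts += [s[p:] for s, p in zip(strings, cuts)]
    w = lcs(parts)
    return w + w

print(is_square("abab"), is_square("aba"))  # True False
```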


An introduction to and survey of biological network visualization
Ehlers, Henry, Brich, Nicolas, Krone, Michael, Nöllenburg, Martin, Yu, Jiacheng, Natsukawa, Hiroaki, Yuan, Xiaoru, Wu, Hsiang-Yun
Type: Article; In: COMPUTERS & GRAPHICS-UK; Vol: 126
Biological networks describe complex relationships in biological systems, which represent biological entities as vertices and their underlying connectivity as edges. Ideally, for a complete analysis of such systems, domain experts need to visually integrate multiple sources of heterogeneous data, and visually, as well as numerically, probe said data in order to explore or validate (mechanistic) hypotheses. Such visual analyses require the coming together of biological domain experts, bioinformaticians, as well as network scientists to create useful visualization tools. Owing to the underlying graph data becoming ever larger and more complex, the visual representation of such biological networks has become challenging in its own right. This introduction and survey aims to describe the current state of biological network visualization in order to identify scientific gaps for visualization experts, network scientists, bioinformaticians, and domain experts, such as biologists or biochemists, alike. Specifically, we revisit the classic visualization pipeline, upon which we base this paper's taxonomy and structure, which in turn forms the basis of our literature classification. This pipeline describes the process of visualizing data, starting with the raw data itself, through the construction of data tables, to the actual creation of visual structures and views, as a function of task-driven user interaction. Literature was systematically surveyed using API-driven querying where possible, and the collected papers were manually read and categorized based on the identified sub-components of this visualization pipeline's individual steps. From this survey, we highlight a number of exemplary visualization tools from multiple biological sub-domains in order to explore how they adapt these discussed techniques and why. Additionally, this taxonomic classification of the collected set of papers allows us to identify existing gaps in biological network visualization practices. We finally conclude this report with a list of open challenges and potential research directions. Examples of such gaps include (i) the overabundance of visualization tools using schematic or straight-line node-link diagrams, despite the availability of powerful alternatives, or (ii) the lack of visualization tools that also integrate more advanced network analysis techniques beyond basic graph descriptive statistics.


Wiggle! Wiggle! Wiggle! Visualizing uncertainty in node attributes in straight-line node-link diagrams using animated wiggliness
Ehlers, Henry, Pahr, Daniel, Di Bartolomeo, Sara, Filipov, Velitchko, Wu, Hsiang-Yun, Raidou, Renata G.
Type: Article; In: COMPUTERS & GRAPHICS-UK; Vol: 131
Uncertainty is common to most types of data, from meteorology to the biomedical sciences. Here, we are interested in the visualization of uncertainty within the context of multivariate graphs, specifically the visualization of uncertainty attached to node attributes. Many visual channels offer themselves up for the visualization of node attributes and their uncertainty. One controversial and relatively under-explored channel, however, is animation, despite its conceptual advantages. In this paper, we investigate node “wiggliness”, i.e. uncertainty-dependent pseudo-random motion of nodes, as a potential new visual channel with which to communicate node attribute uncertainty. To study wiggliness’ effectiveness, we compare it against three other visual channels identified from a thorough review of uncertainty visualization literature—namely node enclosure, node fuzziness, and node color saturation. In a larger-scale, mixed method, Prolific-crowd-sourced, online user study of 160 participants, we quantitatively and qualitatively compare these four uncertainty encodings across eight low-level graph analysis tasks that probe participants’ abilities to parse the presented networks both on an attribute and topological level. We ultimately conclude that all four uncertainty encodings appear comparably useful—as opposed to previous findings. Wiggliness may be a suitable and effective visual channel with which to communicate node attribute uncertainty, at least for the kinds of data and tasks considered in our study.


NODKANT: exploring constructive network physicalization
Pahr, D., Di Bartolomeo, S., Ehlers, H., Filipov, V. A., Stoiber, C., Aigner, W., Wu, H.-Y., Raidou, R. G.
Type: Article; In: Computer Graphics Forum
Physicalizations, which combine perceptual and sensorimotor interactions, offer an immersive way to comprehend complex data visualizations by stimulating active construction and manipulation. This study investigates the impact of personal construction on the comprehension of physicalized networks. We propose a physicalization toolkit—NODKANT—for constructing modular node-link diagrams consisting of a magnetic surface, 3D printable and stackable node labels, and edges of adjustable length. In a mixed-methods between-subject lab study with 27 participants, three groups of people used NODKANT to complete a series of low-level analysis tasks in the context of an animal contact network. The first group was tasked with freely constructing their network using a sorted edge list, the second group received step-by-step instructions to create a predefined layout, and the third group received a pre-constructed representation. While free construction proved on average more time-consuming, we show that users extract more insights from the data during construction and interact with their representation more frequently, compared to those presented with step-by-step instructions. Interestingly, the increased time demand cannot be measured in users' subjective task load. Finally, our findings indicate that participants who constructed their own representations were able to recall more detailed insights after a period of 10–14 days compared to those who were given a pre-constructed network physicalization. All materials, data, code for generating instructions, and 3D printable meshes are available on https://osf.io/tk3g5/.


BattleGraphs: Forge, Fortify, and Fight in the Network Arena
Ehlers, Henry, Pahr, Daniel, Di Bartolomeo, Sara, Stoiber, C., Filipov, Velitchko
Type: Inproceedings; In: Visgames 2025: EuroVis Workshop on Visualization Play, Games, and Activities
Constructive visualization enables users to create personalized data representations and facilitates early insight generation and sensemaking. Based on NODKANT, a toolkit for creating physical network diagrams using 3D printed parts, we define a competitive network physicalization game: BattleGraphs. In BattleGraphs, two players construct networks independently and compete in solving network analysis benchmark tasks. We propose a workshop scenario in which we deploy our game, collect the interaction and analysis strategies of our players, and relate the effectiveness of each strategy to the player's success, to be discussed in a reflection phase. Printable parts of the game, as well as instructions, are available through the Open Science Framework at https://osf.io/x6zv7/. All proceedings (including this submission) are available in the Eurographics Digital Library: https://diglib.eg.org/collections/d1483cdb-603e-46b6-b315-d9a6e750427e


2024
Improving Temporal Treemaps by Minimizing Crossings
Dobler, Alexander, Nöllenburg, Martin
Type: Article; In: Computer Graphics Forum; Vol: 43; Issue: 3
Temporal trees are trees that evolve over a discrete set of time steps. Each time step is associated with a node-weighted rooted tree and consecutive trees change by adding new nodes, removing nodes, splitting nodes, merging nodes, and changing node weights. Recently, two-dimensional visualizations of temporal trees called temporal treemaps have been proposed, representing the temporal dimension on the x-axis, and visualizing the tree modifications over time as temporal edges of varying thickness. The tree hierarchy at each time step is depicted as vertical, one-dimensional nesting relationships, similar to standard, non-temporal treemaps. Naturally, temporal edges can cross in the visualization, decreasing readability. Heuristics were proposed to minimize such crossings in the literature, but a formal characterization and minimization of crossings in temporal treemaps was left open. In this paper, we propose two variants of defining crossings in temporal treemaps that can be combinatorially characterized. For each variant, we propose an exact optimization algorithm based on integer linear programming and heuristics based on graph drawing techniques. In an extensive experimental evaluation, we show that on the one hand the exact algorithms reduce the number of crossings by a factor of 20 on average compared to the previous algorithms. On the other hand, our new heuristics are faster by a factor of more than 100 and still reduce the number of crossings by a factor of almost three.
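A simple intuition for such crossings, sketched here under the assumption of entities that persist between two consecutive time steps (this is an illustration, not the paper's combinatorial characterization): every pair of persisting entities whose relative vertical order flips forces their temporal edges to cross, so counting order inversions gives a lower-bound estimate.

```python
def count_inversions(order_t, order_t1):
    """Count pairs of persisting entities whose relative order flips
    between the vertical orders of two consecutive time steps."""
    present = set(order_t1)
    pos = {x: i for i, x in enumerate(order_t1)}
    seq = [pos[x] for x in order_t if x in present]
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq))
                 if seq[i] > seq[j])

print(count_inversions(["a", "b", "c"], ["c", "a", "b"]))  # 2
```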


Slim Tree-Cut Width
Ganian, Robert, Korchemna, Viktoria
Type: Article; In: Algorithmica; Vol: 86; Issue: 8; Pages: 2714-2738
Tree-cut width is a parameter that has been introduced as an attempt to obtain an analogue of treewidth for edge cuts. Unfortunately, in spite of its desirable structural properties, it turned out that tree-cut width falls short as an edge-cut based alternative to treewidth in algorithmic aspects. This has led to the very recent introduction of a simple edge-based parameter called edge-cut width [WG 2022], which has precisely the algorithmic applications one would expect from an analogue of treewidth for edge cuts, but does not have the desired structural properties. In this paper, we study a variant of tree-cut width obtained by changing the threshold for so-called thin nodes in tree-cut decompositions from 2 to 1. We show that this "slim tree-cut width" satisfies all the requirements of an edge-cut based analogue of treewidth, both structural and algorithmic, while being less restrictive than edge-cut width. Our results also include an alternative characterization of slim tree-cut width via an easy-to-use spanning-tree decomposition akin to the one used for edge-cut width, a characterization of slim tree-cut width in terms of forbidden immersions, as well as an approximation algorithm for computing the parameter.


Splitting Plane Graphs to Outerplanarity
Gronemann, Martin, Nöllenburg, Martin, Villedieu, Anaïs
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 28; Issue: 3; Pages: 31-48
Vertex splitting replaces a vertex by two copies and partitions its incident edges amongst the copies. This problem has been studied as a graph editing operation to achieve desired properties with as few splits as possible, most often planarity, for which the problem is NP-hard. Here we study how to minimize the number of splits to turn a plane graph into an outerplane one. We tackle this problem by establishing a direct connection between splitting a plane graph to outerplanarity, finding a connected face cover, and finding a feedback vertex set in its dual. We prove NP-completeness for plane biconnected graphs, while we show that a polynomial-time algorithm exists for maximal planar graphs. Additionally, we show upper and lower bounds for certain families of maximal planar graphs. Finally, we provide a SAT formulation for the problem, and evaluate it on a small benchmark.


Revisiting Causal Discovery from a Complexity-Theoretic Perspective
Ganian, Robert, Korchemna, Viktoria, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence; Pages: 3377-3385
Causal discovery seeks to unveil causal relationships (represented as a so-called causal graph) from observational data. This paper investigates the complex relationship between the graph structure and the efficiency of constraint-based causal discovery algorithms. Our main contributions include (i) a near-tight characterization of which causal graphs admit a small d-separating set for each pair of vertices and thus can potentially be efficiently recovered by a constraint-based causal discovery algorithm, (ii) the explicit construction of a sequence of causal graphs on which the influential PC algorithm might need exponential time, although there is a small d-separating set between every pair of variables, and (iii) the formulation of a new causal discovery algorithm which achieves fixed-parameter running time by considering the maximum number of edge-disjoint paths between variables in the (undirected) super-structure as the parameter. A distinguishing feature of our investigation is that it is carried out within a more fine-grained model which more faithfully captures the infeasibility of performing accurate independence tests for large sets of conditioning variables.


The most general structure of graphs with hamiltonian or hamiltonian connected square
Ekstein, Jan, Fleischner, Herbert
Type: Article; In: Discrete Mathematics; Vol: 347; Issue: 1
On the basis of recent results on hamiltonicity, [5], and hamiltonian connectedness, [9], in the square of a 2-block, we determine the most general block-cutvertex structure a graph G may have in order to guarantee that G² is hamiltonian or hamiltonian connected, respectively. Such an approach was already developed in [10] for hamiltonian total graphs.


The Complexity of Optimizing Atomic Congestion
Brand, Cornelius, Ganian, Robert, Kalyanasundaram, Subrahmanyam, Mc Inerney, Fionn
Type: Inproceedings; In: Proceedings of the 38th AAAI Conference on Artificial Intelligence; Vol: 38, 18; Pages: 20044-20052
Atomic congestion games are a classic topic in network design, routing, and algorithmic game theory, and are capable of modeling congestion and flow optimization tasks in various application areas. While both the price of anarchy for such games as well as the computational complexity of computing their Nash equilibria are by now well-understood, the computational complexity of computing a system-optimal set of strategies—that is, a centrally planned routing that minimizes the average cost of agents—is severely understudied in the literature. We close this gap by identifying the exact boundaries of tractability for the problem through the lens of the parameterized complexity paradigm. After showing that the problem remains highly intractable even on extremely simple networks, we obtain a set of results which demonstrate that the structural parameters which control the computational (in)tractability of the problem are not vertex-separator based in nature (such as, e.g., treewidth), but rather based on edge separators. We conclude by extending our analysis towards the (even more challenging) min-max variant of the problem.


Computing Twin-Width Parameterized by the Feedback Edge Number
Balabán, Jakub, Ganian, Robert, Rocton, Mathis
Type: Inproceedings; In: 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024); Vol: 289; Pages: 7:1-7:19
The problem of whether and how one can compute the twin-width of a graph - along with an accompanying contraction sequence - lies at the forefront of the area of algorithmic model theory. While significant effort has been aimed at obtaining a fixed-parameter approximation for the problem when parameterized by twin-width, here we approach the question from a different perspective and consider whether one can obtain (near-)optimal contraction sequences under a larger parameterization, notably the feedback edge number k. As our main contributions, under this parameterization we obtain (1) a linear bikernel for the problem of either computing a 2-contraction sequence or determining that none exists and (2) an approximate fixed-parameter algorithm which computes an ℓ-contraction sequence (for an arbitrary specified ℓ) or determines that the twin-width of the input graph is at least ℓ. These algorithmic results rely on newly obtained insights into the structure of optimal contraction sequences, and as a byproduct of these we also slightly tighten the bound on the twin-width of graphs with small feedback edge number.


Circuit Minimization with QBF and SAT-Based Exact Synthesis
Szeider, Stefan
Type: Presentation
In this talk, we will present new methods for re-synthesizing Boolean circuits to minimize the number of gates. The proposed method rewrites small subcircuits with exact synthesis, where individual synthesis tasks are encoded as Quantified Boolean Formulas (QBFs) or as SAT (propositional satisfiability) instances. A key aspect of this approach is how it handles "don't cares," which provides additional flexibility. A prototype implementation allowed us to break the record on the number of gates for some benchmark instances. Joint work with Franz-Xaver Reichl and Friedrich Slivovsky.


The combinatorics of monadic stability, monadic dependence, and related notions
Dreier, Jan Niclas
Type: Presentation
Algorithmic Meta-Theorems are results that solve whole families of algorithmic problems on well-behaved classes of instances. The tractable instances are usually described using graph theory, and the families of algorithmic problems are typically described in terms of logic. In this context, we say a graph class is "tractable" if the first-order model-checking problem is FPT (fixed-parameter tractable) on this class. More precisely, a graph class C is "tractable" if there is an algorithm that decides, for a given graph G from C and a sentence ɸ of first-order logic, in time f(ɸ)⋅|G|^c, whether ɸ holds on G (where c is a small constant independent of G and ɸ). Landmark results show that nowhere dense graph classes, hereditary stable classes, and classes of ordered graphs of bounded twin-width are tractable. The central conjecture in the field is that for hereditary graph classes (those closed under induced subgraphs), a graph class is tractable if and only if it is "dependent". Here, "dependence" refers to a very general notion originating from model theory, where it is usually considered in the infinite. In this course, I will introduce an ongoing program that aims to prove the central conjecture by first rediscovering dependence (and related notions) as purely combinatorial, finitary, graph-theoretic concepts, and then using these concepts to develop a model-checking algorithm. In this context, most relevant approaches were originally developed for nowhere dense graph classes, then generalized to stable graph classes, and finally extended to dependent classes. Similarly, we start this course by exploring nowhere dense classes, then stable classes, and finally dependent classes.


Hot off the Press: The First Proven Performance Guarantees for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) on a Combinatorial Optimization Problem
Cerf, Sacha, Doerr, Benjamin, Hebras, Benjamin, Kahane, Jakob, Wietheger, Simon
Type: Inproceedings; In: GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion; Pages: 27-28
Recently, the first mathematical runtime guarantees have been obtained for the NSGA-II, one of the most prominent multi-objective optimization algorithms, however only for synthetic benchmark problems. In this work, we give the first proven performance guarantees for a classic optimization problem, the NP-complete bi-objective minimum spanning tree problem. More specifically, we show that the NSGA-II with population size N ≥ 4((n − 1)w_max + 1) computes all extremal points of the Pareto front in an expected number of O(m²n w_max log(n w_max)) iterations, where n is the number of vertices, m the number of edges, and w_max is the maximum edge weight in the problem instance. This result confirms, via mathematical means, the good performance of the NSGA-II observed empirically. It also paves the way for analyses of the NSGA-II on complex combinatorial optimization problems. As a side result, we also obtain a new analysis of the performance of the GSEMO algorithm on the bi-objective minimum spanning tree problem, which improves the previous best result by a factor of |F|, the number of points in the convex hull of the Pareto front, a set that can be as large as n·w_max. The main reason for this improvement is our observation that both algorithms find the different extremal points in parallel rather than sequentially, as assumed in the previous proofs. This paper for the Hot-off-the-Press track at GECCO 2024 summarizes the work Sacha Cerf, Benjamin Doerr, Benjamin Hebras, Jakob Kahane, and Simon Wietheger. 2023. The first proven performance guarantees for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) on a combinatorial optimization problem. In International Joint Conference on Artificial Intelligence, IJCAI 2023. ijcai.org, 5522-5530 [1].


Conflict-Free Coloring: Graphs of Bounded Clique-Width and Intersection Graphs
Bhyravarapu, Sriram, Hartmann, Tim A., Hoang, Phuc Hung, Kalyanasundaram, Subrahmanyam, Vinod Reddy, I.
Type: Article; In: Algorithmica; Vol: 86; Issue: 7; Pages: 2250-2288
A conflict-free coloring of a graph G is a (partial) coloring of its vertices such that every vertex u has a neighbor whose assigned color is unique in the neighborhood of u. There are two variants of this coloring, one defined using the open neighborhood and one using the closed neighborhood. For both variants, we study the problem of deciding whether a given graph G admits a conflict-free coloring with at most k colors. In this work, we investigate the relation between clique-width and the minimum number of colors needed (for both variants) and show that these parameters do not bound one another. Moreover, we consider specific graph classes, particularly graphs of bounded clique-width and types of intersection graphs, such as distance hereditary graphs, interval graphs and unit square and disk graphs. We also consider Kneser graphs and split graphs. We give (often tight) upper and lower bounds and determine the complexity of the decision problem on these graph classes, which improve some of the results from the literature. Particularly, we settle the number of colors needed for an interval graph to be conflict-free colored under the open neighborhood model, which was posed as an open problem.
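For the open-neighborhood variant defined above, validity of a given (partial) coloring is easy to check by brute force. A minimal sketch, assuming an adjacency-set graph representation:

```python
from collections import Counter

def is_conflict_free(adj, coloring):
    """Open-neighborhood check: every vertex needs a neighbor whose color
    occurs exactly once among its colored neighbors.

    adj: dict vertex -> set of neighbors;
    coloring: partial dict vertex -> color (uncolored vertices absent).
    """
    for u, neighbors in adj.items():
        counts = Counter(coloring[v] for v in neighbors if v in coloring)
        if not any(c == 1 for c in counts.values()):
            return False
    return True

# Path a-b-c: coloring b uniquely serves a and c; b sees a's color once.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_conflict_free(adj, {"a": 1, "b": 2}))  # True
```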


The Parameterized Complexity Of Extending Stack Layouts
Depian, Thomas, Fink, Simon D., Ganian, Robert, Nöllenburg, Martin
Type: Inproceedings; In: 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024); Vol: 320; Pages: 12:1-12:17
An ℓ-page stack layout (also known as an ℓ-page book embedding) of a graph is a linear order of the vertex set together with a partition of the edge set into ℓ stacks (or pages), such that the endpoints of no two edges on the same stack alternate. We study the problem of extending a given partial ℓ-page stack layout into a complete one, which can be seen as a natural generalization of the classical NP-hard problem of computing a stack layout of an input graph from scratch. Given the inherent intractability of the problem, we focus on identifying tractable fragments through the refined lens of parameterized complexity analysis. Our results paint a detailed and surprisingly rich complexity-theoretic landscape of the problem which includes the identification of paraNP-hard, W[1]-hard and XP-tractable, as well as fixed-parameter tractable fragments of stack layout extension via a natural sequence of parameterizations.
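The validity condition for a stack layout can be stated concretely: two edges assigned to the same stack may not alternate along the vertex order, i.e. endpoint positions a < c < b < d are forbidden. A minimal checker sketch (representation choices are assumptions, not the paper's code):

```python
def is_valid_stack_layout(order, pages):
    """Check that no two edges on the same stack (page) alternate
    in the given vertex order.

    order: list of vertices (the spine); pages: dict edge (u, v) -> page.
    """
    pos = {v: i for i, v in enumerate(order)}
    edges = list(pages)
    for idx, e1 in enumerate(edges):
        a, b = sorted((pos[e1[0]], pos[e1[1]]))
        for e2 in edges[idx + 1:]:
            if pages[e1] != pages[e2]:
                continue  # different stacks never conflict
            c, d = sorted((pos[e2[0]], pos[e2[1]]))
            if a < c < b < d or c < a < d < b:
                return False
    return True

# Edges (0,2) and (1,3) alternate on the spine 0,1,2,3:
# one shared page fails, two pages suffice.
print(is_valid_stack_layout([0, 1, 2, 3], {(0, 2): 0, (1, 3): 0}))  # False
print(is_valid_stack_layout([0, 1, 2, 3], {(0, 2): 0, (1, 3): 1}))  # True
```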


Introducing Fairness in Graph Visualization
Hong, Seok Hee, Liotta, Giuseppe, Montecchiani, Fabrizio, Nöllenburg, Martin, Piselli, Tommaso
Type: Inproceedings; In: 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024); Vol: 320; Pages: 49:1-49:3
Information visualization tools are an essential component of many data-driven decision-making systems that rely on human feedback. The aim of this paper is to propose a novel research direction focused on fair visualizations of graphs.


SAT-Based Tree Decomposition with Iterative Cascading Policy Selection
Xia, Hai, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI-24); Vol: 38, 8; Pages: 8191-8199
Solvers for propositional satisfiability (SAT) effectively tackle hard optimization problems. However, translating to SAT can cause a significant size increase, restricting its use to smaller instances. To mitigate this, frameworks using multiple local SAT calls for gradually improving a heuristic solution have been proposed. The performance of such algorithmic frameworks heavily relies on critical parameters, including the size of selected local instances and the time allocated per SAT call. This paper examines the automated configuration of the treewidth SAT-based local improvement method (TW-SLIM) framework, which uses multiple SAT calls for computing tree decompositions of small width, a fundamental problem in combinatorial optimization. We explore various TW-SLIM configuration methods, including offline learning and real-time adjustments, significantly outperforming default settings in multi-SAT scenarios with changing problems. Building upon insights gained from offline training and real-time configurations for TW-SLIM, we propose the iterative cascading policy, a novel hybrid technique that uniquely combines both. The iterative cascading policy employs a pool of 30 configurations obtained through clustering-based offline methods, deploying them in dynamic cascades across multiple rounds. In each round, the 30 configurations are tested according to the cascading ordering, and the best tree decomposition is retained for further improvement, with the option to adjust the following ordering of cascades. This iterative approach significantly enhances the performance of TW-SLIM beyond baseline results, even within varying global timeouts. This highlights the effectiveness of the proposed iterative cascading policy in enhancing the efficiency and efficacy of complex algorithmic frameworks like TW-SLIM.

Link to Repositum

Near-Tight Runtime Guarantees for Many-Objective Evolutionary Algorithms
Wietheger, Simon, Doerr, Benjamin
Type: Inproceedings; In: Parallel Problem Solving from Nature – PPSN XVIII : 18th International Conference, PPSN 2024, Hagenberg, Austria, September 14–18, 2024, Proceedings, Part IV; Vol: 15151; Pages: 153-168
Show Abstract
Despite significant progress in the field of mathematical runtime analysis of multi-objective evolutionary algorithms (MOEAs), the performance of MOEAs on discrete many-objective problems is little understood. In particular, the few existing bounds for the SEMO, global SEMO, and SMS-EMOA algorithms on classic benchmarks are all roughly quadratic in the size of the Pareto front. In this work, we prove near-tight runtime guarantees for these three algorithms on the four most common benchmark problems OneMinMax, CountingOnesCountingZeros, LeadingOnesTrailingZeros, and OneJumpZeroJump, and this for arbitrary numbers of objectives. Our bounds depend only linearly on the Pareto front size, showing that these MOEAs on these benchmarks cope much better with many objectives than what previous works suggested. Our bounds are tight apart from small polynomial factors in the number of objectives and length of bitstrings. This is the first time that such tight bounds are proven for many-objective uses of these MOEAs. While it is known that such results cannot hold for the NSGA-II, we do show that our bounds, via a recent structural result, transfer to the NSGA-III algorithm.

Link to Repositum

A Mathematical Runtime Analysis of the Non-dominated Sorting Genetic Algorithm III (NSGA-III)
Wietheger, Simon, Doerr, Benjamin
Type: Inproceedings; In: GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion; Pages: 63-64
Show Abstract
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications. While it performs evidently well on bi-objective optimization problems, empirical studies suggest that it is less effective when applied to problems with more than two objectives. A recent mathematical runtime analysis confirmed this observation by proving that the NSGA-II, for an exponential number of iterations, misses a constant factor of the Pareto front of the simple m-objective OneMinMax problem when m ≥ 3. In this work, we provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives. We prove that the NSGA-III with sufficiently many reference points - a small constant factor more than the size of the Pareto front, as suggested for this algorithm - computes the complete Pareto front of the 3-objective OneMinMax benchmark in an expected number of O(n log n) iterations. This result holds for all population sizes (that are at least the size of the Pareto front). It shows a drastic advantage of the NSGA-III over the NSGA-II on this benchmark. This paper for the Hot-off-the-Press track at GECCO 2024 summarizes the work: Simon Wietheger and Benjamin Doerr. A mathematical runtime analysis of the Non-dominated Sorting Genetic Algorithm III (NSGA-III). In International Joint Conference on Artificial Intelligence, IJCAI 2023, pages 5657-5665, 2023. [15].

Link to Repositum

SAT Modulo Symmetries for Graph Generation and Enumeration
Kirchweger, Markus, Szeider, Stefan
Type: Article; In: ACM Transactions on Computational Logic; Vol: 25; Issue: 3
Show Abstract
We propose a novel SAT-based approach to graph generation. Our approach utilizes the interaction between a CDCL SAT solver and a special symmetry propagator, where the SAT solver runs on an encoding of the desired graph property. The symmetry propagator checks partially generated graphs for minimality with respect to a lexicographic ordering during the solving process. This approach has several advantages over static symmetry breaking: (i) symmetries are detected early in the generation process, (ii) symmetry breaking is seamlessly integrated into the CDCL procedure, and (iii) the propagator performs complete symmetry breaking without causing a prohibitively large initial encoding. We instantiate our approach by generating extremal graphs with certain restrictions in terms of forbidden subgraphs and diameter. In particular, we could confirm the Murty-Simon Conjecture (1979) on diameter-2-critical graphs for graphs up to 19 vertices and prove the exact number of Ramsey graphs R(3,5,n) and R(4,4,n).
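The lexicographic minimality check at the heart of the symmetry propagator can be illustrated with a deliberately naive, exponential-time version (the paper's propagator works incrementally on partially generated graphs; the code and names below are our own sketch, not the authors' implementation):

```python
from itertools import combinations, permutations

def upper_triangle(edges, perm, n):
    """Row-wise upper-triangle 0/1 vector of the adjacency matrix
    after relabeling every vertex v as perm[v]."""
    relabeled = {frozenset((perm[a], perm[b])) for a, b in edges}
    return [1 if frozenset(pair) in relabeled else 0
            for pair in combinations(range(n), 2)]

def is_lex_minimal(edges, n):
    """True iff this labeling's matrix vector is lexicographically
    no larger than that of any relabeling of the same graph."""
    ident = upper_triangle(edges, tuple(range(n)), n)
    return all(ident <= upper_triangle(edges, perm, n)
               for perm in permutations(range(n)))

# The path labeled 0-2, 1-2 is canonical; the labeling 0-1, 1-2 is not.
print(is_lex_minimal({(0, 2), (1, 2)}, 3))  # True
print(is_lex_minimal({(0, 1), (1, 2)}, 3))  # False
```

Restricting the search to lex-minimal labelings is what yields isomorph-free generation: exactly one labeling per isomorphism class survives.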

Link to Repositum

SAT backdoors: Depth beats size
Dreier, Jan, Ordyniak, Sebastian, Szeider, Stefan
Type: Article; In: Journal of Computer and System Sciences; Vol: 142
Show Abstract
For several decades, much effort has been put into identifying classes of CNF formulas whose satisfiability can be decided in polynomial time. Classic results are the linear-time tractability of Horn formulas (Aspvall, Plass, and Tarjan, 1979) and Krom (i.e., 2CNF) formulas (Dowling and Gallier, 1984). Backdoors, introduced by Williams, Gomes and Selman (2003), gradually extend such a tractable class to all formulas of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a formula and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021), is a more refined distance measure, which admits the utilization of different backdoor variables in parallel. We propose FPT approximation algorithms to compute backdoor depth into the classes Horn and Krom. This leads to a linear-time algorithm for deciding the satisfiability of formulas of bounded backdoor depth into these classes.

Link to Repositum

On the Relative Efficiency of Dynamic and Static Top-Down Compilation to Decision-DNNF
de Colnet, Alexis
Type: Inproceedings; In: 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024); Vol: 305; Pages: 11:1-11:21
Show Abstract
Top-down compilers of CNF formulas to circuits in decision-DNNF (Decomposable Negation Normal Form) have proved to be useful for model counting. These compilers rely on a common set of techniques including DPLL-style exploration of the set of models, caching of residual formulas, and connected components detection. Differences between compilers lie in the variable selection heuristics and in the additional processing techniques they may use. We investigate, from a theoretical perspective, the ability of top-down compilation algorithms to find small decision-DNNF circuits for two different variable selection strategies. Both strategies are guided by a graph of the CNF formula and are inspired by what is done in practice. The first uses a dynamic graph-partitioning approach while the second works with a static tree decomposition. We show that the dynamic approach performs significantly better than the static approach for some formulas, and that the opposite also holds for other formulas. Our lower bounds are proved despite loose settings where the compilation algorithm is only forced to follow its designed variable selection strategy and where everything else, including the many opportunities for tie-breaking, can be handled non-deterministically.

Link to Repositum

SAT-based Decision Tree Learning for Large Data Sets
Schidler, Andre, Szeider, Stefan
Type: Article; In: Journal of Artificial Intelligence Research; Vol: 80; Pages: 875-918
Show Abstract
Decision trees of low depth are beneficial for understanding and interpreting the data they represent. Unfortunately, finding a decision tree of lowest complexity (depth or size) that correctly represents given data is NP-hard. Hence known algorithms either (i) utilize heuristics that do not minimize the depth or (ii) are exact but scale only to small or medium-sized instances. We propose a new hybrid approach to decision tree learning, combining heuristic and exact methods in a novel way. More specifically, we employ SAT encodings repeatedly to local parts of a decision tree provided by a standard heuristic, leading to an overall reduction in complexity. This allows us to scale the power of exact SAT-based methods to comparatively very large data sets. We evaluate our new approach experimentally on a range of real-world instances that contain up to several thousand samples. In almost all cases, our method successfully decreases the complexity of the initial decision tree; often, the decrease is significant.

Link to Repositum

Structure-Guided Local Improvement for Maximum Satisfiability
Schidler, André, Szeider, Stefan
Type: Inproceedings; In: 30th International Conference on Principles and Practice of Constraint Programming (CP 2024); Vol: 307; Pages: 26:1-26:23
Show Abstract
The enhanced performance of today's MaxSAT solvers has elevated their appeal for many large-scale applications, notably in software analysis and computer-aided design. Our research delves into refining anytime MaxSAT solving by repeatedly selecting smaller subinstances, based on the graphical structure of the instance, and solving them with an exact solver. We investigate various strategies to pinpoint these subinstances. This structure-guided selection of subinstances provides an exact solver with a high potential for improving the current solution. Our exhaustive experimental analyses contrast our methodology, as instantiated in our tool MaxSLIM, with previous studies and benchmark it against leading-edge MaxSAT solvers.

Link to Repositum

eSLIM: Circuit Minimization with SAT Based Local Improvement
Reichl, Franz Xaver, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024); Vol: 305; Pages: 23:1-23:14
Show Abstract
eSLIM is a tool for circuit minimization that utilizes Exact Synthesis and the SAT-based local improvement method (SLIM) to locally improve circuits. eSLIM improves upon the earlier prototype CIOPS that uses Quantified Boolean Formulas (QBF) to succinctly encode resynthesis of multi-output subcircuits subject to don’t cares. This paper describes two improvements. First, it presents a purely propositional encoding based on a Boolean relation characterizing the input-output behavior of the subcircuit under don’t cares. This allows the use of a SAT solver for resynthesis, substantially reducing running times when applied to functions from the IWLS 2023 competition, where eSLIM placed second. Second, it proposes circuit partitioning techniques in which don’t cares for a subcircuit are captured only with respect to an enclosing window, rather than the entire circuit. Circuit partitioning trades completeness for efficiency, and successfully enables the application of exact synthesis to some of the largest circuits in the EPFL suite, leading to improvements over the current best implementation for several instances.

Link to Repositum

Computing Small Rainbow Cycle Numbers with SAT Modulo Symmetries
Kirchweger, Markus, Szeider, Stefan
Type: Inproceedings; In: 30th International Conference on Principles and Practice of Constraint Programming (CP 2024); Vol: 307; Pages: 37:1-37:11
Show Abstract
Envy-freeness up to any good (EFX) is a key concept in Computational Social Choice for the fair division of indivisible goods, where no agent envies another's allocation after removing any single item. A deeper understanding of EFX allocations is facilitated by exploring the rainbow cycle number Rf(d), the largest number of independent sets in a certain class of directed graphs. Upper bounds on Rf(d) provide guarantees for the feasibility of EFX allocations (Chaudhury et al., EC 2021). In this work, we precisely compute the numbers Rf(d) for small values of d, employing the SAT modulo Symmetries framework (Kirchweger and Szeider, CP 2021), which is tailored specifically for the constraint-based isomorph-free generation of combinatorial structures. We provide an efficient encoding for the rainbow cycle number, comparing eager and lazy approaches. To cope with the huge search space, we extend the encoding with invariant pruning, a new method that significantly speeds up computation.

Link to Repositum

Large Neighborhood Search for an Electric Dial-A-Ride Problem
Bresich, Maria
Type: Presentation

Link to Repositum

Speeding up Logic-Based Benders Decomposition by Strengthening Cuts with Graph Neural Network
Raidl, Günther
Type: Presentation

Link to Repositum

Combinatorial Search for Structures from Balanced Sequential Testing
Iurlano, Enrico
Type: Presentation

Link to Repositum

Designing Heuristics for Generalizations of Graph Burning
Iurlano, Enrico
Type: Presentation

Link to Repositum

Parameterized Complexity Problems in Explainable AI
Szeider, Stefan
Type: Presentation

Link to Repositum

Improving User Experience in Interactive Job Scheduling
Varga, Johannes
Type: Presentation

Link to Repositum

Compilation and Fast Model Counting beyond CNF
de Colnet, Alexis, Szeider, Stefan, Zhang, Tianwei
Type: Inproceedings; In: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence; Pages: 3315-3323
Show Abstract
Circuits in deterministic decomposable negation normal form (d-DNNF) are representations of Boolean functions that enable linear-time model counting. This paper strengthens our theoretical knowledge of what classes of functions can be efficiently transformed, or compiled, into d-DNNF. Our main contribution is the fixed-parameter tractable (FPT) compilation of conjunctions of specific constraints parameterized by incidence treewidth. This subsumes the known result for CNF. The constraints in question are all functions representable by constant-width ordered binary decision diagrams (OBDDs) for all variable orderings. For instance, this includes parity constraints and cardinality constraints with constant threshold. The running time of the FPT compilation is singly exponential in the incidence treewidth but hides large constants in the exponent. To balance that, we give a more efficient FPT algorithm for model counting that applies to a sub-family of the constraints and does not require compilation.
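The linear-time model counting property mentioned in the abstract can be sketched on a toy circuit (our own minimal encoding, additionally assuming the circuit is smooth; a real implementation would memoize shared DAG nodes to stay linear in circuit size):

```python
def model_count(node):
    """Count models of a smooth d-DNNF node given as nested tuples:
    ('lit', name) leaves, ('and', [children]), ('or', [children]).
    Decomposability lets AND multiply; determinism lets OR add."""
    kind = node[0]
    if kind == 'lit':
        return 1  # exactly one satisfying assignment of that variable
    counts = [model_count(child) for child in node[1]]
    if kind == 'and':
        result = 1
        for c in counts:
            result *= c
        return result
    return sum(counts)  # 'or' with mutually exclusive children

# (x1 AND x2) OR (NOT x1 AND x2): deterministic (children disagree on x1)
circuit = ('or', [('and', [('lit', 'x1'), ('lit', 'x2')]),
                  ('and', [('lit', '-x1'), ('lit', 'x2')])])
print(model_count(circuit))  # 2
```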

Link to Repositum

ASP-QRAT: A Conditionally Optimal Dual Proof System for ASP
Chew, Leroy, de Colnet, Alexis, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning; Pages: 253-263
Show Abstract
Answer Set Programming (ASP) is a declarative programming approach that captures many problems in knowledge representation and reasoning. To certify an ASP solver's decision, whether the program is consistent or inconsistent, we need a certificate or proof that can be independently verified. This paper proposes the dual proof system ASP-QRAT that certifies both consistent and inconsistent ASPs. ASP-QRAT is based on a translation of ASP to QBF (Quantified Boolean Formulas) and the QBF proof system QRAT as a checking format. We show that ASP-QRAT p-simulates ASP-DRUPE, an existing refutation system for inconsistent disjunctive ASPs. We show that ASP-QRAT is conditionally optimal for consistent and inconsistent ASPs, i.e., any super-polynomial lower bound on the shortest proof size of ASP-QRAT implies a major breakthrough in theoretical computer science. The case for consistent ASPs is remarkable because no analog exists in the QBF case.

Link to Repositum

Explaining Decisions in ML Models: A Parameterized Complexity Analysis
Ordyniak, Sebastian, Paesani, Giacomo, Rychlicki, Mateusz, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning; Pages: 563-573
Show Abstract
This paper presents a comprehensive theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms. We address two principal types of explanation problems: abductive and contrastive, both in their local and global variants. Our analysis encompasses diverse ML models, including Decision Trees, Decision Sets, Decision Lists, Ordered Binary Decision Diagrams, Random Forests, and Boolean Circuits, and ensembles thereof, each offering unique explanatory challenges. This research fills a significant gap in explainable AI (XAI) by providing a foundational understanding of the complexities of generating explanations for these models. This work provides insights vital for further research in the domain of XAI, contributing to the broader discourse on the necessity of transparency and accountability in AI systems.

Link to Repositum

Learning to Predict User Replies in Interactive Job Scheduling
Varga, Johannes
Type: Presentation

Link to Repositum

A Neural Network Based Guidance for a BRKGA: An Application to the Longest Common Square Subsequence Problem
Reixach, Jaume, Blum, Christian, Djukanović, Marko, Raidl, Günther R.
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization : 24th European Conference, EvoCOP 2024, Held as Part of EvoStar 2024, Aberystwyth, UK, April 3–5, 2024, Proceedings; Vol: 14632; Pages: 1-15
Show Abstract
In this work, we apply machine learning to better guide a biased random key genetic algorithm (Brkga) for the longest common square subsequence (LCSqS) problem. The problem is a variant of the well-known longest common subsequence (LCS) problem in which valid solutions are square strings. A string is square if it can be expressed as the concatenation of a string with itself. The original Brkga is based on a reduction of the LCSqS problem to the LCS problem by cutting each input string into two parts. Our work enhances the search process of Brkga for good cut points by using a machine learning model trained to produce promising cut points for the input strings of a problem instance. In this study, we show the benefits of this approach by comparing the enhanced Brkga with the original Brkga, using two benchmark sets from the literature. We show that the results of the enhanced Brkga significantly improve over the original results, especially when tackling instances with non-uniformly generated input strings.
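As a small illustration (our own code, not from the paper), the square-string property central to the LCSqS problem fits in a few lines:

```python
def is_square(s: str) -> bool:
    """A string is square iff it equals some string concatenated with itself."""
    half, rem = divmod(len(s), 2)
    return rem == 0 and s[:half] == s[half:]

print(is_square("abcabc"))  # True: "abc" + "abc"
print(is_square("abcab"))   # False: odd length

# Idea behind the reduction used by Brkga: cut each input string at a chosen
# point; a subsequence common to all the resulting parts, taken twice,
# is a square common subsequence of the original strings.
```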

Link to Repositum

Fairness in Assignments with Congestion-Averse Agents: Concepts, Algorithms, and Complexity
Chen, Jiehua
Type: Presentation
Show Abstract
The congested assignment problem is concerned with assigning agents to posts where agents care about both the posts and their congestion levels. Here, agents are averse to congestion, consistently preferring lower over higher congestion for the same resource. Such scenarios are prevalent across many domains, including traffic management and school choice, where fair resource allocation is crucial. Congested assignment can be considered as a restricted variant of the Group Activity Selection problem, introduced by Darmann et al. Additionally, it is related to many-to-one matching in matching under preferences. In this talk, I will explore one ex-ante fairness concept, top-fairness, and two ex-post fairness concepts, envy-freeness and competitiveness. The top-fairness and competitiveness concepts were recently introduced by Bogomolnaia and Moulin. While a top-fair or envy-free assignment always exists and can be found easily, competitive assignments do not always exist. The talk will cover the following key points: 1. An efficient method to determine the existence of competitive or maximally competitive assignments for a given congestion profile. 2. Two optimization variants of congested assignments and their computational complexity: a) Finding a top-fair assignment that is envy-free b) Finding a top-fair assignment that is maximally competitive. Both variants are NP-hard, unfortunately. 3. Parameterized algorithms for these NP-hard problems.

Link to Repositum

Neurosymbolic AI: Deep Learning and Deep Reasoning
Szeider, Stefan
Type: Presentation

Link to Repositum

Structure-Guided Local Improvement for Maximum Satisfiability
Szeider, Stefan
Type: Presentation

Link to Repositum

SAT modulo Symmetries
Szeider, Stefan
Type: Presentation

Link to Repositum

Multi-Winner Reconfiguration
Chen, Jiehua, Hatschka, Christian, Simola, Sofia Henna Elisa
Type: Presentation
Show Abstract
We introduce a multi-winner reconfiguration model to examine how to transition between subsets of alternatives (a.k.a. committees) through a sequence of minor yet impactful modifications, called a reconfiguration path. We analyze this model under four approval-based voting rules: Chamberlin-Courant (CC), Proportional Approval Voting (PAV), Approval Voting (AV), and Satisfaction Approval Voting (SAV). The problem is computationally intractable for CC and PAV, and polynomial-time solvable for AV and SAV. We provide a detailed multivariate complexity analysis for CC and PAV, demonstrating that although the problem remains challenging in many scenarios, there are specific cases that allow for efficient parameterized algorithms.

Link to Repositum

On the Complexity of Establishing Hereditary Graph Properties via Vertex Splitting
Firbas, Alexander, Sorge, Manuel
Type: Inproceedings; In: 35th International Symposium on Algorithms and Computation (ISAAC 2024); Vol: 322; Pages: 1-15
Show Abstract
Vertex splitting is a graph operation that replaces a vertex v with two nonadjacent new vertices u, w and makes each neighbor of v adjacent to one or both of u and w. Vertex splitting has been used in contexts from circuit design to statistical analysis. In this work, we generalize from specific vertex-splitting problems and systematically explore the computational complexity of achieving a given graph property Π by a limited number of vertex splits, formalized as the problem Π Vertex Splitting (Π-VS). We focus on hereditary graph properties and contribute four groups of results: First, we classify the classical complexity of Π-VS for graph properties characterized by forbidden subgraphs of order at most 3. Second, we provide a framework that allows one to show NP-completeness whenever one can construct a combination of a forbidden subgraph and prescribed vertex splits that satisfy certain conditions. Using this framework, we show NP-completeness when Π is characterized by sufficiently well-connected forbidden subgraphs; in particular, F-Free-VS is NP-complete for each biconnected graph F. Third, we study infinite families of forbidden subgraphs, obtaining NP-completeness for Bipartite-VS and Perfect-VS, contrasting the known result that Π-VS is in P if Π is the set of all cycles. Finally, we contribute to the study of the parameterized complexity of Π-VS with respect to the number of allowed splits. We show para-NP-hardness for K₃-Free-VS and derive an XP-algorithm when each vertex is only allowed to be split at most once, showing that the ability to split a vertex more than once is a key driver of the problems' complexity.
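The splitting operation itself is simple enough to state in code (our own sketch with invented names; the graph is undirected and stored as adjacency sets):

```python
def split_vertex(adj, v, u, w, to_u, to_w):
    """Replace v by two nonadjacent new vertices u and w; each former
    neighbor of v is attached to u, to w, or to both."""
    assert set(to_u) | set(to_w) == adj[v], "every neighbor must be kept"
    for x in adj[v]:
        adj[x].discard(v)
    del adj[v]
    adj[u], adj[w] = set(to_u), set(to_w)
    for x in to_u:
        adj[x].add(u)
    for x in to_w:
        adj[x].add(w)

# One split destroys a triangle, illustrating the K3-Free-VS objective:
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
split_vertex(adj, 1, 'a', 'b', to_u={2}, to_w={3})
assert adj == {2: {3, 'a'}, 3: {2, 'b'}, 'a': {2}, 'b': {3}}  # now a path
```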

Link to Repositum

Learning to Solve Dynamic Vehicle Routing Problems
Bresich, Maria, Raidl, Günther, Limmer, Steffen, Probst, Malte
Type: Presentation

Link to Repositum

The Complexity of Fair Division of Indivisible Items with Externalities
Deligkas, Argyrios, Eiben, Eduard, Korchemna, Viktoria, Schierreich, Šimon
Type: Inproceedings; In: Proceedings of the 38th AAAI Conference on Artificial Intelligence; Vol: 38; Pages: 9653-9661
Show Abstract
We study the computational complexity of fairly allocating a set of indivisible items under externalities. In this recently-proposed setting, in addition to the utility the agent gets from their bundle, they also receive utility from items allocated to other agents. We focus on the extended definitions of envy-freeness up to one item (EF1) and of envy-freeness up to any item (EFX), and we provide the landscape of their complexity for several different scenarios. We prove that it is NP-complete to decide whether there exists an EFX allocation, even when there are only three agents, or even when there are only six different values for the items. We complement these negative results by showing that when both the number of agents and the number of different values for items are bounded by a parameter the problem becomes fixed-parameter tractable. Furthermore, we prove that two-valued and binary-valued instances are equivalent and that EFX and EF1 allocations coincide for this class of instances. Finally, motivated from real-life scenarios, we focus on a class of structured valuation functions, which we term agent/item-correlated. We prove their equivalence to the “standard” setting without externalities. Therefore, all previous results for EF1 and EFX apply immediately for these valuations.

Link to Repositum

Constrained Planarity in Practice: Engineering the Synchronized Planarity Algorithm
Fink, Simon Dominik, Rutter, Ignaz
Type: Inproceedings; In: 2024 Proceedings of the Symposium on Algorithm Engineering and Experiments (ALENEX); Pages: 1-14
Show Abstract
In the constrained planarity setting, we ask whether a graph admits a planar drawing that additionally satisfies a given set of constraints. These constraints are often derived from very natural problems; prominent examples are Level Planarity, where vertices have to lie on given horizontal lines indicating a hierarchy, and Clustered Planarity, where we additionally draw the boundaries of clusters which recursively group the vertices in a crossing-free manner. Despite a significant amount of attention and substantial theoretical progress on these problems, only very few of the proposed solutions have been put into practice and evaluated experimentally. In this paper, we describe our implementation of the recent quadratic-time algorithm by Bläsius et al. [TALG Vol 19, No 4] for solving the problem Synchronized Planarity, which can be seen as a common generalization of several constrained planarity problems, including the aforementioned ones. Our experimental evaluation on an existing benchmark set shows that even our baseline implementation outperforms all competitors by at least an order of magnitude. We systematically investigate the degrees of freedom in the implementation of the Synchronized Planarity algorithm for larger instances and propose several modifications that further improve the performance. Altogether, this allows us to solve instances with up to 100 vertices in milliseconds and instances with up to 100 000 vertices within a few minutes.

Link to Repositum

Signed double Roman domination on cubic graphs
Iurlano, Enrico, Zec, Tatjana, Djukanovic, Marko, Raidl, Günther R.
Type: Article; In: Applied Mathematics and Computation; Vol: 471
Show Abstract
The signed double Roman domination problem is a combinatorial optimization problem on a graph asking to assign a label from {±1,2,3} to each vertex feasibly, such that the total sum of assigned labels is minimized. Here feasibility is given whenever (i) vertices labeled ±1 have at least one neighbor with label in {2,3}; (ii) each vertex labeled −1 has one 3-labeled neighbor or at least two 2-labeled neighbors; and (iii) the sum of labels over the closed neighborhood of any vertex is positive. The cumulative weight of an optimal labeling is called signed double Roman domination number (SDRDN). In this work, we first consider the problem on general cubic graphs of order n for which we present a sharp n/2+Θ(1) lower bound for the SDRDN by means of the discharging method. Moreover, we derive a new best upper bound. Observing that we are often able to minimize the SDRDN over the class of cubic graphs of a fixed order, we then study in this context generalized Petersen graphs for independent interest, for which we propose a constraint programming guided proof. We then use these insights to determine the SDRDNs of subcubic 2×m grid graphs, among other results.
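The three feasibility conditions quoted in the abstract translate directly into a checker (our own illustrative code, not the authors'; adjacency sets, labels in {-1, 1, 2, 3}):

```python
def is_feasible(adj, f):
    """Check conditions (i)-(iii) from the abstract for a labeling f."""
    for v, nbrs in adj.items():
        labels = [f[u] for u in nbrs]
        if f[v] in (-1, 1) and not any(l >= 2 for l in labels):
            return False  # (i): a +/-1 vertex needs a neighbor in {2,3}
        if f[v] == -1 and not (3 in labels or labels.count(2) >= 2):
            return False  # (ii): -1 needs a 3-neighbor or two 2-neighbors
        if f[v] + sum(labels) <= 0:
            return False  # (iii): closed-neighborhood sum must be positive
    return True

# On the cubic graph K4, labeling one vertex 3, one vertex 1, and two
# vertices -1 is feasible, with total weight 3 + 1 - 1 - 1 = 2:
k4 = {v: {0, 1, 2, 3} - {v} for v in range(4)}
print(is_feasible(k4, {0: 3, 1: -1, 2: -1, 3: 1}))  # True
```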

Link to Repositum

Learning Small Decision Trees for Data of Low Rank-Width
Dabrowski, Konrad, Eiben, Eduard, Ordyniak, Sebastian, Paesani, Giacomo, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024); Vol: 38; Pages: 10476-10483
Show Abstract
We consider the NP-hard problem of finding a smallest decision tree representing a classification instance in terms of a partially defined Boolean function. Small decision trees are desirable to provide an interpretable model for the given data. We show that the problem is fixed-parameter tractable when parameterized by the rank-width of the incidence graph of the given classification instance. Our algorithm proceeds by dynamic programming using an NLC decomposition obtained from a rank-width decomposition. The key to the algorithm is a succinct representation of partial solutions. This allows us to limit the space and time requirements for each dynamic programming step in terms of the parameter.

Link to Repositum

Letting a Large Neighborhood Search for an Electric Dial-A-Ride Problem Fly: On-The-Fly Charging Station Insertion
Bresich, Maria, Raidl, Günther R., Limmer, Steffen
Type: Inproceedings; In: GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion; Pages: 142-150
Show Abstract
We consider the electric autonomous dial-a-ride problem (E-ADARP), a challenging extension of the dial-a-ride problem with the goal of finding minimum-cost routes serving given transportation requests with a fleet of electric autonomous vehicles (EAVs). Special emphasis lies on the minimization of user excess ride time under consideration of the charging requirements of the EAVs, while constraints regarding, for example, user ride times and time windows have to be satisfied. We propose a novel large neighborhood search (LNS) approach for the E-ADARP employing the concept of battery-restricted fragments for route representation and efficient cost computations. For the charging of the EAVs, the scheduling, and the route evaluation, we introduce two approaches: one deals with these challenges separately, using dedicated LNS operators and a forward labeling algorithm, whereas the other combines them via a novel route evaluation procedure that inserts charging stops on-the-fly as needed. The performance of our LNS-based algorithms is evaluated on common benchmark instances, and the results show that especially the approach with on-the-fly insertion almost consistently outperforms former state-of-the-art techniques, finding new best-known solutions for many instances.

Link to Repositum

Backdoor DNFs
Ordyniak, Sebastian, Schidler, Andre, Szeider, Stefan
Type: Article; In: Journal of Computer and System Sciences; Vol: 144
Show Abstract
We introduce backdoor DNFs, as a tool to measure the theoretical hardness of CNF formulas. Like backdoor sets and backdoor trees, backdoor DNFs are defined relative to a tractable class of CNF formulas. Each conjunctive term of a backdoor DNF defines a partial assignment that moves the input CNF formula into the base class. Backdoor DNFs are more expressive and potentially smaller than their predecessors backdoor sets and backdoor trees. We establish the fixed-parameter tractability of the backdoor DNF detection problem. Our results hold for the fundamental base classes Horn and 2CNF, and their combination. We complement our theoretical findings by an empirical study. Our experiments show that backdoor DNFs provide a significant improvement over their predecessors.
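The basic mechanism, applying the partial assignment given by one conjunctive term of the backdoor DNF to a CNF formula, can be sketched as follows (our own code, using DIMACS-style integer literals):

```python
def apply_term(cnf, term):
    """Reduce a CNF (list of clauses; a clause is a list of nonzero ints)
    under the partial assignment given as a set of literals (the DNF term).
    Satisfied clauses are dropped; falsified literals are removed."""
    assigned = {abs(l) for l in term}
    reduced = []
    for clause in cnf:
        if any(l in term for l in clause):
            continue  # clause satisfied by the term
        reduced.append([l for l in clause if abs(l) not in assigned])
    return reduced

# Setting x1 = True reduces (x1 v x2) and (-x1 v -x2 v x3) to (-x2 v x3),
# a Krom (2CNF) formula, so {x1} would be a term moving this formula
# into the 2CNF base class:
print(apply_term([[1, 2], [-1, -2, 3]], {1}))  # [[-2, 3]]
```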

Link to Repositum

A General Theoretical Framework for Learning Smallest Interpretable Models
Ordyniak, Sebastian, Paesani, Giacomo, Rychlicki, Mateusz, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 38th AAAI Conference on Artificial Intelligence; Vol: 38; Pages: 10662-10669
Show Abstract
We develop a general algorithmic framework that allows us to obtain fixed-parameter tractability for computing smallest symbolic models that represent given data. Our framework applies to all ML model types that admit a certain extension property. By establishing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we obtain that minimizing these fundamental model types is fixed-parameter tractable. Our framework even applies to ensembles, which combine individual models by majority decision.

Link to Repositum

Hoop Diagrams: A Set Visualization Method
Rodgers, Peter, Chapman, Peter, Blake, Andrew, Nöllenburg, Martin, Wallinger, Markus, Dobler, Alexander
Type: Inproceedings; In: Diagrammatic Representation and Inference; Vol: 14981; Pages: 377-392
Show Abstract
We introduce Hoop Diagrams, a new visualization technique for set data. Hoop Diagrams are a circular visualization with hoops representing sets and sectors representing set intersections. We present an interactive tool for drawing Hoop Diagrams and describe a user study comparing them with Linear Diagrams. The results show only small differences, with users answering questions more quickly with Linear Diagrams but answering some questions more accurately with Hoop Diagrams. Interaction data indicates that those using set order and intersection highlighting were more successful at answering questions, but those who used other interactions responded more slowly. The similarity in usability suggests that the diagram type should be chosen based on the presentation method. Linear Diagrams grow in the horizontal direction with the number of intersections, leading to difficulties fitting on a screen, whereas Hoop Diagrams always have a square aspect ratio.

Link to Repositum

Cutsets and EF1 Fair Division of Graphs
Chen, Jiehua, Zwicker, William
Type: Inproceedings; In: AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems; Pages: 2192-2194
Show Abstract
A connected graph G = (V, E) provides a natural context for importing the connectivity requirement of fair division from the continuous world into the discrete one. Each of n agents is allocated a share of G's vertex set V. These n shares partition V, with each required to induce a connected subgraph. Agents use their own valuation functions to determine the non-negative numerical values of the shares, which then determine whether the allocation is fair in some specified sense. Applications include the problem of dividing cities connected by a road network when each party wishes to drive among its allocated cities without leaving its territory. We introduce graph cutsets - forbidden substructures which block allocations that are fair in the EF1 (envy-free up to one item) sense. Two parameters - gap and valence - determine blocked values of n. If G contains a cutset of gap k ≥ 2 and valence in the interval [n - k + 1, n - 1], then allocations that are CEF1 (connected EF1) fail to exist for n agents with certain CM (common monotone) valuations; an elementary cutset yields such a failure even for CA (common additive) valuations. Additionally, we provide an example (Graph GIII in Figure 1) which excludes both cutsets of gap at least two and CEF1 divisions for three agents even with CA valuations. We show that it is NP-complete to determine whether cutsets exist. Finally, for some graphs G we can, in combination with some new positive results, pin down G's spectrum - the list of exactly which values of n do/do not guarantee CEF1 allocations. Examples suggest a conjectured common spectral pattern for all graphs.

Link to Repositum

Parameterized Algorithms for Coordinated Motion Planning: Minimizing Energy
Deligkas, Argyrios, Eiben, Eduard, Ganian, Robert, Kanj, Iyad, Ramanujan, M. S.
Type: Inproceedings; In: 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024); Vol: 297; Pages: 53:1-53:18
Show Abstract
We study the parameterized complexity of a generalization of the coordinated motion planning problem on graphs, where the goal is to route a specified subset of a given set of k robots to their destinations with the aim of minimizing the total energy (i.e., the total length traveled). We develop novel techniques to push beyond previously-established results that were restricted to solid grids. We design a fixed-parameter additive approximation algorithm for this problem parameterized by k alone. This result, which is of independent interest, allows us to prove the following two results pertaining to well-studied coordinated motion planning problems: (1) A fixed-parameter algorithm, parameterized by k, for routing a single robot to its destination while avoiding the other robots, which is related to the famous Rush-Hour Puzzle; and (2) a fixed-parameter algorithm, parameterized by k plus the treewidth of the input graph, for the standard Coordinated Motion Planning (CMP) problem in which we need to route all the k robots to their destinations. The latter of these results implies, among others, the fixed-parameter tractability of CMP parameterized by k on graphs of bounded outerplanarity, which include bounded-height subgrids. We complement the above results with a lower bound which rules out the fixed-parameter tractability for CMP when parameterized by the total energy. This contrasts the recently-obtained tractability of the problem on solid grids under the same parameterization. As our final result, we strengthen the aforementioned fixed-parameter tractability to hold not only on solid grids but all graphs of bounded local treewidth – a class including, among others, all graphs of bounded genus.

Link to Repositum

Coalition Formation with Bounded Coalition Size
Levinger, Chaya, Hazon, Noam, Simola, Sofia Henna Elisa, Azaria, Amos
Type: Inproceedings; In: AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems; Pages: 1119-1127
Show Abstract
In many situations when people are assigned to coalitions, the utility of each person depends on the friends in her coalition. Additionally, in many situations, the size of each coalition should be bounded. This paper studies such coalition formation scenarios in both weighted and unweighted settings. Since finding a partition that maximizes the utilitarian social welfare is computationally hard, we provide a polynomial-time approximation algorithm. We also investigate the existence and the complexity of finding stable partitions. Namely, we show that the Contractual Strict Core (CSC) is never empty, but the Strict Core (SC) of some games is empty. Finding partitions that are in the CSC is computationally easy, but even deciding whether an SC of a given game exists is NP-hard. In the unweighted setting, we show that when the coalition size is bounded by 3 the core is never empty, and we present a polynomial time algorithm for finding a member of the core. However, for the weighted setting, the core may be empty, and we prove that deciding whether there exists a core is NP-hard.

Link to Repositum

Parameterized Algorithms for Optimal Refugee Resettlement
Chen, Jiehua, Schlotter, Ildikó, Simola, Sofia
Type: Inproceedings; In: ECAI 2024; Vol: 392; Pages: 3413-3420
Show Abstract
We study variants of the Optimal Refugee Resettlement problem where a set F of refugee families need to be allocated to a set P of possible places of resettlement in a feasible and optimal way. Feasibility issues emerge from the assumption that each family requires certain services (such as accommodation, school seats, or medical assistance), while there is an upper and, possibly, a lower quota on the number of service units provided at a given place. Besides studying the problem of finding a feasible assignment, we also investigate two natural optimization variants. In the first one, we allow families to express preferences over P, and we aim for a Pareto-optimal assignment. In a more general setting, families can attribute utilities to each place in P, and the task is to find a feasible assignment with maximum total utilities. We study the computational complexity of all three variants in a multivariate fashion using the framework of parameterized complexity. We provide fixed-parameter algorithms for a handful of natural parameterizations, and complement these tractable cases with tight intractability results.

Link to Repositum

On Combined Visual Cluster and Set Analysis
Piccolotto, Nikolaus, Wallinger, Markus, Miksch, Silvia, Bögl, Markus
Type: Inproceedings; In: 2024 IEEE Visualization and Visual Analytics (VIS); Pages: 131-135
Show Abstract
Real-world datasets often consist of quantitative and categorical variables. The analyst needs to focus on either kind separately or both jointly. We previously proposed a visualization technique tackling these challenges that supports visual cluster and set analysis. In this paper, we investigate how its visualization parameters affect the accuracy and speed of cluster and set analysis tasks in a controlled experiment. Our findings show that, with the proper settings, our visualization can support both task types well. However, we did not find settings suitable for the joint task, which provides opportunities for future research.

Link to Repositum

Bundling-Aware Graph Drawing
Archambault, Daniel, Liotta, Giuseppe, Nöllenburg, Martin, Piselli, Tommaso, Tappini, Alessandra, Wallinger, Markus
Type: Inproceedings; In: 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024); Vol: 320; Pages: 1-19
Show Abstract
Edge bundling algorithms significantly improve the visualization of dense graphs by reducing the clutter of many edges visible on screen by bundling them together. As such, bundling is often viewed as a post-processing step applied to a drawing, and the vast majority of edge bundling algorithms consider a graph and its drawing as input. Another way of thinking about edge bundling is to simultaneously optimize both the drawing and the bundling. In this paper, we investigate methods to simultaneously optimize a graph drawing and its bundling. We describe an algorithmic framework which consists of three main steps, namely Filter, Draw, and Bundle. We then propose two alternative implementations and experimentally compare them against the state-of-the-art approach and the simple idea of drawing and subsequently bundling the graph. The experiments confirm that bundled drawings created by our framework outperform previous approaches according to standard quality metrics for edge bundling.

Link to Repositum

Exact Algorithms for Clustered Planarity with Linear Saturators
Da Lozzo, Giordano, Ganian, Robert, Gupta, Siddharth, Mohar, Bojan, Ordyniak, Sebastian, Zehavi, Meirav
Type: Inproceedings; In: 35th International Symposium on Algorithms and Computation (ISAAC 2024); Vol: 322; Pages: 1-16
Show Abstract
We study Clustered Planarity with Linear Saturators, which is the problem of augmenting an n-vertex planar graph whose vertices are partitioned into independent sets (called clusters) with paths - one for each cluster - that connect all the vertices in each cluster while maintaining planarity. We show that the problem can be solved in time 2^𝒪(n) for both the variable and fixed embedding case. Moreover, we show that it can be solved in subexponential time 2^𝒪(√n log n) in the fixed embedding case if additionally the input graph is connected. The latter time complexity is tight under the Exponential-Time Hypothesis. We also show that n can be replaced with the vertex cover number of the input graph by providing a linear (resp. polynomial) kernel for the variable-embedding (resp. fixed-embedding) case; these results contrast the NP-hardness of the problem on graphs of bounded treewidth (and even on trees). Finally, we complement known lower bounds for the problem by showing that Clustered Planarity with Linear Saturators is NP-hard even when the number of clusters is at most 3, thus excluding the algorithmic use of the number of clusters as a parameter.

Link to Repositum

Cluster Editing Parameterized above Modification-disjoint P₃-packings
Li, Shaohua, Pilipczuk, Marcin, Sorge, Manuel
Type: Article; In: ACM Transactions on Algorithms; Vol: 20; Issue: 1
Show Abstract
Given a graph G = (V, E) and an integer k, the Cluster Editing problem asks whether we can transform G into a union of vertex-disjoint cliques by at most k modifications (edge deletions or insertions). In this paper, we study the following variant of Cluster Editing. We are given a graph G = (V, E), a packing H of modification-disjoint induced P₃s (no pair of P₃s in H share an edge or non-edge) and an integer ℓ. The task is to decide whether G can be transformed into a union of vertex-disjoint cliques by at most ℓ + |H| modifications (edge deletions or insertions). We show that this problem is NP-hard even when ℓ = 0 (in which case the problem asks to turn G into a disjoint union of cliques by performing exactly one edge deletion or insertion per element of H) and when each vertex is in at most 23 P₃s of the packing. This answers negatively a question of van Bevern, Froese, and Komusiewicz (CSR 2016, ToCS 2018), repeated by C. Komusiewicz at Shonan meeting no. 144 in March 2019. We then initiate the study to find the largest integer c such that the problem remains tractable when restricting to packings such that each vertex is in at most c packed P₃s. Here packed P₃s are those belonging to the packing H. Van Bevern et al. showed that the case c = 1 is fixed-parameter tractable with respect to ℓ and we show that the case c = 2 is solvable in |V|^(2ℓ+O(1)) time.
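The objective being minimized can be made concrete with a small verifier: for a candidate partition of the vertices into clusters, the editing cost is the number of missing intra-cluster edges plus the number of present inter-cluster edges. A hedged Python sketch (illustrative helper, not the paper's algorithm):

```python
from itertools import combinations

def editing_cost(n, edges, clusters):
    # Number of edge deletions/insertions needed to turn the graph on
    # vertices 0..n-1 into the disjoint cliques given by `clusters`.
    edge_set = {frozenset(e) for e in edges}
    label = {v: i for i, cl in enumerate(clusters) for v in cl}
    cost = 0
    for u, v in combinations(range(n), 2):
        same = label[u] == label[v]
        present = frozenset((u, v)) in edge_set
        if same != present:   # missing intra-cluster edge, or
            cost += 1         # extra inter-cluster edge
    return cost

# Path 0-1-2-3: making {0,1} and {2,3} cliques deletes the edge 1-2
print(editing_cost(4, [(0, 1), (1, 2), (2, 3)], [[0, 1], [2, 3]]))  # → 1
```

Cluster Editing then asks whether some partition achieves cost at most k (here, at most ℓ + |H|).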

Link to Repositum

Flip-Breakability: A Combinatorial Dichotomy for Monadically Dependent Graph Classes
Dreier, Jan, Mählmann, Nikolas, Toruńczyk, Szymon
Type: Inproceedings; In: STOC 2024: Proceedings of the 56th Annual ACM Symposium on Theory of Computing; Pages: 1550-1560
Show Abstract
A conjecture in algorithmic model theory predicts that the model-checking problem for first-order logic is fixed-parameter tractable on a hereditary graph class if and only if the class is monadically dependent. Originating in model theory, this notion is defined in terms of logic, and encompasses nowhere dense classes, monadically stable classes, and classes of bounded twin-width. Working towards this conjecture, we provide the first two combinatorial characterizations of monadically dependent graph classes. This yields the following dichotomy. On the structure side, we characterize monadic dependence by a Ramsey-theoretic property called flip-breakability. This notion generalizes the notions of uniform quasi-wideness, flip-flatness, and bounded grid rank, which characterize nowhere denseness, monadic stability, and bounded twin-width, respectively, and played a key role in their respective model checking algorithms. Natural restrictions of flip-breakability additionally characterize bounded treewidth and cliquewidth and bounded treedepth and shrubdepth. On the non-structure side, we characterize monadic dependence by explicitly listing few families of forbidden induced subgraphs. This result is analogous to the characterization of nowhere denseness via forbidden subdivided cliques, and allows us to resolve one half of the motivating conjecture: First-order model checking is AW[∗]-hard on every hereditary graph class that is monadically independent. The result moreover implies that hereditary graph classes which are small, have almost bounded twin-width, or have almost bounded flip-width, are monadically dependent. Lastly, we lift our result to also obtain a combinatorial dichotomy in the more general setting of monadically dependent classes of binary structures.

Link to Repositum

Mixed Integer Linear Programming Based Large Neighborhood Search Approaches for the Directed Feedback Vertex Set Problem
Bresich, Maria, Varga, Johannes, Raidl, Günther R., Limmer, Steffen
Type: Inproceedings; In: Metaheuristics and Nature Inspired Computing : 9th International Conference, META 2023, Marrakech, Morocco, November 1–4, 2023, Revised Selected Papers; Vol: 2016; Pages: 3-20
Show Abstract
A directed feedback vertex set (DFVS) of a directed graph is a subset of vertices whose removal makes the graph acyclic. Finding a DFVS of minimum cardinality is the goal of the directed feedback vertex set problem, an NP-hard combinatorial optimization problem. We first consider two mixed integer linear programming (MILP) models for this problem, which, when solved with Gurobi, are effective on graphs of small to medium complexity but do not scale well to large instances. Aiming at better scalability and higher robustness over a large variety of graphs, we investigate a large neighborhood search (LNS) in which a destroy operator removes randomly chosen nodes from an incumbent DFVS and one of the MILP models is used for repair. Regarding the destroy operator, finding a best degree of destruction is challenging. A main contribution lies in proposing several selection strategies for this parameter as well as a strategy for choosing the more promising MILP model for repair. We evaluate the performance of the MILP models and different LNS variants on benchmark instances and compare the approaches to each other as well as to state-of-the-art procedures. Results show that our LNS variants yield clearly better solutions on average than standalone MILP solving. Even though our approaches cannot outperform the state-of-the-art, we gain valuable insights on beneficially configuring such a MILP-based LNS.

Link to Repositum

Crossing Number is NP-hard for Constant Path-width (and Tree-width)
Hliněný, Petr, Khazaliya, Liana
Type: Preprint
Show Abstract
Crossing Number is a celebrated problem in graph drawing. It has been known to be NP-complete since the 1980s, and fairly involved techniques were already required to show its fixed-parameter tractability when parameterized by the vertex cover number. In this paper we prove that computing the crossing number exactly is NP-hard even for graphs of path-width 12 (and, as a result, even of tree-width 9). Thus, while tree-width and path-width have been very successful tools in many graph algorithm scenarios, our result shows that general crossing number computations are unlikely (under P ≠ NP) to be successfully tackled using bounded width of graph decompositions, which had been a 'tantalizing open problem' [S. Cabello, Hardness of Approximation for Crossing Number, 2013] till now.

Link to Repositum

Twin-Width Meets Feedback Edges and Vertex Integrity
Balabán, Jakub, Ganian, Robert, Rocton, Mathis Teva
Type: Inproceedings; In: 19th International Symposium on Parameterized and Exact Computation (IPEC 2024); Vol: 321
Show Abstract
The approximate computation of twin-width has attracted significant attention already since the moment the parameter was introduced. A recently proposed approach (STACS 2024) towards obtaining a better understanding of this question is to consider the approximability of twin-width via fixed-parameter algorithms whose running time depends not on twin-width itself, but rather on parameters which impose stronger restrictions on the input graph. The first step that article made in this direction is to establish the fixed-parameter approximability of twin-width (with an additive error of 1) when the runtime parameter is the feedback edge number. Here, we make several new steps in this research direction and obtain:
- an asymptotically tight bound between twin-width and the feedback edge number;
- a significantly improved fixed-parameter approximation algorithm for twin-width under the same runtime parameter (i.e., the feedback edge number), which circumvents many of the technicalities of the original result and simultaneously avoids its formerly non-elementary runtime dependency;
- an entirely new fixed-parameter approximation algorithm for twin-width when the runtime parameter is the vertex integrity of the graph.

Link to Repositum

Introducing Fairness in Graph Visualization via Gradient Descent
Hong, Seok-Hee, Liotta, Giuseppe, Montecchiani, Fabrizio, Nöllenburg, Martin, Piselli, Tommaso
Type: Inproceedings; In: MLVis: Machine Learning Methods in Visualisation for Big Data (2024)
Show Abstract
Motivated by the need for decision-making systems that avoid bias and discrimination, the concept of fairness recently gained traction in the broad field of artificial intelligence, stimulating new research also within the information visualization community. In this paper, we introduce a notion of fairness in network visualization, specifically for straight-line drawings of graphs, a foundational paradigm in the field. We empirically investigate the following research questions: (i) What is the price of incorporating fairness constraints in straight-line drawings? (ii) How unfair is a straight-line drawing that does not optimize fairness as a primary objective? To tackle these questions, we implement an algorithm based on gradient-descent that can compute straight-line drawings of graphs by optimizing multi-objective functions. We experimentally show that one can significantly increase the fairness of a drawing by paying a relatively small amount in terms of reduced readability.

Link to Repositum

Counting Vanishing Matrix-Vector Products
Brand, Cornelius, Korchemna, Viktoria, Simonov, Kirill, Skotnica, Michael
Type: Inproceedings; In: WALCOM: Algorithms and Computation: 18th International Conference and Workshops on Algorithms and Computation, WALCOM 2024, Kanazawa, Japan, March 18-20, 2024, Proceedings; Vol: 14549; Pages: 335-349
Show Abstract
Consider the following parameterized counting variation of the classic subset sum problem, which arises notably in the context of higher homotopy groups of topological spaces: Let v ∈ ℚ^d be a rational vector, (T_1, T_2, …, T_m) a list of d × d rational matrices, S ∈ ℚ^(h×d) a rational matrix, not necessarily square, and k a parameter. The goal is to compute the number of ways one can choose k matrices T_{i_1}, T_{i_2}, …, T_{i_k} from the list such that S · T_{i_k} ⋯ T_{i_1} · v = 0 ∈ ℚ^h. In this paper, we show that this problem is #W[2]-hard for parameter k. As a consequence, computing the k-th homotopy group of a d-dimensional 1-connected topological space for d > 3 is #W[2]-hard for parameter k. We also discuss a decision version of the problem and several of its modifications for which we show W[1]/W[2]-hardness. This is in contrast to the parameterized k-sum problem, which is only W[1]-hard (Abboud-Lewi-Williams, ESA'14). In addition, we show that the decision version of the problem without the parameter is undecidable, and we give a fixed-parameter tractable algorithm for matrices of bounded size over finite fields, parameterized by the matrix dimensions and the order of the field.
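For tiny instances the count can be brute-forced directly from the definition. A minimal Python sketch, enumerating unordered index subsets (one natural reading of the k-fold choice; the paper also treats other variants), with illustrative names and an illustrative example:

```python
from itertools import combinations

def mat_vec(M, v):
    # matrix-vector product over the rationals (here: ints)
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def count_vanishing(S, Ts, v, k):
    # Count index sets i1 < ... < ik with S · T_{ik} ··· T_{i1} · v = 0.
    count = 0
    for idx in combinations(range(len(Ts)), k):
        w = v
        for i in idx:              # apply T_{i1} first, T_{ik} last
            w = mat_vec(Ts[i], w)
        if all(x == 0 for x in mat_vec(S, w)):
            count += 1
    return count

# T_1 annihilates everything, T_2 is the identity; for k = 1 only the
# choice {T_1} makes the product vanish.
Z, I = [[0, 0], [0, 0]], [[1, 0], [0, 1]]
print(count_vanishing([[1, 0]], [Z, I], [1, 0], 1))  # → 1
```

The hardness result says that no algorithm can, in general, avoid this kind of exponential enumeration in k (under standard parameterized-complexity assumptions).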

Link to Repositum

Generating All Invertible Matrices by Row Operations
Gregor, Petr, Hoang, Phuc Hung, Merino, Arturo, Mička, Ondřej
Type: Inproceedings; In: 35th International Symposium on Algorithms and Computation (ISAAC 2024); Vol: 322; Pages: 1-14
Show Abstract
We show that all invertible n × n matrices over any finite field 𝔽_q can be generated in a Gray code fashion. More specifically, there exists a listing such that (1) each matrix appears exactly once, and (2) two consecutive matrices differ by adding or subtracting one row from a previous or subsequent row, or by multiplying or dividing a row by the generator of the multiplicative group of 𝔽_q. This even holds in the more general setting where the pairs of rows that can be added or subtracted are specified by an arbitrary transition tree that has to satisfy some mild constraints. Moreover, we can prescribe the first and the last matrix if n ≥ 3, or n = 2 and q > 2. In other words, the corresponding flip graph on all invertible n × n matrices over 𝔽_q is Hamilton connected if it is not a cycle. This solves yet another special case of the Lovász conjecture on Hamiltonicity of vertex-transitive graphs.
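On a toy scale the flip graph can be built explicitly. For n = 2 and q = 2 the multiplicative group of 𝔽₂ is trivial, so the only moves are row additions, and the flip graph on the six matrices of GL(2, 𝔽₂) turns out to be a single 6-cycle, consistent with the cycle exception in the statement above. A small Python check (illustrative, not from the paper):

```python
from itertools import product

def gl2(q=2):
    # all invertible 2×2 matrices over F_q (determinant check, n = 2 only)
    mats = []
    for entries in product(range(q), repeat=4):
        M = (entries[0:2], entries[2:4])
        if (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % q != 0:
            mats.append(M)
    return mats

def neighbors(M, q=2):
    # row additions R_i += R_j (over F_2, addition equals subtraction)
    out = set()
    for i in range(2):
        for j in range(2):
            if i != j:
                rows = [list(r) for r in M]
                rows[i] = [(a + b) % q for a, b in zip(rows[i], rows[j])]
                out.add(tuple(tuple(r) for r in rows))
    return out

V = gl2()
assert len(V) == 6                               # |GL(2, F_2)| = 6
assert all(len(neighbors(M)) == 2 for M in V)    # every vertex has degree 2
seen, stack = set(), [V[0]]                      # ... and the graph is
while stack:                                     # connected, hence one cycle
    M = stack.pop()
    if M not in seen:
        seen.add(M)
        stack.extend(neighbors(M))
assert len(seen) == 6
print("GL(2,2) flip graph is a single 6-cycle")
```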

Link to Repositum

Cluster Editing for Multi-Layer and Temporal Graphs
Chen, Jiehua, Molter, Hendrik, Sorge, Manuel, Suchý, Ondřej
Type: Article; In: Theory of Computing Systems; Vol: 68; Issue: 5; Pages: 1239-1290
Show Abstract
Motivated by the recent rapid growth of research for algorithms to cluster multi-layer and temporal graphs, we study extensions of the classical Cluster Editing problem. In Multi-Layer Cluster Editing we receive a set of graphs on the same vertex set, called layers, and aim to transform all layers into cluster graphs (disjoint unions of cliques) that differ only slightly. More specifically, we want to mark at most d vertices and to transform each layer into a cluster graph using at most k edge additions or deletions per layer so that, if we remove the marked vertices, we obtain the same cluster graph in all layers. In Temporal Cluster Editing we receive a sequence of layers and we want to transform each layer into a cluster graph so that consecutive layers differ only slightly. That is, we want to transform each layer into a cluster graph with at most k edge additions or deletions and to mark a distinct set of d vertices in each layer so that each two consecutive layers are the same after removing the vertices marked in the first of the two layers. We study the combinatorial structure of the two problems via their parameterized complexity with respect to the parameters d and k, among others. Despite the similar definition, the two problems behave quite differently: In particular, Multi-Layer Cluster Editing is fixed-parameter tractable with running time k^(O(k+d)) · s^(O(1)) for inputs of size s, whereas Temporal Cluster Editing is W[1]-hard with respect to k even if d = 3.

Link to Repositum

A note on clustering aggregation for binary clusterings
Chen, Jiehua, Hermelin, Danny, Sorge, Manuel
Type: Article; In: Operations Research Letters; Vol: 52
Show Abstract
We consider the clustering aggregation problem in which we are given a set of clusterings and want to find an aggregated clustering which minimizes the sum of mismatches to the input clusterings. In the binary case (each clustering is a bipartition) this problem was known to be NP-hard under Turing reductions. We strengthen this result by providing a polynomial-time many-one reduction. Our result also implies that no 2^(o(n)) · |I′|^(O(1))-time algorithm exists that solves any given clustering instance I′ with n elements, unless the Exponential Time Hypothesis fails. On the positive side, we show that the problem is fixed-parameter tractable with respect to the number of input clusterings.
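The objective can be illustrated by brute force on a tiny instance. A hedged Python sketch, using the pairwise (Mirkin-type) disagreement count as the mismatch measure (the paper's exact measure may differ) and enumerating all 2^n candidate bipartitions, which is feasible only for tiny n:

```python
from itertools import combinations, product

def pair_dist(c1, c2, n):
    # pairs of elements that the two bipartitions treat differently
    # (together in one, apart in the other)
    return sum(1 for i, j in combinations(range(n), 2)
               if (c1[i] == c1[j]) != (c2[i] == c2[j]))

def best_aggregation(clusterings, n):
    # brute force over all 2^n binary labelings, keep the cheapest
    best, best_cost = None, None
    for cand in product([0, 1], repeat=n):
        cost = sum(pair_dist(cand, c, n) for c in clusterings)
        if best_cost is None or cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

# two votes for the split {0,1}|{2,3}, one for {0,2}|{1,3}
inputs = [(0, 0, 1, 1), (0, 0, 1, 1), (0, 1, 0, 1)]
print(best_aggregation(inputs, 4))  # → ((0, 0, 1, 1), 4)
```

The majority split wins: it matches two inputs exactly and disagrees with the third on four element pairs.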

Link to Repositum

Level Planarity Is More Difficult Than We Thought
Fink, Simon Dominik, Pfretzschner, Matthias, Rutter, Ignaz, Stumpf, Peter
Type: Preprint
Show Abstract
We consider three simple quadratic time algorithms for the problem Level Planarity and give a level-planar instance that they either falsely report as negative or for which they output a drawing that is not level planar.

Link to Repositum

Bounding and Computing Obstacle Numbers of Graphs
Balko, Martin, Chaplick, Steven, Ganian, Robert, Gupta, Siddharth, Hoffmann, Michael, Valtr, Pavel, Wolff, Alexander
Type: Article; In: SIAM Journal on Discrete Mathematics; Vol: 38; Issue: 2; Pages: 1537-1565
Show Abstract
An obstacle representation of a graph G consists of a set of pairwise disjoint simply connected closed regions and a one-to-one mapping of the vertices of G to points such that two vertices are adjacent in G if and only if the line segment connecting the two corresponding points does not intersect any obstacle. The obstacle number of a graph is the smallest number of obstacles in an obstacle representation of the graph in the plane such that all obstacles are simple polygons. It is known that the obstacle number of each n-vertex graph is O(n log n) [M. Balko, J. Cibulka, and P. Valtr, Discrete Comput. Geom., 59 (2018), pp. 143-164] and that there are n-vertex graphs whose obstacle number is Ω(n / (log log n)²) [V. Dujmović and P. Morin, Electron. J. Combin., 22 (2015), 3.1]. We improve this lower bound to Ω(n / log log n) for simple polygons and to Ω(n) for convex polygons. To obtain these stronger bounds, we improve known estimates on the number of n-vertex graphs with bounded obstacle number, solving a conjecture by Dujmović and Morin. We also show that if the drawing of some n-vertex graph is given as part of the input, then for some drawings Ω(n²) obstacles are required to turn them into an obstacle representation of the graph. Our bounds are asymptotically tight in several instances. We complement these combinatorial bounds by two complexity results. First, we show that computing the obstacle number of a graph G is fixed-parameter tractable in the vertex cover number of G. Second, we show that, given a graph G and a simple polygon P, it is NP-hard to decide whether G admits an obstacle representation using P as the only obstacle.

Link to Repositum

The Fine-Grained Complexity of Graph Homomorphism Parameterized by Clique-Width
Ganian, Robert, Hamm, Thekla, Korchemna, Viktoria, Okrasa, Karolina, Simonov, Kirill
Type: Article; In: ACM Transactions on Algorithms; Vol: 20; Issue: 3
Show Abstract
The generic homomorphism problem, which asks whether an input graph G admits a homomorphism into a fixed target graph H, has been widely studied in the literature. In this article, we provide a fine-grained complexity classification of the running time of the homomorphism problem with respect to the clique-width of G (denoted cw) for virtually all choices of H under the Strong Exponential Time Hypothesis. In particular, we identify a property of H called the signature number B(H) and show that for each H, the homomorphism problem can be solved in time O*(B(H)^cw). Crucially, we then show that this algorithm can be used to obtain essentially tight upper bounds. Specifically, we provide a reduction that yields matching lower bounds for each H that is either a projective core or a graph admitting a factorization with additional properties, allowing us to cover all possible target graphs under long-standing conjectures.
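For intuition, the homomorphism problem itself is easy to state and to brute-force on tiny instances, in contrast to the structured clique-width dynamic programming studied in the paper. A hedged Python sketch with illustrative names:

```python
from itertools import product

def has_homomorphism(n, G_edges, m, H_edges):
    # Brute force over all m^n maps f : V(G) -> V(H); a homomorphism
    # must send every edge of G to an edge of H (loops would need extra
    # handling; H is assumed simple here).
    H = {frozenset(e) for e in H_edges}
    for f in product(range(m), repeat=n):
        if all(frozenset((f[u], f[v])) in H for u, v in G_edges):
            return True
    return False

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
k3 = [(0, 1), (0, 2), (1, 2)]
# C5 maps homomorphically into K3 (odd cycles are 3-colourable) ...
print(has_homomorphism(5, c5, 3, k3))        # → True
# ... but not into K2, since C5 is not bipartite
print(has_homomorphism(5, c5, 2, [(0, 1)]))  # → False
```

Note that a homomorphism into the complete graph K_c is exactly a proper c-colouring, which is why the fine-grained bounds above generalize known colouring results.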

Link to Repositum

On the complexity of the storyplan problem
Binucci, Carla, Di Giacomo, Emilio, Lenhart, William, Liotta, Giuseppe, Montecchiani, Fabrizio, Nöllenburg, Martin, Symvonis, Antonios
Type: Article; In: Journal of Computer and System Sciences; Vol: 139
Show Abstract
We study the problem of representing a graph as a storyplan, a recently introduced model for dynamic graph visualization. It is based on a sequence of frames, each showing a subset of vertices and a planar drawing of their induced subgraphs, where vertices appear and disappear over time. Namely, in the StoryPlan problem, we are given a graph and we want to decide whether there exists a total vertex appearance order for which a storyplan exists. We prove that the problem is NP-complete, and complement this hardness with two parameterized algorithms, one parameterized by the vertex cover number and one by the feedback edge set number of the input graph. We prove that partial 3-trees always admit a storyplan, which can be computed in linear time. Finally, we show that the problem remains NP-complete if the vertex appearance order is given and we have to choose how to draw the frames.

Link to Repositum

Constrained Boundary Labeling
Depian, Thomas, Nöllenburg, Martin, Terziadis, Soeren, Wallinger, Markus
Type: Inproceedings; In: 35th International Symposium on Algorithms and Computation (ISAAC 2024); Vol: 322
Show Abstract
Boundary labeling is a technique in computational geometry used to label dense sets of feature points in an illustration. It involves placing labels along an axis-aligned bounding box and connecting each label with its corresponding feature point using non-crossing leader lines. Although boundary labeling is well-studied, semantic constraints on the labels have not been investigated thoroughly. In this paper, we introduce grouping and ordering constraints in boundary labeling: Grouping constraints enforce that all labels in a group are placed consecutively on the boundary, and ordering constraints enforce a partial order over the labels. We show that it is NP-hard to find a labeling for arbitrarily sized labels with unrestricted positions along one side of the boundary. However, we obtain polynomial-time algorithms if we restrict this problem either to uniform-height labels or to a finite set of candidate positions. Finally, we show that finding a labeling on two opposite sides of the boundary is NP-complete, even for uniform-height labels and finite label positions.

Link to Repositum

Boundary Labeling in a Circular Orbit
Bonerath, Annika, Nöllenburg, Martin, Terziadis, Soeren, Wallinger, Markus, Wulms, Jules
Type: Inproceedings; In: 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024); Vol: 320; Pages: 1-17
Show Abstract
Boundary labeling is a well-known method for displaying short textual labels for a set of point features in a figure alongside the boundary of that figure. Labels and their corresponding points are connected via crossing-free leaders. We propose orbital boundary labeling as a new variant of the problem, in which (i) the figure is enclosed by a circular contour and (ii) the labels are placed as disjoint circular arcs in an annulus-shaped orbit around the contour. The algorithmic objective is to compute an orbital boundary labeling with the minimum total leader length. We identify several parameters that define the corresponding problem space: two leader types (straight or orbital-radial), label size and order, presence of candidate label positions, and constraints on where a leader attaches to its label. Our results provide polynomial-time algorithms for many variants and NP-hardness for others, using a variety of geometric and combinatorial insights.

Link to Repositum

Strong (D)QBF Dependency Schemes via Implication-free Resolution Paths
Beyersdorff, Olaf, Blinkhorn, Joshua Lewis, Peitl, Tomáš
Type: Article; In: ACM Transactions on Computation Theory; Vol: 16; Issue: 4
Show Abstract
We suggest a general framework to study dependency schemes for dependency quantified Boolean formulas (DQBF). As our main contribution, we exhibit a new infinite collection of implication-free DQBF dependency schemes that generalise the reflexive resolution path dependency scheme. We establish soundness of all these schemes, implying that they can be used in any DQBF proof system. We further explore the power of QBF and DQBF resolution systems parameterised by implication-free dependency schemes and show that the hierarchical structure naturally present among the dependency schemes translates isomorphically to a hierarchical structure of parameterised proof systems with respect to p-simulation. As a special case, we demonstrate that our new schemes are exponentially stronger than the reflexive resolution path dependency scheme when used in Q-resolution, thus resulting in the strongest QBF dependency schemes known to date.

Link to Repositum

QCDCL with cube learning or pure literal elimination – What is best?
Böhm, Benjamin, Peitl, Tomáš, Beyersdorff, Olaf
Type: Article; In: Artificial Intelligence; Vol: 336
Show Abstract
Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving variants via proof complexity techniques. Our results show that almost all of the QCDCL variants are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL.

Link to Repositum

Should Decisions in QCDCL Follow Prefix Order?
Böhm, Benjamin, Peitl, Tomáš, Beyersdorff, Olaf
Type: Article; In: Journal of Automated Reasoning; Vol: 68; Issue: 1
Show Abstract
Quantified conflict-driven clause learning (QCDCL) is one of the main solving approaches for quantified Boolean formulas (QBF). One of the differences between QCDCL and propositional CDCL is that QCDCL typically follows the prefix order of the QBF for making decisions. We investigate an alternative model for QCDCL solving where decisions can be made in arbitrary order. The resulting system QCDCL^ANY is still sound and terminating, but does not necessarily allow one to always learn asserting clauses or cubes. To address this potential drawback, we additionally introduce two subsystems that guarantee to always learn asserting clauses (QCDCL^UNI-ANY) and asserting cubes (QCDCL^EXI-ANY), respectively. We model all four approaches by formal proof systems and show that QCDCL^UNI-ANY is exponentially better than QCDCL on false formulas, whereas QCDCL^EXI-ANY is exponentially better than QCDCL on true QBFs. Technically, this involves constructing specific QBF families and showing lower and upper bounds in the respective proof systems. We complement our theoretical study with some initial experiments that confirm our theoretical findings.

Link to Repositum

Hardness of Random Reordered Encodings of Parity for Resolution and CDCL
Chew, Leroy, De Colnet, Alexis, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 38th AAAI Conference on Artificial Intelligence; Vol: 38; Pages: 7978-7986
Show Abstract
Parity reasoning is challenging for Conflict-Driven Clause Learning (CDCL) SAT solvers. This has been observed even for simple formulas encoding two contradictory parity constraints with different variable orders (Chew and Heule 2020). We provide an analytical explanation for their hardness by showing that they require exponential resolution refutations with high probability when the variable order is chosen at random. We obtain this result by proving that these formulas, which are known to be Tseitin formulas, have Tseitin graphs of linear treewidth with high probability. Since such Tseitin formulas require exponential resolution proofs, our result follows. We generalize this argument to a new class of formulas that capture a basic form of parity reasoning involving a sum of two random parity constraints with random orders. Even when the variable order for the sum is chosen favorably, these formulas remain hard for resolution. In contrast, we prove that they have short DRAT refutations. We show experimentally that the running time of CDCL SAT solvers on both classes of formulas grows exponentially with their treewidth.
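As a hedged illustration of the kind of formula studied here, the sketch below chain-encodes two contradictory parity constraints over the same variables, using different variable orders, into CNF with Tseitin auxiliary variables, and checks a tiny instance by brute force. The encoding and the checker are generic illustrations, not the benchmark generator of Chew and Heule:

```python
from itertools import product

def xor_clauses(a, b, c):
    """CNF clauses enforcing c = a XOR b (Tseitin encoding)."""
    return [[-a, -b, -c], [a, b, -c], [a, -b, c], [-a, b, c]]

def parity_cnf(xs, rhs, next_var):
    """Chain-encode x1 ^ ... ^ xn = rhs; returns (clauses, next free var)."""
    clauses = []
    prev = xs[0]
    for x in xs[1:]:
        t = next_var
        next_var += 1
        clauses += xor_clauses(prev, x, t)
        prev = t
    clauses.append([prev] if rhs else [-prev])  # force chain output to rhs
    return clauses, next_var

n = 4
xs = list(range(1, n + 1))
c1, nv = parity_cnf(xs, 1, n + 1)               # parity = 1, natural order
c2, nv = parity_cnf(list(reversed(xs)), 0, nv)  # parity = 0, reversed order
cnf = c1 + c2
nvars = nv - 1

def satisfiable(cnf, nvars):
    """Brute-force SAT check, feasible only for tiny instances."""
    for bits in product([False, True], repeat=nvars):
        assign = {i + 1: bits[i] for i in range(nvars)}
        if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in cnf):
            return True
    return False

print(satisfiable(cnf, nvars))  # contradictory parities -> False
```

Each constraint alone is easy, but their conjunction is unsatisfiable, and resolution must certify this across the two mismatched variable orders.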

Link to Repositum

Satisfiability Modulo User Propagators
Fazekas, Katalin, Niemetz, Aina, Preiner, Mathias, Kirchweger, Markus, Szeider, Stefan, Biere, Armin
Type: Article; In: Journal of Artificial Intelligence Research; Vol: 81; Pages: 989-1017
Show Abstract
Modern SAT solvers are often integrated as sub-reasoning engines into more complex tools to address problems beyond the Boolean satisfiability problem. Consider, for example, solvers for Satisfiability Modulo Theories (SMT), combinatorial optimization, model enumeration, and model counting. There, the SAT solver can often provide relevant information beyond the satisfiability answer, and the domain knowledge of the embedding system, such as symmetry properties or theory axioms, may benefit the CDCL search. However, this knowledge can often not be efficiently represented in clausal form. This paper proposes a general interface to inspect and influence the internal behaviour of CDCL SAT solvers. The aim is to capture the essential functionalities that simplify and improve use cases requiring a more fine-grained interaction with the SAT solver than provided via the standard IPASIR interface. For our experiments, the state-of-the-art SAT solver CaDiCaL is extended with the proposed interface and evaluated on two representative use cases: enumerating graphs within the SAT modulo Symmetries framework (SMS), and as the main CDCL(T) SAT engine of the SMT solver cvc5.

Link to Repositum

Efficient Approximation of Fractional Hypertree Width
Korchemna, Viktoria, Lokshtanov, Daniel, Saurabh, Saket, Surianarayanan, Vaishali, Xue, Jie
Type: Inproceedings; In: 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS); Pages: 754-779
Show Abstract
We give two new approximation algorithms to compute the fractional hypertree width of an input hypergraph. The first algorithm takes as input an n-vertex m-edge hypergraph $H$ of fractional hypertree width at most $\omega$, runs in polynomial time and produces a tree decomposition of $H$ of fractional hypertree width $\mathcal{O}(\omega\log n\log\omega)$, i.e., it is an $\mathcal{O}(\log n\log\omega)$-approximation algorithm. As an immediate corollary this yields polynomial-time $\mathcal{O}(\log^{2}n\log\omega)$-approximation algorithms for (generalized) hypertree width as well. To the best of our knowledge our algorithm is the first non-trivial polynomial-time approximation algorithm for fractional hypertree width and (generalized) hypertree width, as opposed to algorithms that run in polynomial time only when $\omega$ is considered a constant. For hypergraphs where every pair of hyperedges has at most $\eta$ vertices in common, the algorithm outputs a hypertree decomposition with fractional hypertree width $\mathcal{O}(\eta\omega^{2}\log\omega)$ and generalized hypertree width $\mathcal{O}(\eta\omega^{2}\log\omega(\log\eta+\log\omega))$. This ratio is comparable with the recent algorithm of Lanzinger and Razgon [STACS 2024], which produces a hypertree decomposition with generalized hypertree width $\mathcal{O}(\omega^{2}(\omega+\eta))$, but uses time (at least) exponential in $\eta$ and $\omega$. The second algorithm runs in time $n^{\omega}m^{\mathcal{O}(1)}$ and produces a tree decomposition of $H$ of fractional hypertree width $\mathcal{O}(\omega\log^{2}\omega)$. This significantly improves over the $(n+m)^{\mathcal{O}(\omega^{3})}$ time algorithm of Marx [ACM TALG 2010], which produces a tree decomposition of fractional hypertree width $\mathcal{O}(\omega^{3})$, both in terms of running time and the approximation ratio.
Our main technical contribution, and the key insight behind both algorithms, is a variant of the classic Menger's Theorem for clique separators in graphs: For every graph $G$, vertex sets $A$ and $B$, family $\mathcal{F}$ of cliques in $G$, and positive rational $f$, either there exists a subfamily of $\mathcal{O}(f \cdot \log^{2}n)$ cliques in $\mathcal{F}$ whose union separates $A$ from $B$, or there exist $f\cdot\log\vert \mathcal{F}\vert$ paths from $A$ to $B$ such that no clique in $\mathcal{F}$ intersects more than $\log\vert \mathcal{F}\vert$ paths.

Link to Repositum

Problems in NP Can Admit Double-Exponential Lower Bounds When Parameterized by Treewidth or Vertex Cover
Foucaud, Florent, Galby, Esther, Khazaliya, Liana, Li, Shaohua, Mc Inerney, Fionn Aidan, Sharma, Roohani, Tale, Prafullkumar
Type: Inproceedings; In: 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024); Pages: 66:1-66:19
Show Abstract
Treewidth serves as an important parameter that, when bounded, yields tractability for a wide class of problems. For example, graph problems expressible in Monadic Second Order (MSO) logic and Quantified SAT or, more generally, Quantified CSP, are fixed-parameter tractable parameterized by the treewidth of the input’s (primal) graph plus the length of the MSO-formula [Courcelle, Information & Computation 1990] and the quantifier rank [Chen, ECAI 2004], respectively. The algorithms generated by these (meta-)results have running times whose dependence on treewidth is a tower of exponents. A conditional lower bound by Fichte, Hecher, and Pfandler [LICS 2020] shows that, for Quantified SAT, the height of this tower is equal to the number of quantifier alternations. These types of lower bounds, which show that at least double-exponential factors in the running time are necessary, exhibit the extraordinary level of computational hardness for such problems, and are rare in the current literature: there are only a handful of such lower bounds (for treewidth and vertex cover parameterizations) and all of them are for problems that are #NP-complete, Σ^p_2-complete, Π^p_2-complete, or complete for even higher levels of the polynomial hierarchy. Our results demonstrate, for the first time, that it is not necessary to go higher up in the polynomial hierarchy to achieve double-exponential lower bounds: we derive double-exponential lower bounds in the treewidth (tw) and the vertex cover number (vc), for natural, important, and well-studied NP-complete graph problems. Specifically, we design a technique to obtain such lower bounds and show its versatility by applying it to three different problems: Metric Dimension, Strong Metric Dimension, and Geodetic Set. We prove that these problems do not admit 2^{2^{o(tw)}}·n^{O(1)}-time algorithms, even on bounded diameter graphs, unless the ETH fails (here, n is the number of vertices in the graph).
In fact, for Strong Metric Dimension, the double-exponential lower bound holds even for the vertex cover number. We further complement all our lower bounds with matching (and sometimes non-trivial) upper bounds.

Link to Repositum

Scheduling jobs using queries to interactively learn human availability times
Varga, Johannes, Raidl, Günther R., Rönnberg, Elina, Rodemann, Tobias
Type: Article; In: COMPUTERS & OPERATIONS RESEARCH; Vol: 167
Show Abstract
The solution to a job scheduling problem that involves humans as well as some other shared resource has to consider the humans’ availability times. For practical acceptance of a scheduling tool, it is crucial that the interaction with the humans is kept simple and to a minimum. It is rarely practical to ask users to fully specify their availability times or to let them enumerate all possible starting times for their jobs. In the scenario we are considering, users initially only propose a single starting time for each of their jobs, and a feasible and optimized schedule shall then be found within a small number of interaction rounds. In each such interaction round, our scheduling approach may propose to each user a small number of alternative time intervals for scheduling the user's jobs, and the user may then accept or reject these. To make the best out of these limited interaction possibilities, we propose an approach that utilizes integer linear programming and an exact and computationally efficient probability calculation for the users’ availabilities assuming two different stochastic models. In this way, educated proposals of alternative time intervals for performing the jobs are determined based on the computed availability probabilities and the improvements these time intervals would enable. The approach is experimentally evaluated on a variety of artificial benchmark scenarios, and different variants are compared with each other and to diverse baselines. Results show that an initial schedule can usually be quickly improved over a few interaction rounds even when assuming a quite simple stochastic model, and the final schedule may come close to the solution of the full-knowledge case despite the strongly limited interaction.

Link to Repositum

Speeding Up Logic-Based Benders Decomposition by Strengthening Cuts with Graph Neural Networks
Varga, Johannes, Karlsson, Emil, Raidl, Günther R., Rönnberg, Elina, Lindsten, Fredrik, Rodemann, Tobias
Type: Inproceedings; In: Machine Learning, Optimization, and Data Science : 9th International Conference, LOD 2023, Grasmere, UK, September 22–26, 2023, Revised Selected Papers, Part I; Vol: 14505; Pages: 24-38
Show Abstract
Logic-based Benders decomposition is a technique to solve optimization problems to optimality. It works by splitting the problem into a master problem, which neglects some aspects of the problem, and a subproblem, which is used to iteratively produce cuts for the master problem to account for those aspects. It is critical for the computational performance that these cuts are strengthened, but the strengthening of cuts comes at the cost of solving additional subproblems. In this work we apply a graph neural network in an autoregressive fashion to approximate the compilation of an irreducible cut, which then only requires few postprocessing steps to ensure its validity. We test the approach on a job scheduling problem with a single machine and multiple time windows per job and compare to approaches from the literature. Results show that our approach is capable of considerably reducing the number of subproblems that need to be solved and hence the total computational effort.

Link to Repositum

Fixed-Parameter Algorithms for Computing Bend-Restricted RAC Drawings of Graphs
Brand, Cornelius, Ganian, Robert, Röder, Sebastian, Schager, Florian
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 28; Issue: 2; Pages: 131-150
Show Abstract
In a right-angle crossing (RAC) drawing of a graph, each edge is represented as a polyline and edge crossings must occur at an angle of exactly 90°, where the number of bends on such polylines is typically restricted in some way. While structural and topological properties of RAC drawings have been the focus of extensive research and particular attention has been paid to RAC drawings with at most 0, 1, 2 and 3 bends per edge, little was known about the boundaries of tractability for computing such drawings. In this paper, we initiate the study of bend-restricted RAC drawings from the viewpoint of parameterized complexity. In particular, we establish that computing a RAC drawing of an input graph G with at most b bends in total, where each edge e has a prescribed upper bound 0 <= β(e) <= 3 on the number of bends it can support (or determining that none exists), is: fixed-parameter tractable parameterized by the feedback edge number of G, and fixed-parameter tractable parameterized by b plus the vertex cover number of G.

Link to Repositum

Exact methods for the Selective Assessment Routing Problem
Salvà Soler, Joan, Hemmelmayr, Vera, Raidl, Günther R.
Type: Article; In: Central European Journal of Operations Research
Show Abstract
The Selective Assessment Routing Problem (SARP) is a problem in humanitarian logistics addressing the site selection and routing decisions of rapid needs assessment teams which aim to evaluate the post-disaster conditions of different community groups, each carrying a distinct characteristic. The aim is to construct an assessment plan that maximizes the covering of different characteristics in a balanced way. We explore exact approaches based on mixed integer linear programming. Different mathematical formulations are presented, and theoretical results regarding their strengths are derived. The models are experimentally evaluated on a set of test instances including a real-world scenario.

Link to Repositum

A Simulated Annealing Based Approach for the Roman Domination Problem
Greilhuber, Jakob, Schober, Sophia, Iurlano, Enrico, Raidl, Günther R.
Type: Inproceedings; In: Metaheuristics and Nature Inspired Computing : 9th International Conference, META 2023, Marrakech, Morocco, November 1–4, 2023, Revised Selected Papers; Vol: 2016; Pages: 28-43
Show Abstract
The Roman Domination Problem is an NP-hard combinatorial optimization problem on an undirected simple graph. It represents scenarios where a resource shall be economically distributed over its vertices while guaranteeing that each vertex has either a resource itself or at least one neighbor with a sharable surplus resource. We propose several (meta-)heuristic approaches for solving this problem. First, a greedy construction heuristic for quickly generating feasible solutions is introduced. A special feature of this heuristic is an optional advanced tiebreaker. This construction heuristic is then randomized and combined with a local search procedure to obtain a greedy randomized adaptive search procedure (GRASP). As an alternative, we further propose a simulated annealing (SA) algorithm to improve the solutions returned by the construction heuristic. As we observe different pros and cons for the GRASP and the SA, we finally combine them into a simulated annealing hybrid, which interleaves phases of greedy randomized construction and phases of simulated annealing. All algorithms are empirically evaluated on a large set of benchmark instances from the literature. We compare to an exact mixed integer linear programming model that is solved by Gurobi as well as to a variable neighborhood search from the literature. In particular the simulated annealing hybrid turns out to yield on average the best results, making it a new state-of-the-art method for the Roman domination problem.
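The feasibility condition described above is simple to state in code. The sketch below checks Roman domination and runs a naive greedy construction on a 5-cycle; it is a minimal illustration of the problem, not the authors' construction heuristic, GRASP, or simulated annealing hybrid:

```python
def is_roman_dominating(adj, f):
    """Every vertex with f[v] == 0 needs a neighbor u with f[u] == 2."""
    return all(f[v] > 0 or any(f[u] == 2 for u in adj[v]) for v in adj)

def greedy_roman(adj):
    """Naive greedy: place 2s on vertices covering the most still-uncovered
    vertices; give any leftover uncovered vertices a 1."""
    f = {v: 0 for v in adj}
    uncovered = set(adj)
    while uncovered:
        v = max(adj, key=lambda v: len((adj[v] | {v}) & uncovered))
        gain = len((adj[v] | {v}) & uncovered)
        if gain <= 1:
            break  # a 2 no longer pays off; finish with 1s
        f[v] = 2
        uncovered -= adj[v] | {v}
    for v in uncovered:
        f[v] = 1
    return f

# 5-cycle: the Roman domination number is 4
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
f = greedy_roman(adj)
print(is_roman_dominating(adj, f), sum(f.values()))
```

A metaheuristic such as simulated annealing would then perturb assignments like `f` while tracking the weight `sum(f.values())` and repairing feasibility.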

Link to Repositum

Minimizing Switches in Cased Graph Drawings
Ganian, Robert, Nöllenburg, Martin, Röder, Sebastian
Type: Inproceedings; In: 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024); Vol: 320; Pages: 1-43
Show Abstract
In cased drawings of graphs, edges are drawn in front of others in order to decrease the negative impact of crossings on readability. In this context, a switch on an edge is defined as two consecutive crossings, where the edge is drawn in the front at one crossing and behind another edge at the next crossing. We investigate the problem of minimizing the maximum number of switches on any edge - both in a fixed drawing as well as for non-embedded graphs. We resolve an open question by Eppstein, van Kreveld, Mumford, and Speckmann (2009) by establishing the NP-hardness of minimizing the number of switches in a fixed drawing, provide a fixed-parameter algorithm for this problem, and obtain a full characterization of the problem for non-embedded graphs.

Link to Repositum

Revisiting ILP Models for Exact Crossing Minimization in Storyline Drawings
Dobler, Alexander, Jünger, Michael, Jünger, Paul J., Meffert, Julian, Mutzel, Petra, Nöllenburg, Martin
Type: Inproceedings; In: 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024); Vol: 320; Pages: 1-19
Show Abstract
Storyline drawings are a popular visualization of interactions of a set of characters over time, e.g., to show participants of scenes in a book or movie. Characters are represented as x-monotone curves that converge vertically for interactions and diverge otherwise. Combinatorially, the task of computing storyline drawings reduces to finding a sequence of permutations of the character curves for the different time points, with the primary objective being crossing minimization of the induced character trajectories. In this paper, we revisit exact integer linear programming (ILP) approaches for this NP-hard problem. By enriching previous formulations with additional problem-specific insights and new heuristics, we obtain exact solutions for an extended new benchmark set of larger and more complex instances than had been used before. Our experiments show that our enriched formulations lead to better performing algorithms when compared to state-of-the-art modelling techniques. In particular, our best algorithms are on average 2.6-3.2 times faster than the state-of-the-art and succeed in solving complex instances that could not be solved before within the given time limit. Further, we show in an ablation study that our enrichment components contribute considerably to the performance of the new ILP formulation.
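The combinatorial objective above can be made concrete: between two consecutive time points, a pair of characters whose relative order flips must cross, so the induced crossings equal the Kendall tau distance between the permutations. A minimal sketch of this counting (illustrative only, not the paper's ILP machinery):

```python
def pairwise_crossings(perm_a, perm_b):
    """Character pairs ordered differently in consecutive time steps must
    cross: the Kendall tau distance between the two permutations."""
    pos = {c: i for i, c in enumerate(perm_b)}
    seq = [pos[c] for c in perm_a]
    # count inversions in seq
    return sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
               if seq[i] > seq[j])

def storyline_crossings(perms):
    """Total crossings induced by a sequence of character permutations."""
    return sum(pairwise_crossings(a, b) for a, b in zip(perms, perms[1:]))

perms = [["a", "b", "c"], ["b", "a", "c"], ["c", "b", "a"]]
print(storyline_crossings(perms))  # -> 3
```

Crossing minimization then means choosing the permutation at each time point, subject to the interaction constraints, so that this total is smallest, which is the NP-hard part.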

Link to Repositum

Visualizing Extensions of Argumentation Frameworks as Layered Graphs
Nöllenburg, Martin, Pirker, Christian, Rapberger, Anna, Woltran, Stefan, Wulms, Jules
Type: Preprint
Show Abstract
The visualization of argumentation frameworks (AFs) is crucial for enabling a wide applicability of argumentative tools. However, their visualization is often considered only as an accompanying part of tools for computing semantics and standard graphical representations are used. We introduce a new visualization technique that draws an AF, together with an extension (as part of the input), as a 3-layer graph layout. Our technique supports the user to more easily explore the visualized AF, better understand extensions, and verify algorithms for computing semantics. To optimize the visual clarity and aesthetics of this layout, we propose to minimize edge crossings in our 3-layer drawing. We do so by an exact ILP-based approach, but also propose a fast heuristic pipeline. Via a quantitative evaluation, we show that the heuristic is feasible even for large instances, while producing at most twice as many crossings as an optimal drawing in most cases.
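For intuition about layered crossing reduction, the sketch below applies the classic Sugiyama-style barycenter pass to one free layer of a 2-layer graph. This is a much simpler relative of the heuristic pipeline mentioned above, not the authors' 3-layer method; all names are illustrative:

```python
def barycenter_order(fixed_order, edges, free_layer):
    """Reorder the free layer by the average position of each vertex's
    neighbors in the fixed layer (classic barycenter heuristic)."""
    pos = {v: i for i, v in enumerate(fixed_order)}
    def bary(v):
        nbrs = [pos[u] for u, w in edges if w == v]
        return sum(nbrs) / len(nbrs) if nbrs else 0.0
    return sorted(free_layer, key=bary)

def count_crossings(fixed_order, edges, free_order):
    """Crossings in a 2-layer straight-line drawing with the given orders."""
    pos_f = {v: i for i, v in enumerate(fixed_order)}
    pos_g = {v: i for i, v in enumerate(free_order)}
    es = [(pos_f[u], pos_g[w]) for u, w in edges]
    return sum(1 for i in range(len(es)) for j in range(i + 1, len(es))
               if (es[i][0] - es[j][0]) * (es[i][1] - es[j][1]) < 0)

fixed = [1, 2, 3]
edges = [(1, "z"), (2, "y"), (3, "x")]
before = count_crossings(fixed, edges, ["x", "y", "z"])
order = barycenter_order(fixed, edges, ["x", "y", "z"])
print(before, "->", count_crossings(fixed, edges, order))  # 3 -> 0
```

An exact ILP would instead decide the relative order of every vertex pair with binary variables and transitivity constraints, which is what makes optimal solutions expensive.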

Link to Repositum

GdMetriX - A NetworkX Extension For Graph Drawing Metrics
Nöllenburg, Martin, Röder, Sebastian, Wallinger, Markus
Type: Inproceedings; In: 32nd International Symposium on Graph Drawing and Network Visualization (GD 2024); Vol: 320; Pages: 1-3
Show Abstract
NetworkX is a well-established Python library for network analysis. With gdMetriX, we aim to extend the functionality of NetworkX and provide common quality metrics used in the field of graph drawing, such as the number of crossings or the angular resolution. In addition, the package provides easy-to-use access to the graph datasets provided by the 'Graph Layout Benchmark Datasets' project from the Northeastern University Visualization Lab.
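To illustrate the kind of metric such a package computes (this is a self-contained sketch, not gdMetriX's actual API), the crossing number of a straight-line drawing can be counted with an orientation test over all edge pairs:

```python
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_cross(a, b, c, d):
    """Proper crossing of segments ab and cd (no shared endpoints)."""
    return (orient(a, b, c) * orient(a, b, d) < 0
            and orient(c, d, a) * orient(c, d, b) < 0)

def crossing_number(edges, pos):
    """Count pairwise edge crossings in a straight-line drawing."""
    count = 0
    for i, (u1, v1) in enumerate(edges):
        for u2, v2 in edges[i + 1:]:
            if {u1, v1} & {u2, v2}:
                continue  # adjacent edges cannot properly cross
            if segments_cross(pos[u1], pos[v1], pos[u2], pos[v2]):
                count += 1
    return count

# K4 drawn with all four vertices on a circle has exactly one crossing
pos = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(crossing_number(edges, pos))  # -> 1
```

A library version would take a NetworkX graph plus a position dictionary (as produced by its layout functions) instead of a raw edge list.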

Link to Repositum

Clustered Planarity Variants for Level Graphs
Fink, Simon Dominik, Pfretzschner, Matthias, Rutter, Ignaz, Sieper, Marie Diana
Type: Preprint
Show Abstract
We consider variants of the clustered planarity problem for level-planar drawings. So far, only convex clusters have been studied in this setting. We introduce two new variants that both insist on a level-planar drawing of the input graph but relax the requirements on the shape of the clusters. In unrestricted Clustered Level Planarity (uCLP) we only require that they are bounded by simple closed curves that enclose exactly the vertices of the cluster and cross each edge of the graph at most once. The problem y-monotone Clustered Level Planarity (y-CLP) requires that additionally it must be possible to augment each cluster with edges that do not cross the cluster boundaries so that it becomes connected while the graph remains level-planar, thereby mimicking a classic characterization of clustered planarity in the level-planar setting. We give a polynomial-time algorithm for uCLP if the input graph is biconnected and has a single source. By contrast, we show that y-CLP is hard under the same restrictions and it remains NP-hard even if the number of levels is bounded by a constant and there is only a single non-trivial cluster.

Link to Repositum

The k-Opt Algorithm for the Traveling Salesman Problem Has Exponential Running Time for k ≥ 5
Heimann, Sophia, Hoang, Hung P., Hougardy, Stefan
Type: Inproceedings; In: 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024); Vol: 297
Show Abstract
The k-Opt algorithm is a local search algorithm for the Traveling Salesman Problem. Starting with an initial tour, it iteratively replaces at most k edges in the tour with the same number of edges to obtain a better tour. Krentel (FOCS 1989) showed that the Traveling Salesman Problem with the k-Opt neighborhood is complete for the class PLS (polynomial time local search) and that the k-Opt algorithm can have exponential running time for any pivot rule. However, his proof requires k ≫ 1000 and has a substantial gap. We show the two properties above for a much smaller value of k, addressing an open question by Monien, Dumrauf, and Tscheuschner (ICALP 2010). In particular, we prove the PLS-completeness for k ≥ 17 and the exponential running time for k ≥ 5.
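For intuition about the local search studied here, the sketch below implements 2-Opt, the simplest member of the k-Opt family, with a first-improvement pivot rule; the paper's results concern k ≥ 5, and this toy version is illustrative only:

```python
import math

def tour_length(tour, pts):
    """Total length of the closed tour through the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse a tour segment while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # would remove the same pair of edges
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(new, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = new, True
    return tour

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
tour = [0, 1, 2, 3]            # self-crossing tour of the unit square
best = two_opt(tour, pts)
print(tour_length(best, pts))  # optimal square tour has length 4.0
```

Each iteration removes two edges and reconnects the tour, i.e., a 2-Opt move; the exponential-time constructions in the paper show that for k ≥ 5 such improvement sequences can be forced to be exponentially long regardless of the pivot rule.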

Link to Repositum

Circuits, Proofs and Propositional Model Counting
Chede, Sravanthi, Chew, Leroy Nicholas, Shukla, Anil
Type: Inproceedings; In: 44th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2024); Vol: 323; Pages: 18:1-18:23
Show Abstract
In this paper we present a new proof system framework CLIP (Circuit Linear Induction Proposition) for propositional model counting (#SAT). A CLIP proof firstly involves a Boolean circuit, calculating the cumulative function (or running count) of models counted up to a point, and secondly a propositional proof arguing for the correctness of the circuit. This concept is remarkably simple and CLIP is modular, so it allows us to use existing checking formats from propositional logic, especially strong proof systems. CLIP has polynomial-size proofs for XOR-pairs, which are known to require exponential-size proofs in MICE [16]. The existence of a strong proof system that can tackle these hard problems was posed as an open problem in Beyersdorff et al. [3]. In addition, CLIP systems can p-simulate all other existing #SAT proof systems (KCPS(#SAT) [8], CPOG [4], MICE). Furthermore, CLIP has a theoretical advantage over the other #SAT proof systems in the sense that CLIP only has lower bounds from its propositional proof system or if P^{#P} is not contained in P/poly, which is a major open problem in circuit complexity. CLIP uses unrestricted circuits in its proof as compared to the restricted structures used by the existing #SAT proof systems. In this way, CLIP avoids hardness or limitations due to circuit restrictions.

Link to Repositum

First-Order Model Checking on Monadically Stable Graph Classes
Dreier, Jan, Eleftheriadis, Ioannis, Mählmann, Nikolas, McCarty, Rose, Pilipczuk, Michał, Toruńczyk, Szymon
Type: Inproceedings; In: 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS); Pages: 21-30
Show Abstract
A graph class C is called monadically stable if one cannot interpret, in first-order logic, arbitrarily large linear orders in colored graphs from C. We prove that the model checking problem for first-order logic is fixed-parameter tractable on every monadically stable graph class. This extends the results of [Grohe, Kreutzer, Siebertz; J. ACM '17] for nowhere dense classes and of [Dreier, Mählmann, Siebertz; STOC '23] for structurally nowhere dense classes to all monadically stable classes. This result is complemented by a hardness result showing that monadic stability is precisely the dividing line between tractability and intractability of first-order model checking on hereditary classes that are edge-stable, i.e., that exclude some half-graph as a semi-induced subgraph. Precisely, we prove that for every hereditary graph class C that is edge-stable but not monadically stable, first-order model checking is AW[*]-hard on C, and W[1]-hard when restricted to existential sentences. This confirms, in the special case of edge-stable classes, an open conjecture that the notion of monadic dependence delimits the tractability of first-order model checking on hereditary classes of graphs. For our tractability result, we first prove that monadically stable graph classes have almost linear neighborhood complexity, by combining tools from stability theory and from sparsity theory. We then use this result to construct sparse neighborhood covers for monadically stable graph classes, which provides the missing ingredient for the algorithm of [Dreier, Mählmann, Siebertz; STOC '23]. The key component of this construction is the usage of orders with low crossing number [Welzl; SoCG '88], a tool from the area of range queries. For our hardness result, we first prove a new characterization of monadically stable graph classes in terms of forbidden induced subgraphs.
We then use this characterization to show that in hereditary classes that are edge-stable but not monadically stable, one can efficiently interpret the class of all graphs using only existential formulas; this implies W[1]-hardness of model checking already for existential formulas.

Link to Repositum

2023
Bitonic st-orderings for upward planar graphs: splits and bends in the variable embedding scenario
Angelini, Patrizio, Bekos, Michael A., Förster, Henry, Gronemann, Martin
Type: Article; In: Algorithmica; Vol: 85; Pages: 2667-2692
Show Abstract
Bitonic st-orderings for st-planar graphs were introduced as a method to cope with several graph drawing problems. Notably, they have been used to obtain the best-known upper bound on the number of bends for upward planar polyline drawings with at most one bend per edge in polynomial area. For an st-planar graph that does not admit a bitonic st-ordering, one may split certain edges such that for the resulting graph such an ordering exists. Since each split is interpreted as a bend, one is usually interested in splitting as few edges as possible. While this optimization problem admits a linear-time algorithm in the fixed embedding setting, it remains open in the variable embedding setting. We close this gap in the literature by providing a linear-time algorithm that optimizes over all embeddings of the input st-planar graph. The best-known lower bound on the number of required splits of an st-planar graph with n vertices is n - 3. However, it is possible to compute a bitonic st-ordering without any split for the st-planar graph obtained by reversing the orientation of all edges. In terms of upward planar polyline drawings in polynomial area, the former translates into n - 3 bends, while the latter into no bends. We show that this idea cannot always be exploited by describing an st-planar graph that needs at least n - 5 splits in both orientations. We provide analogous bounds for graphs with small degree. Finally, we further investigate the relationship between splits in bitonic st-orderings and bends in upward planar polyline drawings with polynomial area, by providing bounds on the number of bends in such drawings.

Link to Repositum

Block Crossings in One-Sided Tanglegrams
Dobler, Alexander, Nöllenburg, Martin
Type: Inproceedings; In: Algorithms and Data Structures : 18th International Symposium, WADS 2023, Montreal, QC, Canada, July 31 – August 2, 2023, Proceedings; Vol: 14079; Pages: 386-400
Show Abstract
Tanglegrams are drawings of two rooted binary phylogenetic trees and a matching between their leaf sets. The trees are drawn crossing-free on opposite sides with their leaf sets facing each other on two vertical lines. Instead of minimizing the number of pairwise edge crossings, we consider the problem of minimizing the number of block crossings, that is, two bundles of lines crossing each other locally. With one tree fixed, the leaves of the second tree can be permuted according to its tree structure. We give a complete picture of the algorithmic complexity of minimizing block crossings in one-sided tanglegrams by showing NP-completeness, constant-factor approximations, and a fixed-parameter algorithm. We also state first results for non-binary trees.

Link to Repositum

A Dynamic MaxSAT-based Approach to Directed Feedback Vertex Sets
Kiesel, Rafael, Schidler, André
Type: Inproceedings; In: 2023 Proceedings of the Symposium on Algorithm Engineering and Experiments (ALENEX); Pages: 39-52
Show Abstract
We propose a new approach to the Directed Feedback Vertex Set Problem (DFVSP), where the input is a directed graph and the solution is a minimum set of vertices whose removal makes the graph acyclic. Our approach, implemented in the solver DAGer, is based on two novel contributions: Firstly, we add a wide range of data reductions that are partially inspired by reductions for the similar vertex cover problem. For this, we give a theoretical basis for lifting reductions from vertex cover to DFVSP but also incorporate novel ideas into strictly more general and new DFVSP reductions. Secondly, we propose dynamically encoding DFVSP in propositional logic using cycle propagation for improved performance. Cycle propagation builds on the idea that already a limited number of the constraints in a propositional encoding is usually sufficient for finding an optimal solution. Our algorithm, therefore, starts with a small number of constraints and cycle propagation adds additional constraints when necessary. We propose an efficient integration of cycle propagation into the workflow of MaxSAT solvers, further improving the performance of our algorithm. Our extensive experimental evaluation shows that DAGer significantly outperforms the state-of-the-art solvers and that our data reductions alone directly solve many of the instances.

Link to Repositum

CSP beyond tractable constraint languages
Dreier, Jan, Ordyniak, Sebastian, Szeider, Stefan
Type: Article; In: Constraints; Vol: 28; Issue: 3; Pages: 450-471
Show Abstract
The constraint satisfaction problem (CSP) is among the most studied computational problems. While NP-hard, many tractable subproblems have been identified (Bulatov 2017, Zhuk 2017). Backdoors, introduced by Williams, Gomes, and Selman (2003), gradually extend such a tractable class to all CSP instances of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a CSP instance and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021) for SAT, is a more refined distance measure, which admits the parallel utilization of different backdoor variables. Bounded backdoor size implies bounded backdoor depth, but there are instances of constant backdoor depth and arbitrarily large backdoor size. Dreier, Ordyniak, and Szeider (2022) provided fixed-parameter algorithms for finding backdoors of small depth into the classes of Horn and Krom formulas. In this paper, we consider backdoor depth for CSP. We consider backdoors w.r.t. tractable subproblems CΓ of the CSP defined by a constraint language Γ, i.e., where all the constraints use relations from the language Γ. Building upon Dreier et al.'s game-theoretic approach and their notion of separator obstructions, we show that for any finite, tractable, semi-conservative constraint language Γ, the CSP is fixed-parameter tractable parameterized by the backdoor depth into CΓ plus the domain size. With backdoors of low depth, we reach classes of instances that require backdoors of arbitrarily large size. Hence, our results strictly generalize several known results for CSP that are based on backdoor size.

Link to Repositum

Optimal Capacity Modification for Many-To-One Matching Problems
Chen, Jiehua, Csáji, Gergely
Type: Inproceedings; In: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems; Pages: 2880-2882
Show Abstract
We consider many-to-one matching problems, where one side consists of students and the other side of schools with capacity constraints. We study how to optimally increase the capacities of the schools so as to obtain a stable and perfect matching (i.e., every student is matched) or a matching that is stable and Pareto-efficient for the students. We consider two common optimality criteria, one aiming to minimize the sum of capacity increases of all schools (abbrv. as MinSum) and the other aiming to minimize the maximum capacity increase of any school (abbrv. as MinMax). We obtain a complete picture in terms of computational complexity: Except for stable and perfect matchings under the MinMax criterion, which is polynomial-time solvable, all three remaining problems are NP-hard. We further investigate the parameterized complexity and approximability and find that achieving stable and Pareto-efficient matchings via minimal capacity increases is much harder than achieving stable and perfect matchings.

Link to Repositum

On Translations between ML Models for XAI Purposes
de Colnet, Alexis, Marquis, Pierre
Type: Inproceedings; In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23); Pages: 3158-3166
Show Abstract
In this paper, the succinctness of various ML models is studied. To be more precise, the existence of polynomial-time and polynomial-space translations between representation languages for classifiers is investigated. The languages that are considered include decision trees, random forests, several types of boosted trees, binary neural networks, Boolean multilayer perceptrons, and various logical representations of binary classifiers. We provide a complete map indicating for every pair of languages C, C' whether or not a polynomial-time / polynomial-space translation exists from C to C'. We also explain how to take advantage of the resulting map for XAI purposes.

Link to Repositum

An Evolutionary Approach for Scheduling a Fleet of Shared Electric Vehicles
Limmer, Steffen, Varga, Johannes, Raidl, Günther R.
Type: Inproceedings; In: Applications of Evolutionary Computation : 26th European Conference, EvoApplications 2023, Held as Part of EvoStar 2023, Brno, Czech Republic, April 12–14, 2023, Proceedings; Vol: 13989; Pages: 3-18
Show Abstract
In the present paper, we investigate the management of a fleet of electric vehicles. We propose a hybrid evolutionary approach for solving the problem of simultaneously planning the charging of electric vehicles and the assignment of electric vehicles to a set of reservations. The reservation assignment is optimized with an evolutionary algorithm while linear programming is used to compute optimal charging schedules. The evolutionary algorithm uses an indirect encoding and a problem-specific crossover operator. Furthermore, we propose the use of a surrogate fitness function. Experimental results on problem instances with up to 100 vehicles and 1600 reservations show that the proposed approach is able to notably outperform two approaches based on mixed integer linear programming.

Link to Repositum

LinSets.zip: Compressing Linear Set Diagrams
Wallinger, Markus, Dobler, Alexander, Nöllenburg, Martin
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 29; Issue: 6; Pages: 2875-2887
Show Abstract
Linear diagrams are used to visualize set systems by depicting set memberships as horizontal line segments in a matrix, where each set is represented as a row and each element as a column. Each such line segment of a set is shown in a contiguous horizontal range of cells of the matrix indicating that the corresponding elements in the columns belong to the set. As each set occupies its own row in the matrix, the total height of the resulting visualization is as large as the number of sets in the instance. Such a linear diagram can be visually sparse and intersecting sets containing the same element might be represented by distant rows. To alleviate such undesirable effects, we present LinSets.zip, a new approach that achieves a more space-efficient representation of linear diagrams. First, we minimize the total number of gaps in the horizontal segments by reordering columns, a criterion that has been shown to increase readability in linear diagrams. The main difference of LinSets.zip to linear diagrams is that multiple non-intersecting sets can be positioned in the same row of the matrix. Furthermore, we present several different rendering variations for a matrix-based representation that utilize the proposed row compression. We implemented the different steps of our approach in a visualization pipeline using integer-linear programming, and suitable heuristics aiming at sufficiently fast computations in practice. We conducted both a quantitative evaluation and a small-scale user experiment to compare the effects of compressing linear diagrams.

Link to Repositum

Large neighborhood search for electric vehicle fleet scheduling
Limmer, Steffen, Varga, Johannes, Raidl, Günther
Type: Article; In: Energies; Vol: 16; Issue: 12
Show Abstract
This work considers the problem of planning how a fleet of shared electric vehicles is charged and used for serving a set of reservations. While exact approaches can be used to efficiently solve small to medium-sized instances of this problem, heuristic approaches have been demonstrated to be superior in larger instances. The present work proposes a large neighborhood search approach for solving this problem, which employs a mixed integer linear programming-based repair operator. Three variants of the approach using different destroy operators are evaluated on large instances of the problem. The experimental results show that the proposed approach significantly outperforms earlier state-of-the-art methods on this benchmark set by obtaining solutions with up to 8.5% better objective values.

Link to Repositum

Isomorph-Free Generation of Combinatorial Objects with SAT Modulo Symmetries
Szeider, Stefan
Type: Presentation
Show Abstract
SAT modulo Symmetries (SMS) is a framework for the exhaustive isomorph-free generation of combinatorial objects with a prescribed property. SMS relies on the tight integration of a CDCL SAT solver with a custom dynamic symmetry-breaking algorithm that iteratively refines an ordered partition of the generated object's elements. SMS supports the generation of DRAT proofs and thus provides an additional layer of confidence in the obtained results. This talk will discuss the basic concepts of SMS and review some of its applications, including extremal graph problems like planar Turán numbers, the Earth-Moon coloring problem, Rota's matroid basis conjecture, and the Erdős-Faber-Lovász conjecture on hypergraphs.

Link to Repositum

Game Implementation: What Are the Obstructions?
Chen, Jiehua, Layegh Khavidaki, Seyedeh Negar, Haydn, Sebastian Vincent, Simola, Sofia, Sorge, Manuel
Type: Inproceedings; In: Proceedings of the 37th AAAI Conference on Artificial Intelligence; Vol: 37; Issue: 5; Pages: 5557-5564
Show Abstract
In many applications, we want to influence the decisions of independent agents by designing incentives for their actions. We revisit a fundamental problem in this area, called GAME IMPLEMENTATION: Given a game in standard form and a set of desired strategies, can we design a set of payment promises such that if the players take the payment promises into account, then all undominated strategies are desired? Furthermore, we aim to minimize the cost, that is, the worst-case amount of payments. We study the tractability of computing such payment promises and determine more closely what obstructions we may have to overcome in doing so. We show that GAME IMPLEMENTATION is NP-hard even for two players, solving in particular a long-standing open question and suggesting more restrictions are necessary to obtain tractability results. We thus study the regime in which players have only a small constant number of strategies and obtain the following. First, this case remains NP-hard even if each player's utility depends only on three others. Second, we repair a flawed efficient algorithm for the case of both a small number of strategies and a small number of players. Among further results, we characterize sets of desired strategies that can be implemented at zero cost as a generalization of Nash equilibria.

Link to Repositum

Advancing Stability in Matching Markets: Multi-Modal Preferences and Beyond
Chen, Jiehua
Type: Presentation
Show Abstract
In this talk, we explore two recent advances that challenge traditional assumptions and broaden our understanding of what stability can entail. First, we consider the impact of multi-modal preferences, a scenario in which each agent may possess multiple preference lists, potentially based on different criteria. We introduce three natural stability concepts for this setting, investigate their mutual relations, and focus on the computational complexity associated with determining stable matchings under these concepts. Next, we shift to novel quantitative stability notions, robustness and near-stability, which respectively strengthen and relax the classical stability definition. These new metrics not only facilitate a fine-grained stability analysis but also enable the exploration of trade-offs between stability and social optimality. We probe the computational challenges posed by these nuanced stability perspectives by showing that determining robustness is easy while finding a socially optimal and nearly stable matching is hard.

Link to Repositum

SAT-boosted tabu search for coloring massive graphs
Schidler, André, Szeider, Stefan
Type: Article; In: ACM Journal on Experimental Algorithmics; Vol: 28
Show Abstract
Graph coloring is the problem of coloring the vertices of a graph with as few colors as possible, avoiding monochromatic edges. It is one of the most fundamental NP-hard computational problems. For decades researchers have developed exact and heuristic methods for graph coloring. While methods based on propositional satisfiability (SAT) feature prominently among these exact methods, the encoding size is prohibitive for large graphs. For such graphs, heuristic methods have been proposed, with tabu search among the most successful ones. In this article, we enhance tabu search for graph coloring within the SAT-based local improvement (SLIM) framework. Our hybrid algorithm incrementally improves a candidate solution by repeatedly selecting small subgraphs and coloring them optimally with a SAT solver. This approach scales to dense graphs with several hundred thousand vertices and over 1.5 billion edges. Our experimental evaluation shows that our hybrid algorithm beats state-of-the-art methods on large dense graphs.

Link to Repositum

New Complexity-Theoretic Frontiers of Tractability for Neural Network Training
Brand, Cornelius, Ganian, Robert, Rocton, Mathis Teva
Type: Inproceedings; In: 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Show Abstract
In spite of the fundamental role of neural networks in contemporary machine learning research, our understanding of the computational complexity of optimally training neural networks remains limited even when dealing with the simplest kinds of activation functions. Indeed, while there have been a number of very recent results that establish ever-tighter lower bounds for the problem under linear and ReLU activation functions, little progress has been made towards the identification of novel polynomial-time tractable network architectures. In this article we obtain novel algorithmic upper bounds for training linear- and ReLU-activated neural networks to optimality which push the boundaries of tractability for these problems beyond the previous state of the art.

Link to Repositum

Approaching the Traveling Tournament Problem with Randomized Beam Search
Frohner, Nikolaus, Neumann, Bernhard, Pace, Giulio, Raidl, Günther R.
Type: Article; In: Evolutionary Computation; Vol: 31; Issue: 3; Pages: 233-257
Show Abstract
The traveling tournament problem is a well-known sports league scheduling problem famous for its practical hardness. Given an even number of teams with symmetric distances between their venues, a double round-robin tournament has to be scheduled minimizing the total travel distances over all teams. We consider the most common constrained variant without repeaters and with a streak limit of three, for which we study a beam search approach based on a state-space formulation guided by heuristics derived from different lower bound variants. We solve the arising capacitated vehicle routing subproblems either exactly for small- to medium-sized instances up to 18 teams or heuristically also for larger instances up to 24 teams. In a randomized variant of the search, we employ random team ordering and add small amounts of Gaussian noise to the nodes' guidance for diversification when multiple runs are performed. This allows for a simple yet effective parallelization of the beam search. A final comparison is done on the NL, CIRC, NFL, and GALAXY benchmark instances with 12 to 24 teams, for which we report a mean gap difference to the best known feasible solutions of 1.2% and five new best feasible solutions.

Link to Repositum

MySemCloud: Semantic-aware Word Cloud Editing
Huber, Michael, Nöllenburg, Martin, Villedieu, Anaïs
Type: Inproceedings; In: 2023 IEEE 16th Pacific Visualization Symposium (PacificVis); Pages: 147-156
Show Abstract
Word clouds are a popular text visualization technique that summarizes an input text by displaying its most important words in a compact image. The traditional layout methods do not take proximity effects between words into account; this has been improved in semantic word clouds, where relative word placement is controlled by edges in a word similarity graph. We introduce MySemCloud, a new human-in-the-loop tool to visualize and edit semantic word clouds. MySemCloud lets users perform computer-assisted local moves of words, which improve or at least retain the semantic quality. To achieve this, we construct a word similarity graph on which a system of forces is applied to generate a compact initial layout with good semantic quality. The force system also allows us to maintain these attributes after each user interaction, as well as preserve the user's mental map. The tool provides algorithmic support for the editing operations to help the user enhance the semantic quality of the visualization, while adjusting it to their personal preference. We show that MySemCloud provides high user satisfaction as well as permits users to create layouts of higher quality than state-of-the-art semantic word cloud generation tools.

Link to Repositum

Transitions in Dynamic Point Labeling
Depian, Thomas, Li, Guangping, Nöllenburg, Martin, Wulms, Jules
Type: Inproceedings; In: 12th International Conference on Geographic Information Science (GIScience 2023); Vol: 277
Show Abstract
The labeling of point features on a map is a well-studied topic. In a static setting, the goal is to find a non-overlapping label placement for (a subset of) point features. In a dynamic setting, the set of point features and their corresponding labels change, and the labeling has to adapt to such changes. To aid the user in tracking these changes, we can use morphs, here called transitions, to indicate how a labeling changes. Such transitions have not gained much attention yet, and we investigate different types of transitions for labelings of points, most notably consecutive transitions and simultaneous transitions. We give (tight) bounds on the number of overlaps that can occur during these transitions. When each label has a (non-negative) weight associated to it, and each overlap imposes a penalty proportional to the weight of the overlapping labels, we show that it is NP-complete to decide whether the penalty during a simultaneous transition has weight at most k. Finally, in a case study, we consider geotagged Twitter data on a map, by labeling points with rectangular labels showing tweets. We developed a prototype implementation to evaluate different transition styles in practice, measuring both number of overlaps and transition duration.

Link to Repositum

Splitting Plane Graphs to Outerplanarity
Gronemann, Martin, Nöllenburg, Martin, Villedieu, Anaïs
Type: Inproceedings; In: WALCOM: Algorithms and Computation : 17th International Conference and Workshops, WALCOM 2023, Hsinchu, Taiwan, March 22–24, 2023, Proceedings; Vol: 13973; Pages: 217-228
Show Abstract
Vertex splitting replaces a vertex by two copies and partitions its incident edges amongst the copies. This problem has been studied as a graph editing operation to achieve desired properties with as few splits as possible, most often planarity, for which the problem is NP-hard. Here we study how to minimize the number of splits to turn a plane graph into an outerplane one. We tackle this problem by establishing a direct connection between splitting a plane graph to outerplanarity, finding a connected face cover, and finding a feedback vertex set in its dual. We prove NP-completeness for plane biconnected graphs, while we show that a polynomial-time algorithm exists for maximal planar graphs. Finally, we provide upper and lower bounds for certain families of maximal planar graphs.

Link to Repositum

Computing Twin-width with SAT and Branch & Bound
Schidler, André, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23); Pages: 2013-2021
Show Abstract
The graph width-measure twin-width recently attracted great attention because of its solving power and generality. Many prominent NP-hard problems are tractable on graphs of bounded twin-width if a certificate for the twin-width bound is provided as an input. Bounded twin-width subsumes other prominent structural restrictions such as bounded treewidth and bounded rank-width. Computing such a certificate is NP-hard itself, already for twin-width 4, and the only known implemented algorithm for twin-width computation is based on a SAT encoding. In this paper, we propose two new algorithmic approaches for computing twin-width that significantly improve the state of the art. Firstly, we develop a SAT encoding that is far more compact than the known encoding and consequently scales to larger graphs. Secondly, we propose a new Branch & Bound algorithm for twin-width that, on many graphs, is significantly faster than the SAT encoding. It utilizes a sophisticated caching system for partial solutions. Both algorithmic approaches are based on new conceptual insights into twin-width computation, including the reordering of contractions.

Link to Repositum

Co-Certificate Learning with SAT Modulo Symmetries
Kirchweger, Markus, Peitl, Tomáš, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23); Pages: 1944-1953
Show Abstract
We present a new SAT-based method for generating all graphs up to isomorphism that satisfy a given co-NP property. Our method extends the SAT Modulo Symmetry (SMS) framework with a technique that we call co-certificate learning. If SMS generates a candidate graph that violates the given co-NP property, we obtain a certificate for this violation, i.e., a 'co-certificate' for the co-NP property. The co-certificate gives rise to a clause that the SAT solver, serving as SMS's backend, learns as part of its CDCL procedure. We demonstrate that SMS plus co-certificate learning is a powerful method that allows us to improve the best-known lower bound on the size of Kochen-Specker vector systems, a problem that is central to the foundations of quantum mechanics and has been studied for over half a century. Our approach is orders of magnitude faster and scales significantly better than a recently proposed SAT-based method.

Link to Repositum

Efficient Algorithms for Monroe and CC Rules in Multi-Winner Elections with (Nearly) Structured Preferences
Chen, Jiehua, Hatschka, Christian, Simola, Sofia
Type: Inproceedings; In: ECAI 2023 : 26th European Conference on Artificial Intelligence, September 30–October 4, 2023, Kraków, Poland. Including 12th Conference on Prestigious Applications of Intelligent Systems (PAIS 2023). Proceedings; Vol: 372; Pages: 397-404
Show Abstract
We investigate winner determination for two popular proportional representation systems: the Monroe and Chamberlin-Courant (abbrv. CC) systems. Our study focuses on (nearly) single-peaked and single-crossing preferences. We show that for single-crossing approval preferences, winner determination of the Monroe rule is polynomial-time solvable, and for both rules, winner determination mostly admits FPT algorithms with respect to the number of voters to delete to obtain single-peaked or single-crossing preferences. Our results answer some complexity questions from the literature [19, 29, 22].

Link to Repositum

Computing optimal hypertree decompositions with SAT
Schidler, André, Szeider, Stefan
Type: Article; In: Artificial Intelligence; Vol: 325
Show Abstract
Hypertree width is a prominent hypergraph invariant with many algorithmic applications in constraint satisfaction and databases. We propose two novel characterisations for hypertree width in terms of linear orderings. We utilize these characterisations to obtain SAT, MaxSAT, and SMT encodings for computing the hypertree width exactly. We evaluate the encodings on an extensive set of benchmark instances and compare them to state-of-the-art exact methods for computing optimal hypertree width. Our results show that our approach outperforms these state-of-the-art algorithms.

Link to Repositum

Crossing Minimization in Time Interval Storylines
Dobler, Alexander, Nöllenburg, Martin, Stojanovic, Daniel, Villedieu, Anais, Wulms, Jules
Type: Inproceedings; In: 39th European Workshop on Computational Geometry : EuroCG2023 : Book of Abstracts; Pages: 36-37

Link to Repositum

Deontic Paradoxes in ASP with Weak Constraints
Ciabattoni, Agata, Eiter, Thomas, Hatschka, Christian
Type: Inproceedings; In: Proceedings 39th International Conference on Logic Programming; Pages: 367-380
Show Abstract
The rise of powerful AI technology for a range of applications that are sensitive to legal, social, and ethical norms demands decision-making support in the presence of norms and regulations. Normative reasoning is the realm of deontic logics, which are challenged by well-known benchmark problems (deontic paradoxes) and lack efficient computational tools. In this paper, we use Answer Set Programming (ASP) to address these shortcomings and showcase how to encode and resolve several well-known deontic paradoxes utilizing weak constraints. By abstracting and generalizing this encoding, we present a methodology for translating normative systems into ASP with weak constraints. This methodology is applied to "ethical" versions of Pac-man, where we obtain performance comparable to related works, but ethically preferable results.

Link to Repositum

Are hitting formulas hard for resolution?
Peitl, Tomáš, Szeider, Stefan
Type: Article; In: Discrete Applied Mathematics; Vol: 337; Pages: 173-184
Show Abstract
Hitting formulas, introduced by Iwama, are an unusual class of propositional CNF formulas. Not only is their satisfiability decidable in polynomial time, but even their models can be counted in closed form. This stands in stark contrast with other polynomial-time decidable classes, which usually have algorithms based on backtracking and resolution and for which model counting remains hard, like 2-SAT and Horn-SAT. However, those resolution-based algorithms usually easily imply an upper bound on resolution complexity, which is missing for hitting formulas. Are hitting formulas hard for resolution? In this paper we take the first steps towards answering this question. We show that the resolution complexity of hitting formulas is dominated by so-called irreducible hitting formulas, first studied by Kullmann and Zhao, that cannot be composed of smaller hitting formulas. However, by definition, large irreducible unsatisfiable hitting formulas are difficult to construct; it is not even known whether infinitely many exist. Building upon our theoretical results, we implement an efficient algorithm on top of the Nauty software package to enumerate all irreducible unsatisfiable hitting formulas with up to 14 clauses. We also determine the exact resolution complexity of the generated hitting formulas with up to 13 clauses by extending a known SAT encoding for our purposes. Our experimental results suggest that hitting formulas are indeed hard for resolution.

Link to Repositum

Circuit Minimization with QBF-Based Exact Synthesis
Reichl, Franz-Xaver, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 37th AAAI Conference on Artificial Intelligence; Vol: 37; Pages: 4087-4094
Show Abstract
This paper presents a rewriting method for Boolean circuits that minimizes small subcircuits with exact synthesis. Individual synthesis tasks are encoded as Quantified Boolean Formulas (QBFs) that capture the full flexibility for implementing multi-output subcircuits. This is in contrast to SAT-based resynthesis, where "don't cares" are computed for an individual gate, and replacements are confined to the circuitry used exclusively by that gate. An implementation of our method achieved substantial size reductions compared to state-of-the-art methods across a wide range of benchmark circuits.

Link to Repositum

Circuit Minimization with Exact Synthesis: From QBF Back to SAT
Reichl, Franz Xaver, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: IWLS 2023: 32nd International Workshop on Logic & Synthesis; Pages: 98-105
Show Abstract
The exact synthesis problem is to find a smallest (or shallowest) circuit matching a given specification. SAT-based exact synthesis is currently limited to circuits of about 10 gates, but can be used to minimize larger circuits by optimally resynthesizing small subcircuits. In this setting, subcircuits can often be replaced by non-equivalent circuits due to “don’t cares”. Quantified Boolean Formulas (QBFs) succinctly encode local resynthesis tasks subject to don’t cares, and QBF solvers can be used to solve the resulting instances. This paper describes two refinements of this approach. First, for subcircuits with few inputs and outputs, it is feasible to compute a Boolean relation that completely characterizes the input-output behaviour of the subcircuit under don’t cares. This enables the use of a SAT encoding instead of a QBF encoding, leading to significantly reduced running times when applied to functions from the IWLS22 and IWLS23 competitions. Second, we describe circuit partitioning techniques in which don’t cares for a subcircuit are captured only with respect to an enclosing window, rather than the entire circuit. This successfully enables the application of exact synthesis to some of the largest circuits in the EPFL suite.

Link to Repositum

Optimal Seat Arrangement: What Are the Hard and Easy Cases?
Ceylan, Esra, Chen, Jiehua, Roy, Sanjukta
Type: Inproceedings; In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23); Pages: 2563-2571
Show Abstract
We study four NP-hard optimal seat arrangement problems which each have as input a set of n agents, where each agent has cardinal preferences over other agents, and an n-vertex undirected graph (called the seat graph). The task is to assign each agent to a distinct vertex in the seat graph such that either the sum of utilities or the minimum utility is maximized, or it is envy-free or exchange-stable. Aiming at identifying hard and easy cases, we extensively study the algorithmic complexity of the four problems by looking into natural graph classes for the seat graph (e.g., paths, cycles, stars, or matchings), problem-specific parameters (e.g., the number of non-isolated vertices in the seat graph or the maximum number of agents towards whom an agent has non-zero preferences), and preference structures (e.g., non-negative or symmetric preferences). For strict preferences and seat graphs with disjoint edges and isolated vertices, we correct an error in the literature and show that finding an envy-free arrangement remains NP-hard in this case.

Link to Repositum

Growth of the perfect sequence covering array number
Iurlano, Enrico
Type: Article; In: Designs, Codes and Cryptography; Vol: 91; Issue: 4; Pages: 1487-1494
Show Abstract
In this note we answer positively an open question posed by Yuster in 2020 [14] on the polynomial boundedness of the perfect sequence covering array number g(n, k) (PSCA number). The latter determines the (renormalized) minimum row-count that perfect sequence covering arrays (PSCAs) can possess. PSCAs are matrices with permutations in S_n as rows, such that each ordered k-sequence of distinct elements of [n] is covered by the same number of rows. We obtain the result after illuminating an isomorphism between this structure from design theory and a special case of min-wise independent permutations. Afterwards, we point out that asymptotic bounds and constructions can be transferred between these two structures. Moreover, we sharpen asymptotic lower bounds for g(n, k) and improve upper bounds for g(n, 4) and g(n, 3), for some concrete values of n. We conclude with some open questions and propose a new matrix class that is potentially advantageous for searching for PSCAs.
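The covering condition can be checked by brute force on small instances (an illustrative sketch, not code from the paper): a row covers an ordered k-sequence if its elements appear in that relative order, and a PSCA must cover every such sequence equally often.

```python
from itertools import permutations

def is_psca(rows, n, k):
    """Check that every ordered k-sequence of distinct elements of
    range(n) is covered by the same number of rows."""
    counts = {seq: 0 for seq in permutations(range(n), k)}
    for row in rows:
        pos = {v: i for i, v in enumerate(row)}  # position of each element
        for seq in counts:
            if all(pos[seq[i]] < pos[seq[i + 1]] for i in range(k - 1)):
                counts[seq] += 1
    return len(set(counts.values())) == 1

# All of S_3 forms a (trivial) PSCA for k = 2: each ordered pair is
# covered by exactly 3 of the 6 rows.
rows = list(permutations(range(3)))
```

Dropping a single row breaks the balance, so `is_psca(rows[:5], 3, 2)` is false; the PSCA number measures how far below n! the row count can be pushed while keeping it.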

Link to Repositum

Separating Incremental and Non-Incremental Bottom-Up Compilation
De Colnet, Alexis
Type: Inproceedings; In: 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023); Vol: 271
Show Abstract
The aim of a compiler is, given a function represented in some language, to generate an equivalent representation in a target language L. In bottom-up (BU) compilation of functions given as CNF formulas, constructing the new representation requires compiling several subformulas in L. The compiler starts by compiling the clauses in L and iteratively constructs representations for new subformulas using an "Apply" operator that performs conjunction in L, until all clauses are combined into one representation. In principle, BU compilation can generate representations for any subformulas and conjoin them in any way. But an attractive strategy from a practical point of view is to augment one main representation - which we call the core - by conjoining to it the clauses one at a time. We refer to this strategy as incremental BU compilation. We prove that, for known relevant languages L for BU compilation, there is a class of CNF formulas that admit BU compilations to L that generate only polynomial-size intermediate representations, while their incremental BU compilations all generate an exponential-size core.

Link to Repositum

A SAT Solver's Opinion on the Erdős-Faber-Lovász Conjecture
Kirchweger, Markus, Peitl, Tomáš, Szeider, Stefan
Type: Inproceedings; In: 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023); Vol: 271; Pages: 1-17
Show Abstract
In 1972, Paul Erdős, Vance Faber, and László Lovász asked whether every linear hypergraph with n vertices can be edge-colored with n colors, a statement that has come to be known as the EFL conjecture. Erdős himself considered the conjecture one of his three favorite open problems, and offered increasing money prizes for its solution on several occasions. A proof of the conjecture was recently announced, for all but a finite number of hypergraphs. In this paper we look at some of the cases not covered by this proof. We use SAT solvers, and in particular the SAT Modulo Symmetries (SMS) framework, to generate non-colorable linear hypergraphs with a fixed number of vertices and hyperedges modulo isomorphisms. Since hypergraph colorability is NP-hard, we cannot directly express in a propositional formula that we want only non-colorable hypergraphs. Instead, we use one SAT (SMS) solver to generate candidate hypergraphs modulo isomorphisms, and another to reject them by finding a coloring. Each successive candidate is required to defeat all previous colorings, whereby we avoid having to generate and test all linear hypergraphs. Computational methods have previously been used to verify the EFL conjecture for small hypergraphs. We verify and extend these results to larger values and discuss challenges and directions. Ours is the first computational approach to the EFL conjecture that allows producing independently verifiable DRAT proofs.
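The "reject by finding a coloring" step can be sketched with a small backtracking checker (an illustration of the coloring condition only; the paper uses a second SAT solver for this). A proper edge coloring must give intersecting hyperedges distinct colors:

```python
def edge_colorable(edges, colors):
    """Backtracking test: can the hyperedges be colored with `colors`
    colors so that any two intersecting hyperedges differ in color?"""
    edges = [set(e) for e in edges]
    # Precompute which hyperedges conflict (share a vertex).
    conflict = [[j for j in range(len(edges))
                 if j != i and edges[i] & edges[j]] for i in range(len(edges))]
    assignment = [None] * len(edges)

    def solve(i):
        if i == len(edges):
            return True
        for c in range(colors):
            if all(assignment[j] != c for j in conflict[i]):
                assignment[i] = c
                if solve(i + 1):
                    return True
                assignment[i] = None
        return False

    return solve(0)

# The Fano plane is linear with n = 7 vertices and 7 pairwise
# intersecting lines, so it needs exactly 7 colors -- tight for the
# EFL conjecture's bound of n colors.
fano = [{1,2,3},{1,4,5},{1,6,7},{2,4,6},{2,5,7},{3,4,7},{3,5,6}]
```

A hypergraph passes the SMS pipeline only if calls like `edge_colorable(H, n)` fail; the Fano plane shows the bound n can be attained but not beaten.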

Link to Repositum

Fast Convolutions for Near-Convex Sequences
Brand, Cornelius, Lassota, Alexandra
Type: Inproceedings; In: 34th International Symposium on Algorithms and Computation (ISAAC 2023); Vol: 283; Pages: 1-16
Show Abstract
We develop algorithms for (min,+)-Convolution and related convolution problems such as Super Additivity Testing, Convolution 3-Sum and Minimum Consecutive Subsums which use the degree of convexity of the instance as a parameter. Assuming the min-plus conjecture (Künnemann-Paturi-Schneider, ICALP'17 and Cygan et al., ICALP'17), our results interpolate in an optimal manner between fully convex instances, which can be solved in near-linear time using Legendre transformations, and general non-convex sequences, where the trivial quadratic-time algorithm is conjectured to be best possible, up to subpolynomial factors.
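For reference, the trivial quadratic-time baseline mentioned above is straightforward (a sketch of the problem definition, not the paper's parameterized algorithm):

```python
def min_plus_convolution(a, b):
    """Naive O(n*m) (min,+)-convolution: c[k] = min over i+j=k of
    a[i] + b[j]. The min-plus conjecture asserts this quadratic bound
    is essentially optimal for arbitrary (non-convex) sequences."""
    c = [float("inf")] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = min(c[i + j], ai + bj)
    return c
```

On convex sequences the same product can instead be computed in near-linear time by merging slopes (the Legendre-transform route the paper interpolates from).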

Link to Repositum

The Parameterized Complexity of Network Microaggregation
Blažej, Václav, Ganian, Robert, Knop, Dusan, Pokorný, Jan, Schierreich, Šimon, Simonov, Kirill
Type: Inproceedings; In: Proceedings of the 37th AAAI Conference on Artificial Intelligence; Vol: 37, 5; Pages: 6262-6270
Show Abstract
Microaggregation is a classical statistical disclosure control technique which requires the input data to be partitioned into clusters while adhering to specified size constraints. We provide novel exact algorithms and lower bounds for the task of microaggregating a given network while considering both unrestricted and connected clusterings, and analyze these from the perspective of the parameterized complexity paradigm. Altogether, our results assemble a complete complexity-theoretic picture for the network microaggregation problem with respect to the most natural parameterizations of the problem, including input-specified parameters capturing the size and homogeneity of the clusters as well as the treewidth and vertex cover number of the network.

Link to Repositum

The silent (r)evolution of SAT
Fichte, Johannes K., Le Berre, Daniel, Hecher, Markus, Szeider, Stefan
Type: Article; In: Communications of the ACM; Vol: 66; Issue: 6; Pages: 64-72
Show Abstract
Today's powerful, robust SAT solvers have become primary tools for solving hard computational problems.

Link to Repositum

Multi-Objective Policy Evolution for a Same-Day Delivery Problem with Soft Deadlines
Frohner, Nikolaus, Raidl, Günther, Chicano, Francisco
Type: Inproceedings; In: GECCO '23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation; Pages: 1941-1949
Show Abstract
Same-day delivery problems (SDDPs) deal with efficient near-term satisfaction of dynamic and stochastic customer demand. During the day, a sequence of decisions has to be performed related to delivery route construction and driver dispatching. Formulated as a Markov decision process, the goal is to find a policy that maximizes the expected reward over a specified class of instances. In the literature, scalar reward functions have dominated so far, by either considering only one objective or several objectives transformed into one using a weighted sum or lexicographic order. In this work, we consider a vector-valued reward with two dimensions, tardiness and costs, for a real-world-inspired SDDP with tight soft deadlines within hours. The goal is to evolve non-dominated sets of policies for typical days with a certain stochastic load pattern and geographical order distribution in advance and let the decision maker select which one to deploy. We achieve this by utilizing classical multi-objective genetic algorithms, NSGA-II and SPEA2, and by parallelization of the computationally intensive policy evaluations. Based on the experimental results, we observe that accepting a small amount of additional tardiness initially leads to a substantial return in terms of reduced mean delivery times per order.
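The non-dominated filtering at the core of both NSGA-II and SPEA2 can be sketched in a few lines (an illustration for the two-dimensional tardiness/cost case, not the authors' implementation; smaller is better in both dimensions):

```python
def non_dominated(points):
    """Keep the (tardiness, cost) points that no other point
    Pareto-dominates, i.e. no other point is at least as good in both
    dimensions."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]
```

The decision maker then picks one policy from the surviving front, trading a little tardiness against cost.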

Link to Repositum

Computing Data-driven Multilinear Metro Maps
Nöllenburg, Martin, Terziadis, Soeren
Type: Article; In: Cartographic Journal; Vol: 60; Issue: 4; Pages: 367-382
Show Abstract
Traditionally, most schematic metro maps in practice as well as metro map layout algorithms adhere to an octolinear layout style with all paths composed of horizontal, vertical, and 45°-diagonal edges. Despite growing interest in more general multilinear metro maps, generic algorithms to draw metro maps based on a system of k ≥ 2 not necessarily equidistant slopes have not been investigated thoroughly. In this paper, we present and implement an adaptation of the octolinear mixed-integer linear programming approach of Nöllenburg and Wolff (2011) that can draw metro maps schematized to any set C of arbitrary orientations. We further present a data-driven approach to determine a suitable set C by either detecting the best rotation of an equidistant orientation system or by clustering the input edge orientations using a k-medians algorithm. We demonstrate the new possibilities of our method using several real-world examples.

Link to Repositum

Characterizing Tseitin-formulas with short regular resolution refutations
De Colnet, Alexis, Mengel, Stefan
Type: Article; In: Journal of Artificial Intelligence Research; Vol: 76; Pages: 265-286
Show Abstract
Tseitin-formulas are systems of parity constraints whose structure is described by a graph. These formulas have been studied extensively in proof complexity as hard instances in many proof systems. In this paper, we prove that a class of unsatisfiable Tseitin-formulas of bounded degree has regular resolution refutations of polynomial length if and only if the treewidth of all underlying graphs G for that class is in O(log |V(G)|). It follows that unsatisfiable Tseitin-formulas with polynomial length of regular resolution refutations are completely determined by the treewidth of the underlying graphs when these graphs have bounded degree. To prove this, we show that any regular resolution refutation of an unsatisfiable Tseitin-formula with graph G of bounded degree has length 2^(Ω(tw(G)))/|V(G)|, thus essentially matching the known 2^(O(tw(G))) · poly(|V(G)|) upper bound. Our proof first connects the length of regular resolution refutations of unsatisfiable Tseitin-formulas to the size of representations of satisfiable Tseitin-formulas in decomposable negation normal form (DNNF). Then we prove that for every graph G of bounded degree, every DNNF-representation of every satisfiable Tseitin-formula with graph G must have size 2^(Ω(tw(G))), which yields our lower bound for regular resolution.
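On small graphs a Tseitin-formula can be checked by brute force (a sketch of the definition only, unrelated to the paper's proof techniques): one Boolean variable per edge, and at each vertex the incident edge variables must sum to a prescribed parity.

```python
from itertools import product

def tseitin_satisfiable(edges, charge):
    """Brute-force satisfiability of a Tseitin formula: for each vertex
    v, the parity of the variables on its incident edges must equal
    charge[v]."""
    for bits in product([0, 1], repeat=len(edges)):
        if all(sum(b for (u, w), b in zip(edges, bits)
                   if v in (u, w)) % 2 == c
               for v, c in charge.items()):
            return True
    return False

# On a connected graph the formula is satisfiable iff the total charge
# is even; a triangle with a single odd vertex is unsatisfiable.
triangle = [(0, 1), (1, 2), (0, 2)]
```

Summing all vertex constraints counts every edge twice, so an odd total charge forces a contradiction; these odd-charged instances are the hard ones studied in proof complexity.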

Link to Repositum

Learning Small Decision Trees with Large Domain
Eiben, Eduard, Ordyniak, Sebastian, Paesani, Giacomo, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23); Pages: 3184-3192
Show Abstract
One favors decision trees (DTs) of the smallest size or depth to facilitate explainability and interpretability. However, learning such an optimal DT from data is well-known to be NP-hard. To overcome this complexity barrier, Ordyniak and Szeider (AAAI 21) initiated the study of optimal DT learning under the parameterized complexity perspective. They showed that solution size (i.e., number of nodes or depth of the DT) is insufficient to obtain fixed-parameter tractability (FPT). Therefore, they proposed an FPT algorithm that utilizes two auxiliary parameters: the maximum difference (as a structural property of the data set) and maximum domain size. They left open the question of whether bounding the maximum domain size is necessary. The main result of this paper answers this question. We present FPT algorithms for learning a smallest or lowest-depth DT from data, with the only parameters being solution size and maximum difference. Thus, our algorithm is significantly more potent than the one by Szeider and Ordyniak as it can handle problem inputs with features that range over unbounded domains. We also close several gaps concerning the quality of approximation one obtains by only considering DTs based on minimum support sets.

Link to Repositum

The Parameterized Complexity of Finding Concise Local Explanations
Ordyniak, Sebastian, Paesani, Giacomo, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23); Pages: 3312-3320
Show Abstract
We consider the computational problem of finding a smallest local explanation (anchor) for classifying a given feature vector (example) by a black-box model. After showing that the problem is NP-hard in general, we study various natural restrictions of the problem in terms of problem parameters to see whether these restrictions make the problem fixed-parameter tractable or not. We draw a detailed and systematic complexity landscape for combinations of parameters, including the size of the anchor, the size of the anchor's coverage, and parameters that capture structural aspects of the problem instance, including rank-width, twin-width, and maximum difference.

Link to Repositum

Group Activity Selection with Few Agent Types
Ganian, Robert, Ordyniak, Sebastian, Rahul, C. S.
Type: Article; In: Algorithmica; Vol: 85; Issue: 5; Pages: 1111-1155
Show Abstract
In this paper we establish the complexity map for the Group Activity Selection Problem (GASP), along with two of its prominent variants called sGASP and gGASP, focusing on the case when the number of types of agents is the parameter. In all these problems, one is given a set of agents (each with their own preferences) and a set of activities, and the aim is to assign agents to activities in a way which satisfies certain global as well as preference-based conditions. Our positive results, consisting of one fixed-parameter algorithm and one XP algorithm, rely on a combination of novel Subset Sum machinery (which may be of general interest) and identifying certain compression steps that allow us to focus on solutions with a simpler, well-defined structure (in particular, they are “acyclic”). These algorithms are complemented by matching lower bounds, which among others close a gap to a recently obtained tractability result of Gupta et al. (in: Algorithmic game theory—10th international symposium, SAGT 2017, vol 10504 of lecture notes in computer science, Springer, 2017). In this direction, the techniques used to establish W[1]-hardness of sGASP are of particular interest: as an intermediate step, we use Sidon sequences to show the W[1]-hardness of a highly restricted variant of multi-dimensional Subset Sum, which may find applications in other settings as well.

Link to Repositum

Parameterized complexity of envy-free resource allocation in social networks
Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Ordyniak, Sebastian
Type: Article; In: Artificial Intelligence; Vol: 315
Show Abstract
We consider the classical problem of allocating indivisible resources among agents in an envy-free (and, where applicable, proportional) way. Recently, the basic model was enriched by introducing the concept of a social network which makes it possible to capture situations where agents might not have full information about the allocation of all resources. We initiate the study of the parameterized complexity of these resource allocation problems by considering natural parameters which capture structural properties of the network and similarities between agents and resources. In particular, we show that even very general fragments of the considered problems become tractable as long as the social network has constant treewidth or clique-width. We complement our results with matching lower bounds which show that our algorithms cannot be substantially improved.

Link to Repositum

Pseudorandom Finite Models
Dreier, Jan, Tucker-Foltz, Jamie
Type: Inproceedings; In: 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS); Pages: 1-13
Show Abstract
We study pseudorandomness and pseudorandom generators from the perspective of logical definability. Building on results from ordinary derandomization and finite model theory, we show that it is possible to deterministically construct, in polynomial time, graphs and relational structures that are statistically indistinguishable from random structures by any sentence of first order or least fixed point logics. This raises the question of whether such constructions can be implemented via logical transductions from simpler structures with less entropy. In other words, can logical formulas be pseudorandom generators? We provide a complete classification of when this is possible for first order logic, fixed point logic, and fixed point logic with parity, and provide partial results and conjectures for first order logic with parity.

Link to Repositum

A Parameterized Theory of PAC Learning
Brand, Cornelius, Ganian, Robert, Simonov, Kirill
Type: Inproceedings; In: Proceedings of the AAAI Conference on Artificial Intelligence; Vol: 37, 6; Pages: 6834-6841

Link to Repositum

Optimizing the positions of battery swapping stations - Pilot studies and layout optimization algorithm -
Rodemann, Tobias, Kataoka, Hiroaki, Jatschka, Thomas, Raidl, Günther, Limmer, Steffen, Meguro, Hiromu
Type: Inproceedings; In: EVTeC 2023: The 6th International Electric Vehicle Technology Conference; Pages: 28-28
Show Abstract
For electric scooters, battery swapping is a promising alternative to battery charging due to the lower weight and volume of their batteries that allows a manual replacement at battery swapping stations. Mobile batteries are shared between all users, and the target of the operator is therefore to maximize customer satisfaction while minimizing system set-up and operation costs. Here we give an overview of Honda's activities for a Battery as a Service (BaaS) business in Indonesia, the Philippines, and India, while looking specifically at the optimal placement of battery swapping stations with a given customer demand. Multiple objectives like set-up cost, energy costs, and customer detours are considered. We employ a Large Neighborhood Search (LNS) approach that uses specific destroy and repair operators for each objective and includes a Mixed Integer Linear Programming (MILP) element for repairing solutions. Our results show that the employed LNS outperforms a state-of-the-art pure MILP approach for larger problem sizes with up to 500 potential station locations and 1000 trips. Overall, 10-30% better results can be obtained compared to standard approaches.

Link to Repositum

On the parameterized complexity of clustering problems for incomplete data
Eiben, Eduard, Ganian, Robert, Kanj, Iyad, Ordyniak, Sebastian, Szeider, Stefan
Type: Article; In: Journal of Computer and System Sciences; Vol: 134; Pages: 1-19
Show Abstract
We study fundamental clustering problems for incomplete data. Specifically, given a set of incomplete d-dimensional vectors (representing rows of a matrix), the goal is to complete the missing vector entries in a way that admits a partitioning of the vectors into at most k clusters with radius or diameter at most r. We give characterizations of the parameterized complexity of these problems with respect to the parameters k, r, and the minimum number of rows and columns needed to cover all the missing entries. We show that the considered problems are fixed-parameter tractable when parameterized by the three parameters combined, and that dropping any of the three parameters results in parameterized intractability. A byproduct of our results is that, for the complete data setting, all problems under consideration are fixed-parameter tractable parameterized by k+r.

Link to Repositum

From Data Completion to Problems on Hypercubes: A Parameterized Analysis of the Independent Set Problem
Eiben, Eduard, Ganian, Robert, Kanj, Iyad, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: 18th International Symposium on Parameterized and Exact Computation (IPEC 2023); Vol: 285; Pages: 1-14
Show Abstract
Several works have recently investigated the parameterized complexity of data completion problems, motivated by their applications in machine learning, and clustering in particular. Interestingly, these problems can be equivalently formulated as classical graph problems on induced subgraphs of powers of partially-defined hypercubes. In this paper, we follow up on this recent direction by investigating the Independent Set problem on this graph class, which has been studied in the data science setting under the name Diversity. We obtain a comprehensive picture of the problem’s parameterized complexity and establish its fixed-parameter tractability w.r.t. the solution size plus the power of the hypercube. Given that several such FO-definable problems have been shown to be fixed-parameter tractable on the considered graph class, one may ask whether fixed-parameter tractability could be extended to capture all FO-definable problems. We answer this question in the negative by showing that FO model checking on induced subgraphs of hypercubes is as difficult as FO model checking on general graphs.

Link to Repositum

The Parameterized Complexity of Coordinated Motion Planning
Eiben, Eduard, Ganian, Robert, Kanj, Iyad
Type: Inproceedings; In: 39th International Symposium on Computational Geometry, SoCG 2023; Vol: 258; Pages: 1-16
Show Abstract
In Coordinated Motion Planning (CMP), we are given a rectangular grid on which k robots occupy k distinct starting gridpoints and need to reach k distinct destination gridpoints. In each time step, any robot may move to a neighboring gridpoint or stay in its current gridpoint, provided that it does not collide with other robots. The goal is to compute a schedule for moving the k robots to their destinations which minimizes a certain objective target - prominently the number of time steps in the schedule, i.e., the makespan, or the total length traveled by the robots. We refer to the problem arising from minimizing the former objective target as CMP-M and the latter as CMP-L. Both CMP-M and CMP-L are fundamental problems that were posed as the computational geometry challenge of SoCG 2021, and CMP also embodies the famous (n² − 1)-puzzle as a special case. In this paper, we settle the parameterized complexity of CMP-M and CMP-L with respect to their two most fundamental parameters: the number of robots, and the objective target. We develop a new approach to establish the fixed-parameter tractability of both problems under the former parameterization that relies on novel structural insights into optimal solutions to the problem. When parameterized by the objective target, we show that CMP-L remains fixed-parameter tractable while CMP-M becomes para-NP-hard. The latter result is noteworthy, not only because it improves the previously-known boundaries of intractability for the problem, but also because the underlying reduction allows us to establish - as a simpler case - the NP-hardness of the classical Vertex Disjoint and Edge Disjoint Paths problems with constant path-lengths on grids.

Link to Repositum

MosaicSets: Embedding Set Systems into Grid Graphs
Rottmann, Peter, Wallinger, Markus, Bonerath, Annika, Gedicke, Sven, Nöllenburg, Martin, Haunert, Jan-Henrik
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 29; Issue: 1; Pages: 875-885
Show Abstract
Visualizing sets of elements and their relations is an important research area in information visualization. In this paper, we present MosaicSets: a novel approach to create Euler-like diagrams from non-spatial set systems such that each element occupies one cell of a regular hexagonal or square grid. The main challenge is to find an assignment of the elements to the grid cells such that each set constitutes a contiguous region. As use case, we consider the research groups of a university faculty as elements, and the departments and joint research projects as sets. We aim at finding a suitable mapping between the research groups and the grid cells such that the department structure forms a base map layout. Our objectives are to optimize both the compactness of the entirety of all cells and of each set by itself. We show that computing the mapping is NP-hard. However, using integer linear programming we can solve real-world instances optimally within a few seconds. Moreover, we propose a relaxation of the contiguity requirement to visualize otherwise non-embeddable set systems. We present and discuss different rendering styles for the set overlays. Based on a case study with real-world data, our evaluation comprises quantitative measures as well as expert interviews.

Link to Repositum

Searching for Smallest Universal Graphs and Tournaments with SAT
Zhang, Tianwei, Szeider, Stefan
Type: Inproceedings; In: 29th International Conference on Principles and Practice of Constraint Programming; Vol: 280
Show Abstract
A graph is induced k-universal if it contains all graphs of order k as an induced subgraph. For over half a century, the question of determining smallest k-universal graphs has been studied. A related question asks for a smallest k-universal tournament containing all tournaments of order k. This paper proposes and compares SAT-based methods for answering these questions exactly for small values of k. Our methods scale to values for which a generate-and-test approach isn't feasible; for instance, we show that an induced 7-universal graph has more than 16 vertices, whereas the number of all connected graphs on 16 vertices, modulo isomorphism, is a number with 23 decimal digits. Our methods include static and dynamic symmetry breaking and lazy encodings, employing external subgraph isomorphism testing.
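For k = 3, induced universality is easy to test by brute force, since a 3-vertex graph's isomorphism type is determined by its edge count (a toy sketch; the paper's SAT-based methods target much larger k, where such enumeration is hopeless):

```python
from itertools import combinations

def induced_3_universal(n, edges):
    """A graph is induced 3-universal iff its vertex triples induce
    subgraphs with 0, 1, 2, and 3 edges (the four 3-vertex graphs up
    to isomorphism)."""
    edges = {frozenset(e) for e in edges}
    counts = {sum(frozenset(p) in edges for p in combinations(t, 2))
              for t in combinations(range(n), 3)}
    return {0, 1, 2, 3} <= counts

# A triangle 0-1-2 with pendant edge 0-3 and isolated vertex 4
# realizes all four types; the 5-cycle has neither a triangle nor an
# independent triple.
g = [(0, 1), (1, 2), (0, 2), (0, 3)]
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```

Already for k = 7 no such shortcut exists, which is why the paper combines lazy encodings with external subgraph isomorphism testing.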

Link to Repositum

Proven Optimally-Balanced Latin Rectangles with SAT
Ramaswamy, Vaidyanathan Peruvemba, Szeider, Stefan
Type: Inproceedings; In: 29th International Conference on Principles and Practice of Constraint Programming (CP 2023); Vol: 280
Show Abstract
Motivated by applications from agronomic field experiments, Díaz, Le Bras, and Gomes [CPAIOR 2015] introduced Partially Balanced Latin Rectangles as a generalization of Spatially Balanced Latin Squares. They observed that the generation of Latin rectangles that are optimally balanced is a highly challenging computational problem. They computed, utilizing CSP and MIP encodings, Latin rectangles up to 12 × 12, some optimally balanced, some suboptimally balanced. In this paper, we develop a SAT encoding for generating balanced Latin rectangles. We experimentally compare encoding variants. Our results indicate that SAT encodings perform competitively with the MIP encoding, in some cases better. In some cases we could find Latin rectangles that are more balanced than previously known ones. This finding is significant, as there are many arithmetic constraints involved. The SAT approach offers the advantage that we can certify that Latin rectangles are optimally balanced through DRAT proofs that can be verified independently.

Link to Repositum

A Multilevel Optimization Approach for Large Scale Battery Exchange Station Location Planning
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther R.
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization: 23rd European Conference, EvoCOP 2023, Held as Part of EvoStar 2023, Brno, Czech Republic, April 12–14, 2023. Proceedings; Vol: 13987; Pages: 50-65
Show Abstract
We propose a multilevel optimization algorithm (MLO) for solving large scale instances of the Multi-Period Battery Swapping Station Location Problem (MBSSLP), i.e., a problem for deciding the placement of battery swapping stations in an urban area. MLO generates a solution to an MBSSLP instance in three steps. First the problem size is iteratively reduced by coarsening. Then, a solution to the coarsest problem instance is determined, and finally the obtained solution is projected to finer-grained problem instances in reverse order until a solution to the original problem instance is obtained. We test our approach on benchmark instances with up to 10000 areas for placing stations and 100000 user trips. We compare MLO to solving a mixed integer linear program (MILP) in a direct way as well as solving the instances with a construction heuristic (CH). Results show that MLO scales substantially better for such large instances than the MILP or the CH.

Link to Repositum

Consistency Checking Problems: A Gateway to Parameterized Sample Complexity
Ganian, Robert, Khazaliya, Liana, Simonov, Kirill
Type: Inproceedings; In: 18th International Symposium on Parameterized and Exact Computation (IPEC 2023); Vol: 285; Pages: 1-17
Show Abstract
Recently, Brand, Ganian and Simonov introduced a parameterized refinement of the classical PAC-learning sample complexity framework. A crucial outcome of their investigation is that for a very wide range of learning problems, there is a direct and provable correspondence between fixed-parameter PAC-learnability (in the sample complexity setting) and the fixed-parameter tractability of a corresponding "consistency checking" search problem (in the setting of computational complexity). The latter can be seen as generalizations of classical search problems where instead of receiving a single instance, one receives multiple yes- and no-examples and is tasked with finding a solution which is consistent with the provided examples. Apart from a few initial results, consistency checking problems are almost entirely unexplored from a parameterized complexity perspective. In this article, we provide an overview of these problems and their connection to parameterized sample complexity, with the primary aim of facilitating further research in this direction. Afterwards, we establish the fixed-parameter (in)tractability for some of the arguably most natural consistency checking problems on graphs, and show that their complexity-theoretic behavior is surprisingly different from that of classical decision problems. Our new results cover consistency checking variants of problems as diverse as (k-)Path, Matching, 2-Coloring, Independent Set and Dominating Set, among others.

Link to Repositum

First-Order Model Checking on Structurally Sparse Graph Classes
Dreier, Jan, Mählmann, Nikolas, Siebertz, Sebastian
Type: Inproceedings; In: Proceedings of the 55th Annual ACM Symposium on Theory of Computing; Pages: 567-580
Show Abstract
A class of graphs is structurally nowhere dense if it can be constructed from a nowhere dense class by a first-order transduction. Structurally nowhere dense classes vastly generalize nowhere dense classes and constitute important examples of monadically stable classes. We show that the first-order model checking problem is fixed-parameter tractable on every structurally nowhere dense class of graphs. Our result builds on a recently developed game-theoretic characterization of monadically stable graph classes. As a second key ingredient of independent interest, we provide a polynomial-time algorithm for approximating weak neighborhood covers (on general graphs). We combine the two tools into a recursive locality-based model checking algorithm. This algorithm is efficient on every monadically stable graph class admitting flip-closed sparse weak neighborhood covers, where flip-closure is a mild additional assumption. Thereby, establishing efficient first-order model checking on monadically stable classes is reduced to proving the existence of flip-closed sparse weak neighborhood covers on these classes, a purely combinatorial problem. We complete the picture by proving the existence of the desired covers for structurally nowhere dense classes: we show that every structurally nowhere dense class can be sparsified by contracting local sets of vertices, enabling us to lift the existence of covers from sparse classes.

Link to Repositum

Faster edge‐path bundling through graph spanners
Wallinger, Markus, Archambault, Daniel, Auber, David, Nöllenburg, Martin, Peltonen, Jaakko
Type: Article; In: Computer Graphics Forum; Vol: 42; Issue: 6
Edge-Path bundling is a recent edge bundling approach that does not incur ambiguities caused by bundling disconnected edges together. Although the approach produces less ambiguous bundlings, it suffers from high computational cost. In this paper, we present a new Edge-Path bundling approach that increases the computational speed of the algorithm without reducing the quality of the bundling. First, we demonstrate that biconnected components can be processed separately in an Edge-Path bundling of a graph without changing the result. Then, we present a new edge bundling algorithm that is based on observing and exploiting a strong relationship between Edge-Path bundling and graph spanners. Although the worst case complexity of the approach is the same as that of the original Edge-Path bundling algorithm, we conduct experiments to demonstrate that the new approach is 5–256 times faster than Edge-Path bundling depending on the dataset, which brings its practical running time more in line with traditional edge bundling algorithms.

Link to Repositum

Computing Hive Plots: A Combinatorial Framework
Nöllenburg, Martin, Wallinger, Markus
Type: Inproceedings; In: Graph Drawing and Network Visualization : 31st International Symposium, GD 2023, Isola delle Femmine, Palermo, Italy, September 20–22, 2023, Revised Selected Papers, Part II; Vol: 14466; Pages: 153-169
Hive plots are a graph visualization style placing vertices on a set of radial axes emanating from a common center and drawing edges as smooth curves connecting their respective endpoints. In previous work on hive plots, assignment to an axis and vertex positions on each axis were determined based on selected vertex attributes and the order of axes was prespecified. Here, we present a new framework focusing on combinatorial aspects of these drawings to extend the original hive plot idea and optimize visual properties such as the total edge length and the number of edge crossings in the resulting hive plots. Our framework comprises three steps: (1) partition the vertices into multiple groups, each corresponding to an axis of the hive plot; (2) optimize the cyclic axis order to bring more strongly connected groups near each other; (3) optimize the vertex ordering on each axis to minimize edge crossings. Each of the three steps is related to a well-studied, but NP-complete computational problem. We combine and adapt suitable algorithmic approaches, implement them as an instantiation of our framework and show in a case study how it can be applied in a practical setting. Furthermore, we conduct computational experiments to gain further insights regarding algorithmic choices of the framework. The code of the implementation and a prototype web application can be found on OSF.

Link to Repositum

On Families of Planar DAGs with Constant Stack Number
Nöllenburg, Martin, Pupyrev, Sergey
Type: Inproceedings; In: Graph Drawing and Network Visualization : 31st International Symposium, GD 2023, Isola delle Femmine, Palermo, Italy, September 20–22, 2023, Revised Selected Papers, Part II; Vol: 14466; Pages: 135-151
A k-stack layout (or k-page book embedding) of a graph consists of a total order of the vertices, and a partition of the edges into k sets of non-crossing edges with respect to the vertex order. The stack number of a graph is the minimum k such that it admits a k-stack layout. In this paper we study a long-standing problem regarding the stack number of planar directed acyclic graphs (DAGs), for which the vertex order has to respect the orientation of the edges. We investigate upper and lower bounds on the stack number of several families of planar graphs: We improve the constant upper bounds on the stack number of single-source and monotone outerplanar DAGs and of outerpath DAGs, and improve the constant upper bound for upward planar 3-trees. Further, we provide computer-aided lower bounds for upward (outer-) planar DAGs.
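The k-stack layout definition above is easy to check directly: two edges assigned to the same stack (page) cross exactly when their endpoints strictly interleave in the vertex order. A minimal Python sketch of such a validity check, with example graphs of our own choosing (not from the paper):

```python
def is_valid_stack_layout(order, pages):
    """Check a candidate k-stack layout: no two edges on the same
    page may cross, i.e. have strictly interleaving endpoints in
    the given total vertex order."""
    pos = {v: i for i, v in enumerate(order)}
    for page in pages:
        # Normalize each edge to an ordered position pair.
        norm = [tuple(sorted((pos[u], pos[v]))) for u, v in page]
        for i in range(len(norm)):
            for j in range(i + 1, len(norm)):
                (a, b), (c, d) = norm[i], norm[j]
                if a < c < b < d or c < a < d < b:
                    return False
    return True

# A 4-cycle fits on one page; K4 needs two.
c4 = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(is_valid_stack_layout([0, 1, 2, 3], [c4]))                    # True
print(is_valid_stack_layout([0, 1, 2, 3], [c4 + [(0, 2), (1, 3)]])) # False
print(is_valid_stack_layout([0, 1, 2, 3], [c4 + [(0, 2)], [(1, 3)]]))  # True
```

For the DAG setting studied in the paper, one would additionally require that the vertex order is a topological order of the edge orientations.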

Link to Repositum

On the Complexity of the Storyplan Problem
Binucci, Carla, Di Giacomo, Emilio, Lenhart, William J., Liotta, Giuseppe, Montecchiani, Fabrizio, Nöllenburg, Martin, Symvonis, Antonios
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2022; Vol: 13764; Pages: 304-318
Motivated by dynamic graph visualization, we study the problem of representing a graph G in the form of a storyplan, that is, a sequence of frames with the following properties. Each frame is a planar drawing of the subgraph of G induced by a suitably defined subset of its vertices. Between two consecutive frames, a new vertex appears while some other vertices may disappear, namely those whose incident edges have already been drawn in at least one frame. In a storyplan, each vertex appears and disappears exactly once. For a vertex (edge) visible in a sequence of consecutive frames, the point (curve) representing it does not change throughout the sequence. Note that the order in which the vertices of G appear in the sequence of frames is a total order. In the StoryPlan problem, we are given a graph and we want to decide whether there exists a total order of its vertices for which a storyplan exists. We prove that the problem is NP-complete, and complement this hardness with two parameterized algorithms, one in the vertex cover number and one in the feedback edge set number of G. Also, we prove that partial 3-trees always admit a storyplan, which can be computed in linear time. Finally, we show that the problem remains NP-complete in the case in which the total order of the vertices is given as part of the input and we have to choose how to draw the frames.

Link to Repositum

Planar L-drawings of directed graphs
Chaplick, Steven, Cornelsen, Sabine, Nöllenburg, Martin, Tollis, Ioannis G., Chimani, Markus, Da Lozzo, Giordano, Patrignani, Maurizio, Wolf, Alexander
Type: Article; In: Computing in Geometry and Topology; Vol: 2; Issue: 1; Pages: 7:1-7:15
In this paper, we study drawings of directed graphs. We use the L-drawing standard where each edge is represented by a polygonal chain that consists of a vertical line segment incident to the source of the edge and a horizontal line segment incident to the target. First, we consider planar L-drawings. We provide necessary conditions for the existence of these drawings and show that testing for the existence of a planar L-drawing is an NP-complete problem. We also show how to decide in linear time whether there exists a planar L-drawing of a plane directed graph with a fixed assignment of the edges to the four sides (top, bottom, left, and right) of the vertices. Second, we consider upward- (resp. upward-rightward-) planar L-drawings. We provide upper bounds on the maximum number of edges of graphs admitting such drawings. Moreover, we characterize the directed st-graphs admitting an upward- (resp. upward-rightward-) planar L-drawing as exactly those admitting an embedding supporting a bitonic (resp. monotonically decreasing) st-ordering.
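Geometrically, the L-drawing standard described above is simple: given vertex coordinates, each directed edge becomes a vertical segment leaving the source followed by a horizontal segment entering the target, with the single bend at (x_source, y_target). A small illustrative sketch (function name and coordinates are ours, not from the paper):

```python
def l_drawing_edge(pos, source, target):
    """Polyline for one edge in the L-drawing standard: a vertical
    segment incident to the source, then a horizontal segment
    incident to the target; the bend lies at (x_source, y_target)."""
    (xs, ys), (xt, yt) = pos[source], pos[target]
    bend = (xs, yt)
    return [((xs, ys), bend), (bend, (xt, yt))]

pos = {"u": (0, 0), "v": (3, 2)}  # hypothetical grid positions
print(l_drawing_edge(pos, "u", "v"))  # [((0, 0), (0, 2)), ((0, 2), (3, 2))]
```

The combinatorial difficulty the paper addresses is not producing these polylines but choosing vertex positions and edge-side assignments so that the resulting drawing is planar.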

Link to Repositum

Indiscernibles and Flatness in Monadically Stable and Monadically NIP Classes
Dreier, Jan, Mählmann, Nikolas, Siebertz, Sebastian, Toruńczyk, Szymon
Type: Inproceedings; In: 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023); Vol: 261
Monadically stable and monadically NIP classes of structures were initially studied in the context of model theory and defined in logical terms. They have recently attracted attention in the area of structural graph theory, as they generalize notions such as nowhere denseness, bounded cliquewidth, and bounded twinwidth. Our main result is the – to the best of our knowledge first – purely combinatorial characterization of monadically stable classes of graphs, in terms of a property dubbed flip-flatness. A class C of graphs is flip-flat if for every fixed radius r, every sufficiently large set of vertices of a graph G ∈ C contains a large subset of vertices with mutual distance larger than r, where the distance is measured in some graph G′ that can be obtained from G by performing a bounded number of flips that swap edges and non-edges within a subset of vertices. Flip-flatness generalizes the notion of uniform quasi-wideness, which characterizes nowhere dense classes and had a key impact on the combinatorial and algorithmic treatment of nowhere dense classes. To obtain this result, we develop tools that also apply to the more general monadically NIP classes, based on the notion of indiscernible sequences from model theory. We show that in monadically stable and monadically NIP classes indiscernible sequences impose a strong combinatorial structure on their definable neighborhoods. All our proofs are constructive and yield efficient algorithms.

Link to Repositum

The Influence of Dimensions on the Complexity of Computing Decision Trees
Kobourov, Stephen G., Löffler, Maarten, Montecchiani, Fabrizio, Pilipczuk, Marcin, Rutter, Ignaz, Seidel, Raimund, Sorge, Manuel, Wulms, Jules
Type: Inproceedings; In: Proceedings of the 37th AAAI Conference on Artificial Intelligence; Vol: 37, 7; Pages: 8343-8350
A decision tree recursively splits a feature space R^d and then assigns class labels based on the resulting partition. Decision trees have been part of the basic machine-learning toolkit for decades. A large body of work considers heuristic algorithms that compute a decision tree from training data, usually aiming to minimize in particular the size of the resulting tree. In contrast, little is known about the complexity of the underlying computational problem of computing a minimum-size tree for the given training data. We study this problem with respect to the number d of dimensions of the feature space R^d, which contains n training examples. We show that it can be solved in O(n^(2d+1)) time, but under reasonable complexity-theoretic assumptions it is not possible to achieve f(d)·n^(o(d/log d)) running time. The problem is solvable in (dR)^(O(dR))·n^(1+o(1)) time if there are exactly two classes and R is an upper bound on the number of tree leaves labeled with the first class.

Link to Repositum

Parallel Beam Search for Combinatorial Optimization
Frohner, Nikolaus, Gmys, Jan, Melab, Nouredine, Raidl, Günther, Talbi, El-Ghazali
Type: Inproceedings; In: The 51st International Conference on Parallel Processing. Workshop Proceedings
Inspired by the recent success of parallelized exact methods to solve difficult scheduling problems, we present a general parallel beam search framework for combinatorial optimization problems. Beam search is a constructive metaheuristic traversing a search tree layer by layer while keeping in each layer a bounded number of promising nodes to consider many partial solutions in parallel. We propose a variant which is suitable for intra-node parallelization by multithreading with data parallelism. Diversification and inter-node parallelization are combined by performing multiple randomized runs on independent workers communicating via MPI. For sufficiently large problem instances and beam widths our prototypical implementation in the JIT-compiled Julia language admits speed-ups between 30-42 × on 46 cores with uniform memory access for two difficult classical problems, namely Permutation Flow Shop Scheduling (PFSP) with flowtime objective and the Traveling Tournament Problem (TTP). This allowed us to perform large beam width runs to find 11 new best feasible solutions for 22 difficult TTP benchmark instances up to 20 teams with an average wallclock runtime of about one hour per instance.
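The layer-by-layer scheme described above can be sketched sequentially in a few lines; the paper's parallel variant essentially scores and expands the kept nodes concurrently across threads and MPI workers. A toy sequential sketch, using a subset-sum instance of our own invention (not one of the paper's PFSP/TTP benchmarks):

```python
import heapq

def beam_search(root, expand, score, beam_width, n_layers):
    """Constructive beam search: traverse the search tree layer by
    layer, keeping at most beam_width promising nodes per layer."""
    beam = [root]
    for _ in range(n_layers):
        children = [c for node in beam for c in expand(node)]
        if not children:
            break
        beam = heapq.nsmallest(beam_width, children, key=score)
    return min(beam, key=score)

# Toy instance: choose a subset of weights whose sum is close to a target.
weights, target = [3, 5, 8, 9, 11], 17

def expand(node):   # one child per include/exclude decision
    return [node + (0,), node + (1,)]

def score(node):    # distance of the partial sum to the target
    return abs(target - sum(w for w, take in zip(weights, node) if take))

best = beam_search((), expand, score, beam_width=4, n_layers=len(weights))
print(best, score(best))  # (1, 1, 0, 1, 0) 0  ->  3 + 5 + 9 = 17
```

Note that beam search is inexact: with beam_width=2 the same instance misses the optimum, which is why the paper's large parallel beam widths matter.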

Link to Repositum

Uncertainty in humanities network visualization
Conroy, Melanie, Gillmann, Christina, Harvey, Francis, Mchedlidze, Tamara, Fabrikant, Sara Irina, Windhager, Florian, Scheuermann, Gerik, Tangherlini, Timothy, Warren, Christopher N., Weingart, Scott B., Rehbein, Malte, Börner, Katy, Elo, Kimmo, Jänicke, Stefan, Kerren, Andreas, Nöllenburg, Martin, Dwyer, Tim, Eide, Øyvind, Kobourov, Stephen G., Betz, Gregor
Type: Article; In: Frontiers in Communication; Vol: 8
Network visualization is one of the most widely used tools in digital humanities research. The idea of uncertain or “fuzzy” data is also a core notion in digital humanities research. Yet network visualizations in digital humanities do not always prominently represent uncertainty. In this article, we present a mathematical and logical model of uncertainty as a range of values which can be used in network visualizations. We review some of the principles for visualizing uncertainty of different kinds, visual variables that can be used for representing uncertainty, and how these variables have been used to represent different data types in visualizations drawn from a range of non-humanities fields like climate science and bioinformatics. We then provide examples of two diagrams: one in which the variables displaying degrees of uncertainty are integrated into the graph and one in which glyphs are added to represent data certainty and uncertainty. Finally, we discuss how probabilistic data and what-if scenarios could be used to expand the representation of uncertainty in humanities network visualizations.

Link to Repositum

Deterministic Constrained Multilinear Detection
Brand, Cornelius, Korchemna, Viktoria, Skotnica, Michael
Type: Inproceedings; In: 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023); Vol: 72; Pages: 1-14
We extend the algebraic techniques of Brand and Pratt (ICALP'21) for deterministic detection of k-multilinear monomials in a given polynomial with non-negative coefficients to the more general situation of detecting colored k-multilinear monomials that satisfy additional constraints on the multiplicities of the colors appearing in them. Our techniques can be viewed as a characteristic-zero generalization of the algebraic tools developed by Guillemot and Sikora (MFCS'10) and Björklund, Kaski and Kowalik (STACS'13). As applications, we recover the state-of-the-art deterministic algorithms for the Graph Motif problem due to Pinter, Schachnai and Zehavi (MFCS'14), and give new deterministic algorithms for generalizations of certain questions on colored directed spanning trees or bipartite planar matchings running in deterministic time O^∗(4^k), studied originally by Gutin, Reidl, Wahlström and Zehavi (J. Comp. Sys. Sci. 95, '18). Finally, we give improved randomized algorithms for intersecting three and four matroids of rank k in characteristic zero, improving the record bounds of Brand and Pratt (ICALP'21) from O^∗(64^k) and O^∗(256^k), respectively, to O^∗(4^k).

Link to Repositum

The Computational Complexity of Concise Hypersphere Classification
Eiben, Eduard, Ganian, Robert, Kanj, Iyad, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 40th International Conference on Machine Learning; Vol: 202; Pages: 9060-9070
Hypersphere classification is a classical and foundational method that can provide easy-to-process explanations for the classification of real-valued as well as binary data. However, obtaining an (ideally concise) explanation via hypersphere classification is much more difficult when dealing with binary data as opposed to real-valued data. In this paper, we perform the first complexity-theoretic study of the hypersphere classification problem for binary data. We use the fine-grained parameterized complexity paradigm to analyze the impact of structural properties that may be present in the input data as well as potential conciseness constraints. Our results include not only stronger lower bounds but also a number of new fixed-parameter algorithms for hypersphere classification of binary data, which can find an exact and concise explanation when one exists.

Link to Repositum

Hedonic diversity games: A complexity picture with more than two colors
Ganian, Robert, Hamm, Thekla, Knop, Dušan, Schierreich, Šimon, Suchý, Ondřej
Type: Article; In: Artificial Intelligence; Vol: 325
Hedonic diversity games are a variant of the classical hedonic games designed to better model a variety of questions concerning diversity and fairness. Previous works mainly targeted the case with two diversity classes (represented as colors in the model) and provided some initial complexity-theoretic and existential results concerning Nash and individually stable outcomes. Here, we design new algorithms accompanied with lower bounds which provide a comprehensive parameterized-complexity picture for computing Nash and individually stable outcomes with respect to the most natural parameterizations of the problem. Crucially, our results hold for general hedonic diversity games where the number of colors is not necessarily restricted to two, and show that—apart from two trivial cases—a necessary condition for tractability in this setting is that the number of colors is bounded by the parameter. Moreover, for the special case of two colors we resolve an open question asked in previous work (Boehmer and Elkind, AAAI 2020).

Link to Repositum

Evaluating Restricted First-Order Counting Properties on Nowhere Dense Classes and Beyond
Dreier, Jan, Mock, Daniel, Rossmanith, Peter
Type: Inproceedings; In: 31st Annual European Symposium on Algorithms, ESA 2023; Vol: 274
It is known that first-order logic with some counting extensions can be efficiently evaluated on graph classes with bounded expansion, where depth-r minors have constant density. More precisely, the formulas are ∃x_1 … x_k #y φ(x_1, …, x_k, y) > N, where φ is an FO-formula. If φ is quantifier-free, we can extend this result to nowhere dense graph classes with an almost linear FPT run time. Lifting this result further to slightly more general graph classes, namely almost nowhere dense classes, where the size of depth-r clique minors is subpolynomial, is impossible unless FPT = W[1]. On the other hand, in almost nowhere dense classes we can approximate such counting formulas with a small additive error. Note that those counting formulas are contained in FOC({>}) but not in FOC_1(P). In particular, it follows that partial covering problems, such as partial dominating set, have fixed-parameter algorithms on nowhere dense graph classes with almost linear running time.

Link to Repositum

Planarizing Graphs and Their Drawings by Vertex Splitting
Nöllenburg, Martin, Sorge, Manuel, Terziadis, Soeren, Villedieu, Anaïs, Wu, Hsiang-Yun, Wulms, Jules
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2022; Vol: 13764; Pages: 232-246
The splitting number of a graph G= (V, E) is the minimum number of vertex splits required to turn G into a planar graph, where a vertex split removes a vertex v∈ V, introduces two new vertices v1, v2, and distributes the edges formerly incident to v among v1, v2. The splitting number problem is known to be NP-complete for abstract graphs and we provide a non-uniform fixed-parameter tractable (FPT) algorithm for this problem. We then shift focus to the splitting number of a given topological graph drawing in R2, where the new vertices resulting from vertex splits must be re-embedded into the existing drawing of the remaining graph. We show NP-completeness of this embedded splitting number problem, even for its two subproblems of (1) selecting a minimum subset of vertices to split and (2) for re-embedding a minimum number of copies of a given set of vertices. For the latter problem we present an FPT algorithm parameterized by the number of vertex splits. This algorithm reduces to a bounded outerplanarity case and uses an intricate dynamic program on a sphere-cut decomposition.
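A single vertex split, as defined above, is simple to state operationally on an adjacency-set representation; the hard part the paper studies is choosing which vertices to split and how to distribute their edges. A minimal sketch of one split (function and vertex names are ours):

```python
def split_vertex(adj, v, part1, part2, names=("v1", "v2")):
    """One vertex split: remove v, introduce two new vertices, and
    distribute v's former edges according to the neighbor partition
    (part1, part2)."""
    part1, part2 = set(part1), set(part2)
    assert part1 | part2 == adj[v] and not (part1 & part2)
    v1, v2 = names
    adj[v1], adj[v2] = part1, part2
    for u in adj.pop(v):                 # rewire v's old neighbors
        adj[u].discard(v)
        adj[u].add(v1 if u in part1 else v2)
    return adj

# K5 is non-planar, but one split suffices to planarize it:
# the result is K4 plus v1 adjacent to {0, 1} and v2 adjacent to {2, 3}.
k5 = {i: {j for j in range(5) if j != i} for i in range(5)}
split_vertex(k5, 4, part1={0, 1}, part2={2, 3})
print(sorted(k5["v1"]), sorted(k5["v2"]))  # [0, 1] [2, 3]
```

In the embedded setting studied in the paper, the new vertices must additionally be re-embedded into the fixed drawing of the remaining graph, which is where the extra hardness arises.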

Link to Repositum

Hard QBFs for merge resolution
Beyersdorff, Olaf, Blinkhorn, Joshua, Mahajan, Meena, Peitl, Tomáš, Sood, Gaurav
Type: Article; In: ACM Transactions on Computation Theory
We prove the first genuine QBF proof size lower bounds for the proof system Merge Resolution (MRes [7]), a refutational proof system for prenex quantified Boolean formulas (QBF) with a CNF matrix. Unlike most QBF resolution systems in the literature, proofs in MRes consist of resolution steps together with information on countermodels, which are syntactically stored in the proofs as merge maps. As demonstrated in [7], this makes MRes quite powerful: it has strategy extraction by design and allows short proofs for formulas which are hard for classical QBF resolution systems. Here we show the first genuine QBF exponential lower bounds for MRes, thereby uncovering limitations of MRes. Technically, the results are either transferred from bounds from circuit complexity (for restricted versions of MRes) or directly obtained by combinatorial arguments (for full MRes). Our results imply that the MRes approach is largely orthogonal to other QBF resolution models such as the QCDCL resolution systems QRes and QURes and the expansion systems ∀Exp + Res and IR.

Link to Repositum

Hardness Characterisations and Size-width Lower Bounds for QBF Resolution
Beyersdorff, Olaf, Blinkhorn, Joshua, Mahajan, Meena, Peitl, Tomáš
Type: Article; In: ACM Transactions on Computational Logic; Vol: 24; Issue: 2
We provide a tight characterisation of proof size in resolution for quantified Boolean formulas (QBF) via circuit complexity. Such a characterisation was previously obtained for a hierarchy of QBF Frege systems [16], but leaving open the most important case of QBF resolution. Different from the Frege case, our characterisation uses a new version of decision lists as its circuit model, which is stronger than the CNFs the system works with. Our decision list model is well suited to compute countermodels for QBFs. Our characterisation works for both Q-Resolution and QU-Resolution. Using our characterisation, we obtain a size-width relation for QBF resolution in the spirit of the celebrated result for propositional resolution [4]. However, our result is not just a replication of the propositional relation—intriguingly ruled out for QBF in previous research [12]—but shows a different dependence between size, width, and quantifier complexity. An essential ingredient is an improved relation between the size and width of term decision lists; this may be of independent interest. We demonstrate that our new technique elegantly reproves known QBF hardness results and unifies previous lower-bound techniques in the QBF domain.

Link to Repositum

Maximizing Social Welfare in Score-Based Social Distance Games
Ganian, Robert, Hamm, Thekla, Knop, Dusan, Roy, Sanjukta, Schierreich, Šimon, Suchý, Ondřej
Type: Inproceedings; In: Proceedings Nineteenth conference on Theoretical Aspects of Rationality and Knowledge; Vol: 379; Pages: 272-286
Social distance games have been extensively studied as a coalition formation model where the utilities of agents in each coalition were captured using a utility function u that took into account distances in a given social network. In this paper, we consider a non-normalized score-based definition of social distance games where the utility function u_v depends on a generic scoring vector v, which may be customized to match the specifics of each individual application scenario. As our main technical contribution, we establish the tractability of computing a welfare-maximizing partitioning of the agents into coalitions on tree-like networks, for every score-based function u_v. We provide more efficient algorithms when dealing with specific choices of u_v or simpler networks, and also extend all of these results to computing coalitions that are Nash stable or individually rational. We view these results as a further strong indication of the usefulness of the proposed score-based utility function: even on very simple networks, the problem of computing a welfare-maximizing partitioning into coalitions remains open for the originally considered canonical function u.

Link to Repositum

Hedonic Games With Friends, Enemies, and Neutrals: Resolving Open Questions and Fine-Grained Complexity
Chen, Jiehua, Csáji, Gergely, Roy, Sanjukta, Simola, Sofia Henna Elisa
Type: Inproceedings; In: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems; Pages: 251-259
We investigate verification and existence problems for prominent stability concepts in hedonic games with friends, enemies, and optionally with neutrals [8, 15]. We resolve several (long-standing) open questions [4, 15, 19, 22] and show that for friend-oriented preferences, under the friends and enemies model, it is coNP-complete to verify whether a given agent partition is (strictly) core stable, while under the friends, enemies, and neutrals model, it is NP-complete to determine whether an individual stable partition exists. We further look into natural restricted cases from the literature, such as when the friends and enemies relationships are symmetric, when the initial coalitions have bounded size, when the vertex degree in the friendship graph (resp. the union of friendship and enemy graph) is bounded, or when such graph is acyclic or close to being acyclic. We obtain a complete (parameterized) complexity picture regarding these cases.

Link to Repositum

IPASIR-UP: User Propagators for CDCL
Fazekas, Katalin, Niemetz, Aina, Preiner, Mathias, Kirchweger, Markus, Szeider, Stefan, Biere, Armin
Type: Inproceedings; In: 26th International Conference on Theory and Applications of Satisfiability Testing; Vol: 271; Pages: 8:1-8:13
Modern SAT solvers are frequently embedded as sub-reasoning engines into more complex tools for addressing problems beyond the Boolean satisfiability problem. Examples include solvers for Satisfiability Modulo Theories (SMT), combinatorial optimization, model enumeration and counting. In such use cases, the SAT solver is often able to provide relevant information beyond the satisfiability answer. Further, domain knowledge of the embedding system (e.g., symmetry properties or theory axioms) can be beneficial for the CDCL search, but cannot be efficiently represented in clausal form. In this paper, we propose a general interface to inspect and influence the internal behaviour of CDCL SAT solvers. Our goal is to capture the most essential functionalities that are sufficient to simplify and improve use cases that require a more fine-grained interaction with the SAT solver than provided via the standard IPASIR interface. For our experiments, we extend CaDiCaL with our interface and evaluate it on two representative use cases: enumerating graphs within the SAT modulo Symmetries framework (SMS), and as the main CDCL(T) SAT engine of the SMT solver cvc5.

Link to Repositum

A logic-based algorithmic meta-theorem for mim-width
Bergougnoux, Benjamin, Dreier, Jan, Jaffke, Lars
Type: Inproceedings; In: Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA23); Pages: 3282-3304
We introduce a logic called distance neighborhood logic with acyclicity and connectivity constraints (A&C DN for short) which extends existential MSO1 with predicates for querying neighborhoods of vertex sets in various powers of a graph and for verifying connectivity and acyclicity of vertex sets. Building upon [Bergougnoux and Kanté, ESA 2019; SIDMA 2021], we show that the model checking problem for every fixed A&C DN formula is solvable in n^(O(w)) time when the input graph is given together with a branch decomposition of mim-width w. Nearly all problems that are known to be solvable in polynomial time given a branch decomposition of constant mim-width can be expressed in this framework. We add several natural problems to this list, including problems asking for diverse sets of solutions. Our model checking algorithm is efficient whenever the given branch decomposition of the input graph has small index in terms of the d-neighborhood equivalence [Bui-Xuan, Telle, and Vatshelle, TCS 2013]. We therefore unify and extend known algorithms for tree-width, clique-width and rank-width. Our algorithm has a single-exponential dependence on these three width measures and asymptotically matches run times of the fastest known algorithms for several problems. This results in algorithms with tight run times under the Exponential Time Hypothesis (ETH) for tree-width, clique-width and rank-width; the above mentioned run time for mim-width is nearly tight under the ETH for several problems as well. Our results are also tight in terms of the expressive power of the logic: we show that already slight extensions of our logic make the model checking problem para-NP-hard when parameterized by mim-width plus formula length.

Link to Repositum

Interactive Job Scheduling with Partially Known Personnel Availabilities
Varga, Johannes, Raidl, Günther R., Rönnberg, Elina, Rodemann, Tobias
Type: Inproceedings; In: Optimization and Learning: 6th International Conference, OLA 2023, Malaga, Spain, May 3–5, 2023, Proceedings; Vol: 1824; Pages: 236-247
When solving a job scheduling problem that involves humans, the times in which they are available must be taken into account. For practical acceptance of a scheduling tool, it is further crucial that the interaction with the humans is kept simple and to a minimum. Requiring users to fully specify their availability times is typically not reasonable. We consider a scenario in which initially users only suggest single starting times for their jobs and an optimized schedule shall then be found within a small number of interaction rounds. In each round users may only be suggested a small set of alternative time intervals, which are accepted or rejected. To make the best out of these limited interaction possibilities, we propose an approach that utilizes integer linear programming and a theoretically derived probability calculation for the users’ availabilities based on a Markov model. Educated suggestions of alternative time intervals for performing jobs are determined from these acceptance probabilities as well as the optimization’s current state. The approach is experimentally evaluated and compared to diverse baselines. Results show that an initial schedule can be quickly improved over few interaction rounds, and the final schedule may come close to the solution of the full-knowledge case despite the limited interaction.

Link to Repositum

Fixed-Parameter Algorithms for Computing RAC Drawings of Graphs
Brand, Cornelius, Ganian, Robert, Röder, Sebastian, Schager, Florian
Type: Inproceedings; In: Graph Drawing and Network Visualization : 31st International Symposium, GD 2023, Isola delle Femmine, Palermo, Italy, September 20–22, 2023, Revised Selected Papers, Part II; Vol: 14466; Pages: 66-81
In a right-angle crossing (RAC) drawing of a graph, each edge is represented as a polyline and edge crossings must occur at an angle of exactly 90°, where the number of bends on such polylines is typically restricted in some way. While structural and topological properties of RAC drawings have been the focus of extensive research, little was known about the boundaries of tractability for computing such drawings. In this paper, we initiate the study of RAC drawings from the viewpoint of parameterized complexity. In particular, we establish that computing a RAC drawing of an input graph G with at most b bends (or determining that none exists) is fixed-parameter tractable parameterized by either the feedback edge number of G, or b plus the vertex cover number of G.

Link to Repositum

A Structural Complexity Analysis of Synchronous Dynamical Systems
Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Korchemna, Viktoriia
Type: Inproceedings; In: Proceedings of the 37th AAAI Conference on Artificial Intelligence; Vol: 37, 5; Pages: 6313-6321
Show Abstract
Synchronous dynamical systems are well-established models that have been used to capture a range of phenomena in networks, including opinion diffusion, spread of disease and product adoption. We study the three most notable problems in synchronous dynamical systems: whether the system will transition to a target configuration from a starting configuration, whether the system will reach convergence from a starting configuration, and whether the system is guaranteed to converge from every possible starting configuration. While all three problems were known to be intractable in the classical sense, we initiate the study of their exact boundaries of tractability from the perspective of structural parameters of the network by making use of the more fine-grained parameterized complexity paradigm. As our first result, we consider treewidth - as the most prominent and ubiquitous structural parameter - and show that all three problems remain intractable even on instances of constant treewidth. We complement this negative finding with fixed-parameter algorithms for the former two problems parameterized by treedepth, a well-studied restriction of treewidth. While it is possible to rule out a similar algorithm for convergence guarantee under treedepth, we conclude with a fixed-parameter algorithm for this last problem when parameterized by treedepth and the maximum in-degree.

Link to Repositum

Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth
Bergougnoux, Benjamin, Chekan, Vera, Ganian, Robert, Kanté, Mamadou M., Mnich, Matthias, Oum, Sang-il, Pilipczuk, Michał, van Leeuwen, Erik Jan
Type: Inproceedings; In: 31st Annual European Symposium on Algorithms, ESA 2023; Vol: 274; Pages: 1-18
Show Abstract
Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition’s width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. More precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth d and using k labels, Independent Set can be solved in time 2O(dk) · nO(1) using O(dk2 log n) space; Max Cut can be solved in time nO(dk) using O(dk log n) space; and Dominating Set can be solved in time 2O(dk) · nO(1) using nO(1) space via a randomized algorithm. We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.

Link to Repositum

Structure-Aware Lower Bounds and Broadening the Horizon of Tractability for QBF
Fichte, Johannes K., Ganian, Robert, Hecher, Markus, Slivovsky, Friedrich, Ordyniak, Sebastian
Type: Inproceedings; In: 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS); Pages: 1-14
Show Abstract
The QSAT problem, which asks to evaluate a quantified Boolean formula (QBF), is of fundamental interest in approximation, counting, decision, and probabilistic complexity and is also considered the prototypical PSPACE-complete problem. As such, it has previously been studied under various structural restrictions (parameters), most notably parameterizations of the primal graph representation of instances. Indeed, it is known that QSAT remains PSPACE-complete even when restricted to instances with constant treewidth of the primal graph, but the problem admits a double-exponential fixed-parameter algorithm parameterized by the vertex cover number (primal graph). However, prior works have left a gap in our understanding of the complexity of QSAT when viewed from the perspective of other natural representations of instances, most notably via incidence graphs. In this paper, we develop structure-aware reductions which allow us to obtain essentially tight lower bounds for highly restricted instances of QSAT, including instances whose incidence graphs have bounded treedepth or feedback vertex number. We complement these lower bounds with novel algorithms for QSAT which establish a nearly-complete picture of the problem's complexity under standard graph-theoretic parameterizations. We also show implications for other natural graph representations, and obtain novel upper as well as lower bounds for QSAT under more fine-grained parameterizations of the primal graph.

Link to Repositum

A Polyhedral Perspective on Tropical Convolutions
Brand, Cornelius, Koutecký, Martin, Lassota, Alexandra
Type: Inproceedings; In: Combinatorial Algorithms : 34th International Workshop, IWOCA 2023, Tainan, Taiwan, June 7–10, 2023, Proceedings; Vol: 13889; Pages: 111-122
Show Abstract
Tropical (or min-plus) convolution is a well-studied algorithmic primitive in fine-grained complexity. We exhibit a novel connection between polyhedral formulations and tropical convolution, through which we arrive at a dual variant of tropical convolution. We show this dual operation to be equivalent to primal convolutions. This leads us to considering the geometric objects that arise from dual tropical convolution as a new approach to algorithms and lower bounds for tropical convolutions. In particular, we initiate the study of their extended formulations.

Link to Repositum

Fixed-parameter tractability of DIRECTED MULTICUT with three terminal pairs parameterized by the size of the cutset: twin-width meets flow-augmentation
Hatzel, Meike, Jaffke, Lars, Lima Barbosa, Cláudia Paloma, Masařík, Tomáš, Pilipczuk, Marcin, Sharma, Roohani, Sorge, Manuel
Type: Inproceedings; In: Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA23); Pages: 3229-3244
Show Abstract
We show fixed-parameter tractability of the Directed Multicut problem with three terminal pairs (with a randomized algorithm). In this problem we are given a directed graph G, three pairs of vertices (called terminals) (s1, t1), (s2, t2), (s3, t3), and an integer k, and we want to find a set of at most k non-terminal vertices in G that intersect all s1t1-paths, all s2t2-paths, and all s3t3-paths. The parameterized complexity of this problem has been open since Chitnis, Hajiaghayi, and Marx proved fixed-parameter tractability of the two-terminal-pairs case at SODA 2012, and Pilipczuk and Wahlström proved the W[1]-hardness of the four-terminal-pairs case at SODA 2016. On the technical side, we use two recent developments in parameterized algorithms. Using the technique of directed flow-augmentation [Kim, Kratsch, Pilipczuk, Wahlström, STOC 2022] we cast the problem as a CSP problem with few variables and constraints over a large ordered domain. We observe that this problem can be in turn encoded as an FO model-checking task over a structure consisting of a few 0-1 matrices. We look at this problem through the lenses of twin-width, a recently introduced structural parameter [Bonnet, Kim, Thomassé, Watrigant, FOCS 2020]: By a recent characterization [Bonnet, Giocanti, Ossona de Mendez, Simon, Thomassé, Toruńczyk, STOC 2022] the said FO model-checking task can be done in FPT time if the said matrices have bounded grid rank. To complete the proof, we show an irrelevant vertex rule: If any of the matrices in the said encoding has a large grid minor, a vertex corresponding to the “middle” box in the grid minor can be proclaimed irrelevant — not contained in the sought solution — and thus reduced.

Link to Repositum

Worbel: aggregating point labels into word clouds
Bhore, Sujoy, Ganian, Robert, Li, Guangping, Nöllenburg, Martin, Wulms, Jules
Type: Article; In: ACM Transactions on Spatial Algorithms and Systems; Vol: 9; Issue: 3
Show Abstract
Point feature labeling is a classical problem in cartography and GIS that has been extensively studied for geospatial point data. At the same time, word clouds are a popular visualization tool to show the most important words in text data which has also been extended to visualize geospatial data (Buchin et al. PacificVis 2016). In this article, we study a hybrid visualization, which combines aspects of word clouds and point labeling. In the considered setting, the input data consist of a set of points grouped into categories and our aim is to place multiple disjoint and axis-aligned rectangles, each representing a category, such that they cover points of (mostly) the same category under some natural quality constraints. In our visualization, we then place category names inside the computed rectangles to produce a labeling of the covered points which summarizes the predominant categories globally (in a word-cloud-like fashion) while locally avoiding excessive misrepresentation of points (i.e., retaining the precision of point labeling). We show that computing a minimum set of such rectangles is NP-hard. Hence, we turn our attention to developing a heuristic with (optional) exact components using SAT models to compute our visualizations. We evaluate our algorithms quantitatively, measuring running time and quality of the produced solutions, on several synthetic and real-world data sets. Our experiments show that the fully heuristic approach produces solutions of comparable quality to heuristics combined with exact SAT models, while running much faster.

Link to Repositum

Untangling circular drawings: Algorithms and complexity
Bhore, Sujoy, Li, Guangping, Nöllenburg, Martin, Rutter, Ignaz, Wu, Hsiang-Yun
Type: Article; In: Computational Geometry; Vol: 111
Show Abstract
We consider the problem of untangling a given (non-planar) straight-line circular drawing δG of an outerplanar graph G=(V,E) into a planar straight-line circular drawing of G by shifting a minimum number of vertices to a new position on the circle. For an outerplanar graph G, such a crossing-free circular drawing always exists, and we define the circular shifting number shift∘(δG) as the minimum number of vertices that need to be shifted in order to resolve all crossings of δG. We show that the problem CIRCULAR UNTANGLING, asking whether shift∘(δG) ≤ K for a given integer K, is NP-complete. For n-vertex outerplanar graphs, we obtain a tight upper bound of shift∘(δG) ≤ n − ⌊√(n−2)⌋ − 2. Moreover, we study CIRCULAR UNTANGLING for almost-planar circular drawings, in which a single edge is involved in all of the crossings. For this problem, we also provide a tight upper bound and present an O(n²)-time algorithm to compute the circular shifting number of almost-planar drawings.

Link to Repositum

Extending Orthogonal Planar Graph Drawings Is Fixed-Parameter Tractable
Bhore, Sujoy, Ganian, Robert, Khazaliya, Liana, Montecchiani, Fabrizio, Nöllenburg, Martin
Type: Inproceedings; In: 39th International Symposium on Computational Geometry; Vol: 258; Pages: 1-16
Show Abstract
The task of finding an extension to a given partial drawing of a graph while adhering to constraints on the representation has been extensively studied in the literature, with well-known results providing efficient algorithms for fundamental representations such as planar and beyond-planar topological drawings. In this paper, we consider the extension problem for bend-minimal orthogonal drawings of planar graphs, which is among the most fundamental geometric graph drawing representations. While the problem was known to be NP-hard, it is natural to consider the case where only a small part of the graph is still to be drawn. Here, we establish the fixed-parameter tractability of the problem when parameterized by the size of the missing subgraph. Our algorithm is based on multiple novel ingredients which intertwine geometric and combinatorial arguments. These include the identification of a new graph representation of bend-equivalent regions for vertex placement in the plane, establishing a bound on the treewidth of this auxiliary graph, and a global point-grid that allows us to discretize the possible placement of bends and vertices into locally bounded subgrids for each of the above regions.

Link to Repositum

On the upward book thickness problem: Combinatorial and complexity results
Bhore, Sujoy, Da Lozzo, Giordano, Montecchiani, Fabrizio, Nöllenburg, Martin
Type: Article; In: European Journal of Combinatorics; Vol: 110
Show Abstract
Among the vast literature concerning graph drawing and graph theory, linear layouts of graphs have been the subject of intense research over the years, both from a combinatorial and from an algorithmic perspective. In particular, upward book embeddings of directed acyclic graphs (DAGs) form a popular class of linear layouts with notable applications, and the upward book thickness of a DAG is the minimum number of pages required by any of its upward book embeddings. A long-standing conjecture by Heath, Pemmaraju, and Trenk (1999) states that the upward book thickness of outerplanar DAGs is bounded above by a constant. In this paper, we show that the conjecture holds for subfamilies of upward outerplanar graphs, namely those whose underlying graph is an internally-triangulated outerpath or a cactus, and those whose biconnected components are st-outerplanar graphs. On the complexity side, it is known that deciding whether a graph has upward book thickness k is NP-hard for any fixed k≥3. We show that the problem, for any k≥5, remains NP-hard for graphs whose domination number is O(k), but it is fixed-parameter tractable (FPT) in the vertex cover number.

Link to Repositum

SAT-Based Generation of Planar Graphs
Kirchweger, Markus, Scheucher, Manfred, Szeider, Stefan
Type: Inproceedings; In: 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023); Vol: 271
Show Abstract
To test a graph's planarity in SAT-based graph generation we develop SAT encodings with dynamic symmetry breaking as facilitated in the SAT modulo Symmetry (SMS) framework. We implement and compare encodings based on three planarity criteria. In particular, we consider two eager encodings utilizing order-based and universal-set-based planarity criteria, and a lazy encoding based on Kuratowski's theorem. The performance and scalability of these encodings are compared on two prominent problems from combinatorics: The computation of planar Turán numbers and the Earth-Moon problem. We further showcase the power of SMS equipped with a planarity encoding by verifying and extending several integer sequences from the Online Encyclopedia of Integer Sequences (OEIS) related to planar graph enumeration. Furthermore, we extend the SMS framework to directed graphs which might be of independent interest.

Link to Repositum

On Computing Optimal Tree Ensembles
Komusiewicz, Christian, Kunz, Pascal, Sommer, Frank, Sorge, Manuel
Type: Inproceedings; In: Proceedings of the 40th International Conference on Machine Learning; Vol: 202; Pages: 17364-17374
Show Abstract
Random forests and, more generally, (decision-)tree ensembles are widely used methods for classification and regression. Recent algorithmic advances allow the computation of decision trees that are optimal for various measures such as their size or depth. We are not aware of such research for tree ensembles and aim to contribute to this area. Mainly, we provide two novel algorithms and corresponding lower bounds. First, we are able to carry over and substantially improve on tractability results for decision trees, obtaining a (6δDS)^S · poly-time algorithm, where S is the number of cuts in the tree ensemble, D the largest domain size, and δ the largest number of features in which two examples differ. To achieve this, we introduce the witness-tree technique, which also seems promising for practice. Second, we show that dynamic programming, which has been successful for decision trees, may also be viable for tree ensembles, providing an ℓ^n · poly-time algorithm, where ℓ is the number of trees and n the number of examples. Finally, we compare the number of cuts necessary to classify training data sets for decision trees and tree ensembles, showing that ensembles may need exponentially fewer cuts for increasing number of trees.

Link to Repositum

Splitting Vertices in 2-Layer Graph Drawings
Ahmed, Reyan, Angelini, Patrizio, Bekos, Michael A., Battista, Giuseppe Di, Kaufmann, Michael, Kindermann, Philipp, Kobourov, Stephen, Nöllenburg, Martin, Symvonis, Antonios, Villedieu, Anais, Wallinger, Markus
Type: Article; In: IEEE Computer Graphics and Applications; Vol: 43; Issue: 3; Pages: 24-35
Show Abstract
Bipartite graphs model the relationships between two disjoint sets of entities in several applications and are naturally drawn as 2-layer graph drawings. In such drawings, the two sets of entities (vertices) are placed on two parallel lines (layers), and their relationships (edges) are represented by segments connecting vertices. Methods for constructing 2-layer drawings often try to minimize the number of edge crossings. We use vertex splitting to reduce the number of crossings, by replacing selected vertices on one layer by two (or more) copies and suitably distributing their incident edges among these copies. We study several optimization problems related to vertex splitting, either minimizing the number of crossings or removing all crossings with fewest splits. While we prove that some variants are NP-complete, we obtain polynomial-time algorithms for others. We run our algorithms on a benchmark set of bipartite graphs representing the relationships between human anatomical structures and cell types.

Link to Repositum

The st-Planar Edge Completion Problem Is Fixed-Parameter Tractable
Khazaliya, Liana, Kindermann, Philipp, Liotta, Giuseppe, Montecchiani, Fabrizio, Simonov, Kirill
Type: Inproceedings; In: 34th International Symposium on Algorithms and Computation (ISAAC 2023); Vol: 283; Pages: 1-13
Show Abstract
The problem of deciding whether a biconnected planar digraph G = (V, E) can be augmented to become an st-planar graph by adding a set of oriented edges E′ ⊆ V × V is known to be NP-complete. We show that the problem is fixed-parameter tractable when parameterized by the size of the set E′.

Link to Repositum

2022
An efficient algorithm for counting Markov equivalent DAGs
Ganian, Robert, Hamm, Thekla, Talvitie, Topi
Type: Article; In: Artificial Intelligence; Vol: 304; Pages: 1-13
Show Abstract
We consider the problem of counting the number of DAGs which are Markov equivalent, i.e., which encode the same conditional independencies between random variables. The problem has been studied, among others, in the context of causal discovery, and it is known that it reduces to counting the number of so-called moral acyclic orientations of certain undirected graphs, notably chordal graphs. Our main empirical contribution is a new algorithm which outperforms previously known exact algorithms for the considered problem by a significant margin. On the theoretical side, we show that our algorithm is guaranteed to run in polynomial time on a broad class of chordal graphs that is recognisable in cubic time and includes interval graphs.

Link to Repositum

SLIM- SAT-based Local Improvement
Szeider, Stefan
Type: Presentation

Link to Repositum

The Parameterized Complexity of SAT
Szeider, Stefan
Type: Presentation

Link to Repositum

Pedant: A Certifying DQBF Solver
Reichl, Franz Xaver, Slivovsky, Friedrich
Type: Inproceedings; In: 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022); Vol: 236; Pages: 1-10
Show Abstract
Pedant is a solver for Dependency Quantified Boolean Formulas (DQBF) that combines propositional definition extraction with Counterexample-Guided Inductive Synthesis (CEGIS) to compute a model of a given formula. Pedant 2 improves upon an earlier version in several ways. We extend the notion of dependencies by allowing existential variables to depend on other existential variables. This leads to more and smaller definitions, as well as more concise repairs for counterexamples. Additionally, we reduce counterexamples by determining minimal separators in a conflict graph, and use decision tree learning to obtain default functions for undetermined variables. An experimental evaluation on standard benchmarks showed a significant increase in the number of solved instances compared to the previous version of our solver.

Link to Repositum

Learning Fast-Inference Bayesian Networks
Peruvemba Ramaswamy, Vaidyanathan, Szeider, Stefan
Type: Inproceedings; In: Advances in Neural Information Processing Systems 34 (NeurIPS 2021); Vol: 34
Show Abstract
We propose new methods for learning Bayesian networks (BNs) that reliably support fast inference. We utilize maximum state space size as a more fine-grained measure for the BN's reasoning complexity than the standard treewidth measure, thereby accommodating the possibility that variables range over domains of different sizes. Our methods combine heuristic BN structure learning algorithms with the recently introduced MaxSAT-powered local improvement method (Peruvemba Ramaswamy and Szeider, AAAI'21). Our experiments show that our new learning methods produce BNs that support significantly faster exact probabilistic inference than BNs learned with treewidth bounds.

Link to Repositum

Learning Beam Search: Utilizing Machine Learning for Solving Combinatorial Optimization Problems
Raidl, Günther
Type: Presentation

Link to Repositum

A SAT Approach to Twin-Width
Schidler, André, Szeider, Stefan
Type: Inproceedings; In: 2022 Proceedings of the Symposium on Algorithm Engineering and Experiments (ALENEX); Pages: 67-77

Link to Repositum

Learning for Guiding Metaheuristics
Raidl, Günther
Type: Presentation

Link to Repositum

A Multilevel Optimization Approach for Large Scale Battery Exchange Station Location Planning
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther
Type: Presentation
Show Abstract
We consider the Multi-Period Battery Swapping Station Location Problem (MBSSLP), where the setup costs for battery swapping stations should be minimized while at the same time a certain amount of customer demand should be satisfied. As not every customer is willing to travel to a predestined station, the MBSSLP also considers a certain customer dropout dependent on the length of the detour induced by traveling to an assigned station. In a previous approach, a large neighborhood search was developed for solving MBSSLP instances with up to roughly 2000 potential locations at which battery swapping stations can be placed and 8000 origin-destination (O/D) pairs that describe the trips of customers. However, real-world instances can be orders of magnitude larger. Finding good solutions for such large instances is in general a difficult task, even for metaheuristics, and one often resorts to clustering, refinement, or partitioning approaches that reduce the problem size or decompose it into smaller subproblems. We propose a multilevel optimization (MLO) approach for addressing large scale MBSSLP instances. The basic idea of this approach is to first reduce the problem size by iteratively coarsening an underlying problem represented via a graph until the reduced problem can be solved in reasonable time. Then, a solution for the coarsest problem is generated. Afterwards, a solution to the original problem is obtained by refining the graph, i.e., by iteratively undoing the coarsening, and extending the solution accordingly. Our approach is experimentally evaluated on artificial benchmark scenarios. We evaluate different strategies for coarsening the problem graph and show that our MLO approach can generate reasonable solutions for up to tens of thousands of potential station areas and hundreds of thousands of O/D pairs within one hour.

Link to Repositum

Representing Normative Reasoning in Answer Set Programming Using Weak Constraints
Hatschka, Christian, Ciabattoni, Agata, Eiter, Thomas
Type: Presentation

Link to Repositum

A note on algebraic techniques for subgraph detection
Brand, Cornelius
Type: Article; In: Information Processing Letters; Vol: 176
Show Abstract
The k-path problem asks whether a given graph contains a simple path of length k. Along with other prominent parameterized problems, it reduces to the problem of detecting multilinear terms of degree k (k-MLD), making the latter a fundamental problem in parameterized algorithms. This has generated significant efforts directed at devising fast deterministic algorithms for k-MLD, and there are now at least two independent approaches that yield the same record bound on the running time. Namely the combinatorial representative-set based approach of Fomin et al. (JACM'16), and the algebraic techniques of Pratt (FOCS'19) and Brand and Pratt (ICALP'21). In this note, we explore the relationship between the latter results, based on partial differentials, and a previous algebraic approach based on the exterior algebra (Brand, ESA'19; Brand, Dell and Husfeldt, STOC'18). We do so by studying the relevant algebraic objects. These are on the one hand (1) the subalgebras of the tensor square of the exterior algebra generated in degree one. On the other hand, we consider (2) the space of partial derivatives of generic determinants, and closely related, (3) the space of minors of generic matrices. We prove that (2) arises as a quotient of (1), and that there is an isomorphism between the objects (1) and (3). Hence, the techniques are essentially equivalent, and the quotient relation between (2) and both of (1) and (3) hints at a possible refinement of the techniques.

Link to Repositum

QBF Solvers and their Proof Complexity
Slivovsky, Friedrich
Type: Presentation

Link to Repositum

Convex Grid Drawings of Planar Graphs with Constant Edge-Vertex Resolution
Bekos, Michael A., Gronemann, Martin, Montecchiani, Fabrizio, Symvonis, Antonios
Type: Inproceedings; In: Combinatorial Algorithms; Vol: 13270; Pages: 157-171
Show Abstract
We continue the study of the area requirement of convex straight-line grid drawings of 3-connected plane graphs, which has been intensively investigated in the last decades. Motivated by applications, such as graph editors, we additionally require the obtained drawings to have bounded edge-vertex resolution, that is, the closest distance between a vertex and any non-incident edge is lower bounded by a constant that does not depend on the size of the graph. We present a drawing algorithm that takes as input a 3-connected plane graph with n vertices and f internal faces and computes a convex straight-line drawing with edge-vertex resolution at least 1/2 on an integer grid of size (n − 2 + a) × (n − 2 + a), where a = min{n − 3, f}. Our result improves the previously best-known area bound of (3n − 7) × (3n − 7)/2 by Chrobak, Goodrich and Tamassia.

Link to Repositum

From Twin-Width to Propositional Logic and Back
Szeider, Stefan
Type: Presentation

Link to Repositum

SAT-based Local Improvement
Szeider, Stefan
Type: Presentation

Link to Repositum

On Computing Optimal Linear Diagrams
Dobler, Alexander, Nöllenburg, Martin
Type: Inproceedings; In: Diagrammatic Representation and Inference; Pages: 20-36
Show Abstract
Linear diagrams are an effective way to visualize set-based data by representing elements as columns and sets as rows with one or more horizontal line segments, whose vertical overlaps with other rows indicate set intersections and their contained elements. The efficacy of linear diagrams heavily depends on having few line segments. The underlying minimization problem has already been explored heuristically, but its computational complexity has yet to be classified. In this paper, we show that minimizing line segments in linear diagrams is equivalent to a well-studied NP-hard problem, and extend the NP-hardness to a restricted setting. We develop new algorithms for computing linear diagrams with a minimum number of line segments that build on a traveling salesperson (TSP) formulation and allow constraints on the element orders, namely, forcing two sets to be drawn as single line segments, giving weights to sets, and allowing hierarchical constraints via PQ-trees. We conduct an experimental evaluation and compare previous algorithms for minimizing line segments with our TSP formulation, showing that a state-of-the-art TSP solver can solve all considered instances optimally, most of them within a few milliseconds.

Link to Repositum

Edge-Cut Width: An Algorithmically Driven Analogue of Treewidth Based on Edge Cuts
Brand, Cornelius, Ceylan, Esra, Ganian, Robert, Hatschka, Christian, Korchemna, Viktoriia
Type: Inproceedings; In: Graph-Theoretic Concepts in Computer Science; Vol: 13453; Pages: 98-113
Show Abstract
Decompositional parameters such as treewidth are commonly used to obtain fixed-parameter algorithms for NP-hard graph problems. For problems that are W[1]-hard parameterized by treewidth, a natural alternative would be to use a suitable analogue of treewidth that is based on edge cuts instead of vertex separators. While tree-cut width has been coined as such an analogue of treewidth for edge cuts, its algorithmic applications have often led to disappointing results: out of twelve problems where one would hope for fixed-parameter tractability parameterized by an edge-cut based analogue to treewidth, eight were shown to be W[1]-hard parameterized by tree-cut width. As our main contribution, we develop an edge-cut based analogue to treewidth called edge-cut width. Edge-cut width is, intuitively, based on measuring the density of cycles passing through a spanning tree of the graph. Its benefits include not only a comparatively simple definition, but mainly that it has interesting algorithmic properties: it can be computed by a fixed-parameter algorithm, and it yields fixed-parameter algorithms for all the aforementioned problems where tree-cut width failed to do so.

Link to Repositum

Tractable Abstract Argumentation via Backdoor-Treewidth
Dvořák, Wolfgang, Hecher, Markus, König, Matthias, Schidler, Andre, Szeider, Stefan, Woltran, Stefan
Type: Inproceedings; In: Proceedings of the 36th AAAI Conference on Artificial Intelligence; Vol: 36; Pages: 5608-5615
Show Abstract
Argumentation frameworks (AFs) are a core formalism in the field of formal argumentation. As most standard computational tasks regarding AFs are hard for the first or second level of the Polynomial Hierarchy, a variety of algorithmic approaches to achieve manageable runtimes have been considered in the past. Among them, the backdoor-approach and the treewidth-approach turned out to yield fixed-parameter tractable fragments. However, many applications yield high parameter values for these methods, often rendering them infeasible in practice. We introduce the backdoor-treewidth approach for abstract argumentation, combining the best of both worlds with a guaranteed parameter value that does not exceed the minimum of the backdoor- and treewidth-parameter. In particular, we formally define backdoor-treewidth and establish fixed-parameter tractability for standard reasoning tasks of abstract argumentation. Moreover, we provide systems to find and exploit backdoors of small width, and conduct systematic experiments evaluating the new parameter.

Link to Repositum

Learning Value Functions for Same-Day Delivery Problems
Frohner, Nikolaus, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 18th International Conference on Computer Aided Systems Theory (EUROCAST 2022); Pages: 20-21

Link to Repositum

Threshold Treewidth and Hypertree Width
Ganian, Robert, Schidler, André, Sorge, Manuel, Szeider, Stefan
Type: Article; In: Journal of Artificial Intelligence Research; Vol: 74; Pages: 1687-1713
Show Abstract
Treewidth and hypertree width have proven to be highly successful structural parameters in the context of the Constraint Satisfaction Problem (CSP). When either of these parameters is bounded by a constant, then CSP becomes solvable in polynomial time. However, here the order of the polynomial in the running time depends on the width, and this is known to be unavoidable; therefore, the problem is not fixed-parameter tractable parameterized by either of these width measures. Here we introduce an enhancement of tree and hypertree width through a novel notion of thresholds, allowing the associated decompositions to take into account information about the computational costs associated with solving the given CSP instance. Aside from introducing these notions, we obtain efficient theoretical as well as empirical algorithms for computing threshold treewidth and hypertree width and show that these parameters give rise to fixed-parameter algorithms for CSP as well as other, more general problems. We complement our theoretical results with experimental evaluations in terms of heuristics as well as exact methods based on SAT/SMT encodings.

Link to Repositum

Planarizing Graphs and their Drawings by Vertex Splitting
Nickel, Soeren, Nöllenburg, Martin, Sorge, Manuel, Villedieu, Anais, Wu, Hsiang-Yun, Wulms, Jules
Type: Preprint
Show Abstract
The splitting number of a graph G = (V, E) is the minimum number of vertex splits required to turn G into a planar graph, where a vertex split removes a vertex v ∈ V, introduces two new vertices v1, v2, and distributes the edges formerly incident to v among its two split copies v1, v2. The splitting number problem is known to be NP-complete. In this paper we shift focus to the splitting number of graph drawings in ℝ², where the new vertices resulting from vertex splits can be re-embedded into the existing drawing of the remaining graph. We first provide a non-uniform fixed-parameter tractable (FPT) algorithm for the splitting number problem (without drawings). Then we show the NP-completeness of the splitting number problem for graph drawings, even for its two subproblems of (1) selecting a minimum subset of vertices to split and (2) re-embedding a minimum number of copies of a given set of vertices. For the latter problem we present an FPT algorithm parameterized by the number of vertex splits. This algorithm reduces to a bounded outerplanarity case and uses an intricate dynamic program on a sphere-cut decomposition.
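The split operation in the first sentence is concrete enough to state in code. Below is a minimal sketch of a single vertex split on an adjacency-set graph representation; the dictionary-of-sets encoding, the helper name `split_vertex`, and the convention of labelling the copies `(v, 1)` and `(v, 2)` are illustrative assumptions, not taken from the paper.

```python
def split_vertex(adj, v, part1, part2):
    """Perform one vertex split: remove v and distribute its former
    neighbours between two new copies (v, 1) and (v, 2).
    The two parts are assumed to cover the old neighbourhood of v."""
    assert set(part1) | set(part2) == adj[v]
    v1, v2 = (v, 1), (v, 2)
    # detach v from the graph
    for u in adj.pop(v):
        adj[u].discard(v)
    # attach each copy to its share of the old neighbourhood
    for copy, part in ((v1, part1), (v2, part2)):
        adj[copy] = set(part)
        for u in part:
            adj[u].add(copy)
    return v1, v2
```

Minimizing the number of such splits needed to reach planarity is the NP-complete problem the abstract studies.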

Link to Repositum

Discriminantal subset convolution: Refining exterior-algebraic methods for parameterized algorithms
Brand, Cornelius
Type: Article; In: Journal of Computer and System Sciences; Vol: 129; Pages: 62-71
Show Abstract
We give a simplified account of the recent algebraic methods obtained for the longest path problem of Brand, Dell and Husfeldt (STOC'18) and Brand (ESA'19). To this end, we introduce a new kind of subset convolution, discriminantal subset convolution, which we motivate as a distillate of exterior-algebraic operations. The algorithm in the new presentation achieves the almost competitive bound of 2.619^k · poly(n), first achieved by Fomin et al. (2016) [8], for deterministically finding paths of length k, while it allows for the same running time for the k-internal out-branching problem, improving upon Brand (ESA'19) and reproducing the state-of-the-art of Brand and Pratt (ICALP'21).

Link to Repositum

A Beam Search for the Shortest Common Supersequence Problem Guided by an Approximate Expected Length Calculation
Mayerhofer, Jonas, Kirchweger, Markus, Huber, Marc, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization; Vol: 13222; Pages: 127-142
Show Abstract
The shortest common supersequence problem (SCSP) is a well-known NP-hard problem with many applications, in particular in data compression, computational molecular biology, and text editing. It aims at finding for a given set of input strings a shortest string such that every string from the set is a subsequence of the computed string. Due to its NP-hardness, many approaches have been proposed to tackle the SCSP heuristically. The currently best-performing one is based on beam search (BS). In this paper, we present a novel heuristic (AEL) for guiding a BS, which approximates the expected length of an SCSP of random strings, and embed the proposed heuristic into a multilevel probabilistic beam search (MPBS). To overcome the arising scalability issue of the guidance heuristic, a cut-off approach is presented that reduces large instances to smaller ones. The proposed approaches are tested on two established sets of benchmark instances. On a set of real instances, MPBS guided by AEL outperforms the previously leading method on average, and for many instances new best-known solutions were obtained.

Link to Repositum

Transitions in Dynamic Map Labeling
Depian, Thomas, Li, Guangping, Nöllenburg, Martin, Wulms, Jules
Type: Presentation

Link to Repositum

47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)
Authors not available
Type: Proceedings

Link to Repositum

Computational Methods for Scheduling the Charging and Assignment of an On-Site Shared Electric Vehicle Fleet
Varga, Johannes, Raidl, Günther, Limmer, Steffen
Type: Article; In: IEEE Access; Vol: 10; Pages: 105786-105806
Show Abstract
We investigate a fleet scheduling problem arising when a company has to manage its own fleet of electric vehicles. The aim is to assign given usage reservations to these vehicles and to devise a suitable charging plan for all vehicles while minimizing a cost function. We formulate the problem as a compact mixed integer linear program, which we strengthen in several ways. As this model is hard to solve in practice, we perform a Benders decomposition, which separates the problem into a master problem and a subproblem and solves them iteratively in an alternating manner. We perform the decomposition in two different ways. First we follow a more classical approach; then we enrich the master problem, making it stronger but also more complex, and the subproblem smaller and simpler to solve. To improve the overall performance, we propose a problem-specific General Variable Neighborhood Search metaheuristic for solving the master problem in earlier iterations. Experimental results show that directly solving the complete mixed integer linear program usually performs well for small to medium-sized problem instances. For larger instances, however, it is not able to find any reasonable primal solutions anymore, while the Benders decomposition scales much better. The variant with the heuristic in particular delivers high-quality solutions in reasonable time. The Benders decomposition with the more complex master problem also yields reasonable dual bounds and thus practically relevant quality guarantees for the larger instances.

Link to Repositum

Learning Large Bayesian Networks with Expert Constraints
Peruvemba Ramaswamy, Vaidyanathan, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022); Vol: 180; Pages: 1592-1601
Show Abstract
We propose a new score-based algorithm for learning the structure of a Bayesian Network (BN). It is the first algorithm that simultaneously supports the requirements of (i) learning a BN of bounded treewidth, (ii) satisfying expert constraints, including positive and negative ancestry properties between nodes, and (iii) scaling up to BNs with several thousand nodes. The algorithm operates in two phases. In Phase 1, we utilize a modified version of an existing BN structure learning algorithm, modified to generate an initial Directed Acyclic Graph (DAG) that supports a portion of the given constraints. In Phase 2, we follow the BN-SLIM framework, introduced by Peruvemba Ramaswamy and Szeider (AAAI 2021). We improve the initial DAG by repeatedly running a MaxSAT solver on selected local parts. The MaxSAT encoding entails local versions of the expert constraints as hard constraints. We evaluate a prototype implementation of our algorithm on several standard benchmark sets. The encouraging results demonstrate the power and flexibility of the BN-SLIM framework. It boosts the score while increasing the number of satisfied expert constraints.

Link to Repositum

Unit Disk Representations of Embedded Trees, Outerplanar and Multi-Legged Graphs
Bhore, Sujoy, Löffler, Maarten, Nickel, Soeren, Nöllenburg, Martin
Type: Presentation

Link to Repositum

Multidimensional Manhattan Preferences
Chen, Jiehua, Nöllenburg, Martin, Simola, Sofia, Villedieu, Anaïs, Wallinger, Markus
Type: Inproceedings; In: LATIN 2022: Theoretical Informatics; Vol: 13568; Pages: 273-289
Show Abstract
A preference profile (i.e., a collection of linear preference orders of the voters over a set of alternatives) with m alternatives and n voters is d-Manhattan (resp. d-Euclidean) if both the alternatives and the voters can be placed into a d-dimensional space such that between each pair of alternatives, every voter prefers the one which has a shorter Manhattan (resp. Euclidean) distance to the voter. We initiate the study of how d-Manhattan preference profiles depend on the values m and n. First, we provide explicit constructions to show that each preference profile with m alternatives and n voters is d-Manhattan whenever d ≥ min(n, m − 1). Second, for d = 2, we show that the smallest non d-Manhattan preference profile has either 3 voters and 6 alternatives, or 4 voters and 5 alternatives, or 5 voters and 4 alternatives. This is more complex than the case with d-Euclidean preferences (see [Bogomolnaia and Laslier, 2007] and [Bulteau and Chen, 2022]).
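The defining condition is straightforward to verify for a candidate placement. The sketch below checks whether given d-dimensional positions realize a profile under Manhattan distance; the function names and the convention that rankings are strict (ties in distance are disallowed) are assumptions for illustration, not part of the paper.

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two d-dimensional points."""
    return sum(abs(a - b) for a, b in zip(p, q))

def realizes_profile(profile, voter_pos, alt_pos):
    """Check that each voter's ranking (most preferred first) orders the
    alternatives by strictly increasing Manhattan distance to the voter."""
    for voter, ranking in enumerate(profile):
        dists = [manhattan(voter_pos[voter], alt_pos[a]) for a in ranking]
        if not all(d1 < d2 for d1, d2 in zip(dists, dists[1:])):
            return False
    return True
```

A profile is d-Manhattan exactly when some choice of positions makes such a check succeed; the paper's constructions show this is always possible once d ≥ min(n, m − 1).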

Link to Repositum

A Relative Value Function Based Learning Beam Search for Longest Common Subsequence Problems
Huber, Marc, Raidl, Günther
Type: Inproceedings; In: Computer Aided Systems Theory - Extended Abstracts; Pages: 22-23

Link to Repositum

Learning Beam Search: Utilizing Machine Learning to Guide Beam Search for Solving Combinatorial Optimization Problems
Huber, Marc, Raidl, Günther
Type: Inproceedings; In: Machine Learning, Optimization, and Data Science; Vol: 13164; Pages: 283-298
Show Abstract
Beam search (BS) is a well-known incomplete breadth-first-search variant frequently used to find heuristic solutions to hard combinatorial optimization problems. Its key ingredient is a guidance heuristic that estimates the expected length (cost) to complete a partial solution. While this function is usually developed manually for a specific problem, we propose a more general Learning Beam Search (LBS) that uses a machine learning model for guidance. Learning is performed by utilizing principles of reinforcement learning: LBS generates training data on its own by performing nested BS calls on many representative randomly created problem instances. The general approach is tested on two specific problems, the longest common subsequence problem and the constrained variant thereof. Results on established sets of benchmark instances indicate that the BS with models trained via LBS is highly competitive. On many instances, new best-known solutions were obtained, making the approach a new state-of-the-art method for these problems and demonstrating the high potential of this general framework.
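The beam search loop that LBS plugs its learned guidance into can be sketched generically. Everything below, including the toy objective of building a length-3 string with as few 'b' characters as possible, is an invented illustration of the plain BS scheme, not the paper's LBS implementation.

```python
def beam_search(start, expand, is_complete, cost, beta):
    """Incomplete breadth-first search: at each level keep only the beta
    most promising partial solutions according to a guidance function
    (lower is better)."""
    beam, best = [start], None
    while beam:
        candidates = []
        for node in beam:
            for child in expand(node):
                if is_complete(child):
                    if best is None or cost(child) < cost(best):
                        best = child
                else:
                    candidates.append(child)
        # prune the next level down to the beam width
        beam = sorted(candidates, key=cost)[:beta]
    return best

# toy usage: shortest-"cost" string of length 3 over {'a', 'b'}
result = beam_search(
    start="",
    expand=lambda s: [s + "a", s + "b"],
    is_complete=lambda s: len(s) == 3,
    cost=lambda s: s.count("b"),
    beta=2,
)
```

LBS replaces the hand-written `cost` function with a trained model and generates its training targets via nested beam search runs.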

Link to Repositum

Learning for Guiding Metaheuristics
Raidl, Günther
Type: Presentation

Link to Repositum

An Algorithmic Study of Fully Dynamic Independent Sets for Map Labeling
Bhore, Sujoy, Li, Guangping, Nöllenburg, Martin
Type: Article; In: ACM Journal on Experimental Algorithmics; Vol: 27; Pages: 1-36
Show Abstract
Map labeling is a classical problem in cartography and geographic information systems that asks to place labels for area, line, and point features, with the goal of selecting and placing the maximum number of independent (i.e., overlap-free) labels. A practically interesting case is point labeling with axis-parallel rectangular labels of common size. In a fully dynamic setting, at each timestep, either a new label appears or an existing label disappears. Then, the challenge is to maintain a maximum cardinality subset of pairwise independent labels with sublinear update time. Motivated by this, we study the maximal independent set (MIS) and maximum independent set (Max-IS) problems on fully dynamic (insertion/deletion model) sets of axis-parallel rectangles of two types: (i) uniform height and width and (ii) uniform height and arbitrary width; both settings can be modeled as rectangle intersection graphs. We present the first deterministic algorithm for maintaining an MIS (and thus a 4-approximate Max-IS) of a dynamic set of uniform rectangles with polylogarithmic update time. This breaks the natural barrier of Ω(Δ) update time (where Δ is the maximum degree in the graph) for vertex updates presented by Assadi et al. (STOC 2018). We continue by investigating Max-IS and provide a series of deterministic dynamic approximation schemes. For uniform rectangles, we first give an algorithm that maintains a 4-approximate Max-IS with O(1) update time. In a subsequent algorithm, we establish the trade-off between approximation quality 2(1 + 1/k) and update time O(k² log n), for k ∈ ℕ. We conclude with an algorithm that maintains a 2-approximate Max-IS for dynamic sets of unit-height and arbitrary-width rectangles with O(log² n + ω log n) update time, where ω is the maximum size of an independent set of rectangles stabbed by any horizontal line.
We implement our algorithms and report the results of an experimental comparison exploring the trade-off between solution quality and update time for synthetic and real-world map labeling instances. We made several major observations in our empirical study. First, the original approximations are well above their respective worst-case ratios. Second, in comparison with the static approaches, the dynamic approaches show a significant speedup in practice. Third, the approximation algorithms show their predicted relative behavior: the better the solution quality, the worse the update times. Fourth, a simple greedy augmentation to the approximate solutions of the algorithms boosts the solution sizes significantly in practice.
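For intuition, the static problem underneath these dynamic algorithms can be approached with a textbook greedy scan over the rectangle intersection graph. The sketch below is such a baseline only; it bears no relation to the paper's polylogarithmic-update-time data structures, and the rectangle encoding and function names are assumptions.

```python
def overlap(a, b):
    """Open overlap test for axis-parallel rectangles (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def greedy_independent_set(rects):
    """Static greedy baseline: keep each rectangle that is independent of
    (does not overlap) all previously kept rectangles. The result is a
    maximal, though not necessarily maximum, independent set."""
    kept = []
    for r in rects:
        if all(not overlap(r, k) for k in kept):
            kept.append(r)
    return kept
```

Maintaining such a maximal set under insertions and deletions with sublinear update time is exactly the challenge the abstract addresses.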

Link to Repositum

Hedonic Diversity Games: A Complexity Picture with More than Two Colors
Ganian, Robert, Hamm, Thekla, Knop, Dušan, Schierreich, Šimon, Suchý, Ondřej
Type: Inproceedings; In: Proceedings of the 36th AAAI Conference on Artificial Intelligence; Vol: 36; Pages: 5034-5042
Show Abstract
Hedonic diversity games are a variant of the classical hedonic games designed to better model a variety of questions concerning diversity and fairness. Previous works mainly targeted the case with two diversity classes (represented as colors in the model) and provided a set of initial complexity-theoretic and existential results concerning Nash and individually stable outcomes. Here, we design new algorithms accompanied by lower bounds which provide a full parameterized-complexity picture for computing Nash and individually stable outcomes with respect to the most natural parameterizations of the problem. Crucially, our results hold for general hedonic diversity games where the number of colors is not necessarily restricted to two, and show that, apart from two trivial cases, a necessary condition for tractability in this setting is that the number of colors is bounded by the parameter. Moreover, for the special case of two colors we resolve an open question asked in previous work (Boehmer and Elkind, AAAI 2020).

Link to Repositum

Algorithmic applications of tree-cut width
Ganian, Robert, Kim, Eun Jung, Szeider, Stefan
Type: Article; In: SIAM Journal on Discrete Mathematics; Vol: 36; Issue: 4; Pages: 2635-2666
Show Abstract
The recently introduced graph parameter tree-cut width plays a similar role with respect to immersions as the graph parameter treewidth plays with respect to minors. In this paper, we provide the first algorithmic applications of tree-cut width to hard combinatorial problems. Tree-cut width is known to be lower-bounded by a function of treewidth, but it can be much larger and hence has the potential to facilitate the efficient solution of problems that are not known to be fixed-parameter tractable (FPT) when parameterized by treewidth. We introduce the notion of nice tree-cut decompositions and provide FPT algorithms for the showcase problems Capacitated Vertex Cover, Capacitated Dominating Set, and Imbalance parameterized by the tree-cut width of an input graph. On the other hand, we show that List Coloring, Precoloring Extension, and Boolean CSP (the last parameterized by the tree-cut width of the incidence graph) are W[1]-hard and hence unlikely to be FPT when parameterized by tree-cut width.

Link to Repositum

Sum-of-Products with Default Values: Algorithms and Complexity Results
Ganian, Robert, Kim, Eun Jung, Slivovsky, Friedrich, Szeider, Stefan
Type: Article; In: Journal of Artificial Intelligence Research; Vol: 73; Pages: 535-552
Show Abstract
Weighted Counting for Constraint Satisfaction with Default Values (#CSPD) is a powerful special case of the sum-of-products problem that admits succinct encodings of #CSP, #SAT, and inference in probabilistic graphical models. We investigate #CSPD under the fundamental parameter of incidence treewidth (i.e., the treewidth of the incidence graph of the constraint hypergraph). We show that if the incidence treewidth is bounded, #CSPD can be solved in polynomial time. More specifically, we show that the problem is fixed-parameter tractable for the combined parameter incidence treewidth, domain size, and support size (the maximum number of non-default tuples in a constraint). This generalizes known results on the fixed-parameter tractability of #CSPD under the combined parameter primal treewidth and domain size. We further prove that the problem is not fixed-parameter tractable if any of the three components is dropped from the parameterization.

Link to Repositum

Towards Uniform Certification in QBF
Chew, Leroy, Slivovsky, Friedrich
Type: Inproceedings; In: 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022); Vol: 219; Pages: 1-23
Show Abstract
We pioneer a new technique that allows us to prove a multitude of previously open simulations in QBF proof complexity. In particular, we show that extended QBF Frege p-simulates clausal proof systems such as IR-Calculus, IRM-Calculus, Long-Distance Q-Resolution, and Merge Resolution. These results are obtained by taking a technique of Beyersdorff et al. (JACM 2020) that turns strategy extraction into simulation and combining it with new local strategy extraction arguments. This approach leads to simulations that are carried out mainly in propositional logic, with minimal use of the QBF rules. Our proofs therefore provide a new, largely propositional interpretation of the simulated systems. We argue that these results strengthen the case for uniform certification in QBF solving, since many QBF proof systems now fall into place underneath extended QBF Frege.

Link to Repositum

SAT Backdoors: Depth Beats Size
Dreier, Jan, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: 30th Annual European Symposium on Algorithms (ESA 2022); Vol: 244; Pages: 1-18
Show Abstract
For several decades, much effort has been put into identifying classes of CNF formulas whose satisfiability can be decided in polynomial time. Classic results are the linear-time tractability of Horn formulas (Aspvall, Plass, and Tarjan, 1979) and Krom (i.e., 2CNF) formulas (Dowling and Gallier, 1984). Backdoors, introduced by Williams, Gomes and Selman (2003), gradually extend such a tractable class to all formulas of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a formula and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021), is a more refined distance measure, which admits the utilization of different backdoor variables in parallel. Bounded backdoor size implies bounded backdoor depth, but there are formulas of constant backdoor depth and arbitrarily large backdoor size. We propose FPT approximation algorithms to compute backdoor depth into the classes Horn and Krom. This leads to a linear-time algorithm for deciding the satisfiability of formulas of bounded backdoor depth into these classes. We base our FPT approximation algorithm on a sophisticated notion of obstructions, extending Mählmann et al.'s obstruction trees in various ways, including the addition of separator obstructions. We develop the algorithm through a new game-theoretic framework that simplifies the reasoning about backdoors. Finally, we show that bounded backdoor depth captures tractable classes of CNF formulas not captured by any known method.
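The linear-time tractability of Horn formulas cited above rests on unit propagation of forced positive literals. The sketch below, with clauses encoded as lists of signed integers (an assumed convention, not the paper's), illustrates the idea; for simplicity it re-scans all clauses and therefore runs in quadratic rather than linear time.

```python
def horn_sat(clauses):
    """Decide satisfiability of a Horn formula (each clause has at most
    one positive literal) by propagating forced positive literals."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            assert sum(l > 0 for l in clause) <= 1  # Horn property
            # skip clauses already satisfied by a true positive literal
            if any(l > 0 and l in true_vars for l in clause):
                continue
            # if every negative literal is falsified, the positive
            # literal (if any) is forced to be true
            if all(-l in true_vars for l in clause if l < 0):
                positives = [l for l in clause if l > 0]
                if not positives:
                    return False  # clause cannot be satisfied
                true_vars.add(positives[0])
                changed = True
    return True
```

Backdoors generalize this: once the backdoor variables are assigned, the remaining formula falls into such a tractable class.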

Link to Repositum

CSP Beyond Tractable Constraint Languages
Dreier, Jan, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: 28th International Conference on Principles and Practice of Constraint Programming; Vol: 235; Pages: 1-17
Show Abstract
The constraint satisfaction problem (CSP) is among the most studied computational problems. While NP-hard, many tractable subproblems have been identified (Bulatov 2017, Zhuk 2017). Backdoors, introduced by Williams, Gomes, and Selman (2003), gradually extend such a tractable class to all CSP instances of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a CSP instance and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021) for SAT, is a more refined distance measure, which admits the parallel utilization of different backdoor variables. Bounded backdoor size implies bounded backdoor depth, but there are instances of constant backdoor depth and arbitrarily large backdoor size. Dreier, Ordyniak, and Szeider (2022) provided fixed-parameter algorithms for finding backdoors of small depth into the classes of Horn and Krom formulas. In this paper, we consider backdoor depth for CSP. We consider backdoors w.r.t. tractable subproblems C_Γ of the CSP defined by a constraint language Γ, i.e., where all the constraints use relations from the language Γ. Building upon Dreier et al.'s game-theoretic approach and their notion of separator obstructions, we show that for any finite, tractable, semi-conservative constraint language Γ, the CSP is fixed-parameter tractable parameterized by the backdoor depth into C_Γ plus the domain size. With backdoors of low depth, we reach classes of instances that require backdoors of arbitrarily large size. Hence, our results strictly generalize several known results for CSP that are based on backdoor size.

Link to Repositum

The Complexity of Envy-Free Graph Cutting
Deligkas, Argyrios, Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence; Pages: 237-243
Show Abstract
We consider the problem of fairly dividing a set of heterogeneous divisible resources among agents with different preferences. We focus on the setting where the resources correspond to the edges of a connected graph, every agent must be assigned a connected piece of this graph, and the fairness notion considered is the classical envy freeness. The problem is NP-complete, and we analyze its complexity with respect to two natural complexity measures: the number of agents and the number of edges in the graph. While the problem remains NP-hard even for instances with 2 agents, we provide a dichotomy characterizing the complexity of the problem when the number of agents is constant based on structural properties of the graph. For the latter case, we design a polynomial-time algorithm when the graph has a constant number of edges.

Link to Repositum

Graph search and variable neighborhood search for finding constrained longest common subsequences in artificial and real gene sequences
Djukanović, Marko, Kartelj, Aleksandar, Matić, Dragan, Grbić, Milana, Blum, Christian, Raidl, Günther R.
Type: Article; In: Applied Soft Computing; Vol: 122
Show Abstract
We consider the constrained longest common subsequence problem with an arbitrary set of input strings as well as an arbitrary set of pattern strings. This problem has applications, for example, in computational biology where it serves as a measure of similarity for sets of molecules with putative structures in common. We contribute in several ways. First, it is formally proven that finding a feasible solution of arbitrary length is, in general, NP-complete. Second, we propose several heuristic approaches: a greedy algorithm, a beam search aiming for feasibility, a variable neighborhood search, and a hybrid of the latter two approaches. An exhaustive experimental study shows the effectiveness and differences of the proposed approaches with respect to finding a feasible solution, finding high-quality solutions, and runtime for both artificial and real-world instance sets. The latter ones are generated from a set of 12681 bacteria 16S rRNA gene sequences and consider 15 primer contigs as pattern strings.

Link to Repositum

A Large Neighborhood Search for Battery Swapping Station Location Planning for Electric Scooters
Jatschka, Thomas, Rauscher, Matthias, Kreutzer, Bernhard, Okamoto, Yusuke, Kataoka, Hiroaki, Rodemann, Tobias, Raidl, Günther R.
Type: Inproceedings; In: Computer Aided Systems Theory – EUROCAST 2022 : 18th International Conference, Las Palmas de Gran Canaria, Spain, February 20–25, 2022, Revised Selected Papers; Vol: 13789; Pages: 121-129
Show Abstract
We consider the Multi Objective Battery Swapping Station Location Problem (MOBSSLP) for planning the setup of new stations for exchanging depleted batteries of electric scooters with the aim of minimizing a three-part objective function while satisfying an expected amount of demand. Batteries returned at a station are charged and provided to customers again once they are full. We present a large neighborhood search (LNS) for solving MOBSSLP instances. The LNS makes use of a mixed integer linear program (MILP) to quickly find good solutions within a specified neighborhood. Multiple neighborhood structures given by pairs of destroy and repair operators are suggested. The proposed LNS is evaluated on instances generated by adapted approaches from the literature with up to 500 potential station locations and up to 1000 user trips. Solutions obtained from the LNS have on average ten to thirty percent better objective values on these instances than those obtained by a state-of-the-art MILP solver.

Link to Repositum

Finding a Cluster in Incomplete Data
Eiben, Eduard, Ganian, Robert, Kanj, Iyad, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: 30th Annual European Symposium on Algorithms (ESA 2022); Vol: 244; Pages: 1-14
Show Abstract
We study two variants of the fundamental problem of finding a cluster in incomplete data. In the problems under consideration, we are given a multiset of incomplete d-dimensional vectors over the binary domain and integers k and r, and the goal is to complete the missing vector entries so that the multiset of complete vectors either contains (i) a cluster of k vectors of radius at most r, or (ii) a cluster of k vectors of diameter at most r. We give tight characterizations of the parameterized complexity of the problems under consideration with respect to the parameters k, r, and a third parameter that captures the missing vector entries.

Link to Repositum

On Covering Segments with Unit Intervals
Bergren, Dan, Eiben, Eduard, Ganian, Robert, Kanj, Iyad
Type: Article; In: SIAM Journal on Discrete Mathematics; Vol: 36; Issue: 2; Pages: 1200-1230
Show Abstract
We study the problem of covering a set of segments on a line with the minimum number of unit-length intervals, where an interval covers a segment if at least one of the two endpoints of the segment falls in the unit interval. We also study several variants of this problem. We show that the restrictions of the aforementioned problems to the set of instances in which all the segments have the same length are NP-hard. This result implies several NP-hardness results in the literature for variants and generalizations of the problems under consideration. We then study the parameterized complexity of the aforementioned problems. We provide tight results for most of them by showing that they are fixed-parameter tractable for the restrictions in which all the segments have the same length, and are W[1]-complete otherwise.

Link to Repositum

Further Exploiting c-Closure for FPT Algorithms and Kernels for Domination Problems
Kanesh, Lawqueen, Madathil, Jayakrishnan, Roy, Sanjukta, Sahu, Abhishek, Saurabh, Saket
Type: Inproceedings; In: 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022); Vol: 219; Pages: 1-20
Show Abstract
For a positive integer c, a graph G is said to be c-closed if every pair of non-adjacent vertices in G has at most c - 1 neighbours in common. The closure of a graph G, denoted by cl(G), is the least positive integer c for which G is c-closed. The class of c-closed graphs was introduced by Fox et al. [ICALP '18 and SICOMP '20]. Koana et al. [ESA '20] started the study of using cl(G) as an additional structural parameter to design kernels for problems that are W-hard under standard parameterizations. In particular, they studied problems such as Independent Set, Induced Matching, Irredundant Set and (Threshold) Dominating Set, and showed that each of these problems admits a polynomial kernel, either w.r.t. the parameter k + c or w.r.t. the parameter k for each fixed value of c. Here, k is the solution size and c = cl(G). The work of Koana et al. left several questions open, one of which was whether the Perfect Code problem admits a fixed-parameter tractable (FPT) algorithm and a polynomial kernel on c-closed graphs. In this paper, among other results, we answer this question in the affirmative. Inspired by the FPT algorithm for Perfect Code, we further explore two more domination problems on the graphs of bounded closure. The other problems that we study are Connected Dominating Set and Partial Dominating Set. We show that Perfect Code and Connected Dominating Set are fixed-parameter tractable w.r.t. the parameter k + cl(G), whereas Partial Dominating Set, parameterized by k, is W[1]-hard even when cl(G) = 2. We also show that for each fixed c, Perfect Code admits a polynomial kernel on the class of c-closed graphs. Finally, we observe that Connected Dominating Set has no polynomial kernel even on 2-closed graphs, unless NP ⊆ co-NP/poly.
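The parameter cl(G) in the first two sentences is directly computable from its definition: one more than the maximum number of common neighbours over all non-adjacent vertex pairs. A brute-force sketch follows; the dictionary-of-sets graph representation and the function name are assumptions for illustration.

```python
from itertools import combinations

def closure(adj):
    """cl(G): the least c such that every non-adjacent pair has at most
    c - 1 common neighbours, i.e. 1 + the maximum number of common
    neighbours over non-adjacent pairs (1 if no such pair exists)."""
    best = 0
    for u, v in combinations(adj, 2):
        if v not in adj[u]:  # only non-adjacent pairs count
            best = max(best, len(adj[u] & adj[v]))
    return best + 1
```

For example, a complete graph is 1-closed vacuously, while a 4-cycle has closure 3 because its two diagonal pairs share two neighbours each.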

Link to Repositum

Drawing Shortest Paths in Geodetic Graphs
Cornelsen, Sabine, Pfister, Maximilian, Förster, Henry, Gronemann, Martin, Hoffmann, Michael, Kobourov, Stephen, Schneck, Thomas
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 26; Issue: 3; Pages: 353-361
Show Abstract
Motivated by the fact that in a space where shortest paths are unique, no two shortest paths meet twice, we study a question posed by Greg Bodwin: Given a geodetic graph G, i.e., an unweighted graph in which the shortest path between any pair of vertices is unique, is there a philogeodetic drawing of G, i.e., a drawing of G in which the curves of any two shortest paths meet at most once? We answer this question in the negative by showing the existence of geodetic graphs that require some pair of shortest paths to cross at least four times. The bound on the number of crossings is tight for the class of graphs we construct. Furthermore, we exhibit geodetic graphs of diameter two that do not admit a philogeodetic drawing. On the positive side we show that geodetic graphs admit a philogeodetic drawing if both the diameter and the density are very low.
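Whether a given unweighted graph is geodetic can be tested by counting shortest paths with a BFS from every vertex: the graph is geodetic exactly when every count equals one. The sketch below is a straightforward check under an assumed adjacency-set encoding, not a construction from the paper.

```python
from collections import deque

def is_geodetic(adj):
    """Check that the shortest path between every pair of vertices is
    unique, by counting shortest paths with a BFS from each vertex."""
    for s in adj:
        dist, count = {s: 0}, {s: 1}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:           # first shortest path to v
                    dist[v] = dist[u] + 1
                    count[v] = count[u]
                    queue.append(v)
                elif dist[v] == dist[u] + 1:  # another shortest path to v
                    count[v] += count[u]
        if any(c > 1 for c in count.values()):
            return False
    return True
```

Trees and odd cycles pass this test, while even cycles fail it; the paper asks which graphs that pass also admit a drawing where any two shortest paths cross at most once.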

Link to Repositum

The Fine-Grained Complexity of Graph Homomorphism Parameterized by Clique-Width
Ganian, Robert, Hamm, Thekla, Korchemna, Viktoriia, Okrasa, Karolina, Simonov, Kirill
Type: Inproceedings; In: 49th EATCS International Conference on Automata, Languages, and Programming; Vol: 229; Pages: 66:1-66:20
Show Abstract
The generic homomorphism problem, which asks whether an input graph G admits a homomorphism into a fixed target graph H, has been widely studied in the literature. In this article, we provide a fine-grained complexity classification of the running time of the homomorphism problem with respect to the clique-width of G (denoted cw) for virtually all choices of H under the Strong Exponential Time Hypothesis. In particular, we identify a property of H called the signature number s(H) and show that for each H, the homomorphism problem can be solved in time O*(s(H)^cw). Crucially, we then show that this algorithm can be used to obtain essentially tight upper bounds. Specifically, we provide a reduction that yields matching lower bounds for each H that is either a projective core or a graph admitting a factorization with additional properties - allowing us to cover all possible target graphs under long-standing conjectures.

Link to Repositum

Combinatorial and Algorithmic Aspects of Monadic Stability
Dreier, Jan, Mählmann, Nikolas, Mouawad, Amer, Siebertz, Sebastian, Vigny, Alexandre
Type: Inproceedings; In: 33rd International Symposium on Algorithms and Computation (ISAAC 2022); Vol: 248; Pages: 1-17
Show Abstract
Nowhere dense classes of graphs are classes of sparse graphs with rich structural and algorithmic properties; however, they fail to capture even simple classes of dense graphs. Monadically stable classes, originating from model theory, generalize nowhere dense classes and are closed under transductions, i.e. transformations defined by colorings and simple first-order interpretations. In this work we aim to extend some combinatorial and algorithmic properties of nowhere dense classes to monadically stable classes of finite graphs. We prove the following results. For every monadically stable class C and fixed integer s ≥ 3, the Ramsey numbers R_C(s, t) are bounded from above by O(t^{s-1-δ}) for some δ > 0, improving the bound R(s, t) ∈ O(t^{s-1}/(log t)^{s-2}) known for the class of all graphs and the bounds known for k-stable graphs when s ≤ k. For every monadically stable class C and every integer r, there exists δ > 0 such that every graph G ∈ C that contains an r-subdivision of the biclique K_{t,t} as a subgraph also contains K_{t^δ,t^δ} as a subgraph. This generalizes earlier results for nowhere dense graph classes. We obtain a stronger regularity lemma for monadically stable classes of graphs. Finally, we show that we can compute polynomial kernels for the independent set and dominating set problems in powers of nowhere dense classes. Formerly, only fixed-parameter tractable algorithms were known for these problems on powers of nowhere dense classes.

Link to Repositum

Edge-Path Bundling: A Less Ambiguous Edge Bundling Approach
Wallinger, Markus, Archambault, Daniel, Auber, David, Nöllenburg, Martin, Peltonen, Jaakko
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 28; Issue: 1; Pages: 313-323
Show Abstract
Edge bundling techniques cluster edges with similar attributes (i.e. similarity in direction and proximity) together to reduce the visual clutter. All edge bundling techniques to date implicitly or explicitly cluster groups of individual edges, or parts of them, together based on these attributes. These clusters can result in ambiguous connections that do not exist in the data. Confluent drawings of networks do not have these ambiguities, but require the layout to be computed as part of the bundling process. We devise a new bundling method, Edge-Path bundling, to simplify edge clutter while greatly reducing ambiguities compared to previous bundling techniques. Edge-Path bundling takes a layout as input and clusters each edge along a weighted, shortest path to limit its deviation from a straight line. Edge-Path bundling does not incur independent edge ambiguities typically seen in all edge bundling methods, and the level of bundling can be tuned through shortest path distances, Euclidean distances, and combinations of the two. Also, directed edge bundling naturally emerges from the model. Through metric evaluations, we demonstrate the advantages of Edge-Path bundling over other techniques.
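The routing rule sketched in this abstract - route each edge along a weighted shortest path unless that detour strays too far from the straight line - can be illustrated compactly. The following is a simplified sketch, not the authors' implementation: the distortion threshold k, plain Euclidean edge lengths, and all names are our assumptions.

```python
import heapq
import math

def shortest_path(adj, pos, s, t):
    """Dijkstra over Euclidean edge lengths; returns (length, path) or None."""
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:                       # reconstruct path back to s
            path = [t]
            while path[-1] != s:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist[u]:
            continue
        for v in adj[u]:
            nd = d + math.dist(pos[u], pos[v])
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None

def edge_path_bundle(edges, pos, k=2.0):
    """For each edge, bundle it along a shortest detour that avoids the
    edge itself, provided the detour is at most k times the direct length."""
    adj = {u: set() for u in pos}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    routes = {}
    for u, v in edges:
        adj[u].discard(v)                # the detour must avoid the edge itself
        adj[v].discard(u)
        res = shortest_path(adj, pos, u, v)
        adj[u].add(v)
        adj[v].add(u)
        direct = math.dist(pos[u], pos[v])
        if res and res[0] <= k * direct:
            routes[(u, v)] = res[1]      # bundle along the detour path
        else:
            routes[(u, v)] = [u, v]      # deviation too large: keep straight
    return routes
```

For three collinear points with edges (0,1), (1,2), (0,2), the long edge (0,2) is routed through vertex 1 while the short edges stay straight, mirroring the "limit its deviation from a straight line" behaviour described above.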

Link to Repositum

The mixed page number of graphs
Alam, Jawaherul Md., Bekos, Michael A., Gronemann, Martin, Kaufmann, Michael, Pupyrev, Sergey
Type: Article; In: Theoretical Computer Science; Vol: 931; Pages: 131-141
Show Abstract
A linear layout of a graph typically consists of a total vertex order, and a partition of the edges into sets either of non-crossing edges, called stacks, or of non-nested edges, called queues. The stack (queue) number of a graph is the minimum number of required stacks (queues) in any linear layout of it. Mixed linear layouts combine these layouts by allowing each set of edges to form either a stack or a queue. In this work we initiate the study of the mixed page number of a graph, which corresponds to the minimum number of such sets. First, we study the edge density of graphs with bounded mixed page number. Then, we focus on complete and complete bipartite graphs, for which we derive lower and upper bounds on their mixed page number. Our findings indicate that combining stacks and queues is more powerful in various ways compared to the two traditional layout models.

Link to Repositum

On Turn-Regular Orthogonal Representations
Bekos, Michael A., Binucci, Carla, Di Battista, Giuseppe, Didimo, Walter, Gronemann, Martin, Klein, Karsten, Patrignani, Maurizio, Rutter, Ignaz
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 26; Issue: 3; Pages: 285-306
Show Abstract
An interesting class of orthogonal representations consists of the so-called turn-regular ones, i.e., those that do not contain any pair of reflex corners that “point to each other” inside a face. For such a representation H it is possible to compute in linear time a minimum-area drawing, i.e., a drawing of minimum area over all possible assignments of vertex and bend coordinates of H. In contrast, finding a minimum-area drawing of H is NP-hard if H is non-turn-regular. This scenario naturally motivates the study of which graphs admit turn-regular orthogonal representations. In this paper we identify notable classes of biconnected planar graphs that always admit such representations, which can be computed in linear time. We also describe a linear-time testing algorithm for trees and provide a polynomial-time algorithm that tests whether a biconnected plane graph with “small” faces has a turn-regular orthogonal representation without bends.

Link to Repositum

Recognizing Map Graphs of Bounded Treewidth
Angelini, Patrizio, Bekos, Michael A., Da Lozzo, Giordano, Gronemann, Martin, Montecchiani, Fabrizio, Tappini, Alessandra
Type: Inproceedings; In: 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022); Vol: 227; Pages: 1-18
Show Abstract
A map graph is one admitting a representation in which vertices are nations on a spherical map and edges are shared curve segments or points between nations. We present an explicit fixed-parameter tractable algorithm for recognizing map graphs parameterized by treewidth. The algorithm has time complexity that is linear in the size of the graph and, if the input is a yes-instance, it reports a certificate in the form of a so-called witness. Furthermore, this result is developed within a more general algorithmic framework that allows us to test, for any k, whether the input graph admits a k-map (where at most k nations meet at a common point) or a hole-free k-map (where each point is covered by at least one nation). We point out that, although bounding the treewidth of the input graph also bounds the size of its largest clique, the latter alone does not seem to be a strong enough structural limitation to obtain an efficient time complexity. In fact, while the largest clique in a k-map graph is ⌈3k/2⌉, the recognition of k-map graphs is still open for any fixed k ≥ 5.

Link to Repositum

Constant Congestion Brambles in Directed Graphs
Masařík, Tomáš, Pilipczuk, Marcin, Rzążewski, Paweł, Sorge, Manuel
Type: Article; In: SIAM Journal on Discrete Mathematics; Vol: 36; Issue: 2; Pages: 922-938
Show Abstract
The Directed Grid Theorem, stating that there is a function f such that a directed graph of directed treewidth at least f(k) contains a directed grid of size at least k as a butterfly minor, after being a conjecture for nearly 20 years, was proved in 2015 by Kawarabayashi and Kreutzer. However, the function f obtained in the proof is very fast growing. In this work, we show that if one relaxes directed grid to bramble of constant congestion, one can obtain a polynomial bound. More precisely, we show that for every k ≥ 1 there exists t = O(k^48 log^13 k) such that every directed graph of directed treewidth at least t contains a bramble of congestion at most 8 and size at least k.

Link to Repositum

Turbocharging Heuristics for Weak Coloring Numbers
Dobler, Alexander, Sorge, Manuel, Villedieu, Anaïs
Type: Inproceedings; In: 30th Annual European Symposium on Algorithms (ESA 2022); Vol: 244; Pages: 1-18
Show Abstract
Bounded expansion and nowhere-dense classes of graphs capture theoretical tractability for several important algorithmic problems. These classes of graphs can be characterized by the so-called weak coloring numbers of graphs, which generalize the well-known graph invariant degeneracy (also called the k-core number). Since computing weak coloring numbers is NP-hard, they were previously computed on real-world graphs mainly via incremental heuristics. We study whether it is feasible to augment such heuristics with exponential-time subprocedures that kick in when a desired upper bound on the weak coloring number is breached. We provide hardness and tractability results on the corresponding computational subproblems. We implemented several of the resulting algorithms and show them to be competitive with previous approaches on a previously studied set of benchmark instances containing 86 graphs with up to 183,831 edges. We obtain improved weak coloring numbers for over half of the instances.
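Weak coloring numbers generalize degeneracy, and that special case has a simple greedy characterization: repeatedly remove a minimum-degree vertex and record the largest degree seen at removal time. A small illustrative sketch of that base case (our own code, not the paper's turbocharged heuristics):

```python
def degeneracy(adj):
    """Degeneracy (k-core number): the maximum, over a smallest-last
    elimination ordering, of a vertex's degree at removal time.
    adj maps each vertex to its set of neighbours; it is copied first."""
    adj = {u: set(vs) for u, vs in adj.items()}
    deg = 0
    while adj:
        u = min(adj, key=lambda x: len(adj[x]))  # smallest remaining degree
        deg = max(deg, len(adj[u]))
        for v in adj[u]:
            adj[v].discard(u)                    # remove u from the graph
        del adj[u]
    return deg
```

For example, a clique on four vertices has degeneracy 3, and any cycle has degeneracy 2; the weak r-coloring numbers studied in the paper refine this invariant by looking r steps along the ordering.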

Link to Repositum

Bounding and Computing Obstacle Numbers of Graphs
Balko, Martin, Chaplick, Steven, Ganian, Robert, Gupta, Siddharth, Hoffmann, Michael, Valtr, Pavel, Wolff, Alexander
Type: Inproceedings; In: 30th Annual European Symposium on Algorithms (ESA 2022); Vol: 244; Pages: 1-13
Show Abstract
An obstacle representation of a graph G consists of a set of pairwise disjoint simply-connected closed regions and a one-to-one mapping of the vertices of G to points such that two vertices are adjacent in G if and only if the line segment connecting the two corresponding points does not intersect any obstacle. The obstacle number of a graph is the smallest number of obstacles in an obstacle representation of the graph in the plane such that all obstacles are simple polygons. It is known that the obstacle number of each n-vertex graph is O(n log n) [Balko, Cibulka, and Valtr, 2018] and that there are n-vertex graphs whose obstacle number is Ω(n/(log log n)^2) [Dujmovic and Morin, 2015]. We improve this lower bound to Ω(n/log log n) for simple polygons and to Ω(n) for convex polygons. To obtain these stronger bounds, we improve known estimates on the number of n-vertex graphs with bounded obstacle number, solving a conjecture by Dujmovic and Morin. We also show that if the drawing of some n-vertex graph is given as part of the input, then for some drawings Ω(n^2) obstacles are required to turn them into an obstacle representation of the graph. Our bounds are asymptotically tight in several instances. We complement these combinatorial bounds by two complexity results. First, we show that computing the obstacle number of a graph G is fixed-parameter tractable in the vertex cover number of G. Second, we show that, given a graph G and a simple polygon P, it is NP-hard to decide whether G admits an obstacle representation using P as the only obstacle.

Link to Repositum

Multi-Dimensional Stable Roommates in 2-Dimensional Euclidean Space
Chen, Jiehua, Roy, Sanjukta
Type: Inproceedings; In: 30th Annual European Symposium on Algorithms (ESA 2022); Vol: 244; Pages: 1-16
Show Abstract
We investigate the Euclidean d-Dimensional Stable Roommates problem, which asks whether a given set V of d·n points from the 2-dimensional Euclidean space can be partitioned into n disjoint (unordered) subsets Π = {V_1, ..., V_n} with |V_i| = d for each V_i ∈ Π such that Π is stable. Here, stability means that no point subset W ⊆ V is blocking Π, and W is said to be blocking Π if |W| = d and Σ_{w′∈W} δ(w, w′) < Σ_{v∈Π(w)} δ(w, v) holds for each point w ∈ W, where Π(w) denotes the subset V_i ∈ Π which contains w and δ(a, b) denotes the Euclidean distance between points a and b. Complementing the existing known polynomial-time result for d = 2, we show that such polynomial-time algorithms cannot exist for any fixed number d ≥ 3 unless P = NP. Our result for d = 3 answers a decade-long open question in the theory of Stable Matching and Hedonic Games [18, 1, 10, 26, 21].
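The blocking condition in this abstract is straightforward to verify for a candidate subset W. A minimal illustrative sketch (the data layout and names are our assumptions; note that δ(w, w) = 0, so summing over the full sets, including w itself, is harmless):

```python
import math

def blocks(W, partition, pts):
    """Does the d-subset W block partition Π?
    Every w in W must be strictly closer in total to the members of W
    than to the members of its own group Π(w).
    pts maps point ids to coordinates; partition is a list of tuples."""
    group_of = {p: g for g in partition for p in g}
    for w in W:
        to_W = sum(math.dist(pts[w], pts[x]) for x in W)
        to_own = sum(math.dist(pts[w], pts[x]) for x in group_of[w])
        if to_W >= to_own:          # w does not strictly prefer W
            return False
    return True
```

With d = 2 and two distant point pairs, pairing the nearby points blocks any "crossed" partition, which is exactly the instability the problem forbids.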

Link to Repositum

Twin-width and generalized coloring numbers
Dreier, Jan, Gajarský, Jakub, Jiang, Yiting, Ossona de Mendez, Patrice, Raymond, Jean-Florent
Type: Article; In: Discrete Mathematics; Vol: 345; Issue: 3
Show Abstract
In this paper, we prove that a graph G with no K_{s,s}-subgraph and twin-width d has r-admissibility and r-coloring numbers bounded from above by an exponential function of r and that we can construct graphs achieving such a dependency in r.

Link to Repositum

A Large Neighborhood Search for a Cooperative Optimization Approach to Distribute Service Points in Mobility Applications
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther
Type: Inproceedings; In: Metaheuristics and Nature Inspired Computing; Vol: 1541; Pages: 3-17
Show Abstract
We present a large neighborhood search (LNS) as optimization core for a cooperative optimization approach (COA) to optimize locations of service points for mobility applications. COA is an iterative interactive algorithm in which potential customers can express preferences during the optimization. A machine learning component processes the feedback obtained from the customers. The learned information is then used in an optimization component to generate an optimized solution. The LNS replaces a mixed integer linear program (MILP) that has been used as optimization core so far. A particular challenge for developing the LNS is that a fast way for evaluating the non-trivial objective function for candidate solutions is needed. To this end, we propose an evaluation graph, making an efficient incremental calculation of the objective value of a modified solution possible. We evaluate the LNS on artificial instances as well as instances derived from real-world data and compare its performance to the previously developed MILP. Results show that the LNS as optimization core scales significantly better to larger instances while still being able to obtain solutions close to optimality.

Link to Repositum

Parallel Beam Search for Combinatorial Optimization (Extended Abstract)
Frohner, Nikolaus, Gmys, Jan, Melab, Nouredine, Raidl, Günther, Talbi, El-ghazali
Type: Inproceedings; In: Fifteenth International Symposium on Combinatorial Search; Vol: 15; Pages: 273-275

Link to Repositum

Parameterized Algorithms for Queue Layouts
Bhore, Sujoy, Ganian, Robert, Montecchiani, Fabrizio, Nöllenburg, Martin
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 26; Issue: 3; Pages: 335-352
Show Abstract
An h-queue layout of a graph G consists of a linear order of its vertices and a partition of its edges into h sets, called queues, such that no two independent edges of the same queue nest. The minimum h such that G admits an h-queue layout is the queue number of G. We present two fixed-parameter tractable algorithms that exploit structural properties of graphs to compute optimal queue layouts. As our first result, we show that deciding whether a graph G has queue number 1 and computing a corresponding layout is fixed-parameter tractable when parameterized by the treedepth of G. Our second result then uses a more restrictive parameter, the vertex cover number, to solve the problem for arbitrary h.
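Two independent edges nest when one edge's endpoints enclose the other's in the vertex order, which is exactly what a queue forbids. Verifying a candidate h-queue layout is therefore a direct check; a small sketch under our own data conventions (vertex order as a list, queues as lists of edge tuples):

```python
def is_queue_layout(order, queues):
    """Check that no two independent edges in the same queue nest
    under the given vertex order."""
    pos = {v: i for i, v in enumerate(order)}
    for q in queues:
        # orient each edge left-to-right in the vertex order
        norm = [tuple(sorted(e, key=pos.get)) for e in q]
        for i in range(len(norm)):
            for j in range(i + 1, len(norm)):
                (a, b), (c, d) = norm[i], norm[j]
                if len({a, b, c, d}) == 4:  # independent (no shared endpoint)
                    # nesting: one edge strictly inside the other
                    if (pos[a] < pos[c] and pos[d] < pos[b]) or \
                       (pos[c] < pos[a] and pos[b] < pos[d]):
                        return False
    return True
```

Under the order 1, 2, 3, 4, the edges (1,3) and (2,4) cross and may share a queue, while (1,4) and (2,3) nest and may not - the complementary situation to stack (book) layouts, where crossings are forbidden instead.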

Link to Repositum

Preface: Ninth workshop on graph classes, optimization, and Width Parameters, Vienna, Austria
Ganian, Robert, Kratochvíl, Jan, Szeider, Stefan
Type: Book Contribution; In: Ninth workshop on graph classes, optimization, and Width Parameters; Vol: 312

Link to Repositum

Multicriteria Optimization for Dynamic Demers Cartograms
Nickel, Soeren, Sondag, Max, Meulemans, Wouter, Kobourov, Stephen, Peltonen, Jaakko, Nöllenburg, Martin
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 28; Issue: 6; Pages: 2376-2387
Show Abstract
Cartograms are popular for visualizing numerical data for administrative regions in thematic maps. When there are multiple data values per region (over time or from different datasets) shown as animated or juxtaposed cartograms, preserving the viewer's mental map in terms of stability between multiple cartograms is another important criterion alongside traditional cartogram criteria such as maintaining adjacencies. We present a method to compute stable Demers cartograms, where each region is shown as a square scaled proportionally to the given numerical data and similar data yield similar cartograms. We enforce orthogonal separation constraints using linear programming, and measure quality in terms of keeping adjacent regions close (cartogram quality) and using similar positions for a region between the different data values (stability). Our method guarantees the ability to connect most lost adjacencies with minimal-length planar orthogonal polylines. Experiments show that our method yields good quality and stability on multiple quality criteria.

Link to Repositum

Testing Upward Planarity of Partial 2-Trees
Chaplick, Steven, Di Giacomo, Emilio, Frati, Fabrizio, Ganian, Robert, Raftopoulou, Chrysanthi, Simonov, Kirill
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2022; Vol: 13764; Pages: 175-187
Show Abstract
We present an O(n^2)-time algorithm to test whether an n-vertex directed partial 2-tree is upward planar. This result improves upon the previously best known algorithm, which runs in O(n^4) time.

Link to Repositum

Parameterized Algorithms for Upward Planarity
Chaplick, Steven, Di Giacomo, Emilio, Frati, Fabrizio, Ganian, Robert, Raftopoulou, Chrysanthi, Simonov, Kirill
Type: Inproceedings; In: 38th International Symposium on Computational Geometry (SoCG 2022); Vol: 224; Pages: 1-16
Show Abstract
We obtain new parameterized algorithms for the classical problem of determining whether a directed acyclic graph admits an upward planar drawing. Our results include a new fixed-parameter algorithm parameterized by the number of sources, an XP-algorithm parameterized by treewidth, and a fixed-parameter algorithm parameterized by treedepth. All three algorithms are obtained using a novel framework for the problem that combines SPQR tree-decompositions with parameterized techniques. Our approach unifies and pushes beyond previous tractability results for the problem on series-parallel digraphs, single-source digraphs and outerplanar digraphs.

Link to Repositum

Removing Popular Faces in Curve Arrangements by Inserting one more Curve
de Nooijer, Phoebe, Nickel, Soeren, Weinberger, Alexandra, Masárová, Zuzana, Mchedlidze, Tamara, Löffler, Maarten, Rote, Günter
Type: Inproceedings; In: 38th European Workshop on Computational Geometry - Booklet of abstracts; Pages: 1-8

Link to Repositum

SAT-Based Local Search for Plane Subgraph Partitions
Schidler, André
Type: Inproceedings; In: 38th International Symposium on Computational Geometry (SoCG 2022); Vol: 224; Pages: 1-8
Show Abstract
The Partition into Plane Subgraphs Problem (PPS) asks to partition the edges of a geometric graph with straight line segments into as few classes as possible, such that the line segments within a class do not cross. We discuss our approach GC-SLIM: a local search method that views PPS as a graph coloring problem and tackles it with a new and unique combination of propositional satisfiability (SAT) and tabu search, achieving the fourth place in the 2022 CG:SHOP Challenge.

Link to Repositum

Parameterised Partially-Predrawn Crossing Number
Hamm, Thekla, Hliněný, Petr
Type: Inproceedings; In: 38th International Symposium on Computational Geometry (SoCG 2022); Vol: 224; Pages: 46:1-46:15
Show Abstract
Inspired by the increasingly popular research on extending partial graph drawings, we propose a new perspective on the traditional and arguably most important geometric graph parameter, the crossing number. Specifically, we define the partially predrawn crossing number to be the smallest number of crossings in any drawing of a graph, part of which is prescribed on the input (not counting the prescribed crossings). Our main result - an FPT-algorithm to compute the partially predrawn crossing number - combines advanced ideas from research on the classical crossing number and so called partial planarity in a very natural but intricate way. Not only do our techniques generalise the known FPT-algorithm by Grohe for computing the standard crossing number, they also allow us to substantially improve a number of recent parameterised results for various drawing extension problems.

Link to Repositum

Finding a Battleship of Uncertain Shape
Hainzl, Eva-Maria, Löffler, Maarten, Perz, Daniel, Tkadlec, Josef, Wallinger, Markus
Type: Inproceedings; In: 38th European Workshop on Computational Geometry. March 14-16, 2022, Perugia, Italy. Booklet of abstracts
Show Abstract
Motivated by a game of Battleship, we consider the problem of efficiently hitting a ship of an uncertain shape within a large playing board. Formally, we fix a dimension d ∈ {1, 2}. A ship is a subset of Zd. Given a family F of ships, we say that an infinite subset X ⊂ Zd of the cells pierces F, if it intersects each translate of each ship in F (by a vector in Zd). In this work, we study the lowest possible (asymptotic) density π(F) of such a piercing subset. To our knowledge, this problem has previously been studied only in the special case |F| = 1 (a single ship). As our main contribution, we present a formula for π(F) when F consists of 2 ships of size 2 each, and we identify the toughest families in several other cases. We also implement an algorithm for finding π(F) in 1D.

Link to Repositum

A Unifying Framework for Characterizing and Computing Width Measures
Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Jaffke, Lars, Kwon, O-Joung
Type: Inproceedings; In: 13th Innovations in Theoretical Computer Science Conference (ITCS 2022); Vol: 215; Pages: 1-23
Show Abstract
Algorithms for computing or approximating optimal decompositions for decompositional parameters such as treewidth or clique-width have so far traditionally been tailored to specific width parameters. Moreover, for mim-width, no efficient algorithms for computing good decompositions were known, even under highly restrictive parameterizations. In this work we identify F-branchwidth as a class of generic decompositional parameters that can capture mim-width, treewidth, clique-width as well as other measures. We show that while there is an infinite number of F-branchwidth parameters, only a handful of these are asymptotically distinct. We then develop fixed-parameter and kernelization algorithms (under several structural parameterizations) that can approximate every possible F-branchwidth, providing a unifying parameterized framework that can efficiently obtain near-optimal tree-decompositions, k-expressions, as well as optimal mim-width decompositions.

Link to Repositum

A Learning Large Neighborhood Search for the Staff Rerostering Problem
Oberweger, Fabio Francisco, Raidl, Günther, Rönnberg, Elina, Huber, Marc
Type: Inproceedings; In: Integration of Constraint Programming, Artificial Intelligence, and Operations Research; Vol: 13292; Pages: 300-317
Show Abstract
To effectively solve challenging staff rerostering problems, we propose to enhance a large neighborhood search (LNS) with a machine learning guided destroy operator. This operator uses a conditional generative model to identify variables that are promising to select and combines this with the use of a special sampling strategy to make the actual selection. Our model is based on a graph neural network (GNN) and takes a problem-specific graph representation as input. Imitation learning is applied to mimic a time-expensive approach that solves a mixed-integer program (MIP) for finding an optimal destroy set in each iteration. An additional GNN is employed to predict a suitable temperature for the destroy set sampling process. The repair operator is realized by solving a MIP. Our learning LNS outperforms directly solving a MIP with Gurobi and yields improvements compared to a well-performing LNS with a manually designed destroy operator, also when generalizing to schedules with various numbers of employees.

Link to Repositum

A Large Neighborhood Search for Battery Swapping Station Location Planning for Electric Scooters
Jatschka, Thomas, Rauscher, Matthias, Kreutzer, Bernhard, Rodemann, Tobias, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 18th International Conference on Computer Aided Systems Theory (EUROCAST 2022); Pages: 32-33

Link to Repositum

The Complexity of Temporal Vertex Cover in Small-Degree Graphs
Hamm, Thekla, Klobas, Nina, Mertzios, George, Spirakis, Paul G.
Type: Inproceedings; In: Proceedings of the 36th AAAI Conference on Artificial Intelligence; Vol: 36; Pages: 10193-10201
Show Abstract
Temporal graphs naturally model graphs whose underlying topology changes over time. Recently, the problems Temporal Vertex Cover (or TVC) and Sliding-Window Temporal Vertex Cover (or Δ-TVC for time windows of a fixed length Δ) have been established as natural extensions of the classic Vertex Cover problem on static graphs, with connections to areas such as surveillance in sensor networks. In this paper we initiate a systematic study of the complexity of TVC and Δ-TVC on sparse graphs. Our main result shows that for every Δ ≥ 2, Δ-TVC is NP-hard even when the underlying topology is described by a path or a cycle. This resolves an open problem from the literature and shows a surprising contrast between Δ-TVC and TVC, for which we provide a polynomial-time algorithm in the same setting. To circumvent this hardness, we present a number of exact and approximation algorithms for temporal graphs whose underlying topologies are given by a path, that have bounded vertex degree in every time step, or that admit a small-sized temporal vertex cover.

Link to Repositum

Slim Tree-Cut Width
Ganian, Robert, Korchemna, Viktoriia
Type: Inproceedings; In: 17th International Symposium on Parameterized and Exact Computation (IPEC 2022); Vol: 249; Pages: 1-18
Show Abstract
Tree-cut width is a parameter that has been introduced as an attempt to obtain an analogue of treewidth for edge cuts. Unfortunately, in spite of its desirable structural properties, it turned out that tree-cut width falls short as an edge-cut based alternative to treewidth in algorithmic aspects. This has led to the very recent introduction of a simple edge-based parameter called edge-cut width [WG 2022], which has precisely the algorithmic applications one would expect from an analogue of treewidth for edge cuts, but does not have the desired structural properties. In this paper, we study a variant of tree-cut width obtained by changing the threshold for so-called thin nodes in tree-cut decompositions from 2 to 1. We show that this “slim tree-cut width” satisfies all the requirements of an edge-cut based analogue of treewidth, both structural and algorithmic, while being less restrictive than edge-cut width. Our results also include an alternative characterization of slim tree-cut width via an easy-to-use spanning-tree decomposition akin to the one used for edge-cut width, a characterization of slim tree-cut width in terms of forbidden immersions as well as an approximation algorithm for computing the parameter.

Link to Repositum

The Complexity of k-Means Clustering when Little is Known
Ganian, Robert, Hamm, Thekla, Korchemna, Viktoriia, Okrasa, Karolina, Simonov, Kirill
Type: Inproceedings; In: Proceedings of the 39th International Conference on Machine Learning; Vol: 162; Pages: 6960-6987
Show Abstract
In the area of data analysis and arguably even in machine learning as a whole, few approaches have been as impactful as the classical k-means clustering. Here, we study the complexity of k-means clustering in settings where most of the data is not known or simply irrelevant. To obtain a more fine-grained understanding of the tractability of this clustering problem, we apply the parameterized complexity paradigm and obtain three new algorithms for k-means clustering of incomplete data: one for the clustering of bounded-domain (i.e., integer) data, and two incomparable algorithms that target real-valued data. Our approach is based on exploiting structural properties of a graphical encoding of the missing entries, and we show that tractability can be achieved using significantly less restrictive parameterizations than in the complementary case of few missing entries.

Link to Repositum

Compacting Squares: Input-Sensitive In-Place Reconfiguration of Sliding Squares
Akitaya, Hugo, Demaine, Erik, Korman, Matias, Kostitsyna, Irina, Parada, Irene, Sonke, Willem, Speckmann, Bettina, Uehara, Ryuhei, Wulms, Jules
Type: Presentation
Show Abstract
Edge-connected configurations of square modules, which can reconfigure through so-called sliding moves, are a well-established theoretical model for modular robots in two dimensions. Dumitrescu and Pach [Graphs and Combinatorics, 2006] proved that it is always possible to reconfigure one such configuration of n squares into any other using O(n^2) sliding moves, while maintaining connectivity. For certain pairs of configurations, reconfiguration may require Ω(n^2) sliding moves. However, significantly fewer moves may be sufficient. We present Gather&Compact, an input-sensitive in-place algorithm that requires only O(P̄n) sliding moves to transform one configuration into the other, where P̄ is the maximum perimeter of the two bounding boxes. Our algorithm is built on the basic principle that well-connected components of modular robots can be transformed efficiently. Hence we iteratively increase the connectivity within a configuration, to finally make it xy-monotone. We implemented Gather&Compact and compared it experimentally to the in-place modification by Moreno and Sacristán [EuroCG 2020] of the Dumitrescu and Pach algorithm (MSDP). Our experiments show that Gather&Compact consistently outperforms MSDP by a significant margin.

Link to Repositum

Recognizing weighted and seeded disk graphs
Klemz, Boris, Nöllenburg, Martin, Prutkin, Roman
Type: Article; In: Journal of Computational Geometry (JOCG); Vol: 13; Issue: 1; Pages: 327-376
Show Abstract
Disk intersection representations realize graphs by mapping vertices bijectively to disks in the plane such that two disks intersect each other if and only if the corresponding vertices are adjacent in the graph. If intersections are restricted to touching points of the boundaries, we call them disk contact representations. Deciding whether a vertex-weighted planar graph can be realized such that the disks’ radii coincide with the vertex weights is known to be NP-hard for both contact and intersection representations. In this work, we reduce the gap between hardness and tractability by analyzing the problem for special graph classes. We show that in the contact scenario it remains NP-hard for outerplanar graphs with unit weights and for stars with arbitrary weights, strengthening the previous hardness results. On the positive side, we present a constructive linear-time recognition algorithm for embedded stars with arbitrary weights. We also consider a version of the problem in which the disks of a representation are supposed to cover preassigned points, called seeds. We show that both for contact and intersection representations this problem is NP-hard for unit weights even if the given graph is a path. If the disks’ radii are not prescribed, the problem remains NP-hard for trees in the contact scenario.

Shape-Guided Mixed Metro Map Layout
Batik, Tobias, Terziadis, Soeren, Wang, Yu-Shuen, Nöllenburg, Martin, Wu, Hsiang-Yun
Type: Inproceedings; In: Pacific Graphics 2022; Vol: 41; Pages: 495-506

Minimum Link Fencing
Bhore, Sujoy, Klute, Fabian, Löffler, Maarten, Nöllenburg, Martin, Terziadis, Soeren, Villedieu, Anais
Type: Inproceedings; In: 33rd International Symposium on Algorithms and Computation (ISAAC 2022); Vol: 248; Pages: 34:1-34:14
We study a variant of the geometric multicut problem, where we are given a set 𝒫 of colored and pairwise interior-disjoint polygons in the plane. The objective is to compute a set of simple closed polygon boundaries (fences) that separate the polygons in such a way that any two polygons that are enclosed by the same fence have the same color, and the total number of links of all fences is minimized. We call this the minimum link fencing (MLF) problem and consider the natural case of bounded minimum link fencing (BMLF), where 𝒫 contains a polygon Q that is unbounded in all directions and can be seen as an outer polygon. We show that BMLF is NP-hard in general and that it is XP-time solvable when each fence contains at most two polygons and the number of segments per fence is the parameter. Finally, we present an O(n log n)-time algorithm for the case that the convex hull of 𝒫⧵{Q} does not intersect Q.

Weighted Model Counting with Twin-Width
Ganian, Robert, Pokrývka, Filip, Schidler, André, Simonov, Kirill, Szeider, Stefan
Type: Inproceedings; In: 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022); Vol: 236; Pages: 1-17
Bonnet et al. (FOCS 2020) introduced the graph invariant twin-width and showed that many NP-hard problems are tractable for graphs of bounded twin-width, generalizing similar results for other width measures, including treewidth and clique-width. In this paper, we investigate the use of twin-width for solving the propositional satisfiability problem (SAT) and propositional model counting. We particularly focus on Bounded-ones Weighted Model Counting (BWMC), which takes as input a CNF formula F along with a bound k and asks for the weighted sum of all models with at most k positive literals. BWMC generalizes not only SAT but also (weighted) model counting. We develop the notion of “signed” twin-width of CNF formulas and establish that BWMC is fixed-parameter tractable when parameterized by the certified signed twin-width of F plus k. We show that this result is tight: it is neither possible to drop the bound k nor use the vanilla twin-width instead if one wishes to retain fixed-parameter tractability, even for the easier problem SAT. Our theoretical results are complemented with an empirical evaluation and comparison of signed twin-width on various classes of CNF formulas.

Quantified CDCL with Universal Resolution
Slivovsky, Friedrich
Type: Inproceedings; In: 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022); Vol: 236; Pages: 1-16
Quantified Conflict-Driven Clause Learning (QCDCL) solvers for QBF generate Q-resolution proofs. Pivot variables in Q-resolution must be existentially quantified. Allowing resolution on universally quantified variables leads to a more powerful proof system called QU-resolution, but so far, QBF solvers have used QU-resolution only in very limited ways. We present a new version of QCDCL that generates proofs in QU-resolution by leveraging propositional unit propagation. We detail how conflict analysis must be adapted to handle universal variables assigned by propagation, and show that the procedure is still sound and terminating. We further describe how dependency learning can be incorporated in the algorithm to increase the flexibility of decision heuristics. Experiments with crafted instances and benchmarks from recent QBF evaluations demonstrate the viability of the resulting version of QCDCL.

Treelike Decompositions for Transductions of Sparse Graphs
Dreier, Jan, Gajarský, Jakub, Kiefer, Sandra, Pilipczuk, Michał, Toruńczyk, Szymon
Type: Inproceedings; In: Proceedings of the 37th Annual ACM/IEEE Symposium on Logic in Computer Science
We give new decomposition theorems for classes of graphs that can be transduced in first-order logic from classes of sparse graphs - more precisely, from classes of bounded expansion and nowhere dense classes. In both cases, the decomposition takes the form of a single colored rooted tree of bounded depth where, in addition, there can be links between nodes that are not related in the tree. The constraint is that the structure formed by the tree and the links has to be sparse. Using the decomposition theorem for transductions of nowhere dense classes, we show that they admit low-shrubdepth covers of size O(n^ε), where n is the vertex count and ε > 0 is any fixed real. This solves an open problem posed by Gajarský et al. (ACM TOCL '20) and also by Briański et al. (SIDMA '21).

Model Checking on Interpretations of Classes of Bounded Local Cliquewidth
Bonnet, Édouard, Dreier, Jan, Gajarský, Jakub, Kreutzer, Stephan, Mählmann, Nikolas, Simon, Pierre, Toruńczyk, Szymon
Type: Inproceedings; In: Proceedings of the 37th Annual ACM/IEEE Symposium on Logic in Computer Science; Pages: 1-13
An interpretation is an operation that maps an input graph to an output graph by redefining its edge relation using a first-order formula. This rich framework includes operations such as taking the complement or a fixed power of a graph as (very) special cases. We prove that there is an FPT algorithm for the first-order model checking problem on classes of graphs which are first-order interpretable in classes of graphs with bounded local cliquewidth. Notably, this includes interpretations of planar graphs, and of classes of bounded genus in general. To obtain this result we develop a new tool which works in a very general setting of NIP classes and which we believe can be an important ingredient in obtaining similar results in the future.

A SAT Attack on Rota’s Basis Conjecture
Kirchweger, Markus, Scheucher, Manfred, Szeider, Stefan
Type: Inproceedings; In: 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022); Vol: 236; Pages: 1-18
SAT modulo Symmetries (SMS) is a recently introduced framework for dynamic symmetry breaking in SAT instances. It combines a CDCL SAT solver with an external lexicographic minimality checking algorithm. We extend SMS from graphs to matroids and use it to progress on Rota’s Basis Conjecture (1989), which states that one can always decompose a collection of r disjoint bases of a rank r matroid into r disjoint rainbow bases. Through SMS, we establish that the conjecture holds for all matroids of rank 4 and certain special cases of matroids of rank 5. Furthermore, we extend SMS with the facility to produce DRAT proofs. External tools can then be used to verify the validity of additional axioms produced by the lexicographic minimality check. As a byproduct, we have utilized our framework to enumerate matroids modulo isomorphism and to support the investigation of various other problems on matroids.

Constant Congestion Brambles
Hatzel, Meike, Pilipczuk, Marcin, Komosa, Paweł, Sorge, Manuel
Type: Article; In: Discrete Mathematics & Theoretical Computer Science; Vol: 24; Issue: 1
A bramble in an undirected graph G is a family of connected subgraphs of G such that for every two subgraphs H1 and H2 in the bramble either V(H1) ∩ V(H2) ≠ ∅ or there is an edge of G with one endpoint in V(H1) and the second endpoint in V(H2). The order of the bramble is the minimum size of a vertex set that intersects all elements of the bramble. Brambles are objects dual to treewidth: As shown by Seymour and Thomas, the maximum order of a bramble in an undirected graph G equals one plus the treewidth of G. However, as shown by Grohe and Marx, brambles of high order may necessarily be of exponential size: In a constant-degree n-vertex expander a bramble of order Ω(n^(1/2+δ)) requires size exponential in Ω(n^(2δ)) for any fixed δ ∈ (0, 1/2]. On the other hand, the combination of results of Grohe and Marx and Chekuri and Chuzhoy shows that a graph of treewidth k admits a bramble of order Ω(k^(1/2)) and size O(k^(3/2)). (Here, Ω and O hide polylogarithmic divisors and factors, respectively.) In this note, we first sharpen the second bound by proving that every graph G of treewidth at least k contains a bramble of order Ω(k^(1/2)) and congestion 2, i.e., every vertex of G is contained in at most two elements of the bramble (thus the bramble is of size linear in its order). Second, we provide a tight upper bound for the lower bound of Grohe and Marx: For every δ ∈ (0, 1/2], every graph G of treewidth at least k contains a bramble of order Ω(k^(1/2+δ)) and size 2^(O(k^(2δ))).
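The two bramble conditions stated in the abstract (connected elements, pairwise touching) can be verified directly on small graphs. A minimal illustrative sketch, not code from the paper, with the graph given as a dict of neighbor sets:

```python
from itertools import combinations

def is_connected(vertices, adj):
    """Depth-first search check that `vertices` induces a connected subgraph."""
    vs = set(vertices)
    seen, stack = set(), [next(iter(vs))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(u for u in adj[v] if u in vs)
    return seen == vs

def is_bramble(family, adj):
    """Check the bramble conditions: every element is connected, and every two
    elements touch (share a vertex or are joined by an edge of the graph)."""
    if not all(is_connected(B, adj) for B in family):
        return False
    for B1, B2 in combinations(family, 2):
        touch = set(B1) & set(B2) or any(u in adj[v] for v in B1 for u in B2)
        if not touch:
            return False
    return True

# Triangle: its three edges form a bramble of order 2 (no single vertex
# hits all three elements, but e.g. {1, 2} does).
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(is_bramble([{1, 2}, {2, 3}, {1, 3}], adj))  # True
```

The order of a bramble would additionally require a minimum hitting set computation, which this sketch omits.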

2021
Computational aspects of multiwinner approval voting via p-norm Hamming distance vectors
Chen, Jiehua, Hermelin, Danny, Sorge, Manuel
Type: Presentation
We consider a family of multiwinner approval voting rules, which generalize the classical minisum and minimax procedures. Specifically, given a rational number p and approval ballots, the p-norm Hamming rule chooses a subset of co-winners which minimizes the p-norm of the vector of Hamming distances (i.e., the sizes of the symmetric differences) to the ballots. The minisum and minimax procedures are hence special cases and correspond to p = 1 and p = ∞, respectively. It is well-known that determining a winner set under the minisum procedure (p = 1) can be done in polynomial time, while it becomes NP-hard under the minimax procedure (p = ∞). In this work, we show that winner determination remains NP-hard for every fixed rational p > 1, closing the gap for all rational values of p between 1 and infinity. We also provide an almost tight exponential-time algorithm and a simple factor-2 approximation algorithm for all fixed p > 1.
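The p-norm Hamming rule defined in the abstract admits a direct brute-force sketch (illustrative code, not from the paper; the NP-hardness result for fixed p > 1 means no polynomial-time exact algorithm is expected in general):

```python
from itertools import combinations

def p_norm_winners(candidates, ballots, k, p):
    """Brute-force p-norm Hamming rule for a size-k committee and finite p:
    minimize the p-norm of the vector of Hamming distances to the ballots."""
    best, best_score = None, float("inf")
    for committee in combinations(candidates, k):
        c = set(committee)
        # Hamming distance to a ballot = size of the symmetric difference.
        score = sum(len(c ^ b) ** p for b in ballots) ** (1 / p)
        if score < best_score:
            best, best_score = c, score
    return best

# p = 1 recovers the minisum rule (minimax corresponds to p = ∞,
# which this finite-p sketch does not cover).
ballots = [{"a", "b"}, {"a", "c"}, {"a", "b"}]
print(p_norm_winners("abcd", ballots, 2, p=1))
```

For p = 1 on this instance the rule simply picks the k most-approved candidates; larger p shifts weight toward the voters with the largest Hamming distance.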

Fractional Matchings under Preferences: Stability and Optimality
Chen, Jiehua
Type: Presentation
We study generalizations of stable matching in which agents may be matched fractionally; this models time-sharing assignments. We focus on the so-called ordinal stability and cardinal stability, and investigate the computational complexity of finding an ordinally stable or cardinally stable fractional matching which either maximizes the social welfare (i.e., the overall utilities of the agents) or the number of fully matched agents (i.e., agents whose matching values sum up to one). We complete the complexity classification of both optimization problems for both ordinal stability and cardinal stability, distinguishing between the marriage (bipartite) and roommates (non-bipartite) cases and the presence or absence of ties in the preferences. In particular, we prove a surprising result that finding a cardinally stable fractional matching with maximum social welfare is NP-hard even for the marriage case without ties. This answers an open question and exemplifies a rare variant of stable marriage that remains hard for preferences without ties. We also complete the picture of the relations of the stability notions and derive structural properties.

A Large Neighborhood Search for a Cooperative Optimization Approach to Distribute Service Points in Mobility Applications
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther
Type: Presentation
COA is a cooperative optimization approach for optimizing locations of service points for mobility applications. A machine learning component processes the feedback obtained from the customers. The learned information is then used in an optimization component to generate an optimized solution. As optimization core a Large Neighborhood Search (LNS) is used. A particular challenge for developing the LNS is that a fast way for evaluating the non-trivial objective function for candidate solutions is needed. The LNS is evaluated on artificial instances as well as instances derived from real-world data.

Worbel: Aggregating Point Labels into Word Clouds
Bhore, Sujoy, Ganian, Robert, Li, Guangping, Nöllenburg, Martin, Wulms, Jules
Type: Inproceedings; In: Proceedings of the 29th International Conference on Advances in Geographic Information Systems
Point feature labeling is a classical problem in cartography and GIS that has been extensively studied for geospatial point data. At the same time, word clouds are a popular visualization tool to show the most important words in text data which has also been extended to visualize geospatial data (Buchin et al. PacificVis 2016). In this paper, we study a hybrid visualization, which combines aspects of word clouds and point labeling. In the considered setting, the input data consists of a set of points grouped into categories and our aim is to place multiple disjoint and axis-aligned rectangles, each representing a category, such that they cover points of (mostly) the same category under some natural quality constraints. In our visualization, we then place category names inside the computed rectangles to produce a labeling of the covered points which summarizes the predominant categories globally (in a word-cloud-like fashion) while locally avoiding excessive misrepresentation of points (i.e., retaining the precision of point labeling). We show that computing a minimum set of such rectangles is NP-hard. Hence, we turn our attention to developing heuristics and exact SAT models to compute our visualizations. We evaluate our algorithms quantitatively, measuring running time and quality of the produced solutions, on several artificial and real-world data sets. Our experiments show that the heuristics produce solutions of comparable quality to the SAT models while running much faster.

Turbocharging Treewidth-Bounded Bayesian Network Structure Learning
Ramaswamy, Vaidyanathan P., Szeider, Stefan
Type: Inproceedings; In: Thirty-Fifth AAAI Conference on Artificial Intelligence; Pages: 3895-3903
We present a new approach for learning the structure of a treewidth-bounded Bayesian Network (BN). The key to our approach is applying an exact method (based on MaxSAT) locally, to improve the score of a heuristically computed BN. This approach allows us to scale the power of exact methods, so far only applicable to BNs with several dozens of random variables, to large BNs with several thousands of random variables. Our experiments show that our method improves the score of BNs provided by state-of-the-art heuristic methods, often significantly.

Parameterized Complexity in Graph Drawing
Ganian, Robert, Montecchiani, Fabrizio, Nöllenburg, Martin, Zehavi, Meirav
Type: Inproceedings; In: Seminar on Parameterized Complexity in Graph Drawing; Pages: 82-123
This report documents the program and the outcomes of Dagstuhl Seminar 21293 "Parameterized Complexity in Graph Drawing". The seminar was held mostly in-person from July 18 to July 23, 2021. It brought together 28 researchers from the Graph Drawing and the Parameterized Complexity research communities with the aim to discuss and explore open research questions on the interface between the two fields. The report collects the abstracts of talks and open problems presented in the seminar, as well as brief progress reports from the working groups.

Parameterized Complexity of Small Decision Tree Learning
Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Thirty-Fifth AAAI Conference on Artificial Intelligence; Pages: 1-9
We study the NP-hard problem of learning a decision tree (DT) of smallest depth or size from data. We provide the first parameterized complexity analysis of the problem and draw a detailed parameterized complexity map for the natural parameters: size or depth of the DT, maximum domain size of all features, and the maximum Hamming distance between any two examples. Our main result shows that learning DTs of smallest depth or size is fixed-parameter tractable (FPT) parameterized by the combination of all three of these parameters. We contrast this FPT-result by various hardness results that underline the algorithmic significance of the considered parameters.

SAT Modulo Symmetries for Graph Generation
Kirchweger, Markus, Szeider, Stefan
Type: Inproceedings; In: 27th International Conference on Principles and Practice of Constraint Programming (CP 2021); Pages: 1-16
We propose a novel constraint-based approach to graph generation. Our approach utilizes the interaction between a CDCL SAT solver and a special symmetry propagator where the SAT solver runs on an encoding of the desired graph property. The symmetry propagator checks partially generated graphs for minimality w.r.t. a lexicographic ordering during the solving process. This approach has several advantages over a static symmetry breaking: (i) symmetries are detected early in the generation process, (ii) symmetry breaking is seamlessly integrated into the CDCL procedure, and (iii) the propagator can perform a complete symmetry breaking without causing a prohibitively large initial encoding. We instantiate our approach by generating extremal graphs with certain restrictions in terms of girth and diameter. With our approach, we could confirm the Simon-Murty Conjecture (1979) on diameter-2-critical graphs for graphs up to 18 vertices.

First-Order Logic in Finite Domains: Where Semantic Evaluation Competes with SMT Solving
Schreiner, Wolfgang, Reichl, Franz-Xaver
Type: Inproceedings; In: Electronic Proceedings in Theoretical Computer Science; Pages: 99-113
We compare two approaches for deciding the validity of first-order formulas over finite domains supported by the mathematical model checker RISCAL: first, the original approach of "semantic evaluation" (based on an implementation of the denotational semantics of the RISCAL language) and, second, the later approach of SMT solving (based on satisfiability-preserving translations of RISCAL formulas to SMT-LIB formulas as inputs for SMT solvers). After a short presentation of the two approaches and a discussion of their fundamental pros and cons, we quantitatively evaluate them, both by a set of artificial benchmarks and by a set of benchmarks taken from real-life applications of RISCAL; for this, we apply the state-of-the-art SMT solvers Boolector, CVC4, Yices, and Z3. Our benchmarks demonstrate that (while SMT solving generally vastly outperforms semantic evaluation) the various SMT solvers exhibit great performance differences. More importantly, we identify classes of formulas where semantic evaluation is able to compete with (or even outperform) satisfiability solving, outlining some room for improvements in the translation of RISCAL formulas to SMT-LIB formulas as well as in the current SMT technology.

Efficient fully dynamic elimination forests with applications to detecting long paths and cycles
Chen, Jiehua, Czerwinski, Wojciech, Disser, Yann, Feldmann, Andreas Emil, Hermelin, Danny, Nadara, Wojciech, Pilipczuk, Marcin, Pilipczuk, Michał, Sorge, Manuel, Wróblewski, Bartłomiej, Zych-Pawlewicz, Anna
Type: Inproceedings; In: Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA); Pages: 796-809
We present a data structure that in a dynamic graph of treedepth at most d, which is modified over time by edge insertions and deletions, maintains an optimum-height elimination forest. The data structure achieves worst-case update time 2^(O(d^2)), which matches the best known parameter dependency in the running time of a static fpt algorithm for computing the treedepth of a graph. This improves a result of Dvořák et al. [ESA 2014], who for the same problem achieved update time f(d) for some non-elementary (i.e. tower-exponential) function f. As a by-product, we improve known upper bounds on the sizes of minimal obstructions for having treedepth d from doubly-exponential in d to d^(O(d)). As applications, we design new fully dynamic parameterized data structures for detecting long paths and cycles in general graphs. More precisely, for a fixed parameter k and a dynamic graph G, modified over time by edge insertions and deletions, our data structures maintain answers to the following queries: Does G contain a simple path on k vertices? Does G contain a simple cycle on at least k vertices? In the first case, the data structure achieves amortized update time 2^(O(k^2)). In the second case, the amortized update time is 2^(O(k^4)) + O(k log n). In both cases we assume access to a dictionary on the edges of G.

Approximate Evaluation of First-Order Counting Queries
Dreier, Jan, Rossmanith, Peter
Type: Inproceedings; In: Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA); Pages: 1720-1739
Kuske and Schweikardt introduced the very expressive first-order counting logic FOC(P) to model database queries with counting operations. They showed that there is an efficient model-checking algorithm on graphs with bounded degree, while Grohe and Schweikardt showed that probably no such algorithm exists for trees of bounded depth. We analyze the fragment FO({>0}) of this logic. While we remove for example subtraction and comparison between two non-atomic counting terms, this logic remains quite expressive: We allow nested counting and comparison between counting terms and arbitrarily large numbers. Our main result is an approximation scheme of the model-checking problem for FO({>0}) that runs in linear fpt time on structures with bounded expansion. This scheme either gives the correct answer or says "I do not know." The latter answer may only be given if small perturbations in the number-symbols of the formula could make it both satisfied and unsatisfied. This is complemented by showing that exactly solving the model-checking problem for FO({>0}) is already hard on trees of bounded depth and just slightly increasing the expressiveness of FO({>0}) makes even approximation hard on trees.

The Parameterized Complexity of Clustering Incomplete Data
Eiben, Eduard, Ganian, Robert, Kanj, Iyad, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Thirty-Fifth AAAI Conference on Artificial Intelligence; Pages: 7296-7304
We study fundamental clustering problems for incomplete data. Specifically, given a set of incomplete d-dimensional vectors (representing rows of a matrix), the goal is to complete the missing vector entries in a way that admits a partitioning of the vectors into at most k clusters with radius or diameter at most r. We give tight characterizations of the parameterized complexity of these problems with respect to the parameters k, r, and the minimum number of rows and columns needed to cover all the missing entries. We show that the considered problems are fixed-parameter tractable when parameterized by the three parameters combined, and that dropping any of the three parameters results in parameterized intractability. A byproduct of our results is that, for the complete data setting, all problems under consideration are fixed-parameter tractable parameterized by k + r.

Lacon- and Shrub-Decompositions: A New Characterization of First-Order Transductions of Bounded Expansion Classes
Dreier, Jan
Type: Inproceedings; In: 2021 36th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS)
The concept of bounded expansion provides a robust way to capture sparse graph classes with interesting algorithmic properties. Most notably, every problem definable in first-order logic can be solved in linear time on bounded expansion graph classes. First-order interpretations and transductions of sparse graph classes lead to more general, dense graph classes that seem to inherit many of the nice algorithmic properties of their sparse counterparts. In this work we introduce lacon- and shrub-decompositions and use them to characterize transductions of bounded expansion graph classes and other graph classes. If one can efficiently compute sparse shrub- or lacon-decompositions of transductions of bounded expansion classes then one can solve every problem definable in first-order logic in linear time on these classes.

Gerrymandering on Graphs: Computational Complexity and Parameterized Algorithms
Gupta, Sushmita, Jain, Pallavi, Panolan, Fahad, Roy, Sanjukta, Saurabh, Saket
Type: Inproceedings; In: Algorithmic Game Theory; Pages: 140-155
This paper studies gerrymandering on graphs from a computational viewpoint (introduced by Cohen-Zemach et al. [AAMAS 2018] and continued by Ito et al. [AAMAS 2019]). Our contributions are twofold: conceptual and computational. We propose a generalization of the model studied by Ito et al., where the input consists of a graph on n vertices representing the set of voters, a set of m candidates C, a weight function wv : C → Z+ for each voter v ∈ V (G) representing the preference of the voter over the candidates, a distinguished candidate p ∈ C, and a positive integer k. The objective is to decide if it is possible to partition the vertex set into k districts (i.e., pairwise disjoint connected sets) such that the candidate p wins more districts than any other candidate. There are several natural parameters associated with the problem: the number of districts (k), the number of voters (n), and the number of candidates (m). The problem is known to be NP-complete even if k = 2, m = 2, and G is either a complete bipartite graph (in fact K2,n, i.e., partitions of size 2 and n) or a complete graph. Moreover, recently we and Bentert et al. [WG 2021], independently, showed that the problem is NP-hard for paths. This means that the search for FPT algorithms needs to focus either on the parameter n, or on subclasses of forests (as the problem is NP-complete on K2,n, a family of graphs that can be transformed into a forest by deleting one vertex). Circumventing these intractability results, we successfully obtain several algorithmic results.

On (Coalitional) Exchange-Stable Matching
Chen, Jiehua, Chmurovic, Adrian, Jogl, Fabian, Sorge, Manuel
Type: Inproceedings; In: Algorithmic Game Theory; Pages: 205-220
We study (coalitional) exchange stability, which Alcalde [Economic Design, 1995] introduced as an alternative solution concept for matching markets involving property rights, such as assigning persons to two-bed rooms. Here, a matching of a given Stable Marriage or Stable Roommates instance is called coalitional exchange-stable if it does not admit any exchange-blocking coalition, that is, a subset S of agents in which everyone prefers the partner of some other agent in S. The matching is exchange-stable if it does not admit any exchange-blocking pair, that is, an exchange-blocking coalition of size two. We investigate the computational and parameterized complexity of the Coalitional Exchange-Stable Marriage (resp. Coalitional Exchange Roommates) problem, which is to decide whether a Stable Marriage (resp. Stable Roommates) instance admits a coalitional exchange-stable matching. Our findings resolve an open question and confirm the conjecture of Cechlárová and Manlove [Discrete Applied Mathematics, 2005] that Coalitional Exchange-Stable Marriage is NP-hard even for complete preferences without ties. We also study bounded-length preference lists and a local-search variant of deciding whether a given matching can reach an exchange-stable one after at most k swaps, where a swap is defined as exchanging the partners of the two agents in an exchange-blocking pair.
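The exchange-blocking-pair condition described above (everyone in the pair prefers the other's partner) can be checked directly. A minimal sketch under assumed data shapes, with all names illustrative and not taken from the paper:

```python
def is_exchange_stable(matching, prefs):
    """Return True if no exchange-blocking pair exists: two agents x, y such
    that x prefers y's partner to its own partner and vice versa.
    `matching` maps every agent to its partner (both directions);
    `prefs[a]` lists the agents acceptable to a, most preferred first."""
    rank = {a: {b: i for i, b in enumerate(p)} for a, p in prefs.items()}
    agents = list(matching)
    for i, x in enumerate(agents):
        for y in agents[i + 1:]:
            mx, my = matching[x], matching[y]
            # Only agents who rank each other's partners can swap.
            if (my in rank[x] and mx in rank[y]
                    and rank[x][my] < rank[x][mx]
                    and rank[y][mx] < rank[y][my]):
                return False  # x and y would both gain by swapping partners
    return True

# m1 and m2 each prefer the other's partner: an exchange-blocking pair.
prefs = {"m1": ["w2", "w1"], "m2": ["w1", "w2"],
         "w1": ["m1", "m2"], "w2": ["m2", "m1"]}
matching = {"m1": "w1", "w1": "m1", "m2": "w2", "w2": "m2"}
print(is_exchange_stable(matching, prefs))  # False
```

Coalitional exchange stability additionally requires checking cyclic exchanges among coalitions of any size, which this pairwise sketch does not cover.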

Fractional Matchings under Preferences: Stability and Optimality
Chen, Jiehua, Roy, Sanjukta, Sorge, Manuel
Type: Inproceedings; In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
We study generalizations of stable matching in which agents may be matched fractionally; this models time-sharing assignments. We focus on the so-called ordinal stability and cardinal stability, and investigate the computational complexity of finding an ordinally stable or cardinally stable fractional matching which either maximizes the social welfare (i.e., the overall utilities of the agents) or the number of fully matched agents (i.e., agents whose matching values sum up to one). We complete the complexity classification of both optimization problems for both ordinal stability and cardinal stability, distinguishing between the marriage (bipartite) and roommates (non-bipartite) cases and the presence or absence of ties in the preferences. In particular, we prove a surprising result that finding a cardinally stable fractional matching with maximum social welfare is NP-hard even for the marriage case without ties. This answers an open question and exemplifies a rare variant of stable marriage that remains hard for preferences without ties. We also complete the picture of the relations of the stability notions and derive structural properties.

External Labeling: Fundamental Concepts and Algorithmic Techniques
Bekos, Michael A., Niedermann, Benjamin, Nöllenburg, Martin
Type: Book; Pages: 130
This book focuses on techniques for automating the procedure of creating external labelings, also known as callout labelings. In this labeling type, the features within an illustration are connected by thin leader lines (called leaders) with their labels, which are placed in the empty space surrounding the image. In general, textual labels describing graphical features in maps, technical illustrations (such as assembly instructions or cutaway illustrations), or anatomy drawings are an important aspect of visualization that convey information on the objects of the visualization and help the reader understand what is being displayed. Most labeling techniques can be classified into two main categories depending on the "distance" of the labels to their associated features. Internal labels are placed inside or in the direct neighborhood of features, while external labels, which form the topic of this book, are placed in the margins outside the illustration, where they do not occlude the illustration itself. Both approaches form well-studied topics in diverse areas of computer science with several important milestones. The goal of this book is twofold. The first is to serve as an entry point for the interested reader who wants to get familiar with the basic concepts of external labeling, as it introduces a unified and extensible taxonomy of labeling models suitable for a wide range of applications. The second is to serve as a point of reference for more experienced people in the field, as it brings forth a comprehensive overview of a wide range of approaches to produce external labelings that are efficient either in terms of different algorithmic optimization criteria or in terms of their usability in specific application domains. The book mostly concentrates on algorithmic aspects of external labeling, but it also presents various visual aspects that affect the aesthetic quality and usability of external labeling.

A local search framework for industrial test laboratory scheduling
Mischek, Florian, Musliu, Nysret
Type: Article; In: Annals of Operations Research; Vol: 302; Pages: 533-562
In this paper we introduce a complex scheduling problem that arises in a real-world industrial test laboratory, where a large number of activities has to be performed using qualified personnel and specialized equipment, subject to time windows and several other constraints. The problem is an extension of the well-known Resource-Constrained Project Scheduling Problem and features multiple heterogeneous resources with very general availability restrictions, as well as a grouping phase, where the jobs have to be assembled from smaller units. We describe an instance generator for this problem and publicly available instance sets, both randomly generated and real-world data. Finally, we present and evaluate different metaheuristic approaches to solve the scheduling subproblem, where the assembled jobs are already provided. Our results show that Simulated Annealing can be used to achieve very good results, in particular for large instances, where it is able to consistently find better solutions than a state-of-the-art constraint programming solver within reasonable time.

Link to Repositum

Smart Charging of Electric Vehicles Considering SOC-Dependent Maximum Charging Powers
Schaden, Benjamin, Jatschka, Thomas, Limmer, Steffen, Raidl, Günther
Type: Article; In: Energies; Vol: 14; Issue: 22; Pages: 1-33
Show Abstract
The aim of this work is to schedule the charging of electric vehicles (EVs) at a single charging station such that the temporal availability of each EV as well as the maximum available power at the station are considered. The total costs for charging the vehicles should be minimized w.r.t. time-dependent electricity costs. A particular challenge investigated in this work is that the maximum power at which a vehicle can be charged is dependent on the current state of charge (SOC) of the vehicle. Such a consideration is particularly relevant in the case of fast charging. Considering this aspect for a discretized time horizon is not trivial, as the maximum charging power of an EV may also change in between time steps. To deal with this issue, we instead consider the energy by which an EV can be charged within a time step. For this purpose, we show how to derive the maximum charging energy in an exact as well as an approximate way. Moreover, we propose two methods for solving the scheduling problem. The first is a cutting plane method utilizing a convex hull of the, in general, nonconcave SOC–power curves. The second method is based on a piecewise linearization of the SOC–energy curve and is effectively solved by branch-and-cut. The proposed approaches are evaluated on benchmark instances, which are partly based on real-world data. To deal with EVs arriving at different times as well as charging costs changing over time, a model-based predictive control strategy is usually applied in such cases. Hence, we also experimentally evaluate the performance of our approaches for such a strategy. The results show that optimally solving problems with general piecewise linear maximum power functions requires high computation times. However, problems with concave, piecewise linear maximum charging power functions can efficiently be dealt with by means of linear programming. Approximating an EV's maximum charging power with a concave function may result in practically infeasible solutions, due to vehicles potentially not reaching their specified target SOC. However, our results show that this error is negligible in practice.
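The core difficulty the abstract describes is that the maximum charging power changes with the SOC even within a single time step, so the chargeable energy per step must be derived rather than read off the curve. The sketch below illustrates this with a simple Euler integration under a hypothetical taper curve; it is an illustration only, not the paper's exact or approximate derivation, and all names and parameters are invented.

```python
def max_energy_in_step(soc0, dt_h, capacity_kwh, p_max, n_sub=1000):
    """Numerically estimate the maximum energy (kWh) an EV can absorb within
    one time step of dt_h hours when the maximum charging power p_max(soc)
    depends on the current state of charge (simple Euler integration)."""
    soc, energy = soc0, 0.0
    h = dt_h / n_sub
    for _ in range(n_sub):
        if soc >= 1.0:
            break
        p = p_max(soc)                               # kW available at this SOC
        e = min(p * h, (1.0 - soc) * capacity_kwh)   # never overshoot a full battery
        energy += e
        soc += e / capacity_kwh
    return energy

# Hypothetical SOC-power curve: flat 50 kW up to 80% SOC, linear taper afterwards.
curve = lambda s: 50.0 if s < 0.8 else 50.0 * (1.0 - s) / 0.2
e = max_energy_in_step(soc0=0.5, dt_h=0.25, capacity_kwh=40.0, p_max=curve)
```

Note how the taper kicks in mid-step: charging at a flat 50 kW for 0.25 h would deliver 12.5 kWh, but the SOC crosses the 80% knee during the step, so the actual maximum energy is slightly less.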

Link to Repositum

A General Cooperative Optimization Approach for Distributing Service Points in Mobility Applications
Jatschka, Thomas, Raidl, Günther R., Rodemann, Tobias
Type: Article; In: Algorithms; Vol: 14; Issue: 8; Pages: 232
Show Abstract
This article presents a cooperative optimization approach (COA) for distributing service points for mobility applications, which generalizes and refines a previously proposed method. COA is an iterative framework for optimizing service point locations, combining an optimization component with user interaction on a large scale and a machine learning component that learns user needs and provides the objective function for the optimization. The previously proposed COA was designed for mobility applications in which single service points are sufficient for satisfying individual user demand. This framework is generalized here for applications in which the satisfaction of demand relies on the existence of two or more suitably located service stations, such as in the case of bike/car sharing systems. A new matrix factorization model is used as surrogate objective function for the optimization, allowing us to learn and exploit similar preferences among users w.r.t. service point locations. Based on this surrogate objective function, a mixed integer linear program is solved to generate an optimized solution to the problem w.r.t. the currently known user information. User interaction, refinement of the matrix factorization, and optimization are iterated. An experimental evaluation analyzes the performance of COA with special consideration of the number of user interactions required to find near optimal solutions. The algorithm is tested on artificial instances, as well as instances derived from real-world taxi data from Manhattan. Results show that the approach can effectively solve instances with hundreds of potential service point locations and thousands of users, while keeping the user interactions reasonably low. A bound on the number of user interactions required to obtain full knowledge of user preferences is derived, and results show that with 50% of performed user interactions the solutions generated by COA feature optimality gaps of only 1.45% on average.

Link to Repositum

The complexity landscape of decompositional parameters for ILP: Programs with few global variables and constraints
Dvořák, Pavel, Eiben, Eduard, Ganian, Robert, Knop, Dušan, Ordyniak, Sebastian
Type: Article; In: Artificial Intelligence; Vol: 300; Issue: 103561; Pages: 103561
Show Abstract
Integer Linear Programming (ILP) has a broad range of applications in various areas of artificial intelligence. Yet in spite of recent advances, we still lack a thorough understanding of which structural restrictions make ILP tractable. Here we study ILP instances consisting of a small number of "global" variables and/or constraints such that the remaining part of the instance consists of small and otherwise independent components; this is captured in terms of a structural measure we call fracture backdoors, which generalizes, for instance, the well-studied class of N-fold ILP instances. Our main contributions can be divided into three parts. First, we formally develop fracture backdoors and obtain exact and approximation algorithms for computing these. Second, we exploit these backdoors to develop several new parameterized algorithms for ILP; the performance of these algorithms will naturally scale based on the number of global variables or constraints in the instance. Finally, we complement the developed algorithms with matching lower bounds. Altogether, our results paint a near-complete complexity landscape of ILP with respect to fracture backdoors.

Link to Repositum

Balanced stable marriage: How close is close enough?
Gupta, Sushmita, Roy, Sanjukta, Saurabh, Saket, Zehavi, Meirav
Type: Article; In: Theoretical Computer Science; Vol: 883; Pages: 19-43
Show Abstract
Balanced Stable Marriage (BSM) is a central optimization version of the classic Stable Marriage (SM) problem. We study BSM from the viewpoint of Parameterized Complexity. Informally, the input of BSM consists of n men, n women, and an integer k. Each person a has a (sub)set of acceptable partners, A(a), whom a ranks strictly; we use p_a(b) to denote the position of b ∈ A(a) in a's preference list. The objective is to decide whether there exists a stable matching μ such that balance(μ) = max{Σ_{(m,w)∈μ} p_m(w), Σ_{(m,w)∈μ} p_w(m)} ≤ k. In SM, all stable matchings match the same set of agents, A*, which can be computed in polynomial time. As balance(μ) ≥ |A*|/2 for any stable matching μ, BSM is trivially fixed-parameter tractable (FPT) with respect to k. Thus, a natural question is whether BSM is FPT with respect to k − |A*|/2. With this viewpoint in mind, we draw a line between tractability and intractability in relation to the target value. This line separates additional natural parameterizations higher/lower than ours (e.g., we automatically resolve the parameterization k − |A*|/2). The two extreme stable matchings are the man-optimal μ_M and the woman-optimal μ_W. Let O_M = Σ_{(m,w)∈μ_M} p_m(w), and O_W = Σ_{(m,w)∈μ_W} p_w(m).
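The balance objective is easy to make concrete: Gale-Shapley yields the man-optimal stable matching, and balance(μ) is the larger of the two total rank sums. The sketch below is a minimal illustration under an invented two-agent instance, not the paper's parameterized algorithm.

```python
def gale_shapley(men_pref, women_pref):
    """Man-optimal stable matching (Gale-Shapley, 1962). Preferences are
    dicts mapping each person to a list of acceptable partners, best first."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    free = list(men_pref)
    nxt = {m: 0 for m in men_pref}   # next preference index each man proposes to
    fiance = {}                      # woman -> currently engaged man
    while free:
        m = free.pop()
        w = men_pref[m][nxt[m]]
        nxt[m] += 1
        if w not in fiance:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])   # w trades up; her old partner is free again
            fiance[w] = m
        else:
            free.append(m)
    return {m: w for w, m in fiance.items()}

def balance(mu, men_pref, women_pref):
    """balance(mu) = max of the men's and the women's total 1-based ranks."""
    om = sum(men_pref[m].index(w) + 1 for m, w in mu.items())
    ow = sum(women_pref[w].index(m) + 1 for m, w in mu.items())
    return max(om, ow)

# Tiny hypothetical instance: each side's optima are the other side's pessima.
men = {'m1': ['w1', 'w2'], 'm2': ['w2', 'w1']}
women = {'w1': ['m2', 'm1'], 'w2': ['m1', 'm2']}
mu = gale_shapley(men, women)        # man-optimal: O_M = 2, O_W = 4
```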

Link to Repositum

Matchings under Preferences: Strength of Stability and Tradeoffs
Chen, Jiehua, Skowron, Piotr, Sorge, Manuel
Type: Article; In: ACM Transactions on Economics and Computation; Vol: 9; Issue: 4; Pages: 1-55
Show Abstract
We propose two solution concepts for matchings under preferences: robustness and near stability. The former strengthens while the latter relaxes the classical definition of stability by Gale and Shapley (1962). Informally speaking, robustness requires that a matching must be stable in the classical sense, even if the agents slightly change their preferences. Near stability, however, imposes that a matching must become stable (again, in the classical sense) provided the agents are willing to adjust their preferences a bit. Both of our concepts are quantitative; together they provide means for a fine-grained analysis of the stability of matchings. Moreover, our concepts allow the exploration of tradeoffs between stability and other criteria of social optimality, such as the egalitarian cost and the number of unmatched agents. We investigate the computational complexity of finding matchings that implement certain predefined tradeoffs. We provide a polynomial-time algorithm that, given agent preferences, returns a socially optimal robust matching (if it exists), and we prove that finding a socially optimal and nearly stable matching is computationally hard.

Link to Repositum

On Structural Parameterizations of the Edge Disjoint Paths Problem
Ganian, Robert, Ordyniak, Sebastian, Ramanujan, M. S.
Type: Article; In: Algorithmica; Vol: 83; Issue: 6; Pages: 1605-1637
Show Abstract
In this paper we revisit the classical edge disjoint paths (EDP) problem, where one is given an undirected graph G and a set of terminal pairs P and asks whether G contains a set of pairwise edge-disjoint paths connecting every terminal pair in P. Our focus lies on structural parameterizations for the problem that allow for efficient (polynomial-time or FPT) algorithms. As our first result, we answer an open question stated in Fleszar et al. (Proceedings of the ESA, 2016), by showing that the problem can be solved in polynomial time if the input graph has a feedback vertex set of size one. We also show that EDP parameterized by the treewidth and the maximum degree of the input graph is fixed-parameter tractable. Having developed two novel algorithms for EDP using structural restrictions on the input graph, we then turn our attention towards the augmented graph, i.e., the graph obtained from the input graph after adding one edge between every terminal pair. In contrast to the input graph, where EDP is known to remain NP-hard even for treewidth two, a result by Zhou et al. (Algorithmica 26(1):3-30, 2000) shows that EDP can be solved in non-uniform polynomial time if the augmented graph has constant treewidth; we note that the possible improvement of this result to an FPT-algorithm has remained open since then. We show that this is highly unlikely by establishing the W[1]-hardness of the problem parameterized by the treewidth (and even feedback vertex set) of the augmented graph. Finally, we develop an FPT-algorithm for EDP by exploiting a novel structural parameter of the augmented graph.

Link to Repositum

New width parameters for SAT and #SAT
Ganian, Robert, Szeider, Stefan
Type: Article; In: Artificial Intelligence; Vol: 295; Issue: 103460; Pages: 103460
Show Abstract
We study the parameterized complexity of the propositional satisfiability (SAT) and the more general model counting (#SAT) problems and obtain novel fixed-parameter algorithms that exploit the structural properties of input formulas. In the first part of the paper, we parameterize by the treewidth of the following two graphs associated with CNF formulas: the consensus graph and the conflict graph. Both graphs have as vertices the clauses of the formula; in the consensus graph two clauses are adjacent if they do not contain a complementary pair of literals, while in the conflict graph two clauses are adjacent if they do contain a complementary pair of literals. We show that #SAT is fixed-parameter tractable when parameterized by the treewidth of the former graph, but SAT is W[1]-hard when parameterized by the treewidth of the latter graph. In the second part of the paper, we turn our attention to a novel structural parameter we call h-modularity which is loosely inspired by the well-established notion of community structure. The new parameter is defined in terms of a partition of clauses of the given CNF formula into strongly interconnected communities which are sparsely interconnected with each other. Each community forms a hitting formula, whereas the interconnections between communities form a graph of small treewidth. Our algorithms first identify the community structure and then use it for an efficient solution of SAT and #SAT, respectively.

Link to Repositum

Your rugby mates don't need to know your colleagues: Triadic closure with edge colors
Bulteau, Laurent, Grüttemeier, Niels, Komusiewicz, Christian, Sorge, Manuel
Type: Article; In: Journal of Computer and System Sciences; Vol: 120; Pages: 75-96
Show Abstract
Given an undirected graph G = (V, E), the NP-hard Strong Triadic Closure (STC) problem asks for a labeling of the edges as weak and strong such that at most k edges are weak and for each induced P3 in G at least one edge is weak. We study the following generalizations of STC with c different strong edge colors. In Multi-STC an induced P3 may receive two strong labels as long as they are different. In Edge-List Multi-STC and Vertex-List Multi-STC we may restrict the set of permitted colors for each edge of G. We show that, under the Exponential Time Hypothesis (ETH), Edge-List Multi-STC and Vertex-List Multi-STC cannot be solved in time 2^{o(|V|²)}. We proceed with a parameterized complexity analysis in which we extend previous algorithms and kernelizations for STC [11,14] to the three variants or outline the limits of such an extension.
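The STC constraint is simple to verify for a given weak/strong labeling: every induced P3 must carry at least one weak edge. A minimal checker, illustrative only and with invented names, could look like this:

```python
from itertools import combinations

def violated_p3s(edges, weak):
    """Return the induced P3s (paths a-v-b with a, b non-adjacent) whose two
    edges are both labeled strong -- the pattern forbidden by STC.
    weak is a set of frozenset edges labeled weak; all others are strong."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    bad = []
    for v in adj:
        for a, b in combinations(adj[v], 2):
            if b not in adj[a]:                       # a-v-b is an induced P3
                if frozenset((a, v)) not in weak and frozenset((v, b)) not in weak:
                    bad.append((a, v, b))
    return bad

# Path a-b-c: labeling both edges strong violates STC; making one weak fixes it.
path = [('a', 'b'), ('b', 'c')]
assert len(violated_p3s(path, weak=set())) == 1
assert violated_p3s(path, weak={frozenset(('a', 'b'))}) == []
```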

Link to Repositum

Small one‑dimensional Euclidean preference profiles
Chen, Jiehua, Grottke, Sven
Type: Article; In: Social Choice and Welfare; Vol: 57; Issue: 1; Pages: 117-144
Show Abstract
We characterize one-dimensional Euclidean preference profiles with a small number of alternatives and voters. We show that every single-peaked preference profile with two voters is one-dimensional Euclidean, and that every preference profile with up to five alternatives is one-dimensional Euclidean if and only if it is both single-peaked and single-crossing. By the work of Chen et al. (Social Choice and Welfare 48(2):409-432, 2017), we thus obtain that the smallest single-peaked and single-crossing preference profiles that are not one-dimensional Euclidean consist of three voters and six alternatives.
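A profile is one-dimensional Euclidean when voters and alternatives can be placed on a line so that each voter ranks alternatives by increasing distance from their own point. Verifying a candidate placement is straightforward; the sketch below does exactly that (the profile and coordinates are invented for illustration):

```python
def is_euclidean_embedding(profile, pos):
    """Check whether the point assignment pos (voter/alternative -> real)
    realizes the profile: each voter must be strictly closer to every
    alternative than to the next one in their ranking. Checking consecutive
    pairs suffices, since strict < is transitive along the ranking."""
    for voter, ranking in profile.items():
        for a, b in zip(ranking, ranking[1:]):
            if not abs(pos[voter] - pos[a]) < abs(pos[voter] - pos[b]):
                return False
    return True

# Two voters with opposite rankings over three alternatives, plus a
# hypothetical embedding on the line that realizes them.
profile = {'v1': ['A', 'B', 'C'], 'v2': ['C', 'B', 'A']}
pos = {'v1': 0.0, 'v2': 3.0, 'A': 0.5, 'B': 1.5, 'C': 2.5}
assert is_euclidean_embedding(profile, pos)
```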

Link to Repositum

Geometric planar networks on bichromatic collinear points
Bandyapadhyay, Sayan, Banik, Aritra, Bhore, Sujoy, Nöllenburg, Martin
Type: Article; In: Theoretical Computer Science; Vol: 895; Pages: 124-136
Show Abstract
We study three classical graph problems - Hamiltonian path, minimum spanning tree, and minimum perfect matching - on geometric graphs induced by bichromatic (red and blue) points. These problems have been widely studied for points in the Euclidean plane, and many of them are NP-hard. In this work, we consider these problems for collinear points. We show that almost all of these problems can be solved in linear time in this setting.

Link to Repositum

On Strict (Outer-)Confluent Graphs
Förster, Henry, Ganian, Robert, Klute, Fabian, Nöllenburg, Martin
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 25; Issue: 1; Pages: 481-512
Show Abstract
A strict confluent (SC) graph drawing is a drawing of a graph with vertices as points in the plane, where vertex adjacencies are represented not by individual curves but rather by unique smooth paths through a planar system of junctions and arcs. If all vertices of the graph lie in the outer face of the drawing, the drawing is called a strict outerconfluent (SOC) drawing. SC and SOC graphs were first considered by Eppstein et al. in Graph Drawing 2013. Here, we establish several new relationships between the class of SC graphs and other graph classes, in particular string graphs and unit-interval graphs. Further, we extend earlier results about special bipartite graph classes to the notion of strict outerconfluency, show that SOC graphs have cop number two, and establish that tree-like (Δ-)SOC graphs have bounded cliquewidth.

Link to Repositum

Labeling nonograms: Boundary labeling for curve arrangements
Klute, Fabian, Löffler, Maarten, Nöllenburg, Martin
Type: Article; In: Computational Geometry; Vol: 98; Issue: 101791; Pages: 101791
Show Abstract
Slanted and curved nonograms are a new type of picture puzzles introduced by Van de Kerkhof et al. (2019). They consist of an arrangement of lines or curves within a frame B, where some of the cells need to be colored in order to obtain the solution picture. For solving the puzzle, up to two clues need to be attached as numeric labels to each line on either side of B. In this paper we study the algorithmic problem of optimizing or deciding the existence of a placement of the given clue labels to such a nonogram. We provide polynomial-time algorithms for restricted cases and prove NP-completeness in general.

Link to Repositum

The Complexity of Bayesian Network Learning: Revisiting the Superstructure
Ganian, Robert, Korchemna, Viktoria
Type: Inproceedings; In: Advances in Neural Information Processing Systems 34 (NeurIPS 2021); Vol: 34; Pages: 430-442
Show Abstract
We investigate the parameterized complexity of Bayesian Network Structure Learning (BNSL), a classical problem that has received significant attention in empirical but also purely theoretical studies. We follow up on previous works that have analyzed the complexity of BNSL w.r.t. the so-called superstructure of the input. While known results imply that BNSL is unlikely to be fixed-parameter tractable even when parameterized by the size of a vertex cover in the superstructure, here we show that a different kind of parameterization - notably by the size of a feedback edge set - yields fixed-parameter tractability. We proceed by showing that this result can be strengthened to a localized version of the feedback edge set, and provide corresponding lower bounds that complement previous results to provide a complexity classification of BNSL w.r.t. virtually all well-studied graph parameters. We then analyze how the complexity of BNSL depends on the representation of the input. In particular, while the bulk of past theoretical work on the topic assumed the use of the so-called non-zero representation, here we prove that if an additive representation can be used instead then BNSL becomes fixed-parameter tractable even under significantly milder restrictions to the superstructure, notably when parameterized by the treewidth alone. Last but not least, we show how our results can be extended to the closely related problem of Polytree Learning.

Link to Repositum

Avoiding Monochromatic Rectangles Using Shift Patterns
Liu, Zhenjun, Chew, Leroy, Heule, Marijn
Type: Inproceedings; In: Proceedings of the Fourteenth International Symposium on Combinatorial Search; Pages: 225-227
Show Abstract
Ramsey Theory (Graham and Rothschild 1990) deals with patterns that cannot be avoided indefinitely. In this paper we focus on a pattern of coloring an n by m grid with k colors: consider all possible rectangles within the grid whose length and width are at least 2, and try to color the grid using k colors so that no such rectangle has the same color for its four corners. When this is possible, we say that the n by m grid is k-colorable while avoiding monochromatic rectangles. Many results regarding this problem have been derived by a pure combinatorial approach: for example, a generalization of Van der Waerden's Theorem can give an upper bound; it was shown (Fenner et al. 2010) that for each prime power k, a k² + k by k² grid is k-colorable but adding a row makes it not k-colorable. However, these results are unable to decide many grid sizes: whether an 18 by 18 grid is 4-colorable is an example. This grid had been the last missing piece of the question of 4-colorability, and a challenge prize was raised to close the gap (Hayes 2009). Three years later, a valid 4-coloring of that grid was found by encoding the problem into propositional logic and applying SAT-solving techniques (Steinbach and Posthoff 2012). That solution has highly symmetric color assignments by construction: assignments of red are obtained by rotating the assignments of white around the center by 90 degrees, blue by 180 degrees, and so on. By now, k-colorability has been decided for k ∈ {2, 3, 4} for all grids. Therefore, it is natural to ask: what about 5 colors? Applying the aforementioned theorem (Fenner et al. 2010), the 25 by 30 grid is 5-colorable, but for other grids such as 26 by 26 the problem remains open. Like many combinatorial search problems, the rectangle-free grid coloring problem is characterized by an enormous search space and rich symmetries. Symmetry breaking is a common technique to trim down the search space while preserving satisfiability. While breaking symmetries between different solutions is definitely helpful, breaking the so-called "internal symmetries" within a specific solution has also been proved effective (Heule and Walsh 2010). Enforcing observed patterns is also known as "streamlining" (Gomes and Sellmann 2004) and "resolution tunnels" (Kouril and Franco 2005) and has been effective in improving lower bounds of various combinatorial problems including Van der Waerden numbers (Kouril and Franco 2005; Heule 2017), Latin squares (Gomes and Sellmann 2004), and graceful graphs (Heule and Walsh 2010). However, the rotation internal symmetry that Steinbach and Posthoff applied does not translate to 5 colors. In finding a 4-coloring of the 18 by 18 grid, Steinbach and Posthoff generated a "cyclic reusable assignment" for one color and rotated the solution by 90, 180, and 270 degrees to assign the remaining three. Rotation by 90 degrees does not apply naturally when the number of colors is not a multiple of 4. Thus, to find a 5-coloring of 26 by 26, or rather, to find a valid coloring for any number of colors k in general, an internal symmetry that is applicable to all k is very desirable. We found a novel internal symmetry that is unrestricted by the number of colors k. Further analysis of this symmetry gives further constraints on the number of occurrences of each color. Factoring in these constraints, the search time for G_{24,24} and G_{25,25} can be reduced to a few minutes. We also attempted to solve the 26 by 26 grid; many attempts came down to only 2 or 3 unsatisfied clauses, but none succeeded.
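The constraint being encoded into SAT here is easy to state procedurally: no axis-aligned rectangle may have four corners of the same color. A brute-force checker, illustrative only (the 4x4 example grid is invented and merely verified by the checker itself), could look like this:

```python
from itertools import combinations

def monochromatic_rectangle(grid):
    """Return a monochromatic rectangle (r1, r2, c1, c2) whose four corners
    share a color, or None if the coloring is rectangle-free."""
    for r1, r2 in combinations(range(len(grid)), 2):
        for c1, c2 in combinations(range(len(grid[0])), 2):
            if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                return (r1, r2, c1, c2)
    return None

# A hypothetical rectangle-free 2-coloring of a 4x4 grid.
ok = [[0, 0, 0, 1],
      [0, 1, 1, 0],
      [1, 0, 1, 0],
      [1, 1, 0, 1]]
assert monochromatic_rectangle(ok) is None
```

A SAT encoding of the same condition introduces one Boolean variable per cell and color and, for every rectangle and color, a clause forbidding all four corners taking that color simultaneously.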

Link to Repositum

Hamiltonian cycles in planar cubic graphs with facial 2-factors, and a new partial solution of Barnette's Conjecture
Bagheri Ghavam Abadi, Behrooz, Feder, Tomas, Fleischner, Herbert, Subi, Carlos
Type: Article; In: Journal of Graph Theory; Vol: 96; Issue: 2; Pages: 269-288
Show Abstract
We study the existence of hamiltonian cycles in plane cubic graphs 𝐺 having a facial 2-factor Q. Thus hamiltonicity in 𝐺 is transformed into the existence of a (quasi) spanning tree of faces in the contraction 𝐺∕Q. In particular, we study the case where 𝐺 is the leapfrog extension (called vertex envelope) of a plane cubic graph 𝐺₀. As a consequence we prove hamiltonicity in the leapfrog extension of planar cubic cyclically 4-edge-connected bipartite graphs. This and other results of this paper establish partial solutions of Barnette's Conjecture according to which every 3-connected cubic planar bipartite graph is hamiltonian. These results go considerably beyond Goodey's result on this topic.

Link to Repositum

Untangling Circular Drawings: Algorithms and Complexity
Bhore, Sujoy, Li, Guangping, Nöllenburg, Martin, Rutter, Ignaz, Wu, Hsiang-Yun
Type: Inproceedings; In: 32nd International Symposium on Algorithms and Computation (ISAAC 2021); Pages: 1-17
Show Abstract
We consider the problem of untangling a given (non-planar) straight-line circular drawing δG of an outerplanar graph G = (V,E) into a planar straight-line circular drawing by shifting a minimum number of vertices to a new position on the circle. For an outerplanar graph G, it is clear that such a crossing-free circular drawing always exists and we define the circular shifting number shift◦(δG) as the minimum number of vertices that need to be shifted to resolve all crossings of δG. We show that the problem Circular Untangling, asking whether shift◦(δG) ≤ K for a given integer K, is NP-complete. Based on this result we study Circular Untangling for almost-planar circular drawings, in which a single edge is involved in all the crossings. In this case we provide a tight upper bound shift◦(δG) ≤ ⌊n/2⌋ − 1, where n is the number of vertices in G, and present a polynomial-time algorithm to compute the circular shifting number of almost-planar drawings.
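In a circular drawing, crossings depend only on the cyclic order of the vertices: two chords cross exactly when the endpoints of one separate the endpoints of the other along the circle. The sketch below counts crossings for a given circular order (an illustrative helper with invented names, not the paper's NP-completeness machinery):

```python
def crossings(order, edges):
    """Count pairwise crossings of a straight-line circular drawing.
    order: vertices in circular sequence; edges: list of vertex pairs.
    Chords (a,b) and (c,d) cross iff exactly one of c, d lies strictly
    between a and b along the circle."""
    n = len(order)
    pos = {v: i for i, v in enumerate(order)}
    def between(a, b, x):   # is x strictly inside the arc from a to b (clockwise)?
        a, b, x = pos[a], pos[b], pos[x]
        return x != a and x != b and (x - a) % n < (b - a) % n
    cnt = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (a, b), (c, d) = edges[i], edges[j]
            if {a, b} & {c, d}:
                continue                      # shared endpoint: no crossing
            if between(a, b, c) != between(a, b, d):
                cnt += 1
    return cnt

# The two diagonals of a 4-cycle cross in the order v0,v1,v2,v3; shifting v1
# next to v3 (order v0,v2,v1,v3) resolves the crossing.
assert crossings(['v0', 'v1', 'v2', 'v3'], [('v0', 'v2'), ('v1', 'v3')]) == 1
assert crossings(['v0', 'v2', 'v1', 'v3'], [('v0', 'v2'), ('v1', 'v3')]) == 0
```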

Link to Repositum

Hardness and Optimality in QBF Proof Systems Modulo NP
Chew, Leroy
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2021; Pages: 98-115
Show Abstract
In this paper we show that extended Q-resolution is optimal among all QBF proof systems that allow strategy extraction modulo an NP oracle. In other words, for any QBF refutation system f where circuits witnessing the Herbrand functions can be extracted in polynomial time from f-refutations, f can be simulated by extended Q-resolution augmented with an NP oracle as described by Beyersdorff et al. We argue that using NP oracles and strategy extraction gives a natural framework to study QBF systems as they have relations to SAT calls and game instances, respectively, in QBF solving. A weaker version of QBF extension variables also put forward by Jussila et al. does not have this optimality result, and we show that under an NP oracle there is no improvement of weak extended Q-Resolution compared to ordinary Q-Resolution.

Link to Repositum

A best possible result for the square of a 2-block to be hamiltonian
Ekstein, Jan, Fleischner, Herbert
Type: Article; In: Discrete Mathematics; Vol: 344; Issue: 112158; Pages: 112158
Show Abstract
It is shown that for any choice of four different vertices x₁, …, x₄ in a 2-block G of order p > 3, there is a hamiltonian cycle in G² containing four different edges xᵢyᵢ of E(G) for certain vertices yᵢ, i = 1, 2, 3, 4. This result is best possible.

Link to Repositum

Multivalued decision diagrams for prize-collecting job sequencing with one common and multiple secondary resources
Maschler, Johannes, Raidl, Günther
Type: Article; In: Annals of Operations Research; Vol: 302; Pages: 507-531
Show Abstract
Multivalued decision diagrams (MDD) are a powerful tool for approaching combinatorial optimization problems. Relatively compact relaxed and restricted MDDs are applied to obtain dual bounds and heuristic solutions and provide opportunities for new branching schemes. We consider a prize-collecting sequencing problem in which a subset of given jobs has to be found that is schedulable and yields maximum total prize. The primary aim of this work is to study different methods for creating relaxed MDDs for this problem. To this end, we adopt and extend the two main MDD compilation approaches found in the literature: top down construction and incremental refinement. In a series of computational experiments these methods are compared. The results indicate that for our problem the incremental refinement method produces MDDs with stronger bounds. Moreover, heuristic solutions are derived by compiling restricted MDDs and by applying a general variable neighborhood search (GVNS). Here we observe that the top down construction of restricted MDDs is able to yield better solutions than the GVNS on small to medium-sized instances.

Link to Repositum

Certified DQBF Solving by Definition Extraction
Reichl, Franz-Xaver, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2021; Pages: 499-517
Show Abstract
We propose a new decision procedure for dependency quantified Boolean formulas (DQBFs) that uses interpolation-based definition extraction to compute Skolem functions in a counter-example guided inductive synthesis (CEGIS) loop. In each iteration, a family of candidate Skolem functions is tested for correctness using a SAT solver, which either determines that a model has been found, or returns an assignment of the universal variables as a counterexample. Fixing a counterexample generally involves changing candidates of multiple existential variables with incomparable dependency sets. Our procedure introduces auxiliary variables, which we call arbiter variables, that each represent the value of an existential variable for a particular assignment of its dependency set. Possible repairs are expressed as clauses on these variables, and a SAT solver is invoked to find an assignment that deals with all previously seen counterexamples. Arbiter variables define the values of Skolem functions for assignments where they were previously undefined, and may lead to the detection of further Skolem functions by definition extraction. A key feature of the proposed procedure is that it is certifying by design: for true DQBF, models can be returned at minimal overhead. Towards certification of false formulas, we prove that clauses can be derived in an expansion-based proof system for DQBF. In an experimental evaluation on standard benchmark sets, a prototype implementation was able to match (and in some cases, surpass) the performance of state-of-the-art solvers. Moreover, models could be extracted and validated for all true instances that were solved.

Link to Repositum

Layered Area-Proportional Rectangle Contact Representations
Nöllenburg, Martin, Villedieu, Anaïs, Wulms, Jules
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2021; Vol: 12868; Pages: 318-326
Show Abstract
We investigate two optimization problems on area-proportional rectangle contact representations for layered, embedded planar graphs. The vertices are represented as interior-disjoint unit-height rectangles of prescribed widths, grouped in one row per layer, and each edge is ideally realized as a rectangle contact of positive length. Such rectangle contact representations find applications in semantic word or tag cloud visualizations, where a collection of words is displayed such that pairs of semantically related words are close to each other. In this paper, we want to maximize the number of realized rectangle contacts or minimize the overall area of the rectangle contact representation, while avoiding any false adjacencies. We present a network flow model for area minimization, a linear-time algorithm for contact maximization of two-layer graphs, and an ILP model for maximizing contacts of k-layer graphs.

Link to Repositum

Unit Disk Representations of Embedded Trees, Outerplanar and Multi-legged Graphs
Bhore, Sujoy, Löffler, Maarten, Nickel, Soeren, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2021; Vol: 12868; Pages: 304-317
Show Abstract
A unit disk intersection representation (UDR) of a graph G represents each vertex of G as a unit disk in the plane, such that two disks intersect if and only if their vertices are adjacent in G. A UDR with interior-disjoint disks is called a unit disk contact representation (UDC). We prove that it is NP-hard to decide if an outerplanar graph or an embedded tree admits a UDR. We further provide a linear-time decidable characterization of caterpillar graphs that admit a UDR. Finally we show that it can be decided in linear time if a lobster graph admits a weak UDC, which permits intersections between disks of non-adjacent vertices.

Link to Repositum

Cluster Editing Parameterized Above Modification-Disjoint P3-Packings
Li, Shaohua, Pilipczuk, Marcin, Sorge, Manuel
Type: Inproceedings; In: 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021); Pages: 1-16
Show Abstract
In the classical Cluster Editing problem, we are given a graph G = (V, E) and an integer k and ask whether we can transform G into a union of vertex-disjoint cliques by at most k modifications (edge deletions or insertions). In this paper, we study the following variant of Cluster Editing. We are given a graph G = (V,E), a packing H of modification-disjoint induced P3s (no pair of P3s in H share an edge or non-edge) and an integer ℓ. The task is to decide whether G can be transformed into a union of vertex-disjoint cliques by at most ℓ + |H| modifications (edge deletions or insertions). We show that this problem is NP-hard even when ℓ = 0 (in which case the problem asks to turn G into a disjoint union of cliques by performing exactly one edge deletion or insertion per element of H) and when each vertex is in at most 23 P3s of the packing. This answers negatively a question of van Bevern, Froese, and Komusiewicz (CSR 2016, ToCS 2018), repeated by C. Komusiewicz at Shonan meeting no. 144 in March 2019. We then initiate the study to find the largest integer c such that the problem remains tractable when restricting to packings such that each vertex is in at most c packed P3s. Van Bevern et al. showed that the case c = 1 is fixed-parameter tractable with respect to ℓ and we show that the case c = 2 is solvable in |V|^{2ℓ+O(1)} time.
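The obstruction underlying Cluster Editing is the induced P3: a graph is a union of vertex-disjoint cliques if and only if it contains no induced P3. A small enumerator for these (an illustrative checker with invented names, not the paper's hardness construction) could look like this:

```python
from itertools import combinations

def induced_p3s(vertices, edges):
    """List the induced P3s: triples (a, v, b) with edges av and vb present
    but ab absent. An empty result certifies a union of disjoint cliques."""
    E = {frozenset(e) for e in edges}
    return [(a, v, b)
            for v in vertices
            for a, b in combinations(
                [u for u in vertices if frozenset((u, v)) in E], 2)
            if frozenset((a, b)) not in E]

# The path 1-2-3 has exactly one induced P3; inserting edge 1-3 (one
# modification) turns it into a triangle, i.e., a single clique.
assert len(induced_p3s([1, 2, 3], [(1, 2), (2, 3)])) == 1
assert induced_p3s([1, 2, 3], [(1, 2), (2, 3), (1, 3)]) == []
```

A modification-disjoint packing H in the paper's sense is a set of such triples in which no two triples share an edge or a non-edge.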

Link to Repositum

Stable Visual Summaries for Trajectory Collections
Wulms, Jules, Buchmüller, Juri, Meulemans, Wouter, Verbeek, Kevin, Speckmann, Bettina
Type: Inproceedings; In: 2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Show Abstract
The availability of devices that track moving objects has led to an explosive growth in trajectory data. When exploring the resulting large trajectory collections, visual summaries are a useful tool to identify time intervals of interest. A typical approach is to represent the spatial positions of the tracked objects at each time step via a one-dimensional ordering; visualizations of such orderings can then be placed in temporal order along a time line. There are two main criteria to assess the quality of the resulting visual summary: spatial quality - how well does the ordering capture the structure of the data at each time step, and stability - how coherent are the orderings over consecutive time steps or temporal ranges? In this paper we introduce a new Stable Principal Component (SPC) method to compute such orderings, which is explicitly parameterized for stability, allowing a trade-off between the spatial quality and stability. We conduct extensive computational experiments that quantitatively compare the orderings produced by our and other stable dimensionality-reduction methods to various state-of-the-art approaches using a set of well-established quality metrics that capture spatial quality and stability. We conclude that stable dimensionality reduction outperforms existing methods on stability, without sacrificing spatial quality or efficiency; in particular, our new SPC method does so at a fraction of the computational costs.

Link to Repositum

Optimal Discretization is Fixed-parameter Tractable
Kratsch, Stefan, Masařík, Tomáš, Muzi, Irene, Pilipczuk, Marcin, Sorge, Manuel
Type: Inproceedings; In: Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA); Pages: 1702-1719
Show Abstract
Given two disjoint sets W1 and W2 of points in the plane, the Optimal Discretization problem asks for the minimum size of a family of horizontal and vertical lines that separate W1 from W2, that is, in every region into which the lines partition the plane there are either only points of W1, or only points of W2, or the region is empty. Equivalently, Optimal Discretization can be phrased as a task of discretizing continuous variables: We would like to discretize the range of x-coordinates and the range of y-coordinates into as few segments as possible, maintaining that no pair of points from W1 ∪ W2 is projected onto the same pair of segments under this discretization. We provide a fixed-parameter algorithm for the problem, parameterized by the number of lines in the solution. Our algorithm works in time 2^(O(k^2 log k)) · n^(O(1)), where k is the bound on the number of lines to find and n is the number of points in the input. Our result answers in the positive a question of Bonnet, Giannopoulos, and Lampis [IPEC 2017] and of Froese (PhD thesis, 2018) and is in contrast with the known intractability of two closely related generalizations: the Rectangle Stabbing problem and the generalization in which the selected lines are not required to be axis-parallel.
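The separation condition itself is easy to verify for a candidate set of lines: the lines induce a grid, and no grid cell may contain points of both sets. A minimal sketch (ours, not the paper's fixed-parameter algorithm), assuming no point lies exactly on a chosen line:

```python
from bisect import bisect_left

def separates(xs, ys, W1, W2):
    """Check whether the vertical lines at x-coordinates `xs` and the
    horizontal lines at y-coordinates `ys` separate point sets W1 and W2:
    no cell of the induced grid may contain points from both sets."""
    xs, ys = sorted(xs), sorted(ys)

    def cell(p):
        # Index of the grid cell containing point p.
        x, y = p
        return (bisect_left(xs, x), bisect_left(ys, y))

    return {cell(p) for p in W1}.isdisjoint({cell(p) for p in W2})

W1 = [(0, 0), (3, 3)]
W2 = [(0, 3), (3, 0)]
print(separates([1.5], [1.5], W1, W2))  # True: one line per axis suffices here
print(separates([1.5], [], W1, W2))     # False: each column still mixes both sets
```

The hard part, of course, is choosing the at most k lines; the sketch only covers the verification step.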

Link to Repositum

Learning Surrogate Functions for the Short-Horizon Planning in Same-Day Delivery Problems
Bracher, Adrian, Frohner, Nikolaus, Raidl, Günther R.
Type: Inproceedings; In: Integration of Constraint Programming, Artificial Intelligence, and Operations Research; Pages: 283-298
Show Abstract
Same-day delivery problems are challenging stochastic vehicle routing problems, where dynamically arriving orders have to be delivered to customers within a short time while minimizing costs. In this work, we consider the short-horizon planning of a problem variant where every order has to be delivered with the goal to minimize delivery tardiness, travel times, and labor costs of the drivers involved. Stochastic information such as spatial and temporal order distributions is available upfront. Since timely routing decisions have to be made over the planning horizon of a day, the well-known sampling approach from the literature for considering expected future orders is not suitable due to its high runtimes. To mitigate this, we suggest using a surrogate function for route durations that predicts the future delivery duration of the orders belonging to a route at its planned starting time. This surrogate function is directly used in the online optimization, replacing the myopic current route duration. The function is trained offline on data obtained from running full-day simulations, sampling and solving a number of scenarios for each route at each decision point in time. We consider three different models for the surrogate function and compare them with a sampling approach on challenging, real-world-inspired artificial instances. Results indicate that the new approach can outperform the sampling approach by orders of magnitude regarding runtime while significantly reducing travel costs in most cases.

Link to Repositum

Driver Shift Planning for an Online Store with Short Delivery Times
Horn, Matthias, Frohner, Nikolaus, Raidl, Günther R.
Type: Inproceedings; In: Proceedings of the 2nd International Conference on Industry 4.0 and Smart Manufacturing (ISM 2020); Vol: 180; Pages: 517-524
Show Abstract
In this work we derive daily driver shift plans for an online store which delivers goods to customers within short times. The goal is to minimize the total labor time (total shift lengths) over all shifts. Thereby orders must be assigned to shifts such that all orders are delivered in time. We model this optimization problem by means of a mixed integer linear program using a time-index based formulation. This model features strengthening inequalities that allow it to be solved reasonably well even with an open-source branch-and-cut solver. Furthermore we use a coarse-grained variant of the model to quickly derive high-quality heuristic solutions within one minute, even for larger instances with up to two thousand orders. On a realistic benchmark instance set the overall approach is able to obtain solutions with remaining optimality gaps below 1%.

Link to Repositum

Balanced Independent and Dominating Sets on Colored Interval Graphs
Bhore, Sujoy, Haunert, Jan-Henrik, Klute, Fabian, Li, Guangping, Nöllenburg, Martin
Type: Inproceedings; In: SOFSEM 2021: Theory and Practice of Computer Science; Pages: 89-103
Show Abstract
We study two new versions of independent and dominating set problems on vertex-colored interval graphs, namely f-Balanced Independent Set (f-BIS) and f-Balanced Dominating Set (f-BDS). Let G = (V,E) be an interval graph with a color assignment function γ : V → {1, . . . , k} that maps all vertices in G onto k colors. A subset of vertices S ⊆ V is called f-balanced if S contains f vertices from each color class. In the f-BIS and f-BDS problems, the objective is to compute an independent set or a dominating set that is f-balanced. We show that both problems are NP-complete even on proper interval graphs. For the BIS problem on interval graphs, we design two FPT algorithms, one parameterized by (f, k) and the other by the vertex cover number of G. Moreover, for an optimization variant of BIS on interval graphs, we present a polynomial time approximation scheme (PTAS) and an O(n log n) time 2-approximation algorithm.
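For small instances, the f-BIS definition can be checked by brute force: pick f·k vertices, test that every color class contributes exactly f of them, and test independence, which for interval graphs just means the chosen intervals are pairwise disjoint. This sketch (ours, exponential time, unlike the paper's FPT and approximation algorithms) makes the definition concrete:

```python
from itertools import combinations

def has_f_bis(intervals, colors, f, k):
    """Brute-force test for an f-balanced independent set on an interval
    graph: intervals[i] = (l, r), colors[i] in {0..k-1}. Looks for f*k
    pairwise disjoint intervals with exactly f of each color."""
    n = len(intervals)
    for S in combinations(range(n), f * k):
        # Balanced: exactly f vertices of every color class.
        counts = [0] * k
        for i in S:
            counts[colors[i]] += 1
        if counts != [f] * k:
            continue
        # Independent: no two chosen intervals overlap (closed intervals).
        chosen = sorted(intervals[i] for i in S)
        if all(chosen[j][1] < chosen[j + 1][0] for j in range(len(chosen) - 1)):
            return True
    return False

# Two colors (k=2), f=1: intervals (0,1) and (4,5) are disjoint and
# differently colored, so a 1-balanced independent set exists.
intervals = [(0, 1), (0, 2), (1, 3), (4, 5)]
colors = [0, 0, 1, 1]
print(has_f_bis(intervals, colors, f=1, k=2))  # True
```

The NP-completeness result above says that no such exhaustive search can be avoided in general, even on proper interval graphs.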

Link to Repositum

ClusterSets: Optimizing Planar Clusters in Categorical Point Data
Geiger, J., Cornelsen, S., Haunert, J.‐H., Kindermann, P., Mchedlidze, T., Nöllenburg, M., Okamoto, Y., Wolff, A.
Type: Inproceedings; In: Computer Graphics Forum; Pages: 471-481
Show Abstract
In geographic data analysis, one is often given point data of different categories (such as facilities of a university categorized by department). Drawing upon recent research on set visualization, we want to visualize category membership by connecting points of the same category with visual links. Existing approaches that follow this path usually insist on connecting all members of a category, which may lead to many crossings and visual clutter. We propose an approach that avoids crossings between connections of different categories completely. Instead of connecting all data points of the same category, we subdivide categories into smaller, local clusters where needed. We do a case study comparing the legibility of drawings produced by our approach and those by existing approaches. In our problem formulation, we are additionally given a graph G on the data points whose edges express some sort of proximity. Our aim is to find a subgraph G′ of G with the following properties: (i) edges connect only data points of the same category, (ii) no two edges cross, and (iii) the number of connected components (clusters) is minimized. We then visualize the clusters in G′. For arbitrary graphs, the resulting optimization problem, Cluster Minimization, is NP-hard (even to approximate). Therefore, we introduce two heuristics. We do an extensive benchmark test on real-world data. Comparisons with exact solutions indicate that our heuristics do astonishingly well for certain relative-neighborhood graphs.

Link to Repositum

Parameterized Complexity of Feature Selection for Categorical Data Clustering
Bandyapadhyay, Sayan, Fomin, Fedor, Golovach, Petr, Simonov, Kirill
Type: Inproceedings; In: 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021); Pages: 1-14
Show Abstract
We develop new algorithmic methods with provable guarantees for feature selection in regard to categorical data clustering. While feature selection is one of the most common approaches to reduce dimensionality in practice, most of the known feature selection methods are heuristics. We study the following mathematical model. We assume that there are some inadvertent (or undesirable) features of the input data that unnecessarily increase the cost of clustering. Consequently, we want to select a subset of the original features from the data such that there is a small-cost clustering on the selected features. More precisely, for given integers ℓ (the number of irrelevant features) and k (the number of clusters), budget B, and a set of n categorical data points (represented by m-dimensional vectors whose elements belong to a finite set of values Σ), we want to select m − ℓ relevant features such that the cost of any optimal k-clustering on these features does not exceed B. Here the cost of a cluster is the sum of Hamming distances (ℓ₀-distances) between the selected features of the elements of the cluster and its center. The clustering cost is the total sum of the costs of the clusters. We use the framework of parameterized complexity to identify how the complexity of the problem depends on parameters k, B, and |Σ|. Our main result is an algorithm that solves the Feature Selection problem in time f(k, B, |Σ|) · m^(g(k, |Σ|)) · n^2 for some functions f and g. In other words, the problem is fixed-parameter tractable parameterized by B when |Σ| and k are constants. Our algorithm for Feature Selection is based on a solution to a more general problem, Constrained Clustering with Outliers. In this problem, we want to delete a certain number of outliers such that the remaining points could be clustered around centers satisfying specific constraints.
One interesting fact about Constrained Clustering with Outliers is that besides Feature Selection, it encompasses many other fundamental problems regarding categorical data such as Robust Clustering, Binary and Boolean Low-rank Matrix Approximation with Outliers, and Binary Robust Projective Clustering. Thus as a byproduct of our theorem, we obtain algorithms for all these problems. We also complement our algorithmic findings with complexity lower bounds.
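The clustering cost in this model is simple to state in code: sum the Hamming distances from each point to its cluster's center. A minimal sketch (names ours), which may help fix the objective before the parameterized analysis:

```python
def clustering_cost(points, assignment, centers):
    """Categorical clustering cost: sum over all points of the Hamming
    distance between the point and the center of its assigned cluster."""
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    return sum(hamming(p, centers[assignment[i]]) for i, p in enumerate(points))

# Four 3-dimensional points over Σ = {a, b}, two clusters.
points = ["aab", "aaa", "bba", "bbb"]
assignment = [0, 0, 1, 1]  # cluster index per point
centers = ["aaa", "bbb"]
print(clustering_cost(points, assignment, centers))  # 2
```

Feature selection then asks which m − ℓ coordinates to keep so that this cost, minimized over clusterings, drops below the budget B.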

Link to Repositum

Proof Complexity of Symbolic QBF Reasoning
Mengel, Stefan, Slivovsky, Friedrich
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2021; Pages: 399-416
Show Abstract
We introduce and investigate symbolic proof systems for Quantified Boolean Formulas (QBF) operating on Ordered Binary Decision Diagrams (OBDDs). These systems capture QBF solvers that perform symbolic quantifier elimination, and as such admit short proofs of formulas of bounded path-width and quantifier complexity. As a consequence, we obtain exponential separations from standard clausal proof systems, specifically (long-distance) QU-Resolution and IR-Calc. We further develop a lower bound technique for symbolic QBF proof systems based on strategy extraction that lifts known lower bounds from communication complexity. This allows us to derive strong lower bounds against symbolic QBF proof systems that are independent of the variable ordering of the underlying OBDDs, and that hold even if the proof system is allowed access to an NP-oracle.

Link to Repositum

On the Upward Book Thickness Problem: Combinatorial and Complexity Results
Bhore, Sujoy, Da Lozzo, Giordano, Montecchiani, Fabrizio, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2021; Vol: 12868; Pages: 242-256
Show Abstract
A long-standing conjecture by Heath, Pemmaraju, and Trenk states that the upward book thickness of outerplanar DAGs is bounded above by a constant. In this paper, we show that the conjecture holds for subfamilies of upward outerplanar graphs, namely those whose underlying graph is an internally-triangulated outerpath or a cactus, and those whose biconnected components are st-outerplanar graphs. On the complexity side, it is known that deciding whether a graph has upward book thickness k is NP-hard for any fixed k ≥ 3. We show that the problem, for any k ≥ 5, remains NP-hard for graphs whose domination number is O(k), but it is FPT in the vertex cover number.

Link to Repositum

A*-Based Compilation of Relaxed Decision Diagrams for the Longest Common Subsequence Problem
Horn, Matthias, Raidl, Günther R.
Type: Inproceedings; In: Integration of Constraint Programming, Artificial Intelligence, and Operations Research; Pages: 72-88
Show Abstract
We consider the longest common subsequence (LCS) problem and propose a new method for obtaining tight upper bounds on the solution length. Our method relies on the compilation of a relaxed multivalued decision diagram (MDD) in a special way that is based on the principles of A* search. An extensive experimental evaluation on several standard LCS benchmark instance sets shows that the novel construction algorithm clearly outperforms a traditional top-down construction (TDC) of MDDs. We are able to obtain stronger and at the same time more compact relaxed MDDs than TDC, and this in shorter time. For several groups of benchmark instances new best known upper bounds are obtained. In comparison to existing simple upper bound procedures, the obtained bounds are on average 14.8% better.

Link to Repositum

Multivalued decision diagrams for prize-collecting job sequencing with one common and multiple secondary resources
Maschler, Johannes, Raidl, Günther R.
Type: Article; In: Annals of Operations Research; Vol: 302; Issue: 2; Pages: 507-531
Show Abstract
Multivalued decision diagrams (MDDs) are a powerful tool for approaching combinatorial optimization problems. Relatively compact relaxed and restricted MDDs are applied to obtain dual bounds and heuristic solutions and provide opportunities for new branching schemes. We consider a prize-collecting sequencing problem in which a subset of given jobs has to be found that is schedulable and yields maximum total prize. The primary aim of this work is to study different methods for creating relaxed MDDs for this problem. To this end, we adopt and extend the two main MDD compilation approaches found in the literature: top-down construction and incremental refinement. In a series of computational experiments these methods are compared. The results indicate that for our problem the incremental refinement method produces MDDs with stronger bounds. Moreover, heuristic solutions are derived by compiling restricted MDDs and by applying a general variable neighborhood search (GVNS). Here we observe that the top-down construction of restricted MDDs is able to yield better solutions than the GVNS on small to medium-sized instances.

Link to Repositum

Solving the Longest Common Subsequence Problem Concerning Non-Uniform Distributions of Letters in Input Strings
Nikolic, Bojan, Kartelj, Aleksandar, Djukanovic, Marko, Grbic, Milana, Blum, Christian, Raidl, Günther
Type: Article; In: Mathematics; Vol: 9; Issue: 13; Pages: 1515
Show Abstract
The longest common subsequence (LCS) problem is a prominent NP-hard optimization problem where, given an arbitrary set of input strings, the aim is to find a longest subsequence that is common to all input strings. This problem has a variety of applications in bioinformatics, molecular biology and file plagiarism checking, among others. All previous approaches from the literature are dedicated to solving LCS instances sampled from uniform or near-to-uniform probability distributions of letters in the input strings. In this paper, we introduce an approach that is able to effectively deal with more general cases, where the occurrence of letters in the input strings follows a non-uniform distribution such as a multinomial distribution. The proposed approach makes use of a time-restricted beam search, guided by a novel heuristic named GMPSUM. This heuristic combines two complementary scoring functions in the form of a convex combination. Furthermore, apart from the close-to-uniform benchmark sets from the related literature, we introduce three new benchmark sets that differ in terms of their statistical properties. One of these sets concerns a case study in the context of text analysis. We provide a comprehensive empirical evaluation in two distinctive settings: (1) short-time execution with fixed beam size in order to evaluate the guidance abilities of the compared search heuristics; and (2) long-time executions with fixed target duration times in order to obtain high-quality solutions. In both settings, the newly proposed approach performs comparably to state-of-the-art techniques in the context of close-to-uniform instances and outperforms state-of-the-art approaches for non-uniform instances.
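For reference, the two-string case of LCS is solvable by the classic dynamic program below; the difficulty addressed by beam-search approaches like the one above arises because this table generalizes to k dimensions for k input strings, making the problem NP-hard. This is textbook material, not the paper's method:

```python
def lcs_length(s, t):
    """Length of a longest common subsequence of two strings via the
    classic O(|s|*|t|) dynamic program."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one letter
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. "BCAB"
```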

Link to Repositum

Chapter 17. Fixed-Parameter Tractability
Samer, Marko, Szeider, Stefan
Type: Book Contribution; In: Frontiers in Artificial Intelligence and Applications

Link to Repositum

Threshold Treewidth and Hypertree Width
Ganian, Robert, Schidler, Andre, Sorge, Manuel, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Show Abstract
Treewidth and hypertree width have proven to be highly successful structural parameters in the context of the Constraint Satisfaction Problem (CSP). When either of these parameters is bounded by a constant, then CSP becomes solvable in polynomial time. However, here the order of the polynomial in the running time depends on the width, and this is known to be unavoidable; therefore, the problem is not fixed-parameter tractable parameterized by either of these width measures. Here we introduce an enhancement of tree and hypertree width through a novel notion of thresholds, allowing the associated decompositions to take into account information about the computational costs associated with solving the given CSP instance. Aside from introducing these notions, we obtain efficient theoretical as well as empirical algorithms for computing threshold treewidth and hypertree width and show that these parameters give rise to fixed-parameter algorithms for CSP as well as other, more general problems. We complement our theoretical results with experimental evaluations in terms of heuristics as well as exact methods based on SAT/SMT encodings.

Link to Repositum

Computing Optimal Hypertree Decompositions with SAT
Schidler, Andre, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Show Abstract
Hypertree width is a prominent hypergraph invariant with many algorithmic applications in constraint satisfaction and databases. We propose a novel characterization for hypertree width in terms of linear elimination orderings. We utilize this characterization to generate a new SAT encoding that we evaluate on an extensive set of benchmark instances. We compare it to state-of-the-art exact methods for computing optimal hypertree width. Our results show that the encoding based on the new characterization is not only significantly more compact than known encodings but also outperforms the other methods.

Link to Repositum

SAT-based Decision Tree Learning for Large Data Sets
Schidler, Andre, Szeider, Stefan
Type: Inproceedings; In: Thirty-Fifth AAAI Conference on Artificial Intelligence; Pages: 3904-3912
Show Abstract
Decision trees of low depth are beneficial for understanding and interpreting the data they represent. Unfortunately, finding a decision tree of lowest depth that correctly represents given data is NP-hard. Hence known algorithms either (i) utilize heuristics that do not optimize the depth or (ii) are exact but scale only to small or medium-sized instances. We propose a new hybrid approach to decision tree learning, combining heuristic and exact methods in a novel way. More specifically, we employ SAT encodings repeatedly to local parts of a decision tree provided by a standard heuristic, leading to a global depth improvement. This allows us to scale the power of exact SAT-based methods to almost arbitrarily large data sets. We evaluate our new approach experimentally on a range of real-world instances that contain up to several thousand samples. In almost all cases, our method successfully decreases the depth of the initial decision tree; often, the decrease is significant.

Link to Repositum

Crossing-Optimal Extension of Simple Drawings
Ganian, Robert, Hamm, Thekla, Klute, Fabian, Parada, Irene, Vogtenhuber, Birgit
Type: Inproceedings; In: 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021); Pages: 1-17
Show Abstract
In extension problems of partial graph drawings one is given an incomplete drawing of an input graph G and is asked to complete the drawing while maintaining certain properties. A prominent area where such problems arise is that of crossing minimization. For plane drawings and various relaxations of these, there is a number of tractability as well as lower-bound results exploring the computational complexity of crossing-sensitive drawing extension problems. In contrast, comparatively few results are known on extension problems for the fundamental and broad class of simple drawings, that is, drawings in which each pair of edges intersects in at most one point. In fact, the extension problem of simple drawings has only recently been shown to be NP-hard even for inserting a single edge. In this paper we present tractability results for the crossing-sensitive extension problem of simple drawings. In particular, we show that the problem of inserting edges into a simple drawing is fixed-parameter tractable when parameterized by the number of edges to insert and an upper bound on newly created crossings. Using the same proof techniques, we are also able to answer several closely related variants of this problem, among others the extension problem for k-plane drawings. Moreover, using a different approach, we provide a single-exponential fixed-parameter algorithm for the case in which we are only trying to insert a single edge into the drawing.

Link to Repositum

The Complexity of Object Association in Multiple Object Tracking
Ganian, Robert, Hamm, Thekla, Ordyniak, Sebastian
Type: Inproceedings; In: The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21); Pages: 1388-1396
Show Abstract
Object association, i.e., the identification of which observations correspond to the same object, is a central task for the area of multiple object tracking. Two prominent models capturing this task have been introduced in the literature: the Lifted Multicut model and the more recent Lifted Paths model. Here, we carry out a detailed complexity-theoretic study of the problems arising from these two models that is aimed at complementing previous empirical work on object association. We obtain a comprehensive complexity map for both models that takes into account natural restrictions to instances such as possible bounds on the number of frames, number of tracked objects and branching degree, as well as less explicit structural restrictions such as having bounded treewidth. Our results include new fixed-parameter and XP algorithms for the problems as well as hardness proofs which altogether indicate that the Lifted Paths problem exhibits a more favorable complexity behavior than Lifted Multicut.

Link to Repositum

The Parameterized Complexity of Connected Fair Division
Deligkas, Argyrios, Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Show Abstract
We study the Connected Fair Division problem (CFD), which generalizes the fundamental problem of fairly allocating resources to agents by requiring that the items allocated to each agent form a connected subgraph in a provided item graph G. We expand on previous results by providing a comprehensive complexity-theoretic understanding of CFD based on new algorithms and lower bounds while taking into account several well-established notions of fairness: proportionality, envy-freeness, EF1 and EFX. In particular, we show that to achieve tractability, one needs to restrict both the agents and the item graph in a meaningful way. We design XP algorithms for the problem parameterized by (1) clique-width of G plus the number of agents and (2) treewidth of G plus the number of agent types, along with corresponding lower bounds. Finally, with respect to the restrictions considered here, we show that to achieve fixed-parameter tractability one needs to not only use a more restrictive parameterization of G, but also include the maximum item valuation as an additional parameter.

Link to Repositum

Computing Kemeny Rankings from d-Euclidean Preferences
Hamm, Thekla, Lackner, Martin, Rapberger, Anna
Type: Inproceedings; In: Algorithmic Decision Theory; Pages: 147-161
Show Abstract
Kemeny's voting rule is a well-known and computationally intractable rank aggregation method. In this work, we propose an algorithm that finds an embeddable Kemeny ranking in d-Euclidean elections. This algorithm achieves a polynomial runtime (for a fixed dimension d) and thus demonstrates the algorithmic usefulness of the d-Euclidean restriction. We further investigate how well embeddable Kemeny rankings approximate optimal (unrestricted) Kemeny rankings.
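To illustrate what an (unrestricted) Kemeny ranking is, the sketch below computes one by exhaustive search: it returns the ranking minimizing the total number of pairwise disagreements (Kendall tau distance) with the votes. Its exponential runtime in the number of candidates reflects the intractability mentioned above; the paper's contribution is avoiding this in the d-Euclidean setting. Names are ours:

```python
from itertools import combinations, permutations

def kemeny_ranking(votes):
    """Brute-force Kemeny rank aggregation: the ranking minimizing the
    total Kendall tau distance to all votes (each vote is a tuple of
    candidates from best to worst)."""
    candidates = votes[0]

    def disagreements(ranking):
        pos = {c: i for i, c in enumerate(ranking)}
        total = 0
        for vote in votes:
            vpos = {c: i for i, c in enumerate(vote)}
            for a, b in combinations(candidates, 2):
                if (pos[a] < pos[b]) != (vpos[a] < vpos[b]):
                    total += 1
        return total

    return min(permutations(candidates), key=disagreements)

votes = [("a", "b", "c"), ("a", "b", "c"), ("c", "a", "b")]
print(kemeny_ranking(votes))  # ('a', 'b', 'c')
```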

Link to Repositum

Stable Matchings with Diversity Constraints: Affirmative Action is beyond NP
Chen, Jiehua, Ganian, Robert, Hamm, Thekla
Type: Inproceedings; In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence; Pages: 1-7
Show Abstract
We investigate the following many-to-one stable matching problem with diversity constraints (SMTI-Diverse): Given a set of students and a set of colleges which have preferences over each other, where the students have overlapping types, and the colleges each have a total capacity as well as quotas for individual types (the diversity constraints), is there a matching satisfying all diversity constraints such that no unmatched student-college pair has an incentive to deviate? SMTI-Diverse is known to be NP-hard. However, as opposed to the NP-membership claims in the literature [Aziz et al., 2019; Huang, 2010], we prove that it is beyond NP: it is complete for the complexity class Σ₂ᵖ. In addition, we provide a comprehensive analysis of the problem's complexity from the viewpoint of natural restrictions to inputs and obtain new algorithms for the problem.

Link to Repositum

The Complexity Landscape of Resource-Constrained Scheduling
Ganian, Robert, Hamm, Thekla, Mescoff, Guillaume
Type: Inproceedings; In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Show Abstract
The Resource-Constrained Project Scheduling Problem (RCPSP) and its extension via activity modes (MRCPSP) are well-established scheduling frameworks that have found numerous applications in a broad range of settings related to artificial intelligence. Unsurprisingly, the problem of finding a suitable schedule in these frameworks is known to be NP-complete. However, aside from a few results for special cases, we have lacked an in-depth and comprehensive understanding of the complexity of the problems from the viewpoint of natural restrictions of the considered instances. In the first part of our paper, we develop new algorithms and give hardness proofs in order to obtain a detailed complexity map of (M)RCPSP that settles the complexity of all 1024 considered variants of the problem defined in terms of explicit restrictions of natural parameters of instances. In the second part, we turn to implicit structural restrictions defined in terms of the complexity of interactions between individual activities. In particular, we show that if the treewidth of a graph which captures such interactions is bounded by a constant, then we can solve MRCPSP in polynomial time.

Link to Repositum

Measuring what matters: A hybrid approach to dynamic programming with treewidth
Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Kwon, O-joung
Type: Article; In: Journal of Computer and System Sciences; Vol: 121; Pages: 57-75
Show Abstract
We develop a framework for applying treewidth-based dynamic programming on graphs with "hybrid structure", i.e., with parts that may not have small treewidth but instead possess other structural properties. Informally, this is achieved by defining a refinement of treewidth which only considers parts of the graph that do not belong to a pre-specified tractable graph class. Our approach allows us to not only generalize existing fixed-parameter algorithms exploiting treewidth, but also fixed-parameter algorithms which use the size of a modulator as their parameter. As the flagship application of our framework, we obtain a parameter that combines treewidth and rank-width to obtain fixed-parameter algorithms for Chromatic Number, Hamiltonian Cycle, and Max-Cut.

Link to Repositum

On the Readability of Abstract Set Visualizations
Wallinger, Markus, Jacobsen, Ben, Kobourov, Stephen G., Nöllenburg, Martin
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 27; Issue: 6; Pages: 2821-2832
Show Abstract
Set systems are used to model data that naturally arises in many contexts: social networks have communities, musicians have genres, and patients have symptoms. Visualizations that accurately reflect the information in the underlying set system make it possible to identify the set elements, the sets themselves, and the relationships between the sets. In static contexts, such as print media or infographics, it is necessary to capture this information without the help of interactions. With this in mind, we consider three different systems for medium-sized set data, LineSets, EulerView, and MetroSets, and report the results of a controlled human-subjects experiment comparing their effectiveness. Specifically, we evaluate the performance, in terms of time and error, on tasks that cover the spectrum of static set-based tasks. We also collect and analyze qualitative data about the three different visualization systems. Our results include statistically significant differences, suggesting that MetroSets performs and scales better.

Link to Repositum

MetroSets: Visualizing Sets as Metro Maps
Jacobsen, Ben, Wallinger, Markus, Kobourov, Stephen G., Nöllenburg, Martin
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 27; Issue: 2; Pages: 1257-1267
Show Abstract
We propose MetroSets, a new, flexible online tool for visualizing set systems using the metro map metaphor. We model a given set system as a hypergraph H = (V, S), consisting of a set V of vertices and a set S, which contains subsets of V called hyperedges. Our system then computes a metro map representation of H, where each hyperedge E in S corresponds to a metro line and each vertex corresponds to a metro station. Vertices that appear in two or more hyperedges are drawn as interchanges in the metro map, connecting the different sets. MetroSets is based on a modular 4-step pipeline which constructs and optimizes a path-based hypergraph support, which is then drawn and schematized using metro map layout algorithms. We propose and implement multiple algorithms for each step of the MetroSet pipeline and provide a functional prototype with easy-to-use preset configurations. Furthermore, using several real-world datasets, we perform an extensive quantitative evaluation of the impact of different pipeline stages on desirable properties of the generated maps, such as octolinearity, monotonicity, and edge uniformity.
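The hypergraph model H = (V, S) described above can be illustrated with a small sketch. This is a hypothetical example with made-up set names, not code from the MetroSets system: each hyperedge becomes a metro line, each vertex a station, and vertices shared by two or more hyperedges become interchanges.

```python
# Sketch of the set-system-as-hypergraph model H = (V, S) from the abstract.
# Hypothetical example data; this is not the MetroSets implementation.

def interchanges(hyperedges):
    """Return vertices appearing in two or more hyperedges (interchange stations)."""
    count = {}
    for line in hyperedges.values():
        for v in line:
            count[v] = count.get(v, 0) + 1
    return {v for v, c in count.items() if c >= 2}

# Each hyperedge in S corresponds to a metro line, each vertex to a station.
S = {
    "red":   {"a", "b", "c"},
    "blue":  {"c", "d"},
    "green": {"b", "d", "e"},
}
print(sorted(interchanges(S)))  # ['b', 'c', 'd']
```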

Link to Repositum

Davis and Putnam Meet Henkin: Solving DQBF with Resolution
Blinkhorn, Joshua, Peitl, Tomáš, Slivovsky, Friedrich
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2021; Pages: 30-46
Show Abstract
Davis-Putnam resolution is one of the fundamental theoretical decision procedures for both propositional logic and quantified Boolean formulas. Dependency quantified Boolean formulas (DQBF) are a generalisation of QBF in which dependencies of variables are listed explicitly rather than being implicit in the order of quantifiers. Since DQBFs can succinctly encode synthesis problems that ask for Boolean functions matching a given specification, efficient DQBF solvers have a wide range of potential applications. We present a new decision procedure for DQBF in the style of Davis-Putnam resolution. Based on the merge resolution proof system, it directly constructs partial strategy functions for derived clauses. The procedure requires DQBF in a normal form called H-Form. We prove that the problem of evaluating DQBF in H-Form is NEXP-complete. In fact, we show that any DQBF can be converted into H-Form in linear time.

Link to Repositum

Finding the Hardest Formulas for Resolution (Extended Abstract)
Peitl, Tomáš, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Show Abstract
A CNF formula is harder than another CNF formula with the same number of clauses if it requires a longer resolution proof. We introduce resolution hardness numbers; they give for m = 1, 2, ... the length of a shortest proof of a hardest formula on m clauses. We compute the first ten resolution hardness numbers, along with the corresponding hardest formulas. To achieve this, we devise a candidate filtering and symmetry breaking search scheme for limiting the number of potential candidates for hardest formulas, and an efficient SAT encoding for computing a shortest resolution proof of a given candidate formula.
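The resolution rule underlying these hardness numbers can be sketched in a few lines. This is only an illustration of the basic inference step; the paper's candidate filtering, symmetry breaking, and SAT encoding are far more involved.

```python
# The propositional resolution rule: from clauses C ∪ {x} and D ∪ {¬x},
# derive the resolvent C ∪ D. Literals are nonzero ints with negation as
# sign flip (DIMACS convention). Illustrative sketch only.

def resolve(c1, c2, pivot):
    """Resolve clause c1 (containing pivot) with c2 (containing -pivot)."""
    assert pivot in c1 and -pivot in c2
    return (c1 - {pivot}) | (c2 - {-pivot})

c1 = frozenset({1, 2})    # x1 ∨ x2
c2 = frozenset({-1, 3})   # ¬x1 ∨ x3
print(sorted(resolve(c1, c2, 1)))  # [2, 3], i.e. x2 ∨ x3
```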

Link to Repositum

Mixed Metro Maps with User-Specified Motifs
Batik, Tobias, Terziadis, Soeren, Nöllenburg, Martin, Wang, Yu-Shen, Wu, Hsiang-Yun
Type: Inproceedings; In: 29th International Symposium on Graph Drawing and Network Visualization (GD 2021); Pages: 1-4
Show Abstract
In this poster, we propose an approach to generalize mixed metro map layouts with user-defined shapes for route-finding and advertisement purposes. In a mixed layout, specific lines are arranged in an iconic shape, while the remaining lines are drawn in octilinear style. The shape is expected to be recognizable, while the layout still fulfills the classical octilinear design criteria for metro maps. The approach consists of three steps, where we first search for the best-fitting edge segments that approximate the guide shape and then utilize least-squares optimization to synthesize the layout automatically.

Link to Repositum

Graphs with Two Moplexes
Dallard, Clément, Ganian, Robert, Hatzel, Meike, Krnc, Matjaž, Milanič, Martin
Type: Inproceedings; In: Proceedings of the XI Latin and American Algorithms, Graphs and Optimization Symposium; Vol: 195; Pages: 248-256
Show Abstract
Moplexes are natural graph structures that arise when lifting Dirac's classical theorem from chordal graphs to general graphs. The notion is known to be closely related to lexicographic searches in graphs as well as to asteroidal triples, and has been applied in several algorithms related to graph classes such as interval graphs, claw-free, and diamond-free graphs. However, while every non-complete graph has at least two moplexes, little is known about structural properties of graphs with a bounded number of moplexes. The study of these graphs is, among others, motivated by the parallel between moplexes in general graphs and simplicial modules in chordal graphs: unlike in the moplex setting, properties of chordal graphs with a bounded number of simplicial modules are well understood. For instance, chordal graphs having at most two simplicial modules are interval. In this work we initiate an investigation of k-moplex graphs, which are defined as graphs containing at most k moplexes. Of particular interest is the smallest nontrivial case, k = 2, which forms a counterpart to the class of interval graphs. As our main structural result, we show that the class of connected 2-moplex graphs is sandwiched between the classes of proper interval graphs and cocomparability graphs; moreover, both inclusions are tight for hereditary classes. From a complexity theoretic viewpoint, this leads to the natural question of whether the presence of at most two moplexes guarantees a sufficient amount of structure to efficiently solve problems that are known to be intractable on cocomparability graphs, but not on proper interval graphs. We develop new reductions that answer this question negatively for two prominent problems fitting this profile, namely Graph Isomorphism and Max-Cut. 
Furthermore, for graphs with a higher number of moplexes, we lift the previously known result that graphs without asteroidal triples have at most two moplexes to the more general setting of larger asteroidal sets. We also discuss sufficient conditions for the existence of Hamiltonian paths in 2-moplex graphs as well as connections with avoidable vertices.

Link to Repositum

2020
Crossing Layout in Non-planar Graph Drawings
Nöllenburg, Martin
Type: Book Contribution; In: Beyond Planar Graphs; Pages: 187-209
Show Abstract
Edge crossings are a major obstruction for the readability of graph layouts as has been shown in several empirical studies. Yet, non-planar graphs are abundant in network visualization applications. Therefore, graph layout techniques are needed that optimize readability and comprehensibility of graph drawings in the presence of edge crossings. This chapter deals with aesthetic ideas for improving the appearance of crossings and presents alternative layout styles and algorithmic results that go beyond solely optimizing the crossing-number metric. In particular, we review edge casing in geometric graphs as a way to represent crossings, the slanted layout of crossings in orthogonal graph layouts, and minimizing bundled rather than individual crossings. Further, we look at concepts such as confluent graph layout and partial edge drawings, which both have no visible crossings.

Link to Repositum

On the Use of Decision Diagrams for Finding Repetition-Free Longest Common Subsequences
Horn, Matthias, Djukanovic, Marko, Blum, Christian, Raidl, Günther
Type: Presentation
Show Abstract
The goal of the repetition-free longest common subsequence (RFLCS) problem is to find a longest sequence which is common to two input strings such that each character in the common subsequence appears at most once. In this work, the RFLCS problem is solved by transforming an instance to an instance of the maximum independent set (MIS) problem which is then solved by a mixed integer linear programming solver. To reduce the size of the underlying conflict graph of the MIS problem, a relaxed decision diagram is utilized.

Link to Repositum

Balanced Independent and Dominating Sets on Colored Interval Graphs
Bhore, Sujoy, Haunert, Jan-Henrik, Klute, Fabian, Li, Guangping, Nöllenburg, Martin
Type: Presentation
Show Abstract
We study two new versions of independent and dominating set problems on vertex-colored interval graphs, namely f-Balanced Independent Set (f-BIS) and f-Balanced Dominating Set (f-BDS). Let G = (V, E) be a vertex-colored interval graph with a k-coloring γ: V → {1, ..., k} for some k ∈ N. A subset of vertices S ⊆ V is called f-balanced if S contains f vertices from each color class. In the f-BIS and f-BDS problems, the objective is to compute an independent set or a dominating set that is f-balanced. We show that both problems are NP-complete even on proper interval graphs. For the BIS problem on interval graphs, we design two FPT algorithms, one parameterized by (f, k) and the other by the vertex cover number of G. Moreover, we present a 2-approximation algorithm for a slight variation of BIS on proper interval graphs.
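The f-balanced condition from the abstract is straightforward to state in code. A minimal sketch with illustrative vertex names (reading "contains f vertices from each color class" as exactly f per class; not code from the paper):

```python
# Check whether a vertex subset S is f-balanced under a k-coloring gamma,
# i.e., S contains exactly f vertices from each of the k color classes.
# Illustrative sketch; vertex names and data are hypothetical.

def is_f_balanced(S, gamma, f, k):
    counts = {c: 0 for c in range(1, k + 1)}
    for v in S:
        counts[gamma[v]] += 1
    return all(counts[c] == f for c in range(1, k + 1))

gamma = {"v1": 1, "v2": 1, "v3": 2, "v4": 2, "v5": 3, "v6": 3}
print(is_f_balanced({"v1", "v3", "v5"}, gamma, f=1, k=3))  # True
print(is_f_balanced({"v1", "v2", "v3"}, gamma, f=1, k=3))  # False
```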

Link to Repositum

Labeling Nonograms
Löffler, Maarten, Nöllenburg, Martin
Type: Presentation
Show Abstract
Slanted and curved nonograms are a new type of picture puzzle introduced by van de Kerkhof et al. (2019). They consist of an arrangement of lines or curves within a frame B, where some of the cells need to be colored in order to obtain the solution picture. Up to two clues are attached as numeric labels to each line on either side of B. In this paper we study the algorithmic problem of optimizing or deciding the existence of a placement of the given clue labels to a nonogram. We provide polynomial-time algorithms for restricted cases and prove NP-completeness in general.

Link to Repositum

A Time Leap Challenge for SAT-Solving
Fichte, Johannes K., Hecher, Markus, Szeider, Stefan
Type: Inproceedings; In: Principles and Practice of Constraint Programming 26th International Conference, CP 2020, Louvain-la-Neuve, Belgium, September 7–11, 2020, Proceedings; Pages: 267-285
Show Abstract
We compare the impact of hardware advancement and algorithm advancement for SAT solving over the last two decades. In particular, we compare 20-year-old SAT-solvers on new computer hardware with modern SAT-solvers on 20-year-old hardware. Our findings show that the progress on the algorithmic side has at least as much impact as the progress on the hardware side.

Link to Repositum

Mixed Labeling: Integrating Internal and External Labels
Cmolik, Ladislav, Pavlovec, Vaclav, Wu, Hsiang-Yun, Nöllenburg, Martin
Type: Inproceedings; In: IEEE Transactions on Visualization and Computer Graphics; Pages: 1848-1861
Show Abstract
In this paper, we present an algorithm capable of mixed labeling of 2D and 3D objects. In mixed labeling, the given objects are labeled with both internal labels placed (at least partially) over the objects and external labels placed in the space around the objects and connected with the labeled objects with straight-line leaders. The proposed algorithm determines the position and type of each label based on the user-specified ambiguity threshold and eliminates overlaps between the labels, as well as between the internal labels and the straight-line leaders of external labels. The algorithm is a screen-space technique; it operates in an image where the 2D objects or projected 3D objects are encoded. In other words, we can use the algorithm whenever we can render the objects to an image, which makes the algorithm fit for use in many domains. The algorithm operates in real-time, giving the results immediately. Finally, we present results from an expert evaluation, in which a professional illustrator has evaluated the label layouts produced with the proposed algorithm.

Link to Repositum

Breaking Symmetries with RootClique and LexTopSort
Fichte, Johannes K., Hecher, Markus, Szeider, Stefan
Type: Inproceedings; In: Principles and Practice of Constraint Programming 26th International Conference, CP 2020, Louvain-la-Neuve, Belgium, September 7–11, 2020, Proceedings; Pages: 286-303
Show Abstract
Bounded fractional hypertree width is the most general known structural property that guarantees polynomial-time solvability of the constraint satisfaction problem. Fichte et al. (CP 2018) presented a robust and scalable method for finding optimal fractional hypertree decompositions, based on an encoding to SAT Modulo Theory (SMT). In this paper, we provide an in-depth study of two powerful symmetry breaking predicates that allow us to further speed up the SMT-based decomposition: RootClique fixes the root of the decomposition tree; LexTopSort fixes the elimination ordering with respect to an underlying DAG. We perform an extensive empirical evaluation of both symmetry-breaking predicates with respect to the primal graph (which is known in advance) and the induced graph (which is generated during the search).

Link to Repositum

A Variable Neighborhood Search for the Job Sequencing with One Common and Multiple Secondary Resources Problem
Kaufmann, Thomas, Horn, Matthias, Raidl, Günther R.
Type: Inproceedings; In: Parallel Problem Solving from Nature – PPSN XVI; Pages: 385-398
Show Abstract
In this work we consider a scheduling problem where a set of non-preemptive jobs needs to be scheduled such that the makespan is minimized. Each job requires two resources: (1) a common resource, shared by all jobs and (2) a secondary resource, shared with only a subset of the other jobs. The secondary resource is required during the job's entire processing time whereas the common resource is only required during a part of a job's execution. The problem models, for instance, the scheduling of patients during one day in a particle therapy facility for cancer treatment. We heuristically tackle the problem by a general variable neighborhood search (GVNS) based on move and exchange neighborhoods and an efficient evaluation scheme to scan the neighborhoods of the current incumbent solution. An experimental evaluation on two benchmark instance sets, including instances with up to 2000 jobs, shows the effectiveness of the GVNS. In particular for larger instances our GVNS outperforms an anytime A* algorithm that was the so far leading method in heuristic terms as well as a constraint programming model solved by the ILOG CP optimizer.

Link to Repositum

VNS and PBIG as Optimization Cores in a Cooperative Optimization Approach for Distributing Service Points
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther R.
Type: Inproceedings; In: Computer Aided Systems Theory – EUROCAST 2019; Pages: 255-262
Show Abstract
We present a cooperative optimization approach for distributing service points in a geographical area with the example of setting up charging stations for electric vehicles. Instead of estimating customer demands upfront, customers are incorporated directly into the optimization process. The method iteratively generates solution candidates that are presented to customers for evaluation. In order to reduce the number of solutions presented to the customers, a surrogate objective function is trained by the customers' feedback. This surrogate function is then used by an optimization core for generating new improved solutions. In this paper we investigate two different metaheuristics as the core of the optimization: a variable neighborhood search (VNS) and a population-based iterated greedy (PBIG) algorithm. The metaheuristics are compared in experiments using artificial benchmark scenarios with idealized simulated user behavior.

Link to Repositum

A Unified Model and Algorithms for Temporal Map Labeling
Gemsa, Andreas, Niedermann, Benjamin, Nöllenburg, Martin
Type: Article; In: Algorithmica; Vol: 82; Issue: 10; Pages: 2709-2736
Show Abstract
We consider map labeling for the case that a map undergoes a sequence of operations such as rotation, zoom and translation over a specified time span. We unify and generalize several previous models for dynamic map labeling into one versatile and flexible model. In contrast to previous research, we completely abstract from the particular operations and express the labeling problem as a set of time intervals representing the labels' presences, activities and conflicts. One of the model's strengths is its simplicity and broad range of applications. In particular, it supports label selection both for map features with fixed position as well as for moving entities (e.g., for tracking vehicles in logistics or air traffic control). We study the active range maximization problem in this model. We prove that the problem is NP-complete and W[1]-hard, and present constant-factor approximation algorithms. In the restricted, yet practically relevant case that no more than k labels can be active at any time, we give polynomial-time algorithms as well as constant-factor approximation algorithms.
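The interval-based abstraction above reduces many questions to interval arithmetic; a minimal sketch of the basic primitive (hypothetical representation, not the paper's algorithms): a label's active range must avoid the times at which its conflict interval overlaps that of another active label.

```python
# Minimal sketch of the interval model from the abstract: presence, activity,
# and conflicts are all time intervals. Hypothetical tuple representation
# (lo, hi); this does not implement the paper's active range maximization.

def overlap(a, b):
    """Return the overlap of two closed intervals (lo, hi), or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

print(overlap((0, 5), (3, 8)))   # (3, 5): the two labels conflict here
print(overlap((0, 2), (4, 6)))   # None: never in conflict
```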

Link to Repositum

A Double-Horizon Approach to a Purely Dynamic and Stochastic Vehicle Routing Problem with Delivery Deadlines and Shift Flexibility
Frohner, Nikolaus, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 13th International Conference on the Practice and Theory of Automated Timetabling; Pages: 58-76
Show Abstract
We are facing a purely dynamic and stochastic vehicle routing problem with delivery deadlines motivated by a real-world application where orders arrive at an online store dynamically over a day to be delivered within short time. Pure dynamism is given since we do not know any orders in advance, whereas the stochastic aspect comes into play by having estimates for the hourly numbers of orders. The goal is to satisfy the daily demand by constructing closed routes from a single depot to the customers given a set of drivers with a predefined shift plan and the hourly demand estimates as input while first minimizing due time violations and then labor and travel costs. Labor costs are subject to optimization since the end times of shifts have a certain amount of flexibility and a decision has to be made whether to send home a driver earlier than planned or to extend the shift. In this work, we present a novel double-horizon approach based on the shifts and the hourly demand estimation. Within the shorter horizon we optimize the routes for the orders currently available whereas within the longer horizon we extrapolate until the end of the day to determine target shift end times for the drivers. Furthermore, we devise a route departure time strategy that balances between route quality and risking due time violations. The routing is performed by a classical adaptive large neighborhood search. We consider artificial instances and compare the results for the online problem with those for the offline scenario where all orders are known from the beginning. We observe superior performance of our approach as compared to fixed route departure time and driver send home strategies.

Link to Repositum

Parameterized Algorithms for Book Embedding Problems
Bhore, Sujoy, Ganian, Robert, Montecchiani, Fabrizio, Nöllenburg, Martin
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 24; Issue: 4; Pages: 603-620
Show Abstract
A k-page book embedding of a graph G draws the vertices of G on a line and the edges on k half-planes (called pages) bounded by this line, such that no two edges on the same page cross. We study the problem of determining whether G admits a k-page book embedding both when the linear order of the vertices is fixed, called Fixed-Order Book Thickness, or not fixed, called Book Thickness. Both problems are known to be NP-complete in general. We show that Fixed-Order Book Thickness and Book Thickness are fixed-parameter tractable parameterized by the vertex cover number of the graph and that Fixed-Order Book Thickness is fixed-parameter tractable parameterized by the pathwidth of the vertex order.

Link to Repositum

Route schematization with landmarks
Galvão, Marcelo De Lima, Krukar, Jakub, Nöllenburg, Martin, Schwering, Angela
Type: Article; In: Journal of Spatial Information Science; Issue: 21
Show Abstract
Modern navigation applications make use of a turn-by-turn instructions approach and are mostly supported by digital devices with limited display size. This combination does little to improve users' orientation or spatial knowledge acquisition. Considering this limitation, we propose a route schematization method to facilitate the readability of route information and survey knowledge acquisition. Current schematization methods focus on the route path and ignore context information, especially polygonal landmarks such as lakes, parks, and regions, which is crucial for promoting orientation. Our schematization method, in addition to the route path, takes as input: adjacent streets, point-like landmarks, and polygonal landmarks. Moreover, the schematic layout highlights spatial relations between route and context information, and improves the readability of turns at decision points as well as the visibility of the surroundings. The focus of the paper is the schematization method that combines geometric transformations and integer linear programming to produce the maps. Two routes are used as examples to present the execution information and the outputs. We complement our results with a user study that indicates a preference for our schematic layout in matching textual route instructions. The contribution of this paper is a method that produces schematic route maps with context information to support the user in wayfinding and orientation.

Link to Repositum

Adapting Stable Matchings to Evolving Preferences
Bredereck, Robert, Chen, Jiehua, Knop, Dusan, Luo, Junjie, Niedermeier, Rolf
Type: Inproceedings; In: Proceedings of AAAI2020; Pages: 1830-1837
Show Abstract
Adaptivity to changing environments and constraints is key to success in modern society. We address this by proposing "incrementalized versions" of Stable Marriage and Stable Roommates. That is, we try to answer the following question: for both problems, what is the computational cost of adapting an existing stable matching after some of the preferences of the agents have changed. While doing so, we also model the constraint that the new stable matching shall be not too different from the old one. After formalizing these incremental versions, we provide a fairly comprehensive picture of the computational complexity landscape of Incremental Stable Marriage and Incremental Stable Roommates. To this end, we exploit the parameter "degree of change" both in the input (difference between old and new preference profile) and in the output (difference between old and new stable matching). We obtain both hardness and tractability results, in particular showing a fixed-parameter tractability result with respect to the parameter "distance between old and new stable matching".

Link to Repositum

Short Q-Resolution Proofs with Homomorphisms
Shukla, Ankit, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2020; Pages: 412-428
Show Abstract
We introduce new proof systems for quantified Boolean formulas (QBFs) by enhancing Q-resolution systems with rules which exploit local and global symmetries. The rules are based on homomorphisms that admit non-injective mappings between literals. This results in systems that are stronger than Q-resolution with (injective) symmetry rules. We further strengthen the systems by utilizing a dependency system D in a way that surpasses Q(D)-resolution in relative strength.

Link to Repositum

Parameterized Study of Steiner Tree on Unit Disk Graphs
Bhore, Sujoy Kumar, Carmi, Paz, Kolay, Sudeshna, Zehavi, Meirav
Type: Inproceedings; In: 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020); Pages: 1-18
Show Abstract
We study the Steiner Tree problem on unit disk graphs. Given an n-vertex unit disk graph G, a subset R ⊆ V(G) of t vertices and a positive integer k, the objective is to decide if there exists a tree T in G that spans over all vertices of R and uses at most k vertices from V \ R. The vertices of R are referred to as terminals and the vertices of V(G) \ R as Steiner vertices. First, we show that the problem is NP-hard. Next, we prove that the Steiner Tree problem on unit disk graphs can be solved in n^O(√(t+k)) time. We also show that the Steiner Tree problem on unit disk graphs parameterized by k has an FPT algorithm with running time 2^O(k) · n^O(1). In fact, the algorithms are designed for a more general class of graphs, called clique-grid graphs [16]. We mention that the algorithmic results can be made to work for Steiner Tree on disk graphs with bounded aspect ratio. Finally, we prove that Steiner Tree on disk graphs parameterized by k is W[1]-hard.

Link to Repositum

On Covering Segments with Unit Intervals
Bergren, Dan, Eiben, Eduard, Ganian, Robert, Kanj, Iyad
Type: Inproceedings; In: 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020); Pages: 17
Show Abstract
We study the problem of covering a set of segments on a line with the minimum number of unit-length intervals, where an interval covers a segment if at least one of the two endpoints of the segment falls in the unit interval. We also study several variants of this problem. We show that the restrictions of the aforementioned problems to the set of instances in which all the segments have the same length are NP-hard. This result implies several NP-hardness results in the literature for variants and generalizations of the problems under consideration. We then study the parameterized complexity of the aforementioned problems. We provide tight results for most of them by showing that they are fixed-parameter tractable for the restrictions in which all the segments have the same length, and are W[1]-complete otherwise.

Link to Repositum

Foreword: Eighth Workshop on Graph Classes, Optimization, and Width Parameters, Toronto, Ontario, Canada
Corneil, Derek G., Ganian, Robert, Proskurowski, Andrzej
Type: Inproceedings; In: Discrete Applied Mathematics; Pages: 1-2

Link to Repositum

On the Use of Decision Diagrams for Finding Repetition-Free Longest Common Subsequences
Horn, Matthias, Djukanovic, Marko, Blum, Christian, Raidl, Günther R.
Type: Inproceedings; In: Optimization and Applications; Pages: 134-149
Show Abstract
We consider the repetition-free longest common subsequence (RFLCS) problem, where the goal is to find a longest sequence that appears as subsequence in two input strings and in which each character appears at most once. Our approach is to transform a RFLCS instance to an instance of the maximum independent set (MIS) problem which is subsequently solved by a mixed integer linear programming solver. To reduce the size of the underlying conflict graph of the MIS problem, a relaxed decision diagram is utilized. An experimental evaluation on two benchmark instance sets shows the advantages of the reduction of the conflict graphs in terms of shorter total computation times and the number of instances solved to proven optimality. A further advantage of the created relaxed decision diagrams is that heuristic solutions can be effectively derived. For some instances that could not be solved to proven optimality, new state-of-the-art results were obtained in this way.

Link to Repositum

A lower bound for the smallest uniquely hamiltonian planar graph with minimum degree three
Klocker, Benedikt, Fleischner, Herbert, Raidl, Günther R.
Type: Article; In: Applied Mathematics and Computation; Vol: 380; Issue: 125233; Pages: 125233
Show Abstract
Bondy and Jackson conjectured in 1998 that every planar uniquely hamiltonian graph must have a vertex of degree two. In this work we verify computationally Bondy and Jackson's conjecture for graphs with up to 25 vertices. Using a reduction we search for graphs that contain a stable fixed-edge cycle or equivalently a stable cycle with one vertex of degree two. For generating candidate graphs we use plantri and for checking if they contain a stable fixed-edge cycle we propose three approaches. Two of them are based on integer linear programming (ILP) and the other is a cycle enumeration algorithm. To reduce the search space we prove several properties a minimum planar graph with minimum degree at least three containing a stable fixed-edge cycle must satisfy, the most significant being triangle freeness. Comparing the three algorithms shows that the enumeration is more effective on small graphs while for larger graphs the ILP-based approaches perform better. Finally, we use the enumeration approach together with plantri to check that there does not exist a planar graph with minimum degree at least three which contains a stable fixed-edge cycle with 24 or fewer vertices.

Link to Repositum

Anytime algorithms for the longest common palindromic subsequence problem
Djukanovic, Marko, Raidl, Günther R., Blum, Christian
Type: Article; In: Computers and Operations Research; Vol: 114; Issue: 104827; Pages: 104827
Show Abstract
The longest common palindromic subsequence (LCPS) problem aims at finding a longest string that appears as a subsequence in each of a set of input strings and is a palindrome at the same time. The problem is a special variant of the well known longest common subsequence problem and has applications in particular in genomics and biology, where strings correspond to DNA or protein sequences and similarities among them shall be detected or quantified. We first present a more traditional A* search that makes use of an advanced upper bound calculation for partial solutions. This exact approach works well for instances with two input strings and, as shown in experiments, outperforms several other exact methods from the literature. However, the A* search also has natural limitations when a larger number of strings shall be considered due to the problem's complexity. To effectively deal with this case in practice, anytime A* search variants are investigated, which are able to return a reasonable heuristic solution at almost any time and are expected to find better and better solutions until reaching a proven optimum when given enough time. In particular a novel approach is proposed in which Anytime Column Search (ACS) is interleaved with traditional A* node expansions. The ACS iterations are guided by a new heuristic function that approximates the expected length of an LCPS in subproblems usually much better than the available upper bound calculation. This A*+ACS hybrid is able to solve small to medium-sized LCPS instances to proven optimality while returning good heuristic solutions together with upper bounds for large instances. In rigorous experimental evaluations we compare A*+ACS to several other anytime A* search variants and observe its superiority.

Link to Repositum

A model for finding transition-minors
Klocker, Benedikt, Fleischner, Herbert, Raidl, Günther R.
Type: Article; In: Discrete Applied Mathematics; Vol: 283; Pages: 242-264
Show Abstract
The well-known cycle double cover conjecture in graph theory is strongly related to the compatible circuit decomposition problem. A recent result by Fleischner et al. (2018) gives a sufficient condition for the existence of a compatible circuit decomposition in a transitioned 2-connected Eulerian graph, which is based on an extension of the definition of K5-minors to transitioned graphs. Graphs satisfying this condition are called SUD-K5-minor-free graphs. In this work we formulate a generalization of this property by replacing the K5 by a 4-regular transitioned graph H, which is part of the input. Furthermore, we consider the decision problem of checking for two given graphs if the extended property holds. We prove that this problem is NP-complete and fixed-parameter tractable with the size of H as parameter. We then formulate an equivalent problem, present a mathematical model for it, and prove its correctness. This mathematical model is then translated into a mixed integer linear program (MIP) for solving it in practice. Computational results show that the MIP formulation can be solved for small instances in reasonable time. In our computations we found snarks with perfect matchings whose contraction leads to SUD-K5-minor-free graphs that contain K5-minors. Furthermore, we verified that there exists a perfect pseudo-matching whose contraction leads to a SUD-K5-minor-free graph for all snarks with up to 22 vertices.

Link to Repositum

An A∗ Search Algorithm for the Constrained Longest Common Subsequence Problem
Djukanovic, Marko, Berger, Christoph, Raidl, Günther R., Blum, Christian
Type: Article; In: Information Processing Letters; Vol: 166; Issue: 106041; Pages: 106041
Show Abstract
The constrained longest common subsequence (CLCS) problem was introduced as a specific measure of similarity between molecules. It is a special case of the constrained sequence alignment problem and of the longest common subsequence (LCS) problem, which are both well-studied problems in the scientific literature. Finding similarities between sequences plays an important role in the fields of molecular biology, gene recognition, pattern matching, text analysis, and voice recognition, among others. The CLCS problem in particular represents an interesting measure of similarity for molecules that have a putative structure in common. This paper proposes an exact A∗ search algorithm for effectively solving the CLCS problem. This A∗ search is guided by a tight upper bound calculation for the cost-to-go for the LCS problem. Our computational study shows that on various artificial and real benchmark sets this algorithm scales better with growing instance size and requires significantly less computation time to prove optimality than earlier state-of-the-art approaches from the literature.

Link to Repositum

Finding Longest Common Subsequences: New anytime A* search results
Djukanovic, Marko, Raidl, Günther R., Blum, Christian
Type: Article; In: Applied Soft Computing; Vol: 95
Show Abstract
The Longest Common Subsequence (LCS) problem aims at finding a longest string that is a subsequence of each string from a given set of input strings. This problem has applications, in particular, in the context of bioinformatics, where strings represent DNA or protein sequences. Existing approaches include numerous heuristics, but only a few exact approaches, limited to rather small problem instances. Adopting various aspects from leading heuristics for the LCS, we first propose an exact A* search approach, which performs well in comparison to earlier exact approaches in the context of small instances. On the basis of A* search we then develop two hybrid A*-based algorithms in which classical A* iterations are alternated with beam search and anytime column search, respectively. A key feature to guide the heuristic search in these approaches is the usage of an approximate expected length calculation for the LCS of uniform random strings. Even for large problem instances these anytime A* variants yield reasonable solutions early during the search and improve on them over time. Moreover, they terminate with proven optimality if enough time and memory are given. Furthermore, they yield upper bounds and, thus, quality guarantees when terminated early. We comprehensively evaluate the proposed methods using most of the available benchmark sets from the literature and compare to the current state-of-the-art methods. In particular, our algorithms are able to obtain new best results for 82 out of 117 instance groups. Moreover, in most cases they also provide significantly smaller optimality gaps than other anytime algorithms.
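For context, the classical two-string LCS that these anytime A* variants generalize to m strings can be computed with the textbook dynamic program; a minimal sketch (not the authors' algorithm), row-compressed to O(|s2|) memory:

```python
def lcs_length(s1: str, s2: str) -> int:
    """Length of a longest common subsequence of two strings (classic DP)."""
    # prev[j] holds the LCS length of s1[:i] and s2[:j] for the previous row i.
    prev = [0] * (len(s2) + 1)
    for c1 in s1:
        cur = [0] * (len(s2) + 1)
        for j, c2 in enumerate(s2, 1):
            # Either extend a common subsequence by the matching character,
            # or inherit the best value from dropping one character.
            cur[j] = prev[j - 1] + 1 if c1 == c2 else max(prev[j], cur[j - 1])
        prev = cur
    return prev[-1]

# The classic textbook instance has an LCS of length 4 (e.g. "BCAB").
print(lcs_length("ABCBDAB", "BDCABA"))  # → 4
```

This O(|s1|·|s2|) scheme is only practical for two strings; its state space grows exponentially in the number of strings, which is exactly why the search-based methods above are needed.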

Link to Repositum

Solving longest common subsequence problems via a transformation to the maximum clique problem
Blum, Christian, Djukanovic, Marko, Santini, Alberto, Jiang, Hua, Li, Chu-Min, Manyà, Felip, Raidl, Günther R.
Type: Article; In: Computers and Operations Research; Vol: 125; Issue: 105089; Pages: 105089
Show Abstract
Longest common subsequence problems find various applications in bioinformatics, data compression, and text editing, just to name a few. Even though numerous heuristic approaches were published in the related literature for many of the considered problem variants during the last decades, solving these problems to optimality remains an important challenge. This is particularly the case when the number and the length of the input strings grow. In this work we define a new way to transform instances of the classical longest common subsequence problem and of some of its variants into instances of the maximum clique problem. Moreover, we propose a technique to reduce the size of the resulting graphs. Finally, a comprehensive experimental evaluation using recent exact and heuristic maximum clique solvers is presented. Numerous so-far unsolved problem instances from benchmark sets taken from the literature were solved to optimality in this way.
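For two strings, the idea behind such a transformation is easy to illustrate: each matched character pair becomes a vertex, and two matches are adjacent exactly when they can co-occur in a common subsequence, so a maximum clique corresponds to an LCS. A hedged sketch, with a brute-force clique search standing in for the exact and heuristic solvers used in the paper (function names are illustrative, and only tiny instances are feasible this way):

```python
from itertools import combinations

def lcs_as_max_clique(s1: str, s2: str) -> str:
    """Recover an LCS of s1 and s2 via a maximum clique in the match graph."""
    # Vertices: index pairs (i, j) with s1[i] == s2[j].
    vertices = [(i, j) for i in range(len(s1)) for j in range(len(s2))
                if s1[i] == s2[j]]

    def compatible(u, v):
        # Two matches can appear together in a common subsequence iff they
        # are strictly ordered the same way in both strings.
        return u[0] != v[0] and u[1] != v[1] and (u[0] < v[0]) == (u[1] < v[1])

    # Brute-force maximum clique: try subset sizes from large to small.
    best = []
    for k in range(len(vertices), 0, -1):
        for subset in combinations(vertices, k):
            if all(compatible(u, v) for u, v in combinations(subset, 2)):
                best = list(subset)
                break
        if best:
            break
    # A clique is a chain of matches; sorting by i also sorts by j.
    return "".join(s1[i] for i, _ in sorted(best))
```

Since a clique forces all matches to be mutually order-consistent, reading them off in order yields a string that is a subsequence of both inputs, and maximality of the clique gives maximality of its length.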

Link to Repositum

On Erdős-Szekeres-type problems for 𝑘-convex point sets
Balko, Martin, Bhore, Sujoy Kumar, Martínez-Sandoval, Leonardo, Valtr, Pavel
Type: Article; In: European Journal of Combinatorics; Vol: 89
Show Abstract
We study Erdős-Szekeres-type problems for 𝑘-convex point sets, a recently introduced notion that naturally extends the concept of convex position. A finite set 𝑆 of 𝑛 points is 𝑘-convex if there exists a spanning simple polygonization of 𝑆 such that the intersection of any straight line with its interior consists of at most 𝑘 connected components. We address several open problems about 𝑘-convex point sets. In particular, we extend the well-known Erdős-Szekeres Theorem by showing that, for every fixed 𝑘 ∈ ℕ, every set of 𝑛 points in the plane in general position (with no three collinear points) contains a 𝑘-convex subset of size at least Ω(log_𝑘 𝑛). We also show that there are arbitrarily large 3-convex sets of 𝑛 points in the plane in general position whose largest 1-convex subset has size O(log 𝑛). This gives a solution to a problem posed by Aichholzer et al. (2014). We prove that there is a constant c > 0 such that, for every 𝑛 ∈ ℕ, there is a set S of 𝑛 points in the plane in general position such that every 2-convex polygon spanned by at least c·log 𝑛 points from S contains a point of S in its interior. This matches an earlier upper bound by Aichholzer et al. (2014) up to a multiplicative constant and answers another of their open problems.

Link to Repositum

The Power of Cut‑Based Parameters for Computing Edge‑Disjoint Paths
Ganian, Robert, Ordyniak, Sebastian
Type: Article; In: Algorithmica; Vol: 83; Issue: 2; Pages: 726-752
Show Abstract
This paper revisits the classical edge-disjoint paths (EDP) problem, where one is given an undirected graph G and a set of terminal pairs P and asks whether G contains a set of pairwise edge-disjoint paths connecting every terminal pair in P. Our aim is to identify structural properties (parameters) of graphs which allow the efficient solution of EDP without restricting the placement of terminals in P in any way. In this setting, EDP is known to remain NP-hard even on extremely restricted graph classes, such as graphs with a vertex cover of size 3. We present three results which use edge-separator based parameters to chart new islands of tractability in the complexity landscape of EDP. Our first and main result utilizes the fairly recent structural parameter tree-cut width (a parameter with fundamental ties to graph immersions and graph cuts): we obtain a polynomial-time algorithm for EDP on every graph class of bounded tree-cut width. Our second result shows that EDP parameterized by tree-cut width is unlikely to be fixed-parameter tractable. Our final, third result is a polynomial kernel for EDP parameterized by the size of a minimum feedback edge set in the graph.

Link to Repositum

On Structural Parameterizations of the Bounded‑Degree Vertex Deletion Problem
Ganian, Robert, Klute, Fabian, Ordyniak, Sebastian
Type: Article; In: Algorithmica; Vol: 83; Issue: 1; Pages: 297-336
Show Abstract
We study the parameterized complexity of the Bounded-Degree Vertex Deletion problem (BDD), where the aim is to find a maximum induced subgraph whose maximum degree is below a given degree bound. Our focus lies on parameters that measure the structural properties of the input instance. We first show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters such as the feedback vertex set number, pathwidth, treedepth, and even the size of a minimum vertex deletion set into graphs of pathwidth and treedepth at most three. We thereby resolve an open question stated in Betzler, Bredereck, Niedermeier and Uhlmann (2012) concerning the complexity of BDD parameterized by the feedback vertex set number. On the positive side, we obtain fixed-parameter algorithms for the problem with respect to the decompositional parameter treecut width and a novel problem-specific parameter called the core fracture number.

Link to Repositum

On Existential MSO and Its Relation to ETH
Ganian, Robert, Haan, Ronald de, Kanj, Iyad, Szeider, Stefan
Type: Article; In: ACM Transactions on Computation Theory; Vol: 12; Issue: 4; Pages: 1-32
Show Abstract
Impagliazzo et al. proposed a framework, based on the logic fragment defining the complexity class SNP, to identify problems that are equivalent to k-CNF-Sat modulo subexponential-time reducibility (serf-reducibility). The subexponential-time solvability of any of these problems implies the failure of the Exponential Time Hypothesis (ETH). In this article, we extend the framework of Impagliazzo et al. and identify a larger set of problems that are equivalent to k-CNF-Sat modulo serf-reducibility. We propose a complexity class, referred to as Linear Monadic NP, that consists of all problems expressible in existential monadic second-order logic whose expressions have a linear measure in terms of a complexity parameter, which is usually the universe size of the problem. This research direction can be traced back to Fagin's celebrated theorem stating that NP coincides with the class of problems expressible in existential second-order logic. Monadic NP, a well-studied class in the literature, is the restriction of the aforementioned logic fragment to existential monadic second-order logic. The proposed class Linear Monadic NP is then the restriction of Monadic NP to problems whose expressions have linear measure in the complexity parameter. We show that Linear Monadic NP includes many natural complete problems such as the satisfiability of linear-size circuits, dominating set, independent dominating set, and perfect code. Therefore, for any of these problems, its subexponential-time solvability is equivalent to the failure of ETH. We prove, using logic games, that the aforementioned problems are inexpressible in the monadic fragment of SNP, and hence, are not captured by the framework of Impagliazzo et al. Finally, we show that Feedback Vertex Set is inexpressible in existential monadic second-order logic, and hence is not in Linear Monadic NP, and investigate the existence of certain reductions between Feedback Vertex Set (and variants of it) and 3-CNF-Sat.

Link to Repositum

Using decomposition-parameters for QBF: Mind the prefix!
Eiben, Eduard, Ganian, Robert, Ordyniak, Sebastian
Type: Article; In: Journal of Computer and System Sciences; Vol: 110; Pages: 1-21
Show Abstract
Similar to the satisfiability (SAT) problem, which can be seen to be the archetypical problem for NP, the quantified Boolean formula problem (QBF) is the archetypical problem for PSPACE. Recently, Atserias and Oliva (2014) showed that, unlike for SAT, many of the well-known decompositional parameters (such as treewidth and pathwidth) do not allow efficient algorithms for QBF. The main reason for this seems to be the lack of awareness of these parameters towards the dependencies between variables of a QBF formula. In this paper we extend the ordinary pathwidth to the QBF-setting by introducing prefix pathwidth, which takes into account the dependencies between variables in a QBF, and show that it leads to an efficient algorithm for QBF. We hope that our approach will help to initiate the study of novel tailor-made decompositional parameters for QBF and thereby help to lift the success of these decompositional parameters from SAT to QBF.

Link to Repositum

Towards a Polynomial Kernel for Directed Feedback Vertex Set
Bergougnoux, Benjamin, Eiben, Eduard, Ganian, Robert, Ordyniak, Sebastian, Ramanujan, M. S.
Type: Article; In: Algorithmica; Vol: 83; Issue: 5; Pages: 1201-1221
Show Abstract
In the Directed Feedback Vertex Set (DFVS) problem, the input is a directed graph D and an integer k. The objective is to determine whether there exists a set of at most k vertices intersecting every directed cycle of D. DFVS was shown to be fixed-parameter tractable when parameterized by solution size by Chen et al. (JACM 55(5):177-186, 2008); since then, the existence of a polynomial kernel for this problem has become one of the largest open problems in the area of parameterized algorithmics. Since this problem has remained open in spite of the best efforts of a number of prominent researchers and pioneers in the field, a natural step forward is to study the kernelization complexity of DFVS parameterized by a natural larger parameter. In this paper, we study DFVS parameterized by the feedback vertex set number of the underlying undirected graph. We provide two main contributions: a polynomial kernel for this problem on general instances, and a linear kernel for the case where the input digraph is embeddable on a surface of bounded genus.

Link to Repositum

Efficient non-segregated routing for reconfigurable demand-aware networks
Fenz, Thomas, Foerster, Klaus-Tycho, Schmid, Stefan, Villedieu, Anaïs
Type: Article; In: Computer Communications; Vol: 164; Pages: 138-147
Show Abstract
More and more networks are becoming reconfigurable: not just the routing can be programmed, but the physical layer itself as well. Various technologies enable this programmability, ranging from optical circuit switches to beamformed wireless connections and free-space optical interconnects. Existing reconfigurable network topologies are typically hybrid in nature, consisting of static and reconfigurable links. However, even though the static and reconfigurable links form a joint structure, routing policies are artificially segregated and hence do not fully exploit the network resources: the state of the art is to route large elephant flows on direct reconfigurable links, whereas the remaining traffic is left to the static network topology. Recent work showed that such artificial segregation is inefficient, but did not provide the tools to actually leverage the benefits of non-segregated routing. In this paper, we provide several algorithms which take advantage of non-segregated routing by jointly optimizing topology and routing. We compare our algorithms to segregated routing policies and also evaluate their performance in workload-driven simulations, based on real-world traffic traces. We find that our algorithms not only outperform segregated routing policies in various settings, but also come close to the optimal solution, computed by an integer linear program formulation also presented in this paper. Finally, we also provide insights into the complexity of the underlying combinatorial optimization problem by deriving approximation hardness results.

Link to Repositum

A* Search for Prize-Collecting Job Sequencing with One Common and Multiple Secondary Resources
Horn, Matthias, Raidl, Günther R., Rönnberg, Elina
Type: Article; In: Annals of Operations Research; Vol: 302; Issue: 2; Pages: 477-505
Show Abstract
We consider a sequencing problem with time windows, in which a subset of a given set of jobs shall be scheduled. A scheduled job has to execute without preemption and during this time, the job needs both a common resource for a part of the execution as well as a secondary resource for the whole execution time. The common resource is shared by all jobs while a secondary resource is shared only by a subset of the jobs. Each job has one or more time windows and due to these, it is not possible to schedule all jobs. Instead, each job is associated with a prize and the task is to select a subset of jobs which yields a feasible schedule with a maximum sum of prizes. First, we argue that the problem is NP-hard. Then, we present an exact A* algorithm and derive different upper bounds for the total prize; these bounds are based on constraint and Lagrangian relaxations of a linear programming relaxation of a multidimensional knapsack problem. For comparison, a compact mixed integer programming (MIP) model and a constraint programming model are also presented. An extensive experimental evaluation on three types of problem instances shows that the A* algorithm outperforms the other approaches and is able to solve small to medium size instances with up to about 40 jobs to proven optimality. In cases where A* does not prove that an optimal solution is found, the obtained upper bounds are stronger than those of the MIP model.

Link to Repositum

A*-based construction of decision diagrams for a prize-collecting scheduling problem
Horn, Matthias, Maschler, Johannes, Raidl, Günther R., Rönnberg, Elina
Type: Article; In: Computers and Operations Research; Vol: 126; Issue: 105125; Pages: 105125
Show Abstract
Decision diagrams (DDs) have proven to be useful tools in combinatorial optimization. Relaxed DDs represent discrete relaxations of problems, can encode essential structural information in a compact form, and may yield strong dual bounds. We propose a novel construction scheme for relaxed multi-valued DDs for a scheduling problem in which a subset of elements has to be selected from a ground set and the selected elements need to be sequenced. The proposed construction scheme builds upon A* search guided by a fast-to-calculate problem-specific dual bound heuristic. In contrast to traditional DD compilation methods, the new approach does not rely on a correspondence of DD layers to decision variables. For the considered kind of problem, this implies that multiple nodes representing the same state at different layers can be avoided, and consequently also many redundant isomorphic substructures. For keeping the relaxed DD compact, a new mechanism for merging nodes in a layer-independent way is suggested. For our prize-collecting job sequencing problem, experimental results show that the DDs from our A*-based approach provide substantially better bounds while frequently being an order of magnitude smaller than DDs obtained from traditional compilation methods, given about the same time. To obtain a heuristic solution and a corresponding lower bound, we further propose to construct a restricted DD based on the relaxed one, thereby substantially exploiting already gained information. This approach outperforms a standalone restricted DD construction, basic constraint programming and mixed integer linear programming approaches, and a variable neighborhood search in terms of solution quality on most of our benchmark instances.

Link to Repositum

A Beam Search for the Longest Common Subsequence Problem Guided by a Novel Approximate Expected Length Calculation
Djukanovic, Marko, Raidl, Günther R., Blum, Christian
Type: Inproceedings; In: Machine Learning, Optimization, and Data Science; Pages: 154-167
Show Abstract
The longest common subsequence problem (LCS) aims at finding a longest string that appears as a subsequence in each of a given set of input strings. This is a well-known NP-hard problem which has been tackled by many heuristic approaches. Among them, the best performing ones are based on beam search (BS) but differ significantly in various aspects. In this paper we compare the existing BS-based approaches by using a common BS framework, making the differences more explicit. Furthermore, we derive a novel heuristic function to guide BS, which approximates the expected length of an LCS of random strings. In a rigorous experimental evaluation we compare all BS-based methods from the literature and investigate the impact of our new heuristic guidance. Results show in particular that our novel heuristic guidance frequently leads to significantly better solutions. New best solutions are obtained for a wide range of the existing benchmark instances. Keywords: string problems; expected value; beam search
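A generic BS skeleton for the LCS looks roughly as follows; here a trivial shortest-remaining-suffix upper bound replaces the paper's expected-length guidance, and names and the beam width are illustrative:

```python
def beam_search_lcs(strings, beam_width=100):
    """Beam search for a long common subsequence of the given strings."""
    # A state is a tuple of positions: strings[i][:pos[i]] is already consumed.
    alphabet = sorted(set(strings[0]))
    start = (0,) * len(strings)

    def upper_bound(pos):
        # Trivial bound on the achievable extension: shortest remaining suffix.
        return min(len(s) - p for s, p in zip(strings, pos))

    beam = [(start, "")]
    best = ""
    while beam:
        children = []
        for pos, sol in beam:
            for c in alphabet:
                nxt = []
                for s, p in zip(strings, pos):
                    i = s.find(c, p)
                    if i < 0:
                        break
                    nxt.append(i + 1)
                else:  # c occurs in every remaining suffix: feasible extension
                    children.append((tuple(nxt), sol + c))
        for pos, sol in children:
            if len(sol) > len(best):
                best = sol
        # Keep only the beam_width states with the highest g + h value.
        children.sort(key=lambda st: len(st[1]) + upper_bound(st[0]),
                      reverse=True)
        beam = children[:beam_width]
    return best
```

The quality of the heuristic `upper_bound` is exactly what the competing BS variants differ in: a tighter, better-informed estimate ranks promising partial solutions higher and so survives the pruning step with better candidates.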

Link to Repositum

Towards Data-Driven Multilinear Metro Maps
Nickel, Soeren, Nöllenburg, Martin
Type: Inproceedings; In: Diagrammatic Representation and Inference; Pages: 153-161
Show Abstract
Traditionally, most schematic metro maps as well as metro map layout algorithms adhere to an octolinear layout style with all paths composed of horizontal, vertical, and 45°-diagonal edges. Despite growing interest in more general multilinear metro maps, generic algorithms to draw metro maps based on a system of k ≥ 2 not necessarily equidistant slopes have not been investigated thoroughly. We present and implement an adaptation of the octolinear mixed-integer linear programming approach of Nöllenburg and Wolff (2011) that can draw metro maps schematized to any set C of arbitrary orientations. We further present a data-driven approach to determine a suitable set C by either detecting the best rotation of an equidistant orientation system or by clustering the input edge orientations using a k-means algorithm. We demonstrate the new possibilities of our method in a real-world case study.

Link to Repositum

MaxSAT-Based Postprocessing for Treedepth
Peruvemba Ramaswamy, Vaidyanathan, Szeider, Stefan
Type: Inproceedings; In: Principles and Practice of Constraint Programming 26th International Conference, CP 2020, Louvain-la-Neuve, Belgium, September 7–11, 2020, Proceedings; Pages: 478-495
Show Abstract
Treedepth is an increasingly popular graph invariant. Many NP-hard combinatorial problems can be solved efficiently on graphs of bounded treedepth. Since the exact computation of treedepth is itself NP-hard, recent research has focused on the development of heuristics that compute good upper bounds on the treedepth. In this paper, we introduce a novel MaxSAT-based approach for improving a heuristically obtained treedepth decomposition. At the core of our approach is an efficient MaxSAT encoding of a weighted generalization of treedepth arising naturally due to subtree contractions. The encoding is applied locally to the given treedepth decomposition to reduce its depth, in conjunction with the collapsing of subtrees. We show the local improvement method's correctness and provide an extensive experimental evaluation with some encouraging results.

Link to Repositum

Interpolation-Based Semantic Gate Extraction and Its Applications to QBF Preprocessing
Slivovsky, Friedrich
Type: Inproceedings; In: Computer Aided Verification; Pages: 508-528
Show Abstract
We present a new semantic gate extraction technique for propositional formulas based on interpolation. While known gate detection methods are incomplete and rely on pattern matching or simple semantic conditions, this approach can detect any definition entailed by an input formula. As an application, we consider the problem of computing unique strategy functions from Quantified Boolean Formulas (QBFs) and Dependency Quantified Boolean Formulas (DQBFs). Experiments with a prototype implementation demonstrate that functions can be efficiently extracted from formulas in standard benchmark sets, and that many of these definitions remain undetected by syntactic gate detection. We turn this into a preprocessing technique by substituting unique strategy functions for input variables and test solver performance on the resulting instances. Compared to syntactic gate detection, we see a significant increase in the number of solved QBF instances, as well as a modest increase for DQBF instances.

Link to Repositum

On the Parameterized Complexity of Clustering Incomplete Data into Subspaces of Small Rank
Ganian, Robert, Kanj, Iyad, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the AAAI Conference on Artificial Intelligence; Pages: 3906-3913
Show Abstract
We consider a fundamental matrix completion problem where we are given an incomplete matrix and a set of constraints modeled as a CSP instance. The goal is to complete the matrix subject to the input constraints, and in such a way that the complete matrix can be clustered into few subspaces with low dimension. This problem generalizes several problems in data mining and machine learning, including the problem of completing a matrix into one with minimum rank. In addition to its ubiquitous applications in machine learning, the problem has strong connections to information theory, related to binary linear codes, and variants of it have been extensively studied from that perspective. We formalize the problem mentioned above and study its classical and parameterized complexity with respect to several natural parameters that are desirably small, and with respect to the CSP fragment from which the set of constraints is drawn. We draw a detailed landscape of the complexity and parameterized complexity of the problem with respect to the parameters under consideration, and with respect to several well-studied CSP fragments.

Link to Repositum

A Faster Algorithm for Propositional Model Counting Parameterized by Incidence Treewidth
Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2020; Pages: 267-276
Show Abstract
The propositional model counting problem (#SAT) is known to be fixed-parameter tractable (FPT) when parameterized by the width k of a given tree decomposition of the incidence graph. The running time of the fastest known FPT algorithm contains the exponential factor of 4^k. We improve this factor to 2^k by utilizing fast algorithms for computing the zeta transform and covering product of functions representing partial model counts, thereby achieving the same running time as FPT algorithms that are parameterized by the less general treewidth of the primal graph. Our new algorithm is asymptotically optimal unless the Strong Exponential Time Hypothesis (SETH) fails.
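The zeta transform over the subset lattice, one of the ingredients of this speedup, can be computed with the standard in-place "fast zeta" scheme in O(2^n · n) additions rather than the naive O(3^n); a generic sketch (not the paper's full #SAT algorithm):

```python
def zeta_transform(f):
    """In-place zeta transform over the subset lattice.

    f is a list of length 2**n indexed by bitmask; afterwards,
    f[S] = sum of f[T] over all subsets T of S.
    Runs in O(2^n * n) additions instead of the naive O(3^n).
    """
    n = len(f).bit_length() - 1  # len(f) == 2**n
    for i in range(n):
        bit = 1 << i
        for s in range(len(f)):
            if s & bit:
                # Fold in the value of the subset lacking element i.
                f[s] += f[s ^ bit]
    return f

# n = 2: masks 0b00, 0b01, 0b10, 0b11.
print(zeta_transform([1, 2, 3, 4]))  # → [1, 3, 4, 10]
```

The last entry, 10, is the sum over all four subsets of {0, 1}, as expected; sweeping one element per pass is what brings the cost down to n · 2^n.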

Link to Repositum

Formalizing Graph Trail Properties in Isabelle/HOL
Kovács, Laura, Lachnitt, Hanna, Szeider, Stefan
Type: Inproceedings; In: Intelligent Computer Mathematics 13th International Conference, CICM 2020, Bertinoro, Italy, July 26–31, 2020, Proceedings; Pages: 190-205
Show Abstract
We describe a dataset expressing and proving properties of graph trails, using Isabelle/HOL. We formalize the reasoning about strictly increasing and decreasing trails, using weights over edges, and prove lower bounds over the length of trails in weighted graphs. We do so by extending the graph theory library of Isabelle/HOL with an algorithm computing the length of a longest strictly decreasing graph trail starting from a vertex for a given weight distribution, and prove that any decreasing trail is also an increasing one.

Link to Repositum

Distributing Battery Swapping Stations for Electric Scooters in an Urban Area
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther
Type: Inproceedings; In: Optimization and Applications; Pages: 150-165
Show Abstract
We investigate the problem of setting up battery swapping stations for electric scooters in an urban area from a computational optimization point of view. For the considered electric scooters batteries can be swapped quickly in a few simple steps. Depleted batteries are recharged at these swapping stations and provided again to customers once fully charged. Our goal is to identify optimal battery swapping station locations as well as to determine their capacities appropriately in order to cover a specified level of assumed demand at minimum cost. We propose a Mixed Integer Linear Programming (MILP) formulation that models the customer demand over time in a discretized fashion and also considers battery charging times. Moreover, we propose a Large Neighborhood Search (LNS) heuristic for addressing larger problem instances for which the MILP model cannot practically be solved anymore. Prototype implementations are experimentally evaluated on artificial benchmark scenarios. Moreover, we also consider an instance derived from real-world taxi and bus stop shelter data of Manhattan. With the MILP model, instances with up to 1000 potential station locations and up to 2000 origin/destination demand pairs can be solved to near optimality, while for larger instances the LNS is a highly promising choice.

Link to Repositum

Route Duration Prediction in a Stochastic and Dynamic Vehicle Routing Problem with Short Delivery Deadlines
Frohner, Nikolaus, Horn, Matthias, Raidl, Günther R.
Type: Inproceedings; In: Procedia Computer Science; Pages: 366-370
Show Abstract
We are facing a real-world vehicle routing problem where orders arrive dynamically over the day at an online store and have to be delivered within a short time. Stochastic information in the form of the expected number and weight of orders and the traffic congestion level is available upfront. The goal is to predict the average time needed to deliver an order for a given time and day. This information is desirable both for routing decisions in the short horizon and for planning vehicle drivers' shifts with just the right capacity prior to the actual day. We compare a white-box linear regression model and a neural-network-based black-box model on historic route data collected over three months. We employ an hourly data aggregation approach with sampling statistics to estimate the ground truth and features. The weighted mean square error is used as loss function to favor samples with less uncertainty. A mean validation R² score of 0.53 over 10×5-fold cross-validations indicates a substantial amount of unexplained variance. Both predictors are slightly optimistic and produce median standardized absolute residuals of about one.
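A weighted squared-error fit of the kind used for the white-box model can be illustrated by closed-form one-feature weighted least squares; a hedged sketch (the paper's actual features and weighting scheme are its own):

```python
def weighted_linear_fit(xs, ys, ws):
    """Fit y ≈ a*x + b minimizing sum_i w_i * (y_i - a*x_i - b)**2.

    Solves the weighted normal equations in closed form; samples with
    larger weight w_i (less uncertainty) pull the line more strongly.
    """
    sw = sum(ws)
    swx = sum(w * x for w, x in zip(ws, xs))
    swy = sum(w * y for w, y in zip(ws, ys))
    swxx = sum(w * x * x for w, x in zip(ws, xs))
    swxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    a = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    b = (swy - a * swx) / sw
    return a, b

# Data lying exactly on y = 2x + 1 is recovered for any positive weights.
print(weighted_linear_fit([0, 1, 2, 3], [1, 3, 5, 7], [1, 2, 3, 4]))
```

The same loss generalizes directly to more features and to the neural-network model by using the weights inside the training objective.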

Link to Repositum

Multi-level Area Balancing of Clustered Graphs
Wu, Hsiang-Yun, Nöllenburg, Martin, Viola, Ivan
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 28; Issue: 7; Pages: 2682-2696
Show Abstract
We present a multi-level area balancing technique for laying out clustered graphs to facilitate a comprehensive understanding of the complex relationships that exist in various fields, such as life sciences and sociology. Clustered graphs are often used to model relationships that are accompanied by attribute-based grouping information. Such information is essential for robust data analysis, such as for the study of biological taxonomies or educational backgrounds. Hence, the ability to smartly arrange textual labels and pack graphs within a certain screen space is desired to successfully convey the attribute data. Here we propose to hierarchically partition the input screen space using Voronoi tessellations in multiple levels of detail. In our method, the position of textual labels is guided by the blending of constrained forces and the forces derived from centroidal Voronoi cells. The proposed algorithm considers three main factors: (1) area balancing, (2) schematized space partitioning, and (3) hairball management. We primarily focus on area balancing, which aims to allocate a uniform area for each textual label in the diagram. We achieve this by first untangling a general graph to a clustered graph through textual label duplication, and then coupling with spanning-tree-like visual integration. We illustrate the feasibility of our approach with examples and then evaluate our method by comparing it with well-known conventional approaches and collecting feedback from domain experts.

Link to Repositum

Geometric Planar Networks on Bichromatic Points
Bandyapadhyay, Sayan, Banik, Aritra, Bhore, Sujoy, Nöllenburg, Martin
Type: Inproceedings; In: Algorithms and Discrete Applied Mathematics
Show Abstract
We study four classical graph problems - Hamiltonian path, Traveling salesman, Minimum spanning tree, and Minimum perfect matching on geometric graphs induced by bichromatic (red and blue) points. These problems have been widely studied for points in the Euclidean plane, and many of them are NP-hard. In this work, we consider these problems in two restricted settings: (i) collinear points and (ii) equidistant points on a circle. We show that almost all of these problems can be solved in linear time in these constrained, yet non-trivial settings.

Link to Repositum

On Solving a Generalized Constrained Longest Common Subsequence Problem
Djukanovic, Marko, Berger, Christoph, Raidl, Günther R., Blum, Christian
Type: Inproceedings; In: Optimization and Applications; Pages: 55-70
Show Abstract
Given a set of two input strings and a pattern string, the constrained longest common subsequence problem deals with finding a longest string that is a subsequence of both input strings and that contains the given pattern string as a subsequence. This problem has various applications, especially in computational biology. In this work we consider the NP-hard case of the problem in which more than two input strings are given. First, we adapt an existing A* search from two input strings to an arbitrary number m of input strings (m ≥ 2). With the aim of tackling large problem instances approximately, we additionally propose a greedy heuristic and a beam search. All three algorithms are compared to an existing approximation algorithm from the literature. Beam search turns out to be the best heuristic approach, matching almost all optimal solutions obtained by A* search for rather small instances.
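For orientation, the classic dynamic program for the unconstrained two-string LCS - the base case that the paper's A* search, greedy heuristic, and beam search generalize to m ≥ 2 input strings plus a pattern constraint - can be sketched as follows. This is an illustrative sketch only, not the authors' implementation:

```python
# Textbook O(n*m) dynamic program for the longest common subsequence of
# two strings; the paper's problem additionally requires the result to
# contain a given pattern string as a subsequence.

def lcs(s: str, t: str) -> str:
    n, m = len(s), len(t)
    # dp[i][j] = length of an LCS of the prefixes s[:i] and t[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack through the table to recover one optimal subsequence.
    out = []
    i, j = n, m
    while i > 0 and j > 0:
        if s[i - 1] == t[j - 1]:
            out.append(s[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```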

Link to Repositum

Stable roommates with narcissistic, single-peaked, and single-crossing preferences
Bredereck, Robert, Chen, Jiehua, Finnendahl, Ugo Paavo, Niedermeier, Rolf
Type: Article; In: Autonomous Agents and Multi-Agent Systems; Vol: 34; Issue: 53
Show Abstract
The classical Stable Roommates problem is to decide whether there exists a matching of an even number of agents such that no two agents which are not matched to each other would prefer to be with each other rather than with their respectively assigned partners. We investigate Stable Roommates with complete (i.e., every agent can be matched with any other agent) or incomplete preferences, with ties (i.e., two agents are considered of equal value to some agent) or without ties. It is known that in general allowing ties makes the problem NP-complete. We provide algorithms for Stable Roommates that are, compared to those in the literature, more efficient when the input preferences are complete and have some structural property, such as being narcissistic, single-peaked, and single-crossing. However, when the preferences are incomplete and have ties, we show that being single-peaked and single-crossing does not reduce the computational complexity - Stable Roommates remains NP-complete.

Link to Repositum

A Survey on Transit Map Layout - from Design, Machine, and Human Perspectives
Wu, Hsiang‐Yun, Niedermann, Benjamin, Takahashi, Shigeo, Roberts, Maxwell J., Nöllenburg, Martin
Type: Article; In: Computer Graphics Forum; Vol: 39; Issue: 3; Pages: 619-646
Show Abstract
Transit maps are designed to present information for using public transportation systems, such as urban railways. Creating a transit map is a time-consuming process, which requires iterative information selection, layout design, and usability validation, and thus maps cannot easily be customised or updated frequently. To improve this, scientists investigate fully- or semi-automatic techniques in order to produce high quality transit maps using computers and further examine their corresponding usability. Nonetheless, the quality gap between manually-drawn maps and machine-generated maps is still large. To elaborate the current research status, this state-of-the-art report provides an overview of the transit map generation process, primarily from Design, Machine, and Human perspectives. A systematic categorisation is introduced to describe the design pipeline, and an extensive analysis of perspectives is conducted to support the proposed taxonomy. We conclude this survey with a discussion on the current research status, open challenges, and future directions.

Link to Repositum

Placing Labels in Road Maps: Algorithms and Complexity
Gemsa, Andreas, Niedermann, Benjamin, Nöllenburg, Martin
Type: Article; In: Algorithmica; Vol: 82; Issue: 7; Pages: 1881-1908
Show Abstract
A road map can be interpreted as a graph embedded in the plane, in which each vertex corresponds to a road junction and each edge to a particular road section. In this paper, we consider the computational cartographic problem to place non-overlapping road labels along the edges so that as many road sections as possible are identified by their name, i.e., covered by a label. We show that this is NP-hard in general, but the problem can be solved in O(n³) time if the road map is an embedded tree with n vertices and constant maximum degree. This special case is not only of theoretical interest, but our algorithm in fact provides a very useful subroutine in exact or heuristic algorithms for labeling general road maps.

Link to Repositum

An Algorithmic Study of Fully Dynamic Independent Sets for Map Labeling
Bhore, Sujoy, Li, Guangping, Nöllenburg, Martin
Type: Inproceedings; In: 28th Annual European Symposium on Algorithms (ESA 2020); Pages: 1-24
Show Abstract
Map labeling is a classical problem in cartography and geographic information systems (GIS) that asks to place labels for area, line, and point features, with the goal to select and place the maximum number of independent, i.e., overlap-free, labels. A practically interesting case is point labeling with axis-parallel rectangular labels of common size. In a fully dynamic setting, at each time step, either a new label appears or an existing label disappears. Then, the challenge is to maintain a maximum cardinality subset of pairwise independent labels with sub-linear update time. Motivated by this, we study the maximal independent set (MIS) and maximum independent set (Max-IS) problems on fully dynamic (insertion/deletion model) sets of axis-parallel rectangles of two types - (i) uniform height and width and (ii) uniform height and arbitrary width; both settings can be modeled as rectangle intersection graphs. We present the first deterministic algorithm for maintaining a MIS (and thus a 4-approximate Max-IS) of a dynamic set of uniform rectangles with amortized sub-logarithmic update time. This breaks the natural barrier of Ω(Δ) update time (where Δ is the maximum degree in the graph) for vertex updates presented by Assadi et al. (STOC 2018). We continue by investigating Max-IS and provide a series of deterministic dynamic approximation schemes. For uniform rectangles, we first give an algorithm that maintains a 4-approximate Max-IS with O(1) update time. In a subsequent algorithm, we establish the trade-off between approximation quality 2(1 + 1/k) and update time O(k² log n), for k ∈ ℕ. We conclude with an algorithm that maintains a 2-approximate Max-IS for dynamic sets of unit-height and arbitrary-width rectangles with O(ω log n) update time, where ω is the maximum size of an independent set of rectangles stabbed by any horizontal line.
We have implemented our algorithms and report the results of an experimental comparison exploring the trade-off between solution quality and update time for synthetic and real-world map labeling instances.

Link to Repositum

Towards Faster Reasoners by Using Transparent Huge Pages
Fichte, Johannes, Manthey, Norbert, Stecklina, Julian, Schidler, Andre
Type: Inproceedings; In: Principles and Practice of Constraint Programming CP 2020; Vol: 12333; Pages: 304-322
Show Abstract
Various state-of-the-art automated reasoning (AR) tools are widely used as backend tools in research on knowledge representation and reasoning as well as in industrial applications. In testing and verification, those tools often run continuously or nightly. In this work, we present an approach to reduce the runtime of AR tools by 10% on average and up to 20% for long-running tasks. Our improvement addresses the high memory usage that comes with the data structures used in AR tools, which are based on conflict-driven no-good learning. We establish a general way to enable faster memory access by using the memory cache line of modern hardware more effectively. To this end, we extend the standard C library (glibc) to allow dynamic use of a memory management feature called huge pages. Huge pages reduce the overhead that is required to translate memory addresses between the virtual memory of the operating system and the physical memory of the hardware. In that way, we can reduce the runtime, and in turn the cost, of running AR tools and applications with similar memory access patterns simply by linking the tool against this new glibc library when compiling it. In everyday industrial applications, runtime savings allow including more detailed verification tasks, obtaining better results from any-time optimization algorithms with a bounded execution time, and saving energy during nightly software builds. To back up the claimed speed-up, we present experimental results for tools that are commonly used in the AR community, including the domains ASP, hardware and software BMC, MaxSAT, and SAT.

Link to Repositum

Computing Optimal Hypertree Decompositions
Schidler, André, Szeider, Stefan
Type: Inproceedings; In: 2020 Proceedings of the Twenty-Second Workshop on Algorithm Engineering and Experiments (ALENEX); Pages: 1-11
Show Abstract
We propose a new algorithmic method for computing the hypertree width of hypergraphs, and we evaluate its performance empirically. At the core of our approach lies a novel ordering-based characterization of hypertree width which lends itself to an efficient encoding to SAT modulo Theory (SMT). We tested our algorithm on an extensive benchmark set consisting of real-world instances from various sources. Our approach outperforms state-of-the-art algorithms for hypertree width. We achieve a further speedup by a new technique that first solves a relaxation of the problem and subsequently uses the solution to guide the algorithm for solving the problem itself.

Link to Repositum

An Efficient Algorithm for Counting Markov Equivalent DAGs
Ganian, Robert, Hamm, Thekla, Talvitie, Topi
Type: Inproceedings; In: Proceedings of the AAAI Conference on Artificial Intelligence; Pages: 10136-10143
Show Abstract
We consider the problem of counting the number of DAGs which are Markov-equivalent, i.e., which encode the same conditional independencies between random variables. The problem has been studied, among others, in the context of causal discovery, and it is known that it reduces to counting the number of so-called moral acyclic orientations of certain undirected graphs, notably chordal graphs. Our main empirical contribution is a new algorithm which outperforms previously known exact algorithms for the considered problem by a significant margin. On the theoretical side, we show that our algorithm is guaranteed to run in polynomial time on a broad class of chordal graphs, including interval graphs.

Link to Repositum

Extending Partial 1-Planar Drawings
Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Klute, Fabian, Nöllenburg, Martin
Type: Inproceedings; In: 47th International Colloquium on Automata, Languages, and Programming; Pages: 1-19
Show Abstract
Algorithmic extension problems of partial graph representations such as planar graph drawings or geometric intersection representations are of growing interest in topological graph theory and graph drawing. In such an extension problem, we are given a tuple (G, H, ℋ) consisting of a graph G, a connected subgraph H of G and a drawing ℋ of H, and the task is to extend ℋ into a drawing of G while maintaining some desired property of the drawing, such as planarity. In this paper we study the problem of extending partial 1-planar drawings, which are drawings in the plane that allow each edge to have at most one crossing. In addition we consider the subclass of IC-planar drawings, which are 1-planar drawings with independent crossings. Recognizing 1-planar graphs as well as IC-planar graphs is NP-complete and the NP-completeness easily carries over to the extension problem. Therefore, our focus lies on establishing the tractability of such extension problems in a weaker sense than polynomial-time tractability. Here, we show that both problems are fixed-parameter tractable when parameterized by the number of edges missing from H, i.e., the edge deletion distance between H and G. The second part of the paper then turns to a more powerful parameterization which is based on measuring the vertex+edge deletion distance between the partial and complete drawing, i.e., the minimum number of vertices and edges that need to be deleted to obtain H from G.

Link to Repositum

Parameterized Complexity of Envy-Free Resource Allocation in Social Networks
Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the AAAI Conference on Artificial Intelligence; Pages: 7135-7142
Show Abstract
We consider the classical problem of allocating resources among agents in an envy-free (and, where applicable, proportional) way. Recently, the basic model was enriched by introducing the concept of a social network which allows to capture situations where agents might not have full information about the allocation of all resources. We initiate the study of the parameterized complexity of these resource allocation problems by considering natural parameters which capture structural properties of the network and similarities between agents and items. In particular, we show that even very general fragments of the considered problems become tractable as long as the social network has bounded treewidth or bounded clique-width. We complement our results with matching lower bounds which show that our algorithms cannot be substantially improved.

Link to Repositum

Fixed-Parameter Tractability of Dependency QBF with Structural Parameters
Ganian, Robert, Peitl, Tomáš, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Seventeenth International Conference on Principles of Knowledge Representation and Reasoning
Show Abstract
We study dependency quantified Boolean formulas (DQBF), an extension of QBF in which dependencies of existential variables are listed explicitly rather than being implicit in the order of quantifiers. DQBF evaluation is a canonical NEXPTIME-complete problem, a complexity class containing many prominent problems that arise in Knowledge Representation and Reasoning. One approach for solving such hard problems is to identify and exploit structural properties captured by numerical parameters such that bounding these parameters gives rise to an efficient algorithm. This idea is captured by the notion of fixed-parameter tractability (FPT). We initiate the study of DQBF through the lens of fixed-parameter tractability and show that the evaluation problem becomes FPT under two natural parameterizations: the treewidth of the primal graph of the DQBF instance combined with a restriction on the interactions between the dependency sets, and also the treedepth of the primal graph augmented by edges representing dependency sets.

Link to Repositum

Finding the Hardest Formulas for Resolution
Peitl, Tomáš, Szeider, Stefan
Type: Inproceedings; In: Principles and Practice of Constraint Programming 26th International Conference, CP 2020, Louvain-la-Neuve, Belgium, September 7–11, 2020, Proceedings; Pages: 514-530
Show Abstract
A CNF formula is harder than another CNF formula with the same number of clauses if it requires a longer resolution proof. The resolution hardness numbers give for m = 1, 2, . . . the length of a shortest proof of a hardest formula on m clauses. We compute the first ten resolution hardness numbers, along with the corresponding hardest formulas. We achieve this by a candidate filtering and symmetry breaking search scheme for limiting the number of potential candidates for formulas and an efficient SAT encoding for computing a shortest resolution proof of a given candidate formula.
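The single inference step underlying this proof system is the resolution rule: from clauses C ∨ x and D ∨ ¬x derive the resolvent C ∨ D, and a formula is refuted once the empty clause is derived. A minimal sketch with DIMACS-style integer literals follows; the frozenset clause representation is an illustrative assumption, not the authors' search code:

```python
# Resolution rule on clauses encoded as frozensets of nonzero integers,
# where a negative integer denotes the negated variable (DIMACS style).

def resolve(c1: frozenset, c2: frozenset, pivot: int) -> frozenset:
    """Resolvent of c1 (containing pivot) and c2 (containing -pivot)."""
    assert pivot in c1 and -pivot in c2, "pivot must occur with opposite signs"
    return (c1 - {pivot}) | (c2 - {-pivot})

# One-step refutation of the unsatisfiable formula {x} ∧ {¬x}:
# resolving the two unit clauses yields the empty clause.
empty_clause = resolve(frozenset({1}), frozenset({-1}), 1)
```

The length of a resolution proof counted here is the number of such derivation steps needed to reach the empty clause.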

Link to Repositum

Towards Improving Merging Heuristics for Binary Decision Diagrams
Frohner, Nikolaus, Raidl, Günther
Type: Inproceedings; In: Learning and Intelligent Optimization : 13th International Conference, LION 13, Chania, Crete, Greece, May 27–31, 2019, Revised Selected Papers; Pages: 30-45
Show Abstract
Over the last years, binary decision diagrams (BDDs) have become a powerful tool in the field of combinatorial optimization. They are directed acyclic multigraphs and represent the solution space of binary optimization problems in a recursive way. During their construction, merging of nodes in this multigraph is applied to keep the size within polynomial bounds, resulting in a discrete relaxation of the original problem. The longest path length through this diagram then corresponds to an upper bound of the optimal objective value. The algorithm deciding which nodes to merge is called a merging heuristic. A commonly used heuristic for layer-wise construction is minimum longest path length (minLP), which sorts the nodes in a layer descending by the currently longest path length to them and subsequently merges the worst-ranked nodes to reduce the width of a layer. A shortcoming of this approach is that it neglects the (dis-)similarity between the states it merges, which we assume to have a negative impact on the quality of the finally obtained bound. By means of a simple tie-breaking procedure, we show a way to incorporate the similarity of states into minLP using different distance functions to improve dual bounds for the maximum independent set problem (MISP) and the set cover problem (SCP), providing empirical evidence for our assumption. Furthermore, we extend this procedure by applying similarity-based node merging also to nodes with close but not necessarily identical longest path values. This turns out to be beneficial for weighted problems, where ties are substantially less likely to occur. We evaluate the method on the weighted MISP and tune parameters that control when to apply similarity-based node merging.
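The basic minLP merging step described above can be sketched as follows. The tuple representation of nodes and the union/max merge rule (natural for an independent-set relaxation, where merged states over-approximate the originals) are illustrative assumptions, not the paper's implementation:

```python
# minLP merging for one layer of a relaxed BDD: rank nodes by decreasing
# longest-path value and collapse the worst-ranked ones into a single
# relaxed node so the layer does not exceed max_width.

def minlp_merge(layer, max_width):
    """layer: list of (longest_path_value, state_set) pairs."""
    if len(layer) <= max_width:
        return layer
    ranked = sorted(layer, key=lambda node: node[0], reverse=True)
    keep, tail = ranked[:max_width - 1], ranked[max_width - 1:]
    # The merged node takes the maximum path value and the union of states,
    # so it dominates (over-approximates) every node it absorbs.
    merged_value = max(value for value, _ in tail)
    merged_state = set().union(*(state for _, state in tail))
    return keep + [(merged_value, merged_state)]
```

The tie-breaking idea studied in the paper would refine the `sorted` call with a secondary key based on a state-distance function.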

Link to Repositum

Merging Quality Estimation for Binary Decision Diagrams with Binary Classifiers
Frohner, Nikolaus, Raidl, Günther
Type: Inproceedings; In: Machine Learning, Optimization, and Data Science; Vol: 11943; Pages: 445-457
Show Abstract
Relaxed binary decision diagrams (BDDs) are used in combinatorial optimization as a compact representation of a relaxed solution space. They are directed acyclic multigraphs which are derived from the state space of a recursive dynamic programming formulation of the considered optimization problem. The compactness of a relaxed BDD is achieved by superimposing states, which corresponds to merging BDD nodes in the classical layer-wise top-down BDD construction. Selecting which nodes to merge crucially determines the quality of the resulting BDD and is the task of a merging heuristic, for which the minimum longest path value (minLP) heuristic has turned out to be highly effective for a number of problems. This heuristic sorts the nodes in a layer by decreasing current longest path value and merges the necessary number of worst ranked nodes into one. There are, however, also other merging heuristics available and usually it is not easy to decide which one is more promising to use in which situation. In this work we propose a prediction mechanism to evaluate a set of different merging mechanisms at each layer during the construction of a relaxed BDD, in order to always select and apply the most promising heuristic. This prediction is implemented by either a perfect or by a k-layers lookahead construction of the BDD, gathering feature vectors for two competing merging heuristics which are then fed into a binary classifier. Models based on statistical tests and a feed-forward neural network are considered for the classifier. We study this approach for the maximum weighted independent set problem and in conjunction with a parameterized merging heuristic that takes also the similarity between states into account. We train and validate the binary classifiers on random graphs and finally test on weighted DIMACS instances. 
Results indicate that relaxed BDDs can be obtained whose upper bounds are on average up to 16% better than those of BDDs constructed with the sole use of minLP.

Link to Repositum

A Beam Search Approach to the Traveling Tournament Problem
Frohner, Nikolaus, Neumann, Bernhard, Raidl, Günther R.
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization; Pages: 67-82
Show Abstract
The well-known traveling tournament problem is a hard optimization problem in which a double round robin sports league schedule has to be constructed while minimizing the total travel distance over all teams. The teams start and end their tours at their home venues, are only allowed to play a certain maximum number of games in a row at home or away, and must not play against each other in two consecutive rounds. The latter aspects introduce also a difficult feasibility aspect. In this work, we study a beam search approach based on a recursive state space formulation. We compare different state ordering heuristics for the beam search based on lower bounds derived by means of decision diagrams. Furthermore, we introduce a randomized beam search variant that adds Gaussian noise to the heuristic value of a node for diversifying the search in order to enable a simple yet effective parallelization. In our computational study, we use randomly generated instances to compare and tune algorithmic parameters and present final results on the classical National League and circular benchmark instances. Results show that this purely construction-based method provides mostly better solutions than existing ant-colony optimization and tabu search algorithms and it comes close to the leading simulated annealing based approaches without using any local search. For two circular benchmark instances we found new best solutions for which the last improvement was twelve years ago. The presented state space formulation and lower bound techniques could also be beneficial for exact methods like A* or DFS* and may be used to guide the randomized construction in ACO or GRASP approaches.
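A generic layer-wise beam search of the kind used above can be sketched as follows. The `expand`, `heuristic`, and `is_final` interfaces and the minimization objective are hypothetical placeholders for the paper's tournament-specific state space and decision-diagram lower bounds:

```python
# Generic beam search: expand every state in the current beam, track the
# best final state found, and keep only the beam_width most promising
# (lowest heuristic value) non-final successors for the next layer.

def beam_search(initial, expand, heuristic, is_final, beam_width):
    beam = [initial]
    best = None
    while beam:
        successors = [s for state in beam for s in expand(state)]
        for s in successors:
            if is_final(s) and (best is None or heuristic(s) < heuristic(best)):
                best = s
        beam = sorted((s for s in successors if not is_final(s)),
                      key=heuristic)[:beam_width]
    return best
```

The randomized variant described above would perturb `heuristic` with Gaussian noise per node, so parallel runs explore different parts of the state space.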

Link to Repositum

Layered Fan-Planar Graph Drawings
Biedl, Therese, Chaplick, Steven, Kaufmann, Michael, Montecchiani, Fabrizio, Nöllenburg, Martin, Raftopoulou, Chrysanthi
Type: Inproceedings; In: 45th International Symposium on Mathematical Foundations of Computer Science; Pages: 1-13
Show Abstract
In a fan-planar drawing of a graph an edge can cross only edges with a common end-vertex. In this paper, we study fan-planar drawings that use h (horizontal) layers and are proper, i.e., edges connect adjacent layers. We show that if the embedding of the graph is fixed, then testing the existence of such drawings is fixed-parameter tractable in h, via a reduction to a similar result for planar graphs by Dujmovic et al. If the embedding is not fixed, then we give partial results for h = 2: It was already known how to test the existence of fan-planar proper 2-layer drawings for 2-connected graphs, and we show here how to test this for trees. Along the way, we exhibit other interesting results for graphs with a fan-planar proper h-layer drawing; in particular we bound their pathwidth and show that they have a bar-1-visibility representation.

Link to Repositum

Multi-linear Strategy Extraction for QBF Expansion Proofs via Local Soundness
Schlaipfer, Matthias, Slivovsky, Friedrich, Weissenbacher, Georg, Zuleger, Florian
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2020; Pages: 429-446
Show Abstract
In applications, QBF solvers are expected to not only decide whether a given formula is true or false but also return a solution in the form of a strategy. Determining whether strategies can be efficiently extracted from proof traces generated by QBF solvers is a fundamental research task. Most resolution-based proof systems are known to implicitly support polynomial-time strategy extraction through a simulation of the evaluation game associated with an input formula, but this approach introduces large constant factors and results in unwieldy circuit representations. In this work, we present an explicit polynomial-time strategy extraction algorithm for the ∀-Exp+Res proof system. This system is used by expansion-based solvers that implement counterexample-guided abstraction refinement (CEGAR), currently one of the most effective QBF solving paradigms. Our argument relies on a Curry-Howard style correspondence between strategies and ∀-Exp+Res derivations, where each strategy realizes an invariant obtained from an annotated clause derived in the proof system.

Link to Repositum

Extending Nearly Complete 1-Planar Drawings in Polynomial Time
Eiben, Eduard, Ganian, Robert, Hamm, Thekla, Klute, Fabian, Nöllenburg, Martin
Type: Inproceedings; In: 45th International Symposium on Mathematical Foundations of Computer Science; Pages: 1-16
Show Abstract
The problem of extending partial geometric graph representations such as plane graphs has received considerable attention in recent years. In particular, given a graph G, a connected subgraph H of G and a drawing ℋ of H, the extension problem asks whether ℋ can be extended into a drawing of G while maintaining some desired property of the drawing (e.g., planarity). In their breakthrough result, Angelini et al. [ACM TALG 2015] showed that the extension problem is polynomial-time solvable when the aim is to preserve planarity. Very recently we considered this problem for partial 1-planar drawings [ICALP 2020], which are drawings in the plane that allow each edge to have at most one crossing. The most important question identified and left open in that work is whether the problem can be solved in polynomial time when H can be obtained from G by deleting a bounded number of vertices and edges. In this work, we answer this question positively by providing a constructive polynomial-time decision algorithm.

Link to Repositum

2019
Guidelines for Experimental Algorithmics: A Case Study in Network Analysis
Angriman, Eugenio, Grinten, Alexander van der, Looz, Moritz von, Meyerhenke, Henning, Nöllenburg, Martin, Predari, Maria, Tzovas, Charilaos
Type: Article; In: Algorithms; Vol: 12; Issue: 7; Pages: 127
Show Abstract
The field of network science is a highly interdisciplinary area; for the empirical analysis of network data, it draws algorithmic methodologies from several research fields. Hence, research procedures and descriptions of the technical results often differ, sometimes widely. In this paper we focus on methodologies for the experimental part of algorithm engineering for network analysis - an important ingredient for a research area with empirical focus. More precisely, we unify and adapt existing recommendations from different fields and propose universal guidelines - including statistical analyses - for the systematic evaluation of network analysis algorithms. This way, the behavior of newly proposed algorithms can be properly assessed and comparisons to existing solutions become meaningful. Moreover, as the main technical contribution, we provide SimexPal, a highly automated tool to perform and analyze experiments following our guidelines. To illustrate the merits of SimexPal and our guidelines, we apply them in a case study: we design, perform, visualize and evaluate experiments of a recent algorithm for approximating betweenness centrality, an important problem in network analysis. In summary, both our guidelines and SimexPal shall modernize and complement previous efforts in experimental algorithmics; they are not only useful for network analysis, but also in related contexts.

Link to Repositum

Photonic-integrated circuits with non-planar topologies realized by 3D-printed waveguide overpasses
Nesic, Aleksandar, Blaicher, Matthias, Hoose, Tobias, Hofmann, Andreas, Lauermann, Matthias, Kutuvantavida, Yasar, Nöllenburg, Martin, Randel, Sebastian, Freude, Wolfgang, Koos, Christian
Type: Article; In: Optics Express; Vol: 27; Issue: 12; Pages: 17402
Show Abstract
Complex photonic-integrated circuits (PIC) may have strongly non-planar topologies that require waveguide crossings (WGX) when realized in single-layer integration platforms. The number of WGX increases rapidly with the complexity of the circuit, in particular when it comes to highly interconnected optical switch topologies. Here, we present a concept for WGX-free PIC that relies on 3D-printed freeform waveguide overpasses (WOP). We experimentally demonstrate the viability of our approach using the example of a 4 × 4 switch-and-select (SAS) circuit realized on the silicon photonic platform. We further present a comprehensive graph-theoretical analysis of different n × n SAS circuit topologies. We find that for increasing port counts n of the SAS circuit, the number of WGX increases with n⁴, whereas the number of WOP increases only in proportion to n².

Link to Repositum

Exact Approaches for Network Design Problems with Relays
Leitner, Markus, Ljubić, Ivana, Riedler, Martin, Ruthmair, Mario
Type: Article; In: INFORMS Journal on Computing; Vol: 31; Issue: 1; Pages: 171-192
Show Abstract
In this article we consider the network design problem with relays (NDPR), which gives answers to some important strategic design questions in telecommunication network design. Given a family of origin-destination pairs and a set of existing links these questions are as follows: (1) What are the optimal locations for signal regeneration devices (relays) and how many of them are needed? (2) Could the available infrastructure be enhanced by installing additional links in order to reduce the travel distance and therefore reduce the number of necessary relays? In contrast to previous work on the NDPR, which mainly focused on heuristic approaches, we discuss exact methods based on different mixed-integer linear programming formulations for the problem. We develop branch-and-price and branch-price-and-cut algorithms that build upon models with an exponential number of variables (and constraints). In an extensive computational study, we analyze the performance of these approaches for instances that reflect different real-world settings. Finally, we also point out the relevance of the NDPR in the context of electric mobility.

Link to Repositum

Metabopolis: scalable network layout for biological pathway diagrams in urban map style
Wu, Hsiang-Yun, Nöllenburg, Martin, Sousa, Filipa L., Viola, Ivan
Type: Article; In: BMC Bioinformatics; Vol: 20; Issue: 187
Show Abstract
Background: Biological pathways represent chains of molecular interactions in biological systems that jointly form complex dynamic networks. The network structure changes with the significance of biological experiments, and layout algorithms often sacrifice low-level details to maintain high-level information, which complicates the entire image for large biochemical systems such as human metabolic pathways. Results: Our work is inspired by concepts from urban planning since we create a visual hierarchy of biological pathways, which is analogous to city blocks and grid-like road networks in an urban area. We automate the manual drawing process of biologists by first partitioning the map domain into multiple sub-blocks, and then building the corresponding pathways by routing edges schematically, to maintain the global and local context simultaneously. Our system incorporates constrained floor-planning and network-flow algorithms to optimize the layout of sub-blocks and to distribute the edge density along the map domain. We have developed the approach in close collaboration with domain experts and present their feedback on the pathway diagrams based on selected use cases. Conclusions: We present a new approach for computing biological pathway maps that untangles visual clutter by decomposing large networks into semantic sub-networks and bundling long edges to create space for presenting relationships systematically.

Link to Repositum

External Labeling Techniques: A Taxonomy and Survey
Bekos, Michael A., Niedermann, Benjamin, Nöllenburg, Martin
Type: Article; In: Computer Graphics Forum; Vol: 38; Issue: 3; Pages: 833-860
External labeling is frequently used for annotating features in graphical displays and visualizations, such as technical illustrations, anatomical drawings, or maps, with textual information. Such a labeling connects features within an illustration by thin leader lines with their labels, which are placed in the empty space surrounding the image. Over the last twenty years, a large body of literature in diverse areas of computer science has been published that investigates many different aspects, models, and algorithms for automatically placing external labels for a given set of features. This state-of-the-art report introduces a first unified taxonomy for categorizing the different results in the literature and then presents a comprehensive survey of the state of the art, a sketch of the most relevant algorithmic techniques for external labeling algorithms, as well as a list of open research challenges in this multidisciplinary research field.

Link to Repositum

A Memetic Algorithm for Competitive Facility Location Problems
Biesinger, Benjamin, Hu, Bin, Raidl, Günther R.
Type: Book Contribution; In: Business and Consumer Analytics: New Ideas; Pages: 637-660
We study a memetic algorithm to solve diverse variants of competitive facility location problems. Two non-cooperating companies enter a market sequentially and compete for market share. The first decision maker, the leader, aims to choose a set of locations which maximizes his market share knowing that a follower will enter the same market, lowering the leader's market share. For this bi-level combinatorial optimization problem several customer behaviour scenarios and demand models are studied. A memetic algorithm is applied to find a good set of locations for the leader and the solution evaluation consisting of finding near optimal locations for the follower is performed by greedy algorithms and the use of mixed integer linear programming models. We conclude this chapter with a case study for two hypermarket chains who both want to open stores in Vienna using real world demographic data. In this study we consider six different customer behaviour scenarios and present numerical and graphical results which show the effectiveness of the presented approach.

Link to Repositum

Computational Thinking und ADA.wien
Szeider, Stefan
Type: Presentation

Link to Repositum

Geometric Systems of Unbiased Representatives
Banik, Aritra, Bhattacharya, Bhaswar B., Bhore, Sujoy, Martinez-Sandoval, Leonardo
Type: Presentation
Let P be a set of points in R^d, B a bicoloring of P, and O a family of geometric objects (that is, intervals, boxes, balls, etc.). An object from O is called balanced with respect to B if it contains the same number of points from each color of B. For a collection B of bicolorings of P, a geometric system of unbiased representatives (G-SUR) is a subset O′ ⊆ O such that for any bicoloring B of B there is an object in O′ that is balanced with respect to B. We study the problem of finding G-SURs. We obtain general bounds on the size of G-SURs consisting of intervals, size-restricted intervals, axis-parallel boxes and Euclidean balls. We show that the G-SUR problem is NP-hard even in the simple case of points on a line and interval ranges. Furthermore, we study a related problem on determining the size of the largest and smallest balanced intervals for points on the real line with a random distribution and coloring. Our results are a natural extension to a geometric context of the work initiated by Balachandran et al. on arbitrary systems of unbiased representatives.
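The balancedness notion for intervals on the real line can be made concrete with a small sketch (illustrative only; the function names and the brute-force search are ours, not from the paper):

```python
from itertools import combinations

def is_balanced(points, coloring, lo, hi):
    """True iff the interval [lo, hi] contains equally many points
    of each of the two colors under the given bicoloring."""
    inside = [coloring[p] for p in points if lo <= p <= hi]
    return inside.count("red") == inside.count("blue")

def smallest_balanced_interval(points, coloring):
    """Brute force over intervals spanned by point pairs; returns
    the shortest balanced one, or None if no such pair exists."""
    best = None
    for lo, hi in combinations(sorted(points), 2):
        if is_balanced(points, coloring, lo, hi):
            if best is None or hi - lo < best[1] - best[0]:
                best = (lo, hi)
    return best
```

For instance, for points 1..4 colored red, red, blue, blue, the whole range [1, 4] is balanced, and the shortest balanced interval is [2, 3].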

Link to Repositum

Parameterized Complexity Results for the Completion and Clustering of Incomplete Data
Szeider, Stefan, Ganian, Robert, Kanj, Iyad, Ordyniak, Sebastian
Type: Presentation
We consider the following data completion problem: Given an incomplete matrix and a set of constraints, the goal is to complete the matrix such that each row satisfies the input constraints and the complete matrix can be clustered into few subspaces with low rank. This problem generalizes several problems in data mining and machine learning, and is related to ranking problems over incomplete data studied in the context of knowledge compilation by Choi et al. We draw a detailed complexity landscape with respect to natural parameters and well-studied CSP fragments. Joint work with Robert Ganian, Iyad Kanj, and Sebastian Ordyniak.

Link to Repositum

An Introduction to Knowledge Compilation
Slivovsky, Friedrich
Type: Presentation

Link to Repositum

A Compendium of Parameterized Problems at Higher Levels of the Polynomial Hierarchy
Haan, Ronald de, Szeider, Stefan
Type: Article; In: Algorithms; Vol: 12; Issue: 9; Pages: 188
We present a list of parameterized problems together with a complexity classification of whether they allow a fixed-parameter tractable reduction to SAT or not. These problems are parameterized versions of problems whose complexity lies at the second level of the Polynomial Hierarchy or higher.

Link to Repositum

A SAT Approach to Branchwidth
Lodha, Neha, Ordyniak, Sebastian, Szeider, Stefan
Type: Article; In: ACM Transactions on Computational Logic; Vol: 20; Issue: 3; Pages: 1-24
Branch decomposition is a prominent method for structurally decomposing a graph, a hypergraph, or a propositional formula in conjunctive normal form. The width of a branch decomposition provides a measure of how well the object is decomposed. For many applications, it is crucial to compute a branch decomposition whose width is as small as possible. We propose an approach based on Boolean Satisfiability (SAT) to finding branch decompositions of small width. The core of our approach is an efficient SAT encoding that determines with a single SAT-call whether a given hypergraph admits a branch decomposition of a certain width. For our encoding, we propose a natural partition-based characterization of branch decompositions. The encoding size imposes a limit on the size of the given hypergraph. To break through this barrier and to scale the SAT approach to larger instances, we develop a new heuristic approach where the SAT encoding is used to locally improve a given candidate decomposition until a fixed-point is reached. This new SAT-based local improvement method now scales to instances with several thousands of vertices and edges.

Link to Repositum

Dependency Learning for QBF
Peitl, Tomáš, Slivovsky, Friedrich, Szeider, Stefan
Type: Article; In: Journal of Artificial Intelligence Research; Vol: 65; Pages: 181-208
Quantified Boolean Formulas (QBFs) can be used to succinctly encode problems from domains such as formal verification, planning, and synthesis. One of the main approaches to QBF solving is Quantified Conflict Driven Clause Learning (QCDCL). By default, QCDCL assigns variables in the order of their appearance in the quantifier prefix so as to account for dependencies among variables. Dependency schemes can be used to relax this restriction and exploit independence among variables in certain cases, but only at the cost of nontrivial interferences with the proof system underlying QCDCL. We introduce dependency learning, a new technique for exploiting variable independence within QCDCL that allows solvers to learn variable dependencies on the fly. The resulting version of QCDCL enjoys improved propagation and increased flexibility in choosing variables for branching while retaining ordinary (long-distance) Q-resolution as its underlying proof system. We show that dependency learning can achieve exponential speedups over ordinary QCDCL. Experiments on standard benchmark sets demonstrate the effectiveness of this technique.

Link to Repositum

On the parameterized complexity of (k, s)-SAT
Paulusma, Daniël, Szeider, Stefan
Type: Article; In: Information Processing Letters; Vol: 143; Pages: 34-36
Let (k, s)-SAT be the k-SAT problem restricted to formulas in which each variable occurs in at most s clauses. It is well known that (3, 3)-SAT is trivial and (3, 4)-SAT is NP-complete. Answering a question posed by Iwama and Takaki (DMTCS 1997), Berman, Karpinski and Scott (DAM 2007) gave, for every fixed t ≥ 0, a polynomial-time algorithm for (3, 4)-SAT restricted to formulas in which the number of variables that occur in four clauses is t. Parameterized by t, their algorithm runs in XP time. We extend their result by giving, for every k ≥ 3 and s ≥ k, an FPT algorithm for (k, s)-SAT when parameterized by the number t of variables occurring in more than k clauses.
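The parameter t used in this result is easy to compute from a CNF formula. A minimal sketch (the encoding of clauses as tuples of signed integers and the function name are ours, not from the paper):

```python
from collections import Counter

def occurrence_parameter(clauses, k):
    """Given a CNF as a list of clauses (tuples of signed integer
    literals, DIMACS-style), return the number t of variables that
    occur in more than k clauses."""
    occ = Counter()
    for clause in clauses:
        # count each clause at most once per variable, even if a
        # variable appears in it with both polarities
        for var in {abs(lit) for lit in clause}:
            occ[var] += 1
    return sum(1 for count in occ.values() if count > k)
```

For a (k, s)-SAT instance, t is the number of variables exceeding the occurrence bound k, i.e., the quantity in which the FPT algorithm's running time is parameterized.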

Link to Repositum

A Survey on Computing Schematic Network Maps: The Challenge to Interactivity
Wu, Hsiang-Yun, Niedermann, Benjamin, Takahashi, Shigeo, Nöllenburg, Martin
Type: Presentation
Schematic maps are in daily use to show the connectivity of subway systems and to facilitate travellers to plan their journeys effectively. This study surveys up-to-date algorithmic approaches in order to give an overview of the state of the art in schematic network mapping. The study investigates the hypothesis that the choice of algorithmic approach is often guided by the requirements of the mapping application. For example, an algorithm that computes globally optimal solutions for schematic maps is capable of producing results for printing, while it is not suitable for computing instant layouts due to its long running time. Our analysis and discussion, therefore, focus on the computational complexity of the problem formulation and the running times of the schematic map algorithms, including algorithmic network layout techniques and station labeling techniques. The correlation between problem complexity and running time is then visually depicted using scatter plot diagrams. Moreover, since metro maps are common metaphors for data visualization, we also investigate online tools and application domains using metro map representations for analytics purposes, and finally summarize the potential future opportunities for schematic maps.

Link to Repositum

World map of recipes
Li, Guangping, Nickel, Soeren, Nöllenburg, Martin, Viola, Ivan, Wu, Hsiang-Yun
Type: Presentation
This poster visualises the Meal Ingredients dataset with 151 international food recipes and their corresponding ingredients. The underlying graph layout in the image is automatically generated using a new multi-level force-based algorithm developed by the authors, but not yet published. The background flags were added manually to identify the countries from the data set. The algorithm aims to untangle mutually nested subgraphs by harmonizing the available space for the labels and improving edge visibility by duplicating high-frequency ingredient nodes. Ingredients occurring in multiple countries also receive at least one node per country. The idea is inspired by map diagrams, which often show the semantics enclosed by country boundaries. In our diagram, countries are represented by octolinear polygons, and are placed next to each other if they share many ingredients in their recipes. The actual placement of the countries by the algorithm is entirely data-driven. As we can see, this design naturally gathers countries that are located on the same continent, due to the accessibility of the ingredients. The names of recipes are visualized using textual labels with sharp corners, and they are enclosed by the country polygon they belong to. In contrast, ingredients are represented by textual labels with rounded corners. Moreover, ingredients are visually classified into common (pink) and special (blue) ingredients based on their frequency in the dataset. For visually analyzing the data set, we can generate smoothed spanning trees along the boundaries of an (invisible) Voronoi diagram of all textual labels to connect identical nodes to visually integrate all copies of one ingredient. For example, we highlighted the ingredient "soy sauce", one of the most commonly used ingredients in Asia, to discover that it has spread to the UK as well. We can also perform visual queries for related recipes based on sharing rare ingredients. For example, the British dish "steak and kidney pie" is highlighted in green together with three blue spanning trees connecting all recipes related to that dish via at least one of its special (blue) ingredients.

Link to Repositum

Planar drawings of fixed-mobile bigraphs
Bekos, Michael A., De Luca, Felice, Didimo, Walter, Mchedlidze, Tamara, Nöllenburg, Martin, Symvonis, Antonios, Tollis, Ioannis
Type: Article; In: Theoretical Computer Science; Vol: 795; Pages: 408-419
A fixed-mobile bigraph G is a bipartite graph such that the vertices of one partition set are given with fixed positions in the plane and the mobile vertices of the other partition, together with the edges, must be added to the drawing without any restriction on their positions. We assume that G is planar and study the problem of finding a planar straight-line drawing of G. We show that deciding whether G admits such a drawing is NP-hard in the general case. Under the assumption that each mobile vertex is placed in the convex hull of its neighbors, we are able to prove that the problem is also in NP. Moreover, if the intersection graph of these convex hulls is a path, a cycle or, more generally, a cactus, the problem is polynomial-time solvable through a dynamic programming approach. Finally, we describe linear-time testing algorithms when the fixed vertices are collinear or when they lie on a finite set of horizontal levels (lines) and no edge can intersect a level except at its fixed vertex.

Link to Repositum

Job sequencing with one common and multiple secondary resources: An A∗/Beam Search based anytime algorithm
Horn, Matthias, Raidl, Günther, Blum, Christian
Type: Article; In: Artificial Intelligence; Vol: 277; Issue: 103173; Pages: 103173
We consider a sequencing problem that arises, for example, in the context of scheduling patients in particle therapy facilities for cancer treatment. A set of non-preemptive jobs needs to be scheduled, where each job requires two resources: (1) a common resource that is shared by all jobs and (2) a secondary resource, which is shared with only a subset of the other jobs. While the common resource is only required for a part of the job's processing time, the secondary resource is required for the whole duration. The objective is to minimize the makespan. First we show that the tackled problem is NP-hard and provide three different lower bounds for the makespan. These lower bounds are then exploited in a greedy construction heuristic and a novel exact anytime A∗ algorithm, which uses an advanced diving mechanism based on Beam Search and Local Search to find good heuristic solutions early. For comparison we also provide a basic Constraint Programming model solved with the ILOG CP optimizer. An extensive experimental evaluation on two types of problem instances shows that the approach works extremely well even for large instances with up to 2000 jobs. It typically yields either optimal solutions or solutions with an optimality gap of less than 1%.

Link to Repositum

Short Plane Supports for Spatial Hypergraphs
Castermans, Thom, van Garderen, Mereke, Meulemans, Wouter, Nöllenburg, Martin, Yuan, Xiaoru
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 23; Issue: 3; Pages: 463-498
A graph G = (V, E) is a support of a hypergraph H = (V, S) if every hyperedge induces a connected subgraph in G. Supports are used for certain types of hypergraph drawings, also known as set visualizations. In this paper we consider visualizing spatial hypergraphs, where each vertex has a fixed location in the plane. This scenario appears when, e.g., modeling set systems of geospatial locations as hypergraphs. Following established aesthetic quality criteria, we are interested in finding supports that yield plane straight-line drawings with minimum total edge length on the input point set V. From a theoretical point of view, we first show that the problem is NP-hard already under rather mild conditions, and additionally provide a negative approximability result. Therefore, the main focus of the paper lies on practical heuristic algorithms as well as an exact, ILP-based approach for computing short plane supports. We report results from computational experiments that investigate the effect of requiring planarity and acyclicity on the resulting support length. Furthermore, we evaluate the performance and trade-offs between solution quality and speed of heuristics relative to each other and compared to optimal solutions.
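The support property and the total-edge-length objective can both be checked with a few lines of code. A minimal sketch (function names are ours; it assumes nonempty hyperedges and edges given as vertex pairs):

```python
import math

def is_support(graph_edges, hyperedges):
    """True iff every hyperedge induces a connected subgraph of the
    graph, i.e. the graph is a support of the hypergraph."""
    for he in map(set, hyperedges):
        # BFS/DFS restricted to the subgraph induced by the hyperedge
        start = next(iter(he))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for a, b in graph_edges:
                if u in (a, b):
                    v = b if u == a else a
                    if v in he and v not in seen:
                        seen.add(v)
                        stack.append(v)
        if seen != he:
            return False
    return True

def total_length(graph_edges, pos):
    """Total Euclidean edge length for fixed vertex positions."""
    return sum(math.dist(pos[a], pos[b]) for a, b in graph_edges)
```

With such a checker, heuristics for short plane supports can be compared by verifying feasibility first and then ranking candidates by `total_length`.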

Link to Repositum

Minimizing Crossings In Constrained Two-Sided Circular Graph Layouts
Klute, Fabian, Nöllenburg, Martin
Type: Article; In: Journal of Computational Geometry; Vol: 10; Issue: 2; Pages: 45-69
Circular graph layout is a popular drawing style, in which vertices are placed on a circle and edges are drawn as straight chords. Crossing minimization in circular layouts is NP-hard. One way to allow for fewer crossings in practice is two-sided layouts, which draw some edges as curves in the exterior of the circle. In fact, one- and two-sided circular layouts are equivalent to one-page and two-page book drawings, i.e., graph layouts with all vertices placed on a line (the spine) and edges drawn in one or two distinct half-planes (the pages) bounded by the spine. In this paper we study the problem of minimizing the crossings for a fixed cyclic vertex order by computing an optimal k-plane set of exteriorly drawn edges for k ≥ 1, extending the previously studied case k = 0. We show that this relates to finding bounded-degree maximum-weight induced subgraphs of circle graphs, which is a graph-theoretic problem of independent interest. We show NP-hardness for arbitrary k, present an efficient algorithm for k = 1, and generalize it to an explicit XP-time algorithm for any fixed k. For the practically interesting case k = 1 we implemented our algorithm and present experimental results that confirm its applicability.
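For a fixed cyclic vertex order, counting chord crossings reduces to an interleaving test on endpoint positions: two chords cross exactly when one chord has one endpoint inside and one outside the arc spanned by the other. A small illustrative sketch (the function name is ours, not from the paper):

```python
from itertools import combinations

def crossings(order, edges):
    """Count pairwise crossings of straight chords for a fixed
    cyclic vertex order: two chords cross iff their endpoints
    interleave along the circle."""
    pos = {v: i for i, v in enumerate(order)}

    def interleave(e, f):
        a, b = sorted((pos[e[0]], pos[e[1]]))
        c, d = sorted((pos[f[0]], pos[f[1]]))
        return a < c < b < d or c < a < d < b

    return sum(1 for e, f in combinations(edges, 2) if interleave(e, f))
```

In a two-sided layout, the edge set is partitioned into an interior and an exterior page and the crossings of each page are counted separately with the same test, which is why moving an interleaving chord to the exterior can eliminate a crossing.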

Link to Repositum

Parameterized Complexity of Asynchronous Border Minimization
Ganian, Robert, Kronegger, Martin, Pfandler, Andreas, Popa, Alexandru
Type: Article; In: Algorithmica; Vol: 81; Issue: 1; Pages: 201-223
Microarrays are research tools used in gene discovery as well as disease and cancer diagnostics. Two prominent but challenging problems related to microarrays are the Border Minimization Problem (BMP) and the Border Minimization Problem with given placement (P-BMP). The common task of these two problems is to create so-called probe sequences (essentially a string) in a microarray. Here, the goal of the former problem is to determine an assignment of each probe sequence to a unique cell of the array and afterwards to construct the sequences at their respective cells while minimizing the border length of the probes. In contrast, for the latter problem the assignment of the probes to the cells is already given. In this paper we investigate the parameterized complexity of the natural exhaustive variants of BMP and P-BMP, termed BMP^e and P-BMP^e respectively, under several natural parameters. We show that BMP^e and P-BMP^e are in FPT under the following two combinations of parameters: (1) the size of the alphabet (c), the maximum length of a sequence (string) in the input (ℓ) and the number of rows of the microarray (r); and, (2) the size of the alphabet and the size of the border length (o). Furthermore, P-BMP^e is in FPT when parameterized by c and ℓ. We complement our tractability results with a number of corresponding hardness results.

Link to Repositum

On the readability of leaders in boundary labeling
Barth, Lukas, Gemsa, Andreas, Niedermann, Benjamin, Nöllenburg, Martin
Type: Article; In: Information Visualization; Vol: 18; Issue: 1; Pages: 110-132
External labeling deals with annotating features in images with labels that are placed outside of the image and are connected by curves (so-called leaders) to the corresponding features. While external labeling has been extensively investigated from a perspective of automatization, the research on its readability has been neglected. In this article, we present the first formal user study on the readability of leader types in boundary labeling, a special variant of external labeling that considers rectangular image contours. We consider the four most studied leader types (straight, L-shaped, diagonal, and S-shaped) with respect to their performance, that is, whether and how fast a viewer can assign a feature to its label and vice versa. We give a detailed analysis of the results regarding the readability of the four models and discuss their aesthetic qualities based on the users' preference judgments and interviews. As a consequence of our experiment, we can generally recommend L-shaped leaders as the best compromise between measured task performance and subjective preference ratings, while straight and diagonal leaders received mixed ratings in the two measures. S-shaped leaders are generally not recommended from a practical point of view.

Link to Repositum

Long-Distance Q-Resolution with Dependency Schemes
Peitl, Tomáš, Slivovsky, Friedrich, Szeider, Stefan
Type: Article; In: Journal of Automated Reasoning; Vol: 63; Issue: 1; Pages: 127-155
Resolution proof systems for quantified Boolean formulas (QBFs) provide a formal model for studying the limitations of state-of-the-art search-based QBF solvers that use these systems to generate proofs. We study a combination of two proof systems supported by the solver DepQBF: Q-resolution with generalized universal reduction according to a dependency scheme and long distance Q-resolution. We show that the resulting proof system-which we call long-distance Q(D)-resolution-is sound for the reflexive resolution-path dependency scheme. In fact, we prove that it admits strategy extraction in polynomial time. This comes as an application of a general result, by which we identify a whole class of dependency schemes for which long-distance Q(D)-resolution admits polynomial-time strategy extraction. As a special case, we obtain soundness and polynomial-time strategy extraction for long distance Q(D)-resolution with the standard dependency scheme. We further show that search-based QBF solvers using a dependency scheme D and learning with long-distance Q-resolution generate long-distance Q(D)-resolution proofs. The above soundness results thus translate to partial soundness results for such solvers: they declare an input QBF to be false only if it is indeed false. Finally, we report on experiments with a configuration of DepQBF that uses the standard dependency scheme and learning based on long-distance Q-resolution.

Link to Repositum

On the Potentials and Dilemmas of Cooperative/White-Label Deliveries based on Selected Austrian Demonstration Cases
Prandtstetter, Matthias, Biesinger, Benjamin, Hu, Bin, Nolz, Pamela, Reinthaler, Martin, Zajicek, Jürgen, Angelini, Alessandra, Hauger, Georg
Type: Inproceedings; In: Proceedings of the 6th International Physical Internet Conference IPIC 2019; Pages: 1-7
One of the main pillars of the Physical Internet (PI) is cooperation. One possible form of cooperation in freight transportation is bundling. As soon as bundling is in focus, we have to think about locations where this bundling might take place, which are, normally, hubs. So, the main idea would be that different freight carriers meet at a specific hub and exchange their freight according to some (clever) planning such that redundancies in trips are overcome. E.g., instead of two carriers each serving both regions A and B, they cooperate such that the first carrier serves only region A and the other only region B. Even though the general idea is quite clear, details are sometimes more complicated. In this paper, potentials and dilemmas related to cooperative delivery models based on the observations made in selected Austrian case studies are outlined.

Link to Repositum

Efficient non-segregated routing for reconfigurable demand-aware networks
Fenz, Thomas, Foerster, Klaus-Tycho, Schmid, Stefan, Villedieu, Anaïs
Type: Inproceedings; In: 2019 IFIP Networking Conference (IFIP Networking)
More and more networks are becoming reconfigurable: not just the routing can be programmed, but the physical layer itself as well. Various technologies enable this programmability, ranging from optical circuit switches to beamformed wireless connections and free-space optical interconnects. Existing reconfigurable network topologies are typically hybrid in nature, consisting of static and reconfigurable links. However, even though the static and reconfigurable links form a joint structure, routing policies are artificially segregated and hence do not fully exploit the network resources: the state of the art is to route large elephant flows on direct reconfigurable links, whereas the remaining traffic is left to the static network topology. Recent work showed that such artificial segregation is inefficient, but did not provide the tools to actually leverage the benefits of non-segregated routing. In this paper, we provide several algorithms which take advantage of non-segregated routing, by jointly optimizing topology and routing. We compare our algorithms to segregated routing policies and also evaluate their performance in workload-driven simulations, based on real-world traffic traces. We find that our algorithms not only outperform segregated routing policies in various settings, but also come close to the optimal solution, computed by a mixed integer program formulation, also presented in this paper. Finally, we also provide insights into the complexity of the underlying combinatorial optimization problem, by deriving approximation hardness results.

Link to Repositum

Exploiting Similar Behavior of Users in a Cooperative Optimization Approach for Distributing Service Points in Mobility Applications
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther
Type: Inproceedings; In: Machine Learning, Optimization, and Data Science; Pages: 738-750
In this contribution we address scaling issues of our previously proposed cooperative optimization approach (COA) for distributing service points for mobility applications in a geographical area. COA is an iterative algorithm that solves the problem by combining an optimization component with user interaction on a large scale and a machine learning component that provides the objective function for the optimization. In each iteration candidate solutions are generated, suggested to the future potential users for evaluation, the machine learning component is trained on the basis of the collected feedback, and the optimization is used to find a new solution fitting the needs of the users as well as possible. While the former concept study showed promising results for small instances, the number of users that could be considered was quite limited and each user had to evaluate a relatively large number of candidate solutions. Here we deviate from this previous approach by using matrix factorization as central machine learning component in order to identify and exploit similar needs of many users. Furthermore, instead of the black-box optimization we are now able to apply mixed integer linear programming to obtain a best solution in each iteration. While being still a conceptual study, experimental simulation results clearly indicate that the approach works in the intended way and scales better to more users.

Link to Repositum

VNS and PBIG as Optimization Cores in a Cooperative Optimization Approach for Distributing Service Points
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther
Type: Inproceedings; In: EXTENDED ABSTRACTS-Computer Aided Systems Theory 2019; Pages: 70-71
We consider a variant of the facility location problem [2]. The task is to find an optimal subset of locations within a certain geographical area for constructing service points in order to satisfy customer demands as well as possible. This general scenario has a wide range of real-world applications. More specifically, we have the setup of stations for mobility purposes in mind, such as constructing bike sharing stations for a public bike sharing system, rental stations for car sharing, or charging stations for electric vehicles. A main challenge with such optimization problems is to come up with reliable data for existing demand that may be fulfilled. Geographic and demographic data is usually combined with the special knowledge of points of interest and upfront surveys of potential users, but almost always this only yields a crude estimate of the real existing demand and final acceptance of the system. Instead of acquiring demand information from potential users upfront, we recently proposed a cooperative optimization approach, in which potential users are tightly integrated on a large scale in the optimization process [3]. For a more general review on cooperative optimization methods see [5]. The method iteratively generates solution candidates that are presented to users for evaluation. A surrogate objective function is trained by the users' feedback and used by an optimization core. The process is iterated on a large scale with many potential users and several rounds until a satisfactory solution is reached.

Link to Repositum

Parameterized Algorithms for Book Embedding Problems
Bhore, Sujoy, Ganian, Robert, Montecchiani, Fabrizio, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2019; Vol: 11904; Pages: 365-378
A k-page book embedding of a graph G draws the vertices of G on a line and the edges on k half-planes (called pages) bounded by this line, such that no two edges on the same page cross. We study the problem of determining whether G admits a k-page book embedding both when the linear order of the vertices is fixed, called Fixed-Order Book Thickness, or not fixed, called Book Thickness. Both problems are known to be NP-complete in general. We show that Fixed-Order Book Thickness and Book Thickness are fixed-parameter tractable parameterized by the vertex cover number of the graph and that Fixed-Order Book Thickness is fixed-parameter tractable parameterized by the pathwidth of the vertex order.

Link to Repositum

The Balanced Connected Subgraph Problem
Bhore, Sujoy Kumar, Chakraborty, Sourav, Jana, Satyabrata, Mitchell, Joseph S. B., Pandit, Supantha, Roy, Sasanka
Type: Inproceedings; In: Algorithms and Discrete Applied Mathematics; Pages: 201-215
The problem of computing induced subgraphs that satisfy some specified restrictions arises in various applications of graph algorithms and has been well studied. In this paper, we consider the following Balanced Connected Subgraph (shortly, BCS) problem. The input is a graph G=(V,E), with each vertex in the set V having an assigned color, "red" or "blue". We seek a maximum-cardinality subset V′⊆V of vertices that is balanced (having exactly |V′|/2 red nodes and |V′|/2 blue nodes), such that the subgraph induced by the vertex set V′ in G is connected. We show that the BCS problem is NP-hard, even for bipartite graphs G (with red/blue color assignment not necessarily being a proper 2-coloring). Further, we consider this problem for various classes of the input graph G, including, e.g., planar graphs, chordal graphs, trees, split graphs, bipartite graphs with a proper red/blue 2-coloring, and graphs with diameter 2. For each of these classes either we prove NP-hardness or design a polynomial time algorithm.
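On tiny instances, the BCS problem can be solved by exhaustive search, which also makes the definition concrete. An illustrative brute-force sketch (exponential time, names ours, not the paper's algorithms):

```python
from itertools import combinations

def largest_bcs(vertices, edges, color):
    """Brute-force Balanced Connected Subgraph: return a largest
    vertex subset with equally many red and blue vertices whose
    induced subgraph is connected. For tiny instances only."""
    def connected(sub):
        sub = set(sub)
        start = next(iter(sub))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for a, b in edges:
                if u in (a, b):
                    v = b if u == a else a
                    if v in sub and v not in seen:
                        seen.add(v)
                        stack.append(v)
        return seen == sub

    # try subsets from largest to smallest; first balanced connected
    # subset found has maximum cardinality
    for r in range(len(vertices), 0, -1):
        for sub in combinations(vertices, r):
            reds = sum(1 for v in sub if color[v] == "red")
            if 2 * reds == len(sub) and connected(sub):
                return sub
    return ()
```

On a path 1-2-3-4 colored red, blue, red, red, the largest balanced connected subgraph has two vertices; recoloring vertex 3 blue makes the whole path balanced.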

Link to Repositum

Recognizing embedded caterpillars with weak unit disk contact representations is NP-hard
Chiu, Man-Kwun, Cleve, Jonas, Nöllenburg, Martin
Type: Inproceedings; In: Extended abstract of EuroCG 2019; Pages: 1-9
Show Abstract
Weak unit disk contact graphs are graphs that admit a representation of the nodes as a collection of internally disjoint unit disks whose boundaries touch if there is an edge between the corresponding nodes. We provide a gadget-based reduction to show that recognizing embedded caterpillars that admit a weak unit disk contact representation is NP-hard.

Link to Repositum

Efficient Segment Folding is Hard
Horiyama, Takashi, Klute, Fabian, Korman, Matias, Parada, Irene, Uehara, Ryuhei, Yamanaka, Katsuhisa
Type: Inproceedings; In: Proceedings of the 31st Canadian Conference on Computational Geometry; Pages: 8
Show Abstract
We introduce a computational origami problem which we call the segment folding problem: given a set of n line segments in the plane, the aim is to make creases along all segments in the minimum number of folding steps. Note that a folding might alter the relative position between the segments, and a segment could split into two. We show that it is NP-hard to determine if n line segments can be folded in n simple folding operations.

Link to Repositum

Exploring Semi-Automatic Map Labeling
Klute, Fabian, Li, Guangping, Löffler, Raphael, Nöllenburg, Martin, Schmidt, Manuela
Type: Inproceedings; In: Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems
Show Abstract
Label placement in maps is a very challenging task that is critical for the overall map quality. Most previous work focused on designing and implementing fully automatic solutions, but the resulting visual and aesthetic quality has not reached the same level of sophistication that skilled human cartographers achieve. We investigate a different strategy that combines the strengths of humans and algorithms. In our proposed labeling method, first an initial labeling is computed that has many well-placed labels but is not claiming to be perfect. Instead it serves as a starting point for an expert user who can then interactively and locally modify the labeling where necessary. In an iterative human-in-the-loop process alternating between user modifications and local algorithmic updates and refinements the labeling can be tuned to the user's needs. We demonstrate our approach by performing different possible modification steps in a sample workflow with a prototypical interactive labeling editor. Further, we report computational performance results from a simulation experiment in QGIS, which investigates the differences between exact and heuristic algorithms for semi-automatic map labeling. To that end, we compare several alternatives for recomputing the labeling after local modifications and updates, as a major ingredient for an interactive labeling editor.

Link to Repositum

Balanced Covering Problem in Bicolored point Sets
Bhore, Sujoy Kumar, Pandit, Supantha, Roy, Sasanka
Type: Inproceedings; In: Proceedings of EuroCG 2019; Pages: 6
Show Abstract
We study a variation of the classical set cover problem called the balanced covering (BC) problem on a set of red and blue points in the Euclidean plane. Let P be a set of red and blue points in the plane. An object is called a balanced object with respect to P if it contains an equal number of red and blue points from P. In the BC problem, the objective is to cover the points in P with a minimum number of homogeneous geometric objects (i.e., unit squares, intervals) such that each object is balanced. For points in the plane, we prove that the BC problem is NP-hard when the covering objects are unit squares. For points on a line, we show that if the ratio of the total numbers of reds and blues is more than 2, then there exists no solution of the BC problem. Subsequently, we devise a linear-time exact algorithm for the BC problem with intervals. Finally, we study the problem of computing a balanced object of maximum cardinality. For this, we give polynomial-time algorithms with unit squares in the plane and intervals on a line.
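For points on a line, the balance condition for a single covering interval is straightforward to test. A small illustrative sketch (an assumed helper, not code from the paper):

```python
# Illustrative helper: an interval [lo, hi] on the line is balanced with
# respect to a bicolored point set iff it covers equally many reds and blues.

def is_balanced_interval(points, lo, hi):
    """points: list of (x, color) pairs with color 'r' or 'b'."""
    covered = [c for x, c in points if lo <= x <= hi]
    return covered.count('r') == covered.count('b')

points = [(1, 'r'), (2, 'b'), (3, 'r'), (5, 'b'), (6, 'b')]
print(is_balanced_interval(points, 1, 3))  # covers r,b,r -> False
print(is_balanced_interval(points, 1, 2))  # covers r,b   -> True
```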

Link to Repositum

Maximizing Ink in Partial Edge Drawings of k-plane Graphs
Hummel, Matthias, Klute, Fabian, Nickel, Soeren, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2019; Vol: 11904; Pages: 323-336
Show Abstract
Partial edge drawing (PED) is a drawing style for non-planar graphs, in which edges are drawn only partially as pairs of opposing stubs on the respective end-vertices. In a PED, by erasing the central parts of edges, all edge crossings and the resulting visual clutter are hidden in the undrawn parts of the edges. In symmetric partial edge drawings (SPEDs), the two stubs of each edge are required to have the same length. It is known that maximizing the ink (or the total stub length) when transforming a straight-line graph drawing with crossings into a SPED is tractable for 2-plane input drawings, but NP-hard for unrestricted inputs. We show that the problem remains NP-hard even for 3-plane input drawings and establish NP-hardness of ink maximization for PEDs of 4-plane graphs. Yet, for k-plane input drawings whose edge intersection graph forms a collection of trees or, more generally, whose intersection graph has bounded treewidth, we present efficient algorithms for computing maximum-ink PEDs and SPEDs. We implemented the treewidth-based algorithms and report on a brief experimental evaluation.

Link to Repositum

On Strict (Outer-)Confluent Graphs
Förster, Henry, Ganian, Robert, Klute, Fabian, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2019; Vol: 11904; Pages: 147-161
Show Abstract
A strict confluent (SC) graph drawing is a drawing of a graph with vertices as points in the plane, where vertex adjacencies are represented not by individual curves but rather by unique smooth paths through a planar system of junctions and arcs. If all vertices of the graph lie in the outer face of the drawing, the drawing is called a strict outerconfluent (SOC) drawing. SC and SOC graphs were first considered by Eppstein et al. in Graph Drawing 2013. Here, we establish several new relationships between the class of SC graphs and other graph classes, in particular string graphs and unit-interval graphs. Further, we extend earlier results about special bipartite graph classes to the notion of strict outerconfluency, show that SOC graphs have cop number two, and establish that tree-like (Δ-)SOC graphs have bounded cliquewidth.

Link to Repositum

A Biased Random Key Genetic Algorithm with Rollout Evaluations for the Resource Constraint Job Scheduling Problem
Blum, Christian, Thiruvady, Dhananjay, Ernst, Andreas T., Horn, Matthias, Raidl, Günther R.
Type: Inproceedings; In: AI 2019: Advances in Artificial Intelligence; Pages: 549-560
Show Abstract
The resource constraint job scheduling problem considered in this work is a difficult optimization problem that was defined in the context of the transportation of minerals from mines to ports. The main characteristics are that all jobs share a common limiting resource and that the objective function concerns the minimization of the total weighted tardiness of all jobs. The algorithms proposed in the literature for this problem have a common disadvantage: they require a huge amount of computation time. Therefore, the main goal of this work is the development of an algorithm that can compete with the state of the art, while using much less computational resources. In fact, our experimental results show that the biased random key genetic algorithm that we propose significantly outperforms the state-of-the-art algorithm from the literature both in terms of solution quality and computation time.

Link to Repositum

Particle therapy patient scheduling with limited starting time variations of daily treatments
Maschler, Johannes, Raidl, Günther
Type: Article; In: International Transactions in Operational Research; Vol: 27; Issue: 1; Pages: 458-479
Show Abstract
The particle therapy patient scheduling problem (PTPSP) arises in modern cancer treatment facilities that provide particle therapy. It consists of scheduling a set of therapies within a planning horizon of several months. A particularity of PTPSP compared with classical radiotherapy scheduling is that therapies need not only be assigned to days but also scheduled within each day to account for the more complicated operational scenario. In an earlier work, we introduced this novel problem setting and provided first algorithms including an iterated greedy (IG) metaheuristic. In this work, we consider an important extension to the PTPSP emerging from practice in which the therapies should be provided on treatment days roughly at the same time. To be more specific, the variation between the starting times of the therapies' individual treatments should not exceed given limits and otherwise needs to be minimized. This additional constraint implies that the sequencing parts within each day can no longer be treated independently. To tackle this variant of PTPSP, we revise our previous IG and exchange its main components: the part of the applied construction heuristic for scheduling within the days and the local search algorithm. The resulting metaheuristic provides promising results for the proposed extension of the PTPSP and further enhances the existing approach for the original problem.

Link to Repositum

Metaheuristic Hybrids
Raidl, Günther, Puchinger, Jakob, Blum, Christian
Type: Book Contribution; In: Handbook of Metaheuristics; Vol: 272; Pages: 385-417
Show Abstract
Over the last decades, so-called hybrid optimization approaches have become increasingly popular for addressing hard optimization problems. In fact, when looking at leading applications of metaheuristics for complex real-world scenarios, many if not most of them do not purely adhere to one specific classical metaheuristic model but rather combine different algorithmic techniques. Concepts from different metaheuristics are often hybridized with each other, but they are also often combined with other optimization techniques such as tree search, dynamic programming and methods from the mathematical programming, constraint programming, and SAT solving fields. Such combinations aim at exploiting the particular advantages of the individual components, and in fact well-designed hybrids often perform substantially better than their "pure" counterparts. Many very different ways of hybridizing metaheuristics are described in the literature, and unfortunately it is usually difficult to decide which approach(es) are most appropriate in a particular situation. This chapter gives an overview on this topic by starting with a classification of metaheuristic hybrids and then discussing several prominent design templates which are illustrated by concrete examples.

Link to Repositum

Extending to 1-plane drawings
Hamm, Thekla, Klute, Fabian, Parada, Irene
Type: Inproceedings; In: Abstracts of the XVIII Spanish Meeting on Computational Geometry; Pages: 30
Show Abstract
We study the problem of extending a connected (1-)plane drawing of a graph G with a maximum set of edges M0 chosen from a given set of edges M of the complement graph of G. It turns out the problem is NP-hard already for the case of the initial drawing being plane and orthogonal. On the positive side, we give an FPT algorithm in k for the case of adding k edges and M being the set of all edges in the complement of G.

Link to Repositum

Mixed Linear Layouts: Complexity, Heuristics, and Experiments
de Col, Philipp, Klute, Fabian, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2019; Vol: 11904; Pages: 460-467
Show Abstract
A k-page linear graph layout of a graph G = (V,E) draws all vertices along a line and each edge in one of k disjoint halfplanes called pages, which are bounded by this line. We consider two types of pages. In a stack page no two edges should cross and in a queue page no edge should be nested by another edge. A crossing (nesting) in a stack (queue) page is called a conflict. The algorithmic problem is twofold and requires to compute (i) a vertex ordering and (ii) a page assignment of the edges such that the resulting layout is either conflict-free or conflict-minimal. While linear layouts with only stack or only queue pages are well studied, mixed s-stack q-queue layouts for s, q ≥ 1 have received less attention. We show NP-completeness results on the recognition problem of certain mixed linear layouts and present a new heuristic for minimizing conflicts. In a computational experiment for the case s = q = 1 we show that the new heuristic is an improvement over previous heuristics for linear layouts.
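The two conflict notions, crossings on stack pages and nestings on queue pages, can be counted directly for a given vertex order and page assignment. An illustrative sketch (not the paper's heuristic; all names are assumptions):

```python
# Illustrative conflict counting for a mixed stack/queue linear layout.
# Edges are pairs of spine positions 0..n-1.

def crosses(e, f):
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def nests(e, f):
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < d < b or c < a < b < d   # one edge strictly inside the other

def count_conflicts(edges, page_of, page_type):
    """page_type[p] is 'stack' or 'queue'; conflicts are per same-page pair."""
    conflicts = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            e, f = edges[i], edges[j]
            p = page_of[e]
            if p != page_of[f]:
                continue
            if page_type[p] == 'stack' and crosses(e, f):
                conflicts += 1
            elif page_type[p] == 'queue' and nests(e, f):
                conflicts += 1
    return conflicts

# (0,2) and (1,3) cross; (0,3) nests (1,2). In a 1-stack 1-queue layout,
# placing the crossing pair on the queue page and the nesting pair on the
# stack page yields a conflict-free assignment.
edges = [(0, 2), (1, 3), (0, 3), (1, 2)]
page_of = {(0, 2): 1, (1, 3): 1, (0, 3): 0, (1, 2): 0}
page_type = {0: 'stack', 1: 'queue'}
print(count_conflicts(edges, page_of, page_type))  # 0
```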

Link to Repositum

Casual Employee Scheduling with Constraint Programming and Ant Colony Optimization
Frohner, Nikolaus, Teuschl, Stephan, Raidl, Günther
Type: Inproceedings; In: Eurocast2019-EXTENDED ABSTRACTS; Pages: 78-79

Link to Repositum

SAT-Encodings for Treecut Width and Treedepth
Ganian, Robert, Lodha, Neha, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Twenty-First Workshop on Algorithm Engineering and Experiments (ALENEX)

Link to Repositum

Computing Stable Demers Cartograms
Nickel, Soeren, Sondag, Max, Meulemans, Wouter, Chimani, Markus, Kobourov, Stephen, Peltonen, Jaakko, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2019; Vol: 11904; Pages: 46-60
Show Abstract
Cartograms are popular for visualizing numerical data for map regions. Maintaining correct adjacencies is a primary quality criterion for cartograms. When there are multiple data values per region (over time or different datasets) shown as animated or juxtaposed cartograms, preserving the viewer's mental map in terms of stability between cartograms is another important criterion. We present a method to compute stable Demers cartograms, where each region is shown as a square and similar data yield similar cartograms. We enforce orthogonal separation constraints with linear programming, and measure quality in terms of keeping adjacent regions close (cartogram quality) and using similar positions for a region between the different data values (stability). Our method guarantees the ability to connect most lost adjacencies with minimal leaders. Experiments show that our method yields good quality and stability.

Link to Repositum

Algorithm and Hardness Results on Liar's Dominating Set and k-tuple Dominating Set
Banerjee, Sandip, Bhore, Sujoy
Type: Inproceedings; In: Combinatorial Algorithms - 30th International Workshop IWOCA 2019; Vol: 11638; Pages: 48-60
Show Abstract
Given a graph G = (V,E), the dominating set problem asks for a minimum subset of vertices D ⊆ V such that every vertex u ∈ V \ D is adjacent to at least one vertex v ∈ D. That is, the set D satisfies the condition that |N[v] ∩ D| ≥ 1 for each v ∈ V, where N[v] is the closed neighborhood of v. In this paper, we study two variants of the classical dominating set problem: the k-tuple dominating set (k-DS) problem and the Liar's dominating set (LDS) problem, and obtain several algorithmic and hardness results. On the algorithmic side, we present a constant-factor (11/2)-approximation algorithm for the Liar's dominating set problem on unit disk graphs. Then, we design a polynomial-time approximation scheme (PTAS) for the k-tuple dominating set problem on unit disk graphs. On the hardness side, we show an Ω(n²)-bit lower bound for the space complexity of any (randomized) streaming algorithm for the Liar's dominating set problem as well as for the k-tuple dominating set problem. Furthermore, we prove that the Liar's dominating set problem on bipartite graphs is W[2]-hard.
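The closed-neighborhood condition generalizes naturally: k-tuple domination requires |N[v] ∩ D| ≥ k for every vertex v. A minimal check, assuming an adjacency-set representation (illustrative only, not code from the paper):

```python
# Illustrative check of classical (k=1) and k-tuple domination.

def closed_neighborhood(adj, v):
    """N[v] = neighbors of v together with v itself."""
    return adj[v] | {v}

def is_k_tuple_dominating(adj, d, k):
    """True iff every vertex has at least k members of d in its closed
    neighborhood (k=1 is the classical dominating set condition)."""
    d = set(d)
    return all(len(closed_neighborhood(adj, v) & d) >= k for v in adj)

# 4-cycle 0-1-2-3: {0} misses vertex 2; {0,2} dominates every vertex once,
# but 0 and 2 are covered only by themselves, so it is not 2-tuple dominating.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_k_tuple_dominating(adj, {0}, 1))        # False
print(is_k_tuple_dominating(adj, {0, 2}, 1))     # True
print(is_k_tuple_dominating(adj, {0, 2}, 2))     # False
print(is_k_tuple_dominating(adj, {0, 1, 2}, 2))  # True
```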

Link to Repositum

Lombardi drawings of knots and links
Kindermann, Philipp, Kobourov, Stephen, Löffler, Maarten, Nöllenburg, Martin, Schulz, André, Vogtenhuber, Birgit
Type: Article; In: Journal of Computational Geometry; Vol: 10; Issue: 1; Pages: 444-476
Show Abstract
Knot and link diagrams are projections of one or more 3-dimensional simple closed curves into ℝ², such that no more than two points project to the same point in ℝ². These diagrams are drawings of 4-regular plane multigraphs. Knots are typically smooth curves in ℝ³, so their projections should be smooth curves in ℝ² with good continuity and large crossing angles: exactly the properties of Lombardi graph drawings (defined by circular-arc edges and perfect angular resolution). We show that several knots do not allow crossing-minimal plane Lombardi drawings. On the other hand, we identify a large class of 4-regular plane multigraphs that do have plane Lombardi drawings. We then study two relaxations of Lombardi drawings and show that every knot admits a crossing-minimal plane 2-Lombardi drawing (where edges are composed of two circular arcs). Further, every knot is near-Lombardi, that is, it can be drawn as a plane Lombardi drawing when relaxing the angular resolution requirement by an arbitrarily small angular offset ε, while maintaining a 180° angle between opposite edges.

Link to Repositum

On Erdős–Szekeres-Type Problems for k-convex Point Sets
Balko, Martin, Bhore, Sujoy, Martinez-Sandoval, Leonardo, Valtr, Pavel
Type: Inproceedings; In: Combinatorial Algorithms 30th International Workshop, IWOCA 2019, Pisa, Italy, July 23–25, 2019, Proceedings; Pages: 35-47
Show Abstract
We study Erdős–Szekeres-type problems for k-convex point sets, a recently introduced notion that naturally extends the concept of convex position. A finite set S of n points is k-convex if there exists a spanning simple polygonization of S such that the intersection of any straight line with its interior consists of at most k connected components. We address several open problems about k-convex point sets. In particular, we extend the well-known Erdős–Szekeres Theorem by showing that, for every fixed k ∈ N, every set of n points in the plane in general position (with no three collinear points) contains a k-convex subset of size at least Ω(log^k n). We also show that there are arbitrarily large 3-convex sets of n points in the plane in general position whose largest 1-convex subset has size O(log n). This gives a solution to a problem posed by Aichholzer et al. [2]. We prove that there is a constant c > 0 such that, for every n ∈ N, there is a set S of n points in the plane in general position such that every 2-convex polygon spanned by at least c · log n points from S contains a point of S in its interior. This matches an earlier upper bound by Aichholzer et al. [2] up to a multiplicative constant and answers another of their open problems.

Link to Repositum

A Heuristic Approach for Solving the Longest Common Square Subsequence Problem
Djukanovic, Marko, Raidl, Günther, Blum, Christian
Type: Inproceedings; In: EXTENDED ABSTRACTS-Computer Aided Systems Theory 2019; Pages: 120-122
Show Abstract
The longest common square subsequence (LCSqS) problem, a variant of the longest common subsequence (LCS) problem, aims at finding a subsequence common to all input strings that is, at the same time, a square subsequence. So far the LCSqS was solved only for two input strings. We present a heuristic approach, based on randomized local search and a hybrid of variable neighborhood search and beam search, to solve the LCSqS for an arbitrary set of input strings. The beam search makes use of a novel heuristic estimation of the approximated expected length of an LCS to guide the search.

Link to Repositum

Casual Employee Scheduling with Constraint Programming and Metaheuristics
Frohner, Nikolaus, Teuschl, Stephan, Raidl, Günther R.
Type: Inproceedings; In: Computer Aided Systems Theory – EUROCAST 2019; Pages: 279-287
Show Abstract
We consider an employee scheduling problem where many casual employees have to be assigned to shifts defined by the requirement of different work locations. For a given planning horizon, locations specify these requirements by stating the number of employees needed at specific times. Employees place offers for shifts at locations they are willing to serve. The goal is to find an assignment of employees to the locations' shifts that satisfies certain hard constraints and minimizes an objective function defined as a weighted sum of soft constraint violations. The soft constraints consider ideal numbers of employees assigned to shifts, distribution fairness, and preferences of the employees. The specific problem originates in a real-world application at an Austrian association. In this paper, we propose a Constraint Programming (CP) model which we implemented using MiniZinc and tested with different backend solvers. As the application of this exact approach is feasible only for small to medium sized instances, we further consider a hybrid CP/metaheuristic approach where we create an initial feasible solution using a CP solver and then further optimize by means of an ant colony optimization and a variable neighborhood descent. This allows us to create high-quality solutions which are finally tuned by a manual planner.

Link to Repositum

A Cooperative Optimization Approach for Distributing Service Points in Mobility Applications
Jatschka, Thomas, Rodemann, Tobias, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization 2019; Pages: 1-16
Show Abstract
We investigate a variant of the facility location problem concerning the optimal distribution of service points with incomplete information within a certain geographical area. The application scenario is generic in principle, but we have the setup of charging stations for electric vehicles or rental stations for bicycles or cars in mind. When planning such systems, estimating under which conditions which customer demand can be fulfilled is fundamental in order to evaluate and optimize possible solutions. In this paper we present a cooperative optimization approach for distributing service points that incorporates potential customers not only in the data acquisition but also during the optimization process. A surrogate objective function is used to evaluate intermediate solutions during the optimization. The quality of this surrogate objective function is iteratively improved by learning from the feedback of potential users given to candidate solutions. For the actual optimization we consider a population-based iterated greedy algorithm. Experiments on artificial benchmark scenarios with idealized simulated user behavior show the learning capabilities of the surrogate objective function and the effectiveness of the optimization.

Link to Repositum

Balanced Connected Subgraph Problem in Geometric Intersection Graphs
Bhore, Sujoy Kumar, Jana, Satyabrata, Pandit, Supantha, Roy, Sasanka
Type: Inproceedings; In: Combinatorial Optimization and Applications; Pages: 56-68
Show Abstract
We study the Balanced Connected Subgraph (shortly, BCS) problem on geometric intersection graphs such as interval, circular-arc, permutation, unit-disk, outer-string graphs, etc. Given a vertex-colored graph G=(V,E), where each vertex in V is colored with either "red" or "blue", the BCS problem seeks a maximum-cardinality induced connected subgraph H of G such that H is balanced, i.e., H contains an equal number of red and blue vertices. We study the computational complexity landscape of the BCS problem while considering geometric intersection graphs. On one hand, we prove that the BCS problem is NP-hard on the unit disk, outer-string, complete grid, and unit square graphs. On the other hand, we design polynomial-time algorithms for the BCS problem on interval, circular-arc and permutation graphs. In particular, we give algorithms on both interval and circular-arc graphs that are used as subroutines for solving the BCS problem on the same classes of graphs. Finally, we present an FPT algorithm for the BCS problem on general graphs.

Link to Repositum

Proof Complexity of Fragments of Long-Distance Q-Resolution
Peitl, Tomas, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2019; Pages: 319-335
Show Abstract
Q-resolution is perhaps the most well-studied proof system for Quantified Boolean Formulas (QBFs). Its proof complexity is by now well understood, and several general proof size lower bound techniques have been developed. The situation is quite different for long-distance Q-resolution (LDQ-resolution). While lower bounds on LDQ-resolution proof size have been established for specific families of formulas, we lack semantically grounded lower bound techniques for LDQ-resolution. In this work, we study restrictions of LDQ-resolution. We show that a specific lower bound technique based on bounded-depth strategy extraction does not work even for reductionless Q-resolution by presenting short proofs of the QParity formulas. Reductionless Q-resolution is a variant of LDQ-resolution that admits merging but no universal reduction. We also prove a lower bound on the proof size of the completion principle formulas in reductionless Q-resolution. This shows that two natural fragments of LDQ-resolution are incomparable: Q-resolution, which allows universal reductions but no merging, and reductionless Q-resolution, which allows merging but no universal reductions. Finally, we develop semantically grounded lower bound techniques for fragments of LDQresolution, specifically tree-like LDQ-resolution and regular reductionless Q-resolution.

Link to Repositum

A SAT Approach for Finding Sup-Transition-Minors
Klocker, Benedikt, Fleischner, Herbert, Raidl, Günther R.
Type: Inproceedings; In: Learning and Intelligent Optimization 13th International Conference, LION 13, Chania, Crete, Greece, May 27–31, 2019, Revised Selected Papers; Pages: 325-341
Show Abstract
The cycle double cover conjecture is a famous longstanding unsolved conjecture in graph theory. It is related and can be reduced to the compatible circuit decomposition problem. Recently Fleischner et al. (2018) provided a sufficient condition for a compatible circuit decomposition, which is called SUD-K5-minor freeness. In a previous work we developed an abstract mathematical model for finding SUD-K5-minors and, based on the model, a mixed integer linear program (MIP). In this work we propose a respective boolean satisfiability (SAT) model and compare it with the MIP model in computational tests. Non-trivial symmetry breaking constraints are proposed, which improve the solving times of both models considerably. Compared to the MIP model the SAT approach performs significantly better. We use the faster algorithm to further test graphs of graph-theoretic interest and were able to gain new insights. Among other results we found snarks with 30 and 32 vertices that do not contain a perfect pseudomatching, that is, a spanning subgraph consisting of K2 and K1,3 components, whose contraction leads to a SUD-K5-minor-free graph.

Link to Repositum

Combining Resolution-Path Dependencies with Dependency Learning
Peitl, Tomas, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2019 22nd International Conference, SAT 2019, Lisbon, Portugal, July 9–12, 2019, Proceedings
Show Abstract
We present the first practical implementation of the reflexive resolution-path dependency scheme in a QBF solver. Unlike in DepQBF, which uses the less general standard dependency scheme, we do not compute the dependency relation upfront, but instead query relevant dependencies on demand during dependency conflicts, when the solver is about to learn a missing dependency. Thus, our approach is fundamentally tied to dependency learning, and shows that the two techniques for dependency analysis can be fruitfully combined. As a byproduct, we propose a quasilinear-time algorithm to compute all resolution-path dependencies of a given variable. Experimental results on the QBF library confirm the viability of our technique and identify families of formulas where the speedup is particularly promising.

Link to Repositum

Finding Linear Arrangements of Hypergraphs with Bounded Cutwidth in Linear Time
Hamm, Thekla
Type: Inproceedings; In: 14th International Symposium on Parameterized and Exact Computation; Pages: 1-14
Show Abstract
Cutwidth is a fundamental graph layout parameter. It generalises to hypergraphs in a natural way and has been studied in a wide range of contexts. For graphs it is known that for a fixed constant k there is a linear time algorithm that for any given G, decides whether G has cutwidth at most k and, in the case of a positive answer, outputs a corresponding linear arrangement. We show that such an algorithm also exists for hypergraphs.
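For a fixed linear arrangement, the hypergraph cutwidth can be evaluated directly: a hyperedge crosses a gap between consecutive positions if it has vertices on both sides, and the cutwidth is the maximum cut over all gaps. A hedged sketch (illustrative Python; the paper's linear-time decision algorithm for fixed k is far more involved):

```python
# Illustrative evaluation of the cutwidth of a GIVEN linear arrangement of a
# hypergraph (not the paper's algorithm, which decides cutwidth <= k).

def cutwidth_of_arrangement(order, hyperedges):
    """order: list of vertices left to right; hyperedges: list of vertex sets.
    Returns max over the n-1 gaps of the number of hyperedges crossing it."""
    pos = {v: i for i, v in enumerate(order)}
    width = 0
    for gap in range(len(order) - 1):    # gap between positions gap, gap+1
        cut = sum(1 for e in hyperedges
                  if min(pos[v] for v in e) <= gap < max(pos[v] for v in e))
        width = max(width, cut)
    return width

# Hypergraph on {a,b,c,d}: the 3-element hyperedge {a,b,d} spans every gap
# separating any of a, b from d in the order a,b,c,d.
order = ['a', 'b', 'c', 'd']
hyperedges = [{'a', 'b'}, {'b', 'c'}, {'a', 'b', 'd'}]
print(cutwidth_of_arrangement(order, hyperedges))  # 2
```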

Link to Repositum

Strategies for Iteratively Refining Layered Graph Models
Riedler, Martin, Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Hybrid Metaheuristics: 11th International Workshop; Pages: 46-62
Show Abstract
We consider a framework for obtaining a sequence of converging primal and dual bounds based on mixed integer linear programming formulations on layered graphs. The proposed iterative algorithm avoids the typically rather large size of the full layered graph by approximating it incrementally. We focus in particular on this refinement step that extends the graph in each iteration. Novel path-based approaches are compared to existing variants from the literature. Experiments on two benchmark problems, the traveling salesman problem with time windows and the rooted distance-constrained minimum spanning tree problem, show the effectiveness of our new strategies. Moreover, we investigate the impact of a strong heuristic component within the algorithm, both for improving convergence speed and for improving the potential of an employed reduced cost fixing step.

Link to Repositum

Decision Diagram Based Limited Discrepancy Search for a Job Sequencing Problem
Horn, Matthias, Raidl, Günther
Type: Inproceedings; In: Computer Aided System Theory - EUROCAST 2019; Pages: 94-95
Show Abstract
In this work we consider the Prize-Collecting Job Sequencing with One Common and Multiple Secondary Resources (PC-JSOCMSR) problem introduced in [2, 3]. The task is to feasibly schedule a subset of jobs from a given set of jobs. Each job needs two resources: a common resource for a part of the job's execution time and a secondary resource for the whole execution time. In addition, each job has one or more time windows and an associated prize. A feasible schedule requires that no resource is used by more than one job at the same time and that each job is scheduled within one of its time windows. Due to the time windows it may not be possible to schedule all jobs. Therefore we aim to maximize the total prize over the actually scheduled jobs.

Link to Repositum

2018
Graph Visualization
Hu, Yifan, Nöllenburg, Martin
Type: Book Contribution; In: Encyclopedia of Big Data Technologies; Pages: 1-9
Show Abstract
Graph visualization is an area of mathematics and computer science, at the intersection of geometric graph theory and information visualization. It is concerned with visual representation of graphs that reveals structures and anomalies that may be present in the data and helps the user to understand and reason about the graphs.

Link to Repositum

An SMT Approach to Fractional Hypertree Width
Fichte, Johannes, Hecher, Markus, Lodha, Neha, Szeider, Stefan
Type: Inproceedings; In: Principles and Practice of Constraint Programming, 24th International Conference, CP 2018; Pages: 109-127
Show Abstract
Bounded fractional hypertree width (fhtw) is the most general known structural property that guarantees polynomial-time solvability of the constraint satisfaction problem. Bounded fhtw generalizes other structural properties like bounded induced width and bounded hypertree width. We propose, implement and test the first practical algorithm for computing the fhtw and its associated structural decomposition. We provide an extensive empirical evaluation of our method on a large class of benchmark instances which also provides a comparison with known exact decomposition methods for hypertree width. Our approach is based on an efficient encoding of the decomposition problem to SMT (Satisfiability Modulo Theories) with Linear Arithmetic as implemented in an SMT solver. The encoding is further strengthened by preprocessing and symmetry breaking methods. Our experiments show (i) that the fhtw can indeed be computed exactly for a wide range of benchmark instances, and (ii) that state-of-the-art SMT techniques can be successfully applied for structural decomposition.

Link to Repositum

Unary Integer Linear Programming with Structural Restrictions
Eiben, Eduard, Ganian, Robert, Knop, Dusan
Type: Inproceedings; In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence; Pages: 1284-1290
Show Abstract
Recently a number of algorithmic results have appeared which show the tractability of Integer Linear Programming (ILP) instances under strong restrictions on variable domains and/or coefficients (AAAI 2016, AAAI 2017, IJCAI 2017). In this paper, we target ILPs where neither the variable domains nor the coefficients are restricted by a fixed constant or parameter; instead, we only require that our instances can be encoded in unary. We provide new algorithms and lower bounds for such ILPs by exploiting the structure of their variable interactions, represented as a graph. Our first set of results focuses on solving ILP instances through the use of a graph parameter called clique-width, which can be seen as an extension of treewidth which also captures well-structured dense graphs. In particular, we obtain a polynomial-time algorithm for instances of bounded clique-width whose domain and coefficients are polynomially bounded by the input size, and we complement this positive result by a number of algorithmic lower bounds. Afterwards, we turn our attention to ILPs with acyclic variable interactions. In this setting, we obtain a complexity map for the problem with respect to the graph representation used and restrictions on the encoding.

Link to Repositum

Small Resolution Proofs for QBF using Dependency Treewidth
Eiben, Eduard, Ganian, Robert, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the 35th Symposium on Theoretical Aspects of Computer Science, STACS 2018, February 28 to March 3, 2018, Caen, France; Pages: 1-14
Show Abstract
In spite of the close connection between the evaluation of quantified Boolean formulas (QBF) and propositional satisfiability (SAT), tools and techniques which exploit structural properties of SAT instances are known to fail for QBF. This is especially true for the structural parameter treewidth, which has allowed the design of successful algorithms for SAT but cannot be straightforwardly applied to QBF since it does not take into account the interdependencies between quantified variables. In this work we introduce and develop dependency treewidth, a new structural parameter based on treewidth which allows the efficient solution of QBF instances. Dependency treewidth pushes the frontiers of tractability for QBF by overcoming the limitations of previously introduced variants of treewidth for QBF. We augment our results by developing algorithms for computing the decompositions that are required to use the parameter.

Link to Repositum

Drawing Large Graphs by Multilevel Maxent-Stress Optimization
Meyerhenke, Henning, Nöllenburg, Martin, Schulz, Christian
Type: Article; In: IEEE Transactions on Visualization and Computer Graphics; Vol: 24; Issue: 5; Pages: 1814-1827
Show Abstract
Drawing large graphs appropriately is an important step for the visual analysis of data from real-world networks. Here we present a novel multilevel algorithm to compute a graph layout with respect to the maxent-stress metric proposed by Gansner et al. (2013) that combines layout stress and entropy. As opposed to previous work, we do not solve the resulting linear systems of the maxent-stress metric with a typical numerical solver. Instead we use a simple local iterative scheme within a multilevel approach. To accelerate local optimization, we approximate long-range forces and use shared-memory parallelism. Our experiments validate the high potential of our approach, which is particularly appealing for dynamic graphs. In comparison to the previously best maxent-stress optimizer, which is sequential, our parallel implementation is on average 30 times faster already for static graphs (and still faster if executed on a single thread) while producing a comparable solution quality.

Link to Repositum

Reinterpreting Dependency Schemes: Soundness Meets Incompleteness in DQBF
Beyersdorff, Olaf, Blinkhorn, Joshua, Chew, Leroy, Schmidt, Renate, Suda, Martin
Type: Article; In: Journal of Automated Reasoning; Vol: 63; Issue: 3; Pages: 597-623

Link to Repositum

Meta-kernelization using well-structured modulators
Eiben, Eduard, Ganian, Robert, Szeider, Stefan
Type: Article; In: Discrete Applied Mathematics; Vol: 248; Pages: 153-167
Show Abstract
Kernelization investigates exact preprocessing algorithms with performance guarantees. The most prevalent type of parameters used in kernelization is the solution size for optimization problems; however, also structural parameters have been successfully used to obtain polynomial kernels for a wide range of problems. Many of these parameters can be defined as the size of a smallest modulator of the given graph into a fixed graph class (i.e., a set of vertices whose deletion puts the graph into the graph class). Such parameters admit the construction of polynomial kernels even when the solution size is large or not applicable. This work follows up on the research on meta-kernelization frameworks in terms of structural parameters. We develop a class of parameters which are based on a more general view on modulators: instead of size, the parameters employ a combination of rank-width and split decompositions to measure structure inside the modulator. This allows us to lift kernelization results from modulator-size to more general parameters, hence providing small kernels even in cases where previously developed approaches could not be applied. We show (i) how such large but well-structured modulators can be efficiently approximated, (ii) how they can be used to obtain polynomial kernels for graph problems expressible in Monadic Second Order logic, and (iii) how they support the extension of previous results in the area of structural meta-kernelization.

Link to Repositum

On the Complexity of Rainbow Coloring Problems
Eiben, Eduard, Ganian, Robert, Lauri, Juho
Type: Article; In: Discrete Applied Mathematics; Vol: 246; Pages: 38-48
Show Abstract
An edge-colored graph G is said to be rainbow connected if between each pair of vertices there exists a path which uses each color at most once. The rainbow connection number, denoted by rc(G), is the minimum number of colors needed to make G rainbow connected. Along with its variants, which consider vertex colorings and/or so-called strong colorings, the rainbow connection number has been studied from both the algorithmic and graph-theoretic points of view. In this paper we present a range of new results on the computational complexity of computing the four major variants of the rainbow connection number. In particular, we prove that the Strong Rainbow Vertex Coloring problem is NP-complete even on graphs of diameter 3, and also when the number of colors is restricted to 2. On the other hand, we show that if the number of colors is fixed then all of the considered problems can be solved in linear time on graphs of bounded treewidth. Moreover, we provide a linear-time algorithm which decides whether it is possible to obtain a rainbow coloring by saving a fixed number of colors from a trivial upper bound. Finally, we give a linear-time algorithm for computing the exact rainbow connection numbers for three variants of the problem on graphs of bounded vertex cover number.
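The definition of rainbow connectivity in the abstract can be checked directly by brute force on tiny graphs: enumerate simple paths between every vertex pair, never reusing a color. This is a naive verifier for illustration only (computing rc(G) itself is NP-hard), and the edge-coloring input format is an assumption of this sketch.

```python
from itertools import combinations

def is_rainbow_connected(n, coloring):
    """Brute-force check (tiny graphs only): is the edge-colored graph on
    vertices 0..n-1 rainbow connected, i.e. does every vertex pair have a
    path whose edge colors are pairwise distinct?
    coloring maps edges (u, v) to colors."""
    col = {frozenset(e): c for e, c in coloring.items()}
    adj = {v: set() for v in range(n)}
    for u, v in coloring:
        adj[u].add(v); adj[v].add(u)

    def rainbow_path(u, target, used_cols, visited):
        # depth-first search over simple paths that never repeat a color
        if u == target:
            return True
        for w in adj[u]:
            c = col[frozenset((u, w))]
            if w not in visited and c not in used_cols:
                if rainbow_path(w, target, used_cols | {c}, visited | {w}):
                    return True
        return False

    return all(rainbow_path(s, t, frozenset(), {s})
               for s, t in combinations(range(n), 2))
```

On a path with all edge colors distinct the check succeeds; reusing a color on two consecutive edges of the only connecting path makes it fail.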

Link to Repositum

A single-exponential fixed-parameter algorithm for distance-hereditary vertex deletion
Eiben, Eduard, Ganian, Robert, Kwon, O-joung
Type: Article; In: Journal of Computer and System Sciences; Vol: 97; Pages: 121-146
Show Abstract
Vertex deletion problems ask whether it is possible to delete at most k vertices from a graph so that the resulting graph belongs to a specified graph class. Over the past years, the parameterized complexity of vertex deletion to a plethora of graph classes has been systematically researched. Here we present the first single-exponential fixed-parameter tractable algorithm for vertex deletion to distance-hereditary graphs, a well-studied graph class which is particularly important in the context of vertex deletion due to its connection to the graph parameter rank-width. We complement our result with matching asymptotic lower bounds based on the exponential time hypothesis. As an application of our algorithm, we show that a vertex deletion set to distance-hereditary graphs can be used as a parameter which allows single-exponential fixed-parameter tractable algorithms for classical NP-hard problems.

Link to Repositum

The Complexity Landscape of Decompositional Parameters for ILP
Ganian, Robert, Ordyniak, Sebastian
Type: Article; In: Artificial Intelligence; Vol: 257; Pages: 61-71
Show Abstract
Integer Linear Programming (ILP) can be seen as the archetypical problem for NP-complete optimization problems, and a wide range of problems in artificial intelligence are solved in practice via a translation to ILP. Despite its huge range of applications, only few tractable fragments of ILP are known, probably the most prominent of which is based on the notion of total unimodularity. Using entirely different techniques, we identify new tractable fragments of ILP by studying structural parameterizations of the constraint matrix within the framework of parameterized complexity. In particular, we show that ILP is fixed-parameter tractable when parameterized by the treedepth of the constraint matrix and the maximum absolute value of any coefficient occurring in the ILP instance. Together with matching hardness results for the more general parameter treewidth, we give an overview of the complexity of ILP w.r.t. decompositional parameters defined on the constraint matrix.

Link to Repositum

Snarks with Special Spanning Trees
Hoffmann-Ostenhof, Arthur, Jatschka, Thomas
Type: Article; In: Graphs and Combinatorics; Vol: 35; Issue: 1; Pages: 207-219
Show Abstract
Let G be a cubic graph which has a decomposition into a spanning tree T and a 2-regular subgraph C, i.e. E(T)∪E(C)=E(G) and E(T)∩E(C)=∅. We provide an answer to the following question: which lengths can the cycles of C have if G is a snark? Note that T is a hist (i.e. a spanning tree without a vertex of degree two) and that every cubic graph with a hist has the above decomposition.
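The decomposition property described above is easy to verify for a concrete cubic graph. The sketch below (plain adjacency sets, no graph library) checks that T is a spanning tree with no degree-2 vertex (a hist), that C is 2-regular on its support, and that together they form a cubic graph; the function name and edge-list encoding are choices of this example.

```python
def is_hist_decomposition(n, tree_edges, c_edges):
    """Check that (T, C) decomposes a cubic graph on vertices 0..n-1:
    E(T) and E(C) are disjoint, T is a spanning tree without a vertex of
    degree two (a hist), C is 2-regular on its support, and T ∪ C is cubic."""
    tree = [frozenset(e) for e in tree_edges]
    cyc = [frozenset(e) for e in c_edges]
    if set(tree) & set(cyc):
        return False  # edge sets must be disjoint
    if len(tree) != n - 1:
        return False  # a spanning tree has exactly n-1 edges
    adj = {v: set() for v in range(n)}
    for e in tree:
        u, w = tuple(e)
        adj[u].add(w); adj[w].add(u)
    seen, stack = {0}, [0]
    while stack:  # T must be connected
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w); stack.append(w)
    if len(seen) != n:
        return False
    if any(len(adj[v]) == 2 for v in range(n)):
        return False  # hist condition: no vertex of degree two in T
    deg = {v: 0 for v in range(n)}
    for e in cyc:
        u, w = tuple(e)
        deg[u] += 1; deg[w] += 1
    # C is a disjoint union of cycles and every vertex has total degree 3
    return all(deg[v] in (0, 2) and deg[v] + len(adj[v]) == 3
               for v in range(n))
```

For example, K4 (cubic, though not a snark) decomposes into the star at one vertex (a hist) and the triangle on the remaining three vertices.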

Link to Repositum

Particle Therapy Patient Scheduling with Limited Starting Time Variations of Daily Treatments
Maschler, Johannes, Raidl, Günther
Type: Article; In: International Transactions in Operational Research; Vol: 27; Issue: 1; Pages: 458-479
Show Abstract
The particle therapy patient scheduling problem (PTPSP) arises in modern cancer treatment facilities that provide particle therapy. It consists of scheduling a set of therapies within a planning horizon of several months. A particularity of PTPSP compared with classical radiotherapy scheduling is that therapies need not only be assigned to days but also scheduled within each day to account for the more complicated operational scenario. In an earlier work, we introduced this novel problem setting and provided first algorithms including an iterated greedy (IG) metaheuristic. In this work, we consider an important extension to the PTPSP emerging from practice in which the therapies should be provided on treatment days roughly at the same time. To be more specific, the variation between the starting times of the therapies' individual treatments should not exceed the given limits, and otherwise needs to be minimized. This additional constraint implies that the sequencing parts within each day can no longer be treated independently. To tackle this variant of PTPSP, we revise our previous IG and exchange its main components: the part of the applied construction heuristic for scheduling within the days and the local search algorithm. The resulting metaheuristic provides promising results for the proposed extension of the PTPSP and further enhances the existing approach for the original problem.

Link to Repositum

On Structural Parameterizations of the Bounded-Degree Vertex Deletion Problem
Ganian, Robert, Klute, Fabian, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the 35th Symposium on Theoretical Aspects of Computer Science; Pages: 1-14
Show Abstract
We study the parameterized complexity of the Bounded-Degree Vertex Deletion problem (BDD), where the aim is to find a maximum induced subgraph whose maximum degree is below a given degree bound. Our focus lies on parameters that measure the structural properties of the input instance. We first show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters such as the feedback vertex set number, pathwidth, treedepth, and even the size of a minimum vertex deletion set into graphs of pathwidth and treedepth at most three. We thereby resolve the main open question stated in Betzler, Bredereck, Niedermeier and Uhlmann (2012) concerning the complexity of BDD parameterized by the feedback vertex set number. On the positive side, we obtain fixed-parameter algorithms for the problem with respect to the decompositional parameter tree-cut width and a novel problem-specific parameter called the core fracture number.

Link to Repositum

A Structural Approach to Activity Selection
Eiben, Eduard, Ganian, Robert, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence; Pages: 203-209
Show Abstract
The general task of finding an assignment of agents to activities under certain stability and rationality constraints has led to the introduction of two prominent problems in the area of computational social choice: Group Activity Selection (GASP) and Stable Invitations (SIP). Here we introduce and study the Comprehensive Activity Selection Problem (CAS), which naturally generalizes both of these problems. In particular, we apply the parameterized complexity paradigm, which has already been successfully employed for SIP and GASP. While previous work has focused strongly on parameters such as solution size or number of activities, here we focus on parameters which capture the complexity of agent-to-agent interactions. Our results include a comprehensive complexity map for CAS under various restrictions on the number of activities in combination with restrictions on the complexity of agent interactions.

Link to Repositum

Scalable Set Visualizations (Dagstuhl Seminar 17332)
Authors not available
Type: Proceedings
Show Abstract
This report documents the program and outcomes of Dagstuhl Seminar 17332 "Scalable Set Visualizations", which took place August 14-18, 2017. The interdisciplinary seminar brought together 26 researchers from different areas in computer science and beyond such as information visualization, human-computer interaction, graph drawing, algorithms, machine learning, geography, and life sciences. During the seminar we had five invited overview talks on different aspects of set visualizations as well as a few ad-hoc presentations of ongoing work. The abstracts of these talks are contained in this report. Furthermore, we formed five working groups, each of them discussing intensively a selected open research problem that was proposed by the seminar participants in an open problem session. The second part of this report contains summaries of the groups' findings.

Link to Repositum

Solving a selective dial-a-ride problem with logic-based Benders decomposition
Riedler, Martin, Raidl, Günther
Type: Article; In: Computers and Operations Research; Vol: 96; Pages: 30-54
Show Abstract
Today's society is facing an ever-growing demand for mobility. To a high degree these needs can be fulfilled by individual and public transport. People that do not have access to the former and cannot use the latter require additional means of transportation. This is where dial-a-ride services come into play. The dial-a-ride problem considers transportation requests of people from pick-up to drop-off locations. Users specify time windows with respect to these points. Requests are served by a given vehicle fleet with limited capacity and tour duration per vehicle. Moreover, user inconvenience considerations are taken into account by limiting the travel time between origin and destination for each request. Previous research on the dial-a-ride problem primarily focused on serving a given set of requests with a fixed-size vehicle fleet at minimal traveling costs. It is assumed that the request set is sufficiently small to be served by the available vehicles. We consider a different scenario in which a maximal number of requests shall be served under the given constraints, i.e., it is no longer guaranteed that all requests can be accepted. For this new problem variant we propose a compact mixed integer linear programming model as well as algorithms based on Benders decomposition. In particular, we employ logic-based Benders decomposition and branch-and-check using mixed integer linear programming and constraint programming algorithms. We consider different variants on how to generate Benders cuts as well as heuristic boosting techniques and different types of valid inequalities. Computational experiments illustrate the effectiveness of the suggested algorithms.

Link to Repositum

Particle Therapy Patient Scheduling: Time Estimation for Scheduling Sets of Treatments
Maschler, Johannes, Riedler, Martin, Raidl, Günther R.
Type: Inproceedings; In: Computer Aided Systems Theory – EUROCAST 2017; Pages: 364-372
Show Abstract
In the particle therapy patient scheduling problem (PTPSP) cancer therapies consisting of sequences of treatments have to be planned within a planning horizon of several months. In our previous works we approached PTPSP by decomposing it into a day assignment part and a sequencing part. The decomposition makes the problem more manageable; however, both levels are interdependent to a large degree. The aim of this work is to provide a surrogate objective function that quickly predicts the behavior of the sequencing part with reasonable precision, allowing an improved day assignment w.r.t. the original problem.

Link to Repositum

Solving a Weighted Set Covering Problem for Improving Algorithms for Cutting Stock Problems with Setup Costs by Solution Merging
Klocker, Benedikt, Raidl, Günther
Type: Inproceedings; In: Computer Aided Systems Theory – EUROCAST 2017; Pages: 355-363
Show Abstract
Many practical applications of the cutting stock problem (CSP) have additional costs for setting up machine configurations. In this paper we describe a post-processing method which can improve solutions in general, but works especially well if additional setup costs are considered. We formalize a general cutting stock problem and a solution merging problem which can be used as a post-processing step. To solve the solution merging problem we propose an integer linear programming (ILP) model, a greedy approach, a PILOT method and a beam search. We apply the approaches to different real-world problems and compare their results. They show that in up to 50% of the instances the post-processing could improve the previous best solution.

Link to Repositum

Planar Drawings of Fixed-Mobile Bigraphs
Bekos, Michael A., De Luca, Felice, Didimo, Walter, Mchedlidze, Tamara, Nöllenburg, Martin, Symvonis, Antonios, Tollis, Ioannis G.
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2018; Vol: 11282; Pages: 426-439
Show Abstract
A fixed-mobile bigraph G is a bipartite graph such that the vertices of one partition set are given with fixed positions in the plane and the mobile vertices of the other part, together with the edges, must be added to the drawing. We assume that G is planar and study the problem of finding, for a given k ≥ 0, a planar poly-line drawing of G with at most k bends per edge. In the most general case, we show NP-hardness. For k = 0 and under additional constraints on the positions of the fixed or mobile vertices, we either prove that the problem is polynomial-time solvable or prove that it belongs to NP. Finally, we present a polynomial-time testing algorithm for a certain type of "layered" 1-bend drawings.

Link to Repositum

Experimental Evaluation of Book Drawing Algorithms
Klawitter, Jonathan, Mchedlidze, Tamara, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2018; Vol: 11282; Pages: 224-238
Show Abstract
A k-page book drawing of a graph G = (V,E) consists of a linear ordering of its vertices along a spine and an assignment of each edge to one of the k pages, which are half-planes bounded by the spine. In a book drawing, two edges cross if and only if they are assigned to the same page and their vertices alternate along the spine. Crossing minimization in a k-page book drawing is NP-hard, yet book drawings have multiple applications in visualization and beyond. Therefore several heuristic book drawing algorithms exist, but there is no broader comparative study on their relative performance. In this paper, we propose a comprehensive benchmark set of challenging graph classes for book drawing algorithms and provide an extensive experimental study of the performance of existing book drawing algorithms.
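The crossing criterion stated in the abstract (two edges cross iff they lie on the same page and their endpoints alternate along the spine) translates directly into a naive quadratic crossing counter. This is an illustrative sketch of the definition, not any of the heuristics evaluated in the paper; the input encoding is an assumption of this example.

```python
def book_crossings(order, pages):
    """Count crossings in a k-page book drawing.
    order: list of vertices along the spine.
    pages: dict mapping each edge (u, v) to its page index.
    Two edges cross iff they share a page and their endpoints
    alternate along the spine."""
    pos = {v: i for i, v in enumerate(order)}
    edges = [(min(pos[u], pos[v]), max(pos[u], pos[v]), p)
             for (u, v), p in pages.items()]
    crossings = 0
    for i, (a1, b1, p1) in enumerate(edges):
        for a2, b2, p2 in edges[i + 1:]:
            # same page and alternating endpoints along the spine
            if p1 == p2 and (a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1):
                crossings += 1
    return crossings
```

Moving one of two alternating edges to a different page removes their crossing, which is exactly the degree of freedom page-assignment heuristics exploit.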

Link to Repositum

Portfolio-Based Algorithm Selection for Circuit QBFs
Hoos, Holger H., Peitl, Tomáš, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Principles and Practice of Constraint Programming 24th International Conference, CP 2018, Lille, France, August 27-31, 2018, Proceedings; Pages: 195-209
Show Abstract
Quantified Boolean Formulas (QBFs) are a generalization of propositional formulae that admits succinct encodings of verification and synthesis problems. Given that modern QBF solvers are based on different architectures with complementary performance characteristics, a portfolio-based approach to QBF solving is particularly promising. While general QBFs can be converted to prenex conjunctive normal form (PCNF) with small overhead, this transformation has been known to adversely affect performance. This issue has prompted the development of several solvers for circuit QBFs in recent years. We define a natural set of features of circuit QBFs and show that they can be used to construct portfolio-based algorithm selectors of state-of-the-art circuit QBF solvers that are close to the virtual best solver. We further demonstrate that most of this performance can be achieved using surprisingly small subsets of cheaply computable and intuitive features.

Link to Repositum

Planar L-Drawings of Directed Graphs
Chaplick, Steven, Chimani, Markus, Cornelsen, Sabine, Da Lozzo, Giordano, Nöllenburg, Martin, Patrignani, Maurizio, Tollis, Ioannis G., Wolff, Alexander
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2018; Vol: 11282; Pages: 465-478
Show Abstract
We study planar drawings of directed graphs in the L-drawing standard. We provide necessary conditions for the existence of these drawings and show that testing for the existence of a planar L-drawing is an NP-complete problem. Motivated by this result, we focus on upward-planar L-drawings. We show that directed st-graphs admitting an upward- (resp. upward-rightward-) planar L-drawing are exactly those admitting a bitonic (resp. monotonically increasing) st-ordering. We give a linear-time algorithm that computes a bitonic (resp. monotonically increasing) st-ordering of a planar st-graph or reports that there exists none.

Link to Repositum

GRASP-VNS for a Periodic VRP with Time Windows to Deal with Milk Collection
Expósito, Airam, Raidl, Günther, Brito, Julio, Moreno-Perez, Jose
Type: Inproceedings; In: Computer Aided Systems Theory; Pages: 299-306
Show Abstract
This paper considers the planning of the collection of fresh milk from local farms with a fleet of refrigerated vehicles. The problem is formulated as a version of the Periodic Vehicle Routing Problem with Time Windows. The objective function is oriented to the quality of service by minimizing the service times to the customers within their time windows. We developed a hybrid metaheuristic that combines GRASP and VNS to find solutions. In order to help the hybrid GRASP-VNS find high-quality and feasible solutions, we consider infeasible solutions during the search using different penalty functions.

Link to Repositum

An A* Algorithm for Solving a Prize-Collecting Sequencing Problem with One Common and Multiple Secondary Resources and Time Windows
Horn, Matthias, Raidl, Günther, Rönnberg, Elina
Type: Inproceedings; In: Annals of Operations Research; Pages: 235-256
Show Abstract
In the considered sequencing problem, a subset of a given set of jobs is to be scheduled. A scheduled job has to execute without preemption and during this time, the job needs both a common resource for a part of the execution as well as a secondary resource for the whole execution time. The common resource is shared by all jobs while a secondary resource is shared only by a subset of the jobs. Each job has one or more time windows and due to these, it may not be possible to schedule all jobs. Instead, each job is associated with a prize and the task is to select a subset of jobs which yields a feasible schedule with a maximum total sum of prizes. First, we argue that the problem is NP-hard. Then, we present an exact A* algorithm and derive different upper bounds for the total prize; these bounds are based on constraint and Lagrangian relaxations of a linear programming relaxation of a multidimensional knapsack problem. For comparison, a compact mixed integer programming (MIP) model and a constraint programming model are also presented. An extensive experimental evaluation on two types of problem instances shows that the A* algorithm outperforms the other approaches and is able to solve small to medium size instances with up to about 50 jobs to proven optimality. In cases where A* does not prove that an optimal solution is found, the obtained upper bounds are stronger than those of the MIP model.

Link to Repositum

Multivalued Decision Diagrams for a Prize-Collecting Sequencing Problem
Maschler, Johannes, Raidl, Günther
Type: Inproceedings; In: PATAT 2018: Proceedings of the 12th International Conference of the Practice and Theory of Automated Timetabling; Pages: 375-397
Show Abstract
Recent years have shown that multivalued decision diagrams (MDD) are a powerful tool for approaching combinatorial optimization problems (COPs). Relatively compact relaxed and restricted MDDs are employed to obtain dual bounds and heuristic solutions and provide opportunities for new branching schemes. We consider a prize-collecting sequencing problem in which a subset of given jobs has to be found that is schedulable and yields maximum total prize. The primary aim of this work is to study different methods for creating relaxed MDDs for this problem. To this end, we adopt and extend the two main MDD compilation approaches found in the literature: top down construction and incremental refinement. In a series of computational experiments these methods are compared. The results indicate that for our problem the incremental refinement method produces MDDs with stronger bounds. Moreover, heuristic solutions are derived by compiling restricted MDDs and by applying a general variable neighborhood search (GVNS). Here we observe that the top down construction of restricted MDDs is able to yield better solutions than the GVNS on small to medium-sized instances.

Link to Repositum

Short Plane Supports for Spatial Hypergraphs
Castermans, Thom, van Garderen, Mereke, Meulemans, Wouter, Nöllenburg, Martin, Yuan, Xiaoru
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2018; Vol: 11282; Pages: 53-66
Show Abstract
A graph G = (V, E) is a support of a hypergraph H = (V, S) if every hyperedge induces a connected subgraph in G. Supports are used for certain types of hypergraph visualizations. In this paper we consider visualizing spatial hypergraphs, where each vertex has a fixed location in the plane. This is the case, e.g., when modeling set systems of geospatial locations as hypergraphs. By applying established aesthetic quality criteria we are interested in finding supports that yield plane straight-line drawings with minimum total edge length on the input point set V. We first show, from a theoretical point of view, that the problem is NP-hard already under rather mild conditions, and we give negative approximability results. Therefore, the main focus of the paper lies on practical heuristic algorithms as well as an exact, ILP-based approach for computing short plane supports. We report results from computational experiments that investigate the effect of requiring planarity and acyclicity on the resulting support length. Further, we evaluate the performance and trade-offs between solution quality and speed of several heuristics relative to each other and compared to optimal solutions.
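The support condition defined in the abstract (every hyperedge induces a connected subgraph of G) can be tested with one graph traversal per hyperedge. A minimal sketch, with the edge-list/hyperedge-set encoding chosen for illustration:

```python
def is_support(graph_edges, hyperedges):
    """Check whether graph G (given as an edge list) is a support of
    hypergraph H: every hyperedge must induce a connected subgraph in G."""
    adj = {}
    for u, v in graph_edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for h in hyperedges:
        h = set(h)
        if len(h) <= 1:
            continue  # singletons are trivially connected
        start = next(iter(h))
        seen, stack = {start}, [start]
        while stack:  # traverse only within the induced subgraph on h
            v = stack.pop()
            for w in adj.get(v, ()):
                if w in h and w not in seen:
                    seen.add(w); stack.append(w)
        if seen != h:
            return False
    return True
```

On the path 0-1-2, the hyperedge {0, 2} induces two isolated vertices, so the path is not a support for it; adding 1 to the hyperedge restores connectivity.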

Link to Repositum

Orthogonal and Smooth Orthogonal Layouts of 1-Planar Graphs with Low Edge Complexity
Argyriou, Evmorfia, Cornelsen, Sabine, Förster, Henry, Kaufmann, Michael, Nöllenburg, Martin, Okamoto, Yoshio, Raftopoulou, Chrysanthi, Wolff, Alexander
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2018; Vol: 11282; Pages: 509-523
Show Abstract
While orthogonal drawings have a long history, smooth orthogonal drawings have been introduced only recently. So far, only planar drawings or drawings with an arbitrary number of crossings per edge have been studied. Recently, a lot of research effort in graph drawing has been directed towards the study of beyond-planar graphs such as 1-planar graphs, which admit a drawing where each edge is crossed at most once. In this paper, we consider graphs with a fixed embedding. For 1-planar graphs, we present algorithms that yield orthogonal drawings with optimal curve complexity and smooth orthogonal drawings with small curve complexity. For the subclass of outer-1-planar graphs, which can be drawn such that all vertices lie on the outer face, we achieve optimal curve complexity for both orthogonal and smooth orthogonal drawings.

Link to Repositum

Parameterized Algorithms for the Matrix Completion Problem
Ganian, Robert, Kanj, Iyad, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Proceeding of ICML; Pages: 1642-1651
Show Abstract
We consider two matrix completion problems, in which we are given a matrix with missing entries and the task is to complete the matrix in a way that (1) minimizes the rank, or (2) minimizes the number of distinct rows. We study the parameterized complexity of the two aforementioned problems with respect to several parameters of interest, including the minimum number of matrix rows, columns, and rows plus columns needed to cover all missing entries. We obtain new algorithmic results showing that, for the bounded domain case, both problems are fixed-parameter tractable with respect to all aforementioned parameters. We complement these results with a lower-bound result for the unbounded domain case that rules out fixed-parameter tractability w.r.t. some of the parameters under consideration.

Link to Repositum

Polynomial-Time Validation of QCDCL Certificates
Peitl, Tomas, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2018; Pages: 253-269
Show Abstract
Quantified Boolean Formulas (QBFs) offer compact encodings of problems arising in areas such as verification and synthesis. These applications require that QBF solvers not only decide whether an input formula is true or false but also output a witnessing certificate, i.e. a representation of the winning strategy. State-of-the-art QBF solvers based on Quantified Conflict-Driven Constraint Learning (QCDCL) can emit Q-resolution proofs, from which in turn such certificates can be extracted. The correctness of a certificate generated in this way is validated by substituting it into the matrix of the input QBF and using a SAT solver to check that the resulting propositional formula (the validation formula) is unsatisfiable. This final check is often the most time-consuming part of the entire certification workflow. We propose a new validation method that does not require a SAT call and provably runs in polynomial time. It uses the Q-resolution proof from which the given certificate was extracted to directly generate a (propositional) proof of the validation formula in the RUP format, which can be verified by a proof checker such as DRAT-trim. Experiments with a prototype implementation show a robust, albeit modest, increase in the number of successfully validated certificates compared to validation with a SAT solver.

Link to Repositum

Maximizing Ink in Symmetric Partial Edge Drawings of k-plane Graphs
Höller, Michael, Klute, Fabian, Nickel, Soeren, Nöllenburg, Martin, Schreiber, Birgit
Type: Presentation
Show Abstract
Partial edge drawing (PED) is a drawing style for non-planar graphs, in which edges are drawn only partially as pairs of opposing stubs on the respective end-vertices. In a PED, by erasing the central parts of edges, all edge crossings and the resulting visual clutter are hidden in the undrawn parts of the edges. We study symmetric partial edge drawings (SPEDs), in which the two stubs of each edge are required to have the same length. It is known that maximizing the ink (or the total stub length) when transforming a straight-line drawing with crossings into a SPED is tractable for 2-plane input drawings, but generally NP-hard. We show that the problem remains NP-hard even for 3-plane input drawings. Yet, for k-plane input drawings whose edge intersection graph forms a collection of trees or cacti we present efficient algorithms for ink maximization.

Link to Repositum

Minimizing Crossings in Constrained Two-Sided Circular Graph Layouts
Klute, Fabian, Nöllenburg, Martin
Type: Inproceedings; In: 34th International Symposium on Computational Geometry; Pages: 53:1-53:14
Show Abstract
Circular layouts are a popular graph drawing style, where vertices are placed on a circle and edges are drawn as straight chords. Crossing minimization in circular layouts is NP-hard. One way to allow for fewer crossings in practice is two-sided layouts that draw some edges as curves in the exterior of the circle. In fact, one- and two-sided circular layouts are equivalent to one-page and two-page book drawings, i.e., graph layouts with all vertices placed on a line (the spine) and edges drawn in one or two distinct half-planes (the pages) bounded by the spine. In this paper we study the problem of minimizing the crossings for a fixed cyclic vertex order by computing an optimal k-plane set of exteriorly drawn edges for k ≥ 1, extending the previously studied case k = 0. We show that this relates to finding bounded-degree maximum-weight induced subgraphs of circle graphs, which is a graph-theoretic problem of independent interest. We show NP-hardness for arbitrary k, present an efficient algorithm for k = 1, and generalize it to an explicit XP-time algorithm for any fixed k. For the practically interesting case k = 1 we implemented our algorithm and present experimental results that confirm the applicability of our algorithm.
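For a fixed cyclic vertex order, counting the crossings of a one-sided layout reduces to a simple chord-intersection test: two chords cross exactly when one chord separates the endpoints of the other along the circle. A naive O(m²) sketch (our own illustration; the function names are not from the paper):

```python
from itertools import combinations

def crossings(order, edges):
    """Count pairwise crossings of straight chords drawn inside a circle
    for the given (fixed) cyclic vertex order."""
    pos = {v: i for i, v in enumerate(order)}

    def crosses(e, f):
        if set(e) & set(f):
            return False  # chords sharing an endpoint never cross
        a, b = sorted((pos[e[0]], pos[e[1]]))
        c, d = sorted((pos[f[0]], pos[f[1]]))
        # They cross iff exactly one endpoint of f lies strictly between a and b.
        return (a < c < b) != (a < d < b)

    return sum(crosses(e, f) for e, f in combinations(edges, 2))

# K4 on a circle: only the two diagonals (0,2) and (1,3) cross.
print(crossings([0, 1, 2, 3],
                [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2), (1, 3)]))  # 1
```

Deciding which edges to move to the exterior page so as to minimize such crossings is the hard part studied in the paper.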

Link to Repositum

A Visual Comparison of Hand-Drawn and Machine-Generated Human Metabolic Pathways
Wu, Hsiang-Yun, Viola, Ivan, Nöllenburg, Martin
Type: Inproceedings; In: Proceedings of EuroVis 2018
Show Abstract
This poster abstract presents a visual comparison between three hand-drawn and one machine-generated human metabolic pathway diagrams. The human metabolic pathways, which describe significant biochemical reactions in the human body, have been increasingly investigated due to advances in analysis processes, and are compiled into pathway diagrams to provide an overview of the reactions in the human body. This complex network includes about 5,000 metabolites and 7,500 reactions, which are hierarchically nested and difficult to visualize. We collect and analyze well-known human metabolic pathway diagrams and summarize the design choices of each. Comparing the three hand-drawn diagrams with a machine-generated one lets us assess their visual complexity.

Link to Repositum

The Travel of a Metabolite
Wu, Hsiang-Yun, Nöllenburg, Martin, Viola, Ivan
Type: Inproceedings; In: Proceedings of PacificVis 2018 Data Story Telling Contest
Show Abstract
Biological pathways are chains of molecule interactions and reactions in biological systems that jointly form complex, hierarchical networks. Although several pathway layout algorithms have been investigated, biologists still prefer hand-drawn diagrams because of their high visual quality, which relies on domain knowledge. In this project, we propose a visualization for computing metabolic pathway maps that restricts the grouping structure defined by biologists to rectangles and applies orthogonal-style edge routing to simplify edge orientation. This idea is inspired by concepts from urban planning, where we consider reactions as city blocks and build roads to connect identical metabolites occurring in multiple categories. We present a story of how glucose is broken down to phosphoenolpyruvate to release energy, which is often stored as adenosine triphosphate (ATP) in the human body. Finally, we demonstrate that ATP is also utilized to synthesize urea to eliminate toxic ammonia in our body.

Link to Repositum

Lombardi Drawings of Knots and Links
Kindermann, Philipp, Kobourov, Stephen, Löffler, Maarten, Nöllenburg, Martin, Schulz, André, Vogtenhuber, Birgit
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2018; Vol: 11282; Pages: 113-126
Show Abstract
Knot and link diagrams are projections of one or more 3-dimensional simple closed curves into R^2, such that no more than two points project to the same point in R^2. These diagrams are drawings of 4-regular plane multigraphs. Knots are typically smooth curves in R^3, so their projections should be smooth curves in R^2 with good continuity and large crossing angles: exactly the properties of Lombardi graph drawings (defined by circular-arc edges and perfect angular resolution). We show that several knots do not allow plane Lombardi drawings. On the other hand, we identify a large class of 4-regular plane multigraphs that do have Lombardi drawings. We then study two relaxations of Lombardi drawings and show that every knot admits a plane 2-Lombardi drawing (where edges are composed of two circular arcs). Further, every knot is near-Lombardi, that is, it can be drawn as a Lombardi drawing when relaxing the angular resolution requirement by an arbitrarily small angular offset ε, while maintaining a 180° angle between opposite edges.

Link to Repositum

Planar and poly-arc Lombardi drawings
Duncan, Christian A., Eppstein, David, Goodrich, Michael T., Kobourov, Stephen G., Löffler, Maarten, Nöllenburg, Martin
Type: Article; In: Journal of Computational Geometry (JOCG); Vol: 9; Issue: 1; Pages: 328-355
Show Abstract
In Lombardi drawings of graphs, edges are represented as circular arcs and the edges incident on vertices have perfect angular resolution. It is known that not every planar graph has a planar Lombardi drawing. We give an example of a planar 3-tree that has no planar Lombardi drawing and we show that all outerpaths do have a planar Lombardi drawing. Further, we show that there are graphs that do not even have any Lombardi drawing at all. With this in mind, we generalize the notion of Lombardi drawings to that of (smooth) k-Lombardi drawings, in which each edge may be drawn as a (differentiable) sequence of k circular arcs; we show that every graph has a smooth 2-Lombardi drawing and every planar graph has a smooth planar 3-Lombardi drawing. We further investigate related topics connecting planarity and Lombardi drawings.

Link to Repositum

Towards Characterizing Strict Outerconfluent Graphs
Klute, Fabian, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2017; Vol: 10692; Pages: 612-614
Show Abstract
Confluent drawings of graphs are geometric representations in the plane, in which vertices are mapped to points, but edges are not drawn as individually distinguishable geometric objects. Instead, an edge is represented by the presence of a smooth curve between two vertices in a system of arcs and junctions. More formally, a confluent drawing D of a graph G = (V, E) consists of a set of points representing the vertices, a set of junction points, and a set of smooth arcs, such that each arc starts and ends at a vertex point or a junction, no two arcs intersect (except at common endpoints), and all arcs meeting in a junction share the same tangent line in the junction point. There is an edge (u, v) ∈ E if and only if there is a smooth path from u to v in D that does not pass through any other vertex. Confluent drawings were introduced by Dickerson et al. [1], who identified classes of graphs that admit or that do not admit confluent drawings. Later, variations such as strong and tree confluency [6], as well as ∆-confluency [2] were introduced. Confluent drawings have further been used for layered drawings [3] and for drawing Hasse diagrams [5]. The complexity of the recognition problem for graphs that admit a confluent drawing remains open. Eppstein et al. [4] defined strict confluent drawings, in which every edge of the graph must be represented by a unique smooth path. They showed that for general graphs it is NP-complete to decide whether a strict confluent drawing exists. A strict confluent drawing is called strict outerconfluent if all vertices lie on the boundary of a (topological) disk that contains the strict confluent drawing. For a given cyclic vertex order, Eppstein et al. [4] presented a constructive poly-time algorithm for testing the existence of a strict outerconfluent drawing. Without a given vertex order the recognition complexity as well as a characterization of the graphs admitting such drawings remained open. 
We present first results towards characterizing the strict outerconfluent (SOC) graphs by examining potential sub- and super-classes of SOC graphs.

Link to Repositum

Minimizing Wiggles in Storyline Visualizations
Fröschl, Theresa, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2017; Vol: 10692; Pages: 585-587
Show Abstract
A storyline visualization is a two-dimensional drawing of a special kind of time-varying hypergraph H(t), where the x-axis represents time and the vertices (also called characters) are x-monotone curves. At each point in time t, the vertices form a permutation such that groups of adjacent characters in H(t) occupy consecutive vertical positions to indicate a meeting at time t, see Fig.1. Each character can only be part of at most one meeting at each point in time. This kind of visualization has been introduced for illustrating movie narratives [8], but is also more generally used in information visualization [6, 11]. Several aesthetic optimization criteria have been proposed [6, 11], including minimization of crossings, line wiggles, and white-space gaps. While crossing minimization has been studied from an algorithmic point of view in recent years [4, 5, 7], minimizing line wiggles, as another important quality criterion, which is similar to bend minimization in node-link diagrams [9, 10], has not been investigated on its own. We note that the problem of minimizing corners or moves in permutation diagrams [2, 3] is related to wiggle minimization, yet does not include the temporal aspects of storylines with meetings over time and their induced character ordering constraints. We present the first integer linear programming (ILP) model for exact wiggle minimization in storyline visualizations without an initial permutation. We can include crossing minimization into a weighted multicriteria ILP model and show examples of a first case study.
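The objective being minimized can be made concrete with a small counting sketch (our own illustration; definitions of "wiggle" vary in the literature, here we simply count vertical position changes between consecutive time steps, and the function name and dict-based layout representation are assumptions):

```python
def count_wiggles(layout):
    """Count wiggles in a storyline layout, taken here as the number of
    times any character changes its vertical position between two
    consecutive time steps. `layout` maps each character to its list of
    y-positions over time (one entry per time step)."""
    return sum(
        sum(1 for a, b in zip(ys, ys[1:]) if a != b)
        for ys in layout.values()
    )

# Characters "a" and "b" swap rows once, so each contributes one wiggle.
print(count_wiggles({"a": [0, 1, 1], "b": [1, 0, 0]}))  # 2
```

An exact ILP model, as in the paper, would minimize this quantity subject to the meeting and ordering constraints induced by H(t).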

Link to Repositum

2017
Get Satisfaction: Das Erfüllbarkeitsproblem in Theorie und Praxis
Szeider, Stefan
Type: Presentation

Link to Repositum

Capturing Structure in Instances of the Propositional Satisfiability Problem
Szeider, Stefan
Type: Presentation

Link to Repositum

An Iterative Time-Bucket Refinement Algorithm for a Resource-Constrained Project Scheduling Problem
Raidl, Günther
Type: Presentation
Show Abstract
We consider a resource-constrained project scheduling problem originating in particle therapy for cancer treatment, in which the scheduling has to be done in high resolution. Traditional mixed integer linear programming techniques such as time-indexed formulations or discrete-event formulations are known to have severe limitations in such cases, i.e., growing too fast or having weak linear programming relaxations. We suggest a relaxation based on partitioning time into so-called time-buckets. This relaxation is iteratively solved and serves as basis for deriving feasible solutions using heuristics. Based on these primal and dual bounds the time-buckets are successively refined. Combining these parts we obtain an algorithm that provides good approximate solutions soon and eventually converges to an optimal solution. Diverse strategies for doing the time-bucket refinement are investigated. The approach shows excellent performance in comparison to the traditional formulations and a GRASP metaheuristic.

Link to Repositum

Mixed Integer Programming Approaches for Resource-Constrained Project Scheduling
Raidl, Günther
Type: Presentation

Link to Repositum

The Constraint Satisfaction Problem: Complexity and Approximability
Szeider, Stefan, Ordyniak, Sebastian, Gaspers, Serge
Type: Book Contribution; In: The Constraint Satisfaction Problem: Complexity and Approximability; Pages: 137-157
Show Abstract
A backdoor set of a CSP instance is a set of variables whose instantiation moves the instance into a fixed class of tractable instances (an island of tractability). An interesting algorithmic task is to find a small backdoor set efficiently: once it is found we can solve the instance by solving a number of tractable instances. Parameterized complexity provides an adequate framework for studying and solving this algorithmic task, where the size of the backdoor set provides a natural parameter. In this survey we present some recent parameterized complexity results on CSP backdoor sets, focusing on backdoor sets into islands of tractability that are defined in terms of constraint languages.

Link to Repositum

Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd
Gadiraju, Ujwal, Möller, Sebastian, Nöllenburg, Martin, Saupe, Dietmar, Egger-Lampl, Sebastian, Archambault, Daniel, Fisher, Brian
Type: Book Contribution; In: Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments; Pages: 6-26
Show Abstract
Crowdsourcing solutions are increasingly being adopted across a variety of domains. An important consequence of the flourishing crowdsourcing markets is that experiments which were traditionally carried out in laboratories on a much smaller scale can now tap into the immense potential of online labor. Researchers in different fields have shown considerable interest in carrying out previously lab-bound experiments in the crowd. In this chapter, we reflect on the key factors to consider while transitioning from controlled laboratory experiments to large-scale experiments in the crowd.

Link to Repositum

Solving Problems on Graphs of High Rank-Width
Eiben, Eduard, Ganian, Robert, Szeider, Stefan
Type: Article; In: Algorithmica; Vol: 80; Issue: 2; Pages: 742-771
Show Abstract
A modulator in a graph is a vertex set whose deletion places the considered graph into some specified graph class. The cardinality of a modulator to various graph classes has long been used as a structural parameter which can be exploited to obtain fixed-parameter algorithms for a range of hard problems. Here we investigate what happens when a graph contains a modulator which is large but "well-structured" (in the sense of having bounded rank-width). Can such modulators still be exploited to obtain efficient algorithms? And is it even possible to find such modulators efficiently? We first show that the parameters derived from such well-structured modulators are more powerful for fixed-parameter algorithms than the cardinality of modulators and rank-width itself. Then, we develop a fixed-parameter algorithm for finding such well-structured modulators to every graph class which can be characterized by a finite set of forbidden induced subgraphs. We proceed by showing how well-structured modulators can be used to obtain efficient parameterized algorithms for Minimum Vertex Cover and Maximum Clique. Finally, we use the concept of well-structured modulators to develop an algorithmic meta-theorem for deciding problems expressible in monadic second order logic, and prove that this result is tight in the sense that it cannot be generalized to LinEMSO problems.

Link to Repositum

Progress on Partial Edge Drawings
Bruckdorfer, Till, Cornelsen, Sabine, Gutwenger, Carsten, Kaufmann, Michael, Montecchiani, Fabrizio, Nöllenburg, Martin, Wolff, Alexander
Type: Article; In: Journal of Graph Algorithms and Applications; Vol: 21; Issue: 4; Pages: 757-786
Show Abstract
Recently, a new way of avoiding crossings in straight-line drawings of non-planar graphs has been introduced. The idea of partial edge drawings (PED) is to drop the middle part of edges and rely on the remaining edge parts called stubs. We focus on symmetric partial edge drawings (SPEDs) that require the two stubs of an edge to be of equal length. In this way, the stub at the other endpoint of an edge assures the viewer of the edge's existence. We also consider an additional homogeneity constraint that forces the stub lengths to be a given fraction δ of the edge lengths (δ-SHPED). Given length and direction of a stub, this model helps to infer the position of the opposite stub. We show that, for a fixed stub-edge length ratio δ, not all graphs have a δ-SHPED. Specifically, we show that K_165 does not have a 1/4-SHPED, while bandwidth-k graphs always have a Θ(1/√k)-SHPED. We also give bounds for complete bipartite graphs. Further, we consider the problem MAXSPED where the task is to compute the SPED of maximum total stub length that a given straight-line drawing contains. We present an efficient solution for 2-planar drawings and a 2-approximation algorithm for the dual problem of minimizing the total amount of erased ink.
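The δ-SHPED drawing rule itself is simple geometry: each edge keeps a δ fraction of its length at both endpoints and the middle 1 − 2δ part is erased. A minimal sketch (our own illustration; the function name and tuple-based point representation are assumptions):

```python
def sped_stubs(p, q, delta=0.25):
    """Return the two stub segments of edge (p, q) in a delta-SHPED:
    each stub keeps a delta fraction of the edge length at its endpoint,
    and the middle 1 - 2*delta part of the edge is erased."""
    (x1, y1), (x2, y2) = p, q
    dx, dy = x2 - x1, y2 - y1
    stub_p = (p, (x1 + delta * dx, y1 + delta * dy))
    stub_q = ((x2 - delta * dx, y2 - delta * dy), q)
    return stub_p, stub_q

# For delta = 1/4, the middle half of a horizontal edge is erased.
print(sped_stubs((0.0, 0.0), (4.0, 0.0)))
# (((0.0, 0.0), (1.0, 0.0)), ((3.0, 0.0), (4.0, 0.0)))
```

The hard problem studied in the paper (MAXSPED) is choosing per-edge stub lengths, not computing the stubs themselves.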

Link to Repositum

Euclidean Greedy Drawings of Trees
Nöllenburg, Martin, Prutkin, Roman
Type: Article; In: Discrete and Computational Geometry; Vol: 58; Issue: 3; Pages: 543-579
Show Abstract
Greedy embedding (or drawing) is a simple and efficient strategy to route messages in wireless sensor networks. For each source-destination pair of nodes s, t in a greedy embedding there is always a neighbor u of s that is closer to t according to some distance metric. The existence of greedy embeddings in the Euclidean plane R^2 is known for certain graph classes such as 3-connected planar graphs. We completely characterize the trees that admit a greedy embedding in R^2. This answers a question by Angelini et al. (Networks 59(3):267-274, 2012) and is a further step in characterizing the graphs that admit Euclidean greedy embeddings.
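The defining property of a greedy embedding can be verified directly from the definition: for every ordered pair (s, t), some neighbor of s must be strictly closer to t. A minimal check (our own illustration; the function name and the dict-based position/adjacency representation are assumptions, not from the paper):

```python
from math import dist  # Euclidean distance, Python 3.8+

def is_greedy_embedding(pos, adj):
    """Check the Euclidean greedy property: for every ordered pair of
    distinct nodes (s, t), some neighbor u of s is strictly closer to t
    than s is."""
    return all(
        any(dist(pos[u], pos[t]) < dist(pos[s], pos[t]) for u in adj[s])
        for s in pos for t in pos if s != t
    )

# A path a-b-c drawn on a straight line is greedily routable.
pos = {"a": (0, 0), "b": (1, 0), "c": (2, 0)}
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(is_greedy_embedding(pos, adj))  # True
```

The paper's contribution is characterizing which trees admit positions `pos` making this check succeed, which is far harder than the check itself.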

Link to Repositum

Partitioning Graph Drawings and Triangulated Simple Polygons into Greedily Routable Regions
Nöllenburg, Martin, Prutkin, Roman, Rutter, Ignaz
Type: Article; In: International Journal of Computational Geometry and Applications; Vol: 27; Issue: 01n02; Pages: 121-158
Show Abstract
A greedily routable region (GRR) is a closed subset of R^2, in which any destination point can be reached from any starting point by always moving in the direction with maximum reduction of the distance to the destination in each point of the path. Recently, Tan and Kermarrec proposed a geographic routing protocol for dense wireless sensor networks based on decomposing the network area into a small number of interior-disjoint GRRs. They showed that minimum decomposition is NP-hard for polygonal regions with holes. We consider minimum GRR decomposition for plane straight-line drawings of graphs. Here, GRRs coincide with self-approaching drawings of trees, a drawing style which has become a popular research topic in graph drawing. We show that minimum decomposition is still NP-hard for graphs with cycles and even for trees, but can be solved optimally for trees in polynomial time, if we allow only certain types of GRR contacts. Additionally, we give a 2-approximation for simple polygons, if a given triangulation has to be respected.

Link to Repositum

A Scalable Approach for the K-Staged Two-Dimensional Cutting Stock Problem
Dusberger, Frederico, Raidl, Günther
Type: Inproceedings; In: Selected Papers of the International Conference of the German, Austrian and Swiss Operations Research Societies; Pages: 385-391
Show Abstract
This work focuses on the K-staged two-dimensional cutting stock problem with variable sheet size. High-quality solutions are computed by an efficient beam-search algorithm that exploits the congruency of subpatterns and takes informed decisions on which of the available sheet types to use for the solutions. We extend this algorithm by embedding it in a sequential value-correction framework that runs the algorithm multiple times while adapting element type values in each iteration and thus constitutes a guided diversification process for computing a solution. Experiments demonstrate the effectiveness of the approach and that the sequential value-correction further increases the overall quality of the constructed solutions.

Link to Repositum

An Enhanced Iterated Greedy Metaheuristic for the Particle Therapy Patient Scheduling Problem
Maschler, Johannes, Hackl, Thomas, Riedler, Martin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 12th Metaheuristics International Conference (MIC 2017); Pages: 463-472
Show Abstract
The Particle Therapy Patient Scheduling Problem (PTPSP) arises in modern cancer treatment facilities that provide particle therapy and consists of scheduling a set of therapies within a planning horizon of several months. A particularity of PTPSP compared to classical radiotherapy scheduling is that therapies need not only be assigned to days but also scheduled within each day to account for the more complicated operational scenario. In an earlier work we introduced this novel problem setting and provided first algorithms including an Iterated Greedy (IG) metaheuristic. In this work we build upon this IG and exchange two main components: the construction phase and the local search algorithm. The resulting metaheuristic enhances the existing approach and yields substantially better results on most of the considered benchmark instances.

Link to Repositum

Combining Treewidth and Backdoors for CSP
Ganian, Robert, Ramanujan, M. Sridharan, Szeider, Stefan
Type: Inproceedings; In: 34th Symposium on Theoretical Aspects of Computer Science; Vol: 66; Pages: 429-445

Link to Repositum

SAT-Encodings for Special Treewidth and Pathwidth
Lodha, Neha, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2017; Pages: 429-445
Show Abstract
Decomposition width parameters such as treewidth provide a measurement on the complexity of a graph. Finding a decomposition of smallest width is itself NP-hard but lends itself to a SAT-based solution. Previous work on treewidth, branchwidth and clique-width indicates that identifying a suitable characterization of the considered decomposition method is key for a practically feasible SAT-encoding. In this paper we study SAT-encodings for the decomposition width parameters special treewidth and pathwidth. In both cases we develop SAT-encodings based on two different characterizations. In particular, we develop two novel characterizations for special treewidth based on partitions and elimination orderings. We empirically evaluate the obtained SAT-encodings.

Link to Repositum

Backdoor Treewidth for SAT
Ganian, Robert, Ramanujan, M. S., Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2017; Pages: 20-37
Show Abstract
A strong backdoor in a CNF formula is a set of variables such that each possible instantiation of these variables moves the formula into a tractable class. The algorithmic problem of finding a strong backdoor has been the subject of intensive study, mostly within the parameterized complexity framework. Results to date focused primarily on backdoors of small size. In this paper we propose a new approach for algorithmically exploiting strong backdoors for SAT: instead of focusing on small backdoors, we focus on backdoors with certain structural properties. In particular, we consider backdoors that have a certain tree-like structure, formally captured by the notion of backdoor treewidth. First, we provide a fixed-parameter algorithm for SAT parameterized by the backdoor treewidth w.r.t. the fundamental tractable classes Horn, Anti-Horn, and 2CNF. Second, we consider the more general setting where the backdoor decomposes the instance into components belonging to different tractable classes, albeit focusing on backdoors of treewidth 1 (i.e., acyclic backdoors). We give polynomial-time algorithms for SAT and #SAT for instances that admit such an acyclic backdoor.

Link to Repositum

New Width Parameters for Model Counting
Ganian, Robert, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2017; Pages: 38-52
Show Abstract
We study the parameterized complexity of the propositional model counting problem #SAT for CNF formulas. As the parameter we consider the treewidth of the following two graphs associated with CNF formulas: the consensus graph and the conflict graph. Both graphs have as vertices the clauses of the formula; in the consensus graph two clauses are adjacent if they do not contain a complementary pair of literals, while in the conflict graph two clauses are adjacent if they do contain a complementary pair of literals. We show that #SAT is fixed-parameter tractable for the treewidth of the consensus graph but W[1]-hard for the treewidth of the conflict graph. We also compare the new parameters with known parameters under which #SAT is fixed-parameter tractable.
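Constructing the two graphs from a formula is straightforward; a minimal sketch (our own illustration; the function name and the encoding of literals as signed integers are assumptions):

```python
def consensus_and_conflict(clauses):
    """Build the consensus and conflict graphs of a CNF formula.
    Clauses are sets of int literals (-v means "not v"). Vertices are
    clause indices; two clauses are in conflict if they contain a
    complementary pair of literals, and in consensus otherwise."""
    n = len(clauses)
    consensus, conflict = set(), set()
    for i in range(n):
        for j in range(i + 1, n):
            if any(-lit in clauses[j] for lit in clauses[i]):
                conflict.add((i, j))
            else:
                consensus.add((i, j))
    return consensus, conflict

# (x1 or x2), (not x1 or x3), (x2 or x3)
cons, conf = consensus_and_conflict([{1, 2}, {-1, 3}, {2, 3}])
print(sorted(conf))  # [(0, 1)]  (clauses 0 and 1 clash on x1)
print(sorted(cons))  # [(0, 2), (1, 2)]
```

Note that the two edge sets are complementary on the same vertex set, which is why the treewidths of the two graphs can behave so differently as parameters.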

Link to Repositum

Radial contour labeling with straight leaders
Nöllenburg, Martin, Niedermann, Benjamin, Rutter, Ignaz
Type: Inproceedings; In: 2017 IEEE Pacific Visualization Symposium (PacificVis)
Show Abstract
The usefulness of technical drawings as well as scientific illustrations such as medical drawings of human anatomy essentially depends on the placement of labels that describe all relevant parts of the figure. In order to not spoil or clutter the figure with text, the labels are often placed around the figure and are associated by thin connecting lines with their respective features. This labeling technique is known as external label placement. In this paper we introduce a flexible and general approach for external label placement assuming a contour of the figure prescribing the possible positions of the labels. While much research on external label placement aims for fast labeling procedures for interactive systems, we focus on highest-quality illustrations. Based on interviews with domain experts and a semi-automatic analysis of 202 handmade anatomical drawings, we identify a set of 18 layout quality criteria, naturally not all of equal importance. We design a new geometric label placement algorithm that is based only on the most important criteria. Yet, other criteria can flexibly be included in the algorithm, either as hard constraints not to be violated or as soft constraints whose violation is penalized by a general cost function. We formally prove that our approach yields labelings that satisfy all hard constraints and have minimum overall cost. Introducing several speedup techniques, we further demonstrate how to deploy our approach in practice. In an experimental evaluation on real-world anatomical drawings we show that the resulting labelings are of high quality and can be produced in adequate time.

Link to Repositum

Solving the Two-Stage Fixed-Charge Transportation Problem with a Hybrid Genetic Algorithm
Pop, Petrica Cosmin, Sabo, Cosmin, Biesinger, Benjamin, Hu, Bin, Raidl, Günther
Type: Article; In: Carpathian Journal of Mathematics; Vol: 33; Issue: 3; Pages: 365-371
Show Abstract
This article considers the two-stage fixed-charge transportation problem which models an important transportation application in a supply chain, from manufacturers to customers through distribution centers. For solving this optimization problem we describe a hybrid algorithm that combines a steady-state genetic algorithm with a local search procedure. The computational results for an often used collection of benchmark instances show that our proposed hybrid method delivers results that are competitive to those of other state-of-the-art algorithms for solving the two-stage fixed-charge transportation problem.

Link to Repositum

First order limits of sparse graphs: Plane trees and path-width
Gajarsky, Jakub, Hlinený, Petr, Kaiser, Tomas, Král, Daniel, Kupec, Martin, Obdrzalek, Jan, Ordyniak, Sebastian, Vojtech, Tuma
Type: Article; In: Random Structures and Algorithms; Vol: 50; Pages: 612-635
Show Abstract
Nešetřil and Ossona de Mendez introduced the notion of first order convergence as an attempt to unify the notions of convergence for sparse and dense graphs. It is known that there exist first order convergent sequences of graphs with no limit modeling (an analytic representation of the limit). On the positive side, every first order convergent sequence of trees or graphs with no long path (graphs with bounded tree-depth) has a limit modeling. We strengthen these results by showing that every first order convergent sequence of plane trees (trees with embeddings in the plane) and every first order convergent sequence of graphs with bounded path-width has a limit modeling.

Link to Repositum

Full-Load Route Planning for Balancing Bike Sharing Systems by Logic-Based Benders Decomposition
Kloimüllner, Christian, Raidl, Günther
Type: Article; In: Networks; Vol: 69; Issue: 3; Pages: 270-289
Show Abstract
Public bike sharing systems require some kind of rebalancing to prevent too many rental stations from running empty or filling up entirely, which would make the system ineffective and annoy customers. Most frequently, a fleet of vehicles with trailers is used for this purpose, moving bikes among the stations. Previous works considered different objectives and modeled the underlying routing problem in different ways, but they all allow an arbitrary number of bikes to be picked up at some stations and delivered to other stations, just limited by the vehicles' capacities. Observations in practice, however, indicate that in larger well-working bike sharing systems drivers almost never pick up or deliver only a few bikes, but essentially always approximately full vehicle loads. Many stations even require several visits with full loads. Due to budgetary reasons, typically only just enough drivers and vehicles are employed to achieve a reasonable balance most of the time, but basically never an ideal one where single bikes play a substantial role. Consequently, we investigate here a simplified problem model, in which only full vehicle loads are considered for movement among the rental stations. This restriction appears to have only a minor impact on the achieved quality of the rebalancing in practice but eases the modeling substantially. More specifically, we formulate the rebalancing problem as a selective unit-capacity pickup and delivery problem with time budgets on a bipartite graph and present a compact mixed integer linear programming model, a logic-based Benders decomposition and a variant thereof, namely branch-and-check, for it. For the general case, instances with up to 70 stations, and for the single-vehicle case instances with up to 120 stations are solved to proven optimality.
A comparison to leading metaheuristic approaches considering flexible vehicle loads indicates that indeed the restriction to full loads has only a very small impact on the finally achieved balance in typical scenarios of Citybike Wien.


Kernelization using structural parameters on sparse graph classes
Gajarsky, Jakub, Hlinený, Petr, Obdrzalek, Jan, Ordyniak, Sebastian, Reidl, Felix, Rossmanith, Peter, Villaamil Sanchez, Fernando, Sikdar, Somnath
Type: Article; In: Journal of Computer and System Sciences; Vol: 84; Pages: 219-242
We prove that graph problems with finite integer index (FII) have linear kernels on graphs of bounded expansion when parameterized by the size of a modulator to constant-treedepth graphs. For nowhere dense graph classes, our result yields almost-linear kernels. We also argue that such a linear kernelization result with a weaker parameter would fail to include some of the problems covered by our framework. We only require the problems to have FII on graphs of constant treedepth. This allows us to prove linear kernels also for problems such as Longest-Path/Cycle, Exact-s,t-Path, Treewidth, and Pathwidth, which do not have FII on general graphs.


On the Parameterized Complexity of Finding Small Unsatisfiable Subsets of CNF Formulas and CSP Instances
Haan, Ronald De, Kanj, Iyad, Szeider, Stefan
Type: Article; In: ACM Transactions on Computational Logic; Vol: 18; Issue: 3; Pages: 1-46
In many practical settings it is useful to find a small unsatisfiable subset of a given unsatisfiable set of constraints. We study this problem from a parameterized complexity perspective, taking the size of the unsatisfiable subset as the natural parameter, where the set of constraints is given either (i) as a set of clauses, i.e., a formula in conjunctive normal form (CNF), or (ii) as an instance of the Constraint Satisfaction Problem (CSP). In general, the problem is fixed-parameter intractable. For an instance of the propositional satisfiability problem (SAT), it was known to be W[1]-complete. We establish A[2]-completeness for CSP instances, where A[2]-hardness prevails already for the Boolean case. With these fixed-parameter intractability results for the general case in mind, we consider various restricted classes of inputs and draw a detailed complexity landscape. It turns out that often Boolean CSP and CNF formulas behave similarly, but we also identify notable exceptions to this rule. The main part of this article is dedicated to classes of inputs that are induced by the Boolean constraint languages that Schaefer [1978] identified as the maximal constraint languages with a tractable satisfiability problem. We show that for the CSP setting, the problem of finding small unsatisfiable subsets remains fixed-parameter intractable for all Schaefer languages for which the problem is non-trivial. We show that this is also the case for CNF formulas, with the exception of the class of bijunctive (Krom) formulas, which allows for an identification of a small unsatisfiable subset in polynomial time. In addition, we consider various restricted classes of inputs with bounds on the maximum number of times that a variable occurs (the degree), bounds on the arity of constraints, and bounds on the domain size. For the case of CNF formulas, we show that restricting the degree is enough to obtain fixed-parameter tractability, whereas for the case of CSP instances, one needs to restrict the degree, the arity, and the domain size simultaneously to establish fixed-parameter tractability. Finally, we relate the problem of finding small unsatisfiable subsets of a set of constraints to the problem of identifying whether a given variable-value assignment is entailed or forbidden already by a small subset of the constraints. Moreover, we use the connection between the two problems to establish similar parameterized complexity results also for the latter problem.


Discovering Archipelagos of Tractability for Constraint Satisfaction and Counting
Ganian, Robert, Ramanujan, M. S., Szeider, Stefan
Type: Article; In: ACM Transactions on Algorithms; Vol: 13; Issue: 2; Pages: 1-32
The Constraint Satisfaction Problem (CSP) is a central and generic computational problem which provides a common framework for many theoretical and practical applications. A central line of research is concerned with the identification of classes of instances for which CSP can be solved in polynomial time; such classes are often called "islands of tractability." A prominent way of defining islands of tractability for CSP is to restrict the relations that may occur in the constraints to a fixed set, called a constraint language; a constraint language is conservative if it contains all unary relations. Schaefer's famous Dichotomy Theorem (STOC 1978) identifies all islands of tractability in terms of tractable constraint languages over a Boolean domain of values. Since then, many extensions and generalizations of this result have been obtained. Recently, Bulatov (TOCL 2011, JACM 2013) gave a full characterization of all islands of tractability for CSP and the counting version #CSP that are defined in terms of conservative constraint languages. This article addresses a general limitation of the mentioned tractability results for CSP and #CSP: they only apply to instances where all constraints belong to a single tractable language (in general, the union of two tractable languages is not tractable). We show that we can overcome this limitation as long as we keep some control over how constraints from the various considered tractable languages interact with each other. For this purpose, we utilize the notion of a strong backdoor of a CSP instance, as introduced by Williams et al. (IJCAI 2003), which is a set of variables that, when instantiated, moves the instance to an island of tractability, that is, to a tractable class of instances. We consider strong backdoors into scattered classes, consisting of CSP instances where each connected component belongs entirely to some class from a list of tractable classes. Figuratively speaking, a scattered class constitutes an archipelago of tractability. The main difficulty lies in finding a strong backdoor of given size k; once it is found, we can try all possible instantiations of the backdoor variables and apply the polynomial-time algorithms associated with the islands of tractability on the list component-wise. Our main result is an algorithm that, given a CSP instance with n variables, finds in time f(k)·n^O(1) a strong backdoor into a scattered class (associated with a list of finite conservative constraint languages) of size k, or correctly decides that there is no such backdoor. This also gives the running time for solving (#)CSP, provided that (#)CSP is polynomial-time tractable for the considered constraint languages. Our result makes significant progress towards the main goal of the backdoor-based approach to CSPs: the identification of maximal base classes for which small backdoors can be detected efficiently.


Parameterized complexity classes beyond para-NP
de Haan, Ronald, Szeider, Stefan
Type: Article; In: Journal of Computer and System Sciences; Vol: 87; Pages: 16-57
Today's propositional satisfiability (SAT) solvers are extremely powerful and can be used as an efficient back-end for solving NP-complete problems. However, many fundamental problems in logic, in knowledge representation and reasoning, and in artificial intelligence are located at the second level of the Polynomial Hierarchy or even higher, and hence for these problems polynomial-time transformations to SAT are not possible, unless the hierarchy collapses. Recent research shows that in certain cases one can break through these complexity barriers by fixed-parameter tractable (fpt) reductions to SAT which exploit structural aspects of problem instances in terms of problem parameters. These reductions are more powerful because their running times can grow superpolynomially in the problem parameters. In this paper we develop a general theoretical framework that supports the classification of parameterized problems according to whether or not they admit such an fpt-reduction to SAT.


An iterative time-bucket refinement algorithm for a high-resolution resource-constrained project scheduling problem
Riedler, Martin, Jatschka, Thomas, Maschler, Johannes, Raidl, Günther
Type: Article; In: International Transactions in Operational Research; Vol: 8
We consider a resource-constrained project scheduling problem originating in particle therapy for cancer treatment, in which the scheduling has to be done in high resolution. Traditional mixed integer linear programming techniques such as time-indexed formulations or discrete-event formulations are known to have severe limitations in such cases, that is, growing too fast or having weak linear programming relaxations. We suggest a relaxation based on partitioning time into so-called time-buckets. This relaxation is iteratively solved and serves as the basis for deriving feasible solutions using heuristics. Based on these primal and dual solutions and bounds, the time-buckets are successively refined. Combining these parts, we obtain an algorithm that provides good approximate solutions early on and eventually converges to an optimal solution. Diverse strategies for performing the time-bucket refinement are investigated. The approach shows excellent performance in comparison to the traditional formulations and a metaheuristic.


Backdoors into heterogeneous classes of SAT and CSP
Gaspers, Serge, Misra, Neeldhara, Ordyniak, Sebastian, Szeider, Stefan, Živný, Stanislav
Type: Article; In: Journal of Computer and System Sciences; Vol: 85; Pages: 38-56
In this paper we extend the classical notion of strong and weak backdoor sets for SAT and CSP by allowing that different instantiations of the backdoor variables result in instances that belong to different base classes; the union of the base classes forms a heterogeneous base class. Backdoor sets to heterogeneous base classes can be much smaller than backdoor sets to homogeneous ones, hence they are much more desirable but possibly harder to find. We draw a detailed complexity landscape for the problem of detecting strong and weak backdoor sets into heterogeneous base classes for SAT and CSP.


The Treewidth of Proofs
Müller, Moritz, Szeider, Stefan
Type: Article; In: Information and Computation; Vol: 255; Pages: 147-164
So-called ordered variants of the classical notions of pathwidth and treewidth are introduced and proposed as proof theoretically meaningful complexity measures for the directed acyclic graphs underlying proofs. Ordered pathwidth is roughly the same as proof space and the ordered treewidth of a proof is meant to serve as a measure of how far it is from being treelike. Length-space lower bounds for k-DNF refutations are generalized to arbitrary infinity axioms and strengthened in that the space measure is relaxed to ordered treewidth.


Hybrid Metaheuristics for Optimization Problems in Public Bike Sharing Systems
Raidl, Günther
Type: Report
I will start with an introduction of the Algorithms and Complexity Group of TU Wien and an overview of our research topics and recent projects. Two of our larger research projects of the past four years address optimization problems occurring in the setup and maintenance of public bicycle sharing systems (BSS). This talk gives some insight into the algorithms we developed, on the one hand for planning transportation tours for balancing a BSS, and on the other hand for deciding where to build new stations of which size when setting up a new or extending an existing BSS. Operators of BSSs have to regularly redistribute bikes across the rental stations in order to prevent them from getting overly full or empty. This is usually achieved with a fleet of vehicles with trailers. We consider hybrid PILOT, GRASP, and Variable Neighborhood Search approaches for effective transportation tour planning. When establishing a new BSS or extending an existing one, a core question is at which locations rental stations of which size should be built. We model this station planning problem on the basis of a given expected customer travel demand over the considered geographical area and the locations where stations can potentially be built. The objective is to maximize the actually satisfied demand under budget constraints. In order to deal with the huge amount of input data for a larger city, we apply a hierarchical clustering based approach. The optimization problem is then solved by a multilevel refinement metaheuristic making use of mixed integer linear programming and local search techniques.


Minimizing crossings in constrained two-sided circular graph layouts
Klute, Fabian, Nöllenburg, Martin
Type: Presentation
Circular layouts are a popular graph drawing style, where vertices are placed on a circle and edges are drawn as straight chords. One way to reduce clutter caused by edge crossings is to use two-sided circular layouts, in which some edges are drawn as curves in the exterior of the circle. We study the problem of minimizing the crossings for a fixed cyclic vertex order by computing an optimal 1-plane set of exteriorly drawn edges. This relates to finding maximum-weight degree-constrained induced subgraphs in circle or overlap graphs.
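The crossing criterion behind such layouts is simple to state: with the cyclic vertex order fixed, two chords conflict (on either side of the circle) exactly when their endpoints interleave. The following brute-force sketch of the inside/outside partitioning problem is our own illustration, not the authors' algorithm; in particular, the paper additionally requires the exterior edge set to be 1-plane, which this toy version ignores:

```python
from itertools import combinations

def crosses(e, f):
    # With vertices placed on the circle in cyclic order 0..n-1, two chords
    # cross (on either side) iff their endpoints interleave.
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def total_crossings(edges):
    return sum(crosses(e, f) for e, f in combinations(edges, 2))

def best_two_sided(edges):
    """Enumerate all inside/outside partitions (tiny instances only) and
    return the one with the fewest total crossings. Illustrative sketch:
    the paper fixes the vertex order, as assumed here, but also restricts
    the exterior set to be 1-plane."""
    best = None
    for k in range(len(edges) + 1):
        for outside in combinations(edges, k):
            inside = [e for e in edges if e not in outside]
            c = total_crossings(inside) + total_crossings(list(outside))
            if best is None or c < best[0]:
                best = (c, inside, list(outside))
    return best

# K4 with cyclic order 0,1,2,3: the two diagonals cross inside, but routing
# one of them in the exterior removes the crossing.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2), (1, 3)]
print(total_crossings(edges))    # 1
print(best_two_sided(edges)[0])  # 0
```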


Backdoors for Constraint Satisfaction
Szeider, Stefan
Type: Presentation


Job Sequencing with One Common and Multiple Secondary Resources: A Problem Motivated from Particle Therapy for Cancer Treatment
Horn, Matthias, Raidl, Günther, Blum, Christian
Type: Inproceedings; In: Machine Learning, Optimization, and Big Data Third International Conference, MOD 2017, Volterra, Italy, September 14–17, 2017, Revised Selected Papers; Pages: 506-518
We consider in this work the problem of scheduling a set of jobs without preemption, where each job requires two resources: (1) a common resource, shared by all jobs, is required during a part of the job's processing period, while (2) a secondary resource, which is shared with only a subset of the other jobs, is required during the job's whole processing period. This problem models, for example, the scheduling of patients during one day in a particle therapy facility for cancer treatment. First, we show that the tackled problem is NP-hard. We then present a construction heuristic and a novel A* algorithm, both on the basis of an effective lower bound calculation. For comparison, we also model the problem as a mixed-integer linear program (MILP). An extensive experimental evaluation on three types of problem instances shows that A* typically works extremely well, even in the context of large instances with up to 1000 jobs. When our A* does not terminate with proven optimality, which might happen due to excessive memory requirements, it still returns an approximate solution with a usually small optimality gap. In contrast, solving the MILP model with the MILP solver CPLEX is not competitive except for very small problem instances.
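The A* scheme the abstract refers to can be sketched generically: always expand the state with the smallest cost-so-far plus an admissible lower bound on the remaining cost. This is a textbook sketch on a toy graph, not the authors' scheduling-specific state space or bound calculation:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A*: h must be an admissible lower bound on the cost to the
    goal (h = 0 degenerates to Dijkstra's algorithm)."""
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a cheaper path was found meanwhile
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable

# Toy weighted graph; with h = 0 the search is still admissible.
graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("g", 7)],
         "b": [("g", 2)], "g": []}
print(a_star("s", "g", lambda v: graph[v], lambda v: 0))  # 5
```

The quality of the lower bound h is what decides how much of the state space is pruned, which matches the abstract's emphasis on an effective lower bound calculation.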


Towards a Polynomial Kernel for Directed Feedback Vertex Set
Bergnougnoux, Benjamin, Eiben, Eduard, Ganian, Robert, Ordyniak, Sebastian, Ramanujan, M.S.
Type: Inproceedings; In: Proceedings of the 42nd International Symposium on Mathematical Foundations of Computer Science; Pages: 1-15
In the Directed Feedback Vertex Set (DFVS) problem, the input is a directed graph $D$ and an integer $k$. The objective is to determine whether there exists a set of at most $k$ vertices intersecting every directed cycle of $D$. DFVS was shown to be fixed-parameter tractable when parameterized by solution size by Chen, Liu, Lu, O'Sullivan and Razgon [JACM 2008]; since then, the existence of a polynomial kernel for this problem has become one of the largest open problems in the area of parameterized algorithmics. Since this problem has remained open in spite of the best efforts of a number of prominent researchers and pioneers in the field, a natural step forward is to study the kernelization complexity of DFVS parameterized by a natural larger parameter. In this paper, we study DFVS parameterized by the feedback vertex set number of the underlying undirected graph. We provide two main contributions: a polynomial kernel for this problem on general instances, and a linear kernel for the case where the input digraph is embeddable on a surface of bounded genus.


Efficient Consideration of Soft Time Windows in a Large Neighborhood Search for the Districting and Routing Problem for Security Control
Kloimüllner, Christian, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization; Pages: 91-107
For many companies it is important to protect their physical and intellectual property in an efficient and economically viable manner. Thus, specialized security companies are delegated to guard private and public property. These companies have to control a typically large number of buildings, which is usually done by teams of security guards patrolling different sets of buildings. Each building has to be visited several times within given time windows, and tours to patrol these buildings are planned over a certain number of periods (days). This problem is regarded as the Districting and Routing Problem for Security Control. Investigations have shown that small time window violations do not really matter much in practice but can drastically improve solution quality. When softening the time windows of the original problem, a new subproblem arises where the minimum time window penalty for a given set of districts has to be found for each considered candidate route: What are optimal times for the individual visits of objects that minimize the overall penalty for time window violations? We call this the Optimal Arrival Time Problem. In this paper, we investigate this subproblem in particular and first give an exact solution approach based on linear programming. As this method is quite time-consuming, we further propose a heuristic approach based on greedy methods in combination with dynamic programming. The whole mechanism is embedded in a large neighborhood search (LNS) to seek solutions having minimum time window violations. Results show that using the proposed heuristic method for determining almost optimal starting times is much faster, allowing substantially more LNS iterations and yielding in the end better overall solutions.


Particle Therapy Patient Scheduling: Time Estimation to Schedule Sets of Treatments
Maschler, Johannes, Riedler, Martin, Raidl, Günther
Type: Inproceedings; In: Computer Aided Systems Theory - EUROCAST 2017; Pages: 106-107
In classical radiotherapy cancer treatments are provided by linear accelerators that serve a dedicated treatment room exclusively. In contrast, particle therapy uses beams produced by either cyclotrons or synchrotrons, which can serve up to five treatment rooms in an interleaved way. Several sequential activities not requiring the beam, such as stabilization, have to be performed in the treatment room before and after each actual irradiation. Using several rooms and switching the beam between the rooms thus allows an effective utilization of the expensive particle accelerator and increased throughput of the facility. In a typical midterm planning scenario a schedule for performing the therapies over the next few months has to be determined. Midterm planning for classical radiotherapy has already attracted some research, starting with the works of Kapamara et al. [1] and Petrovic et al. [3]. Due to the one-to-one correspondence of treatment rooms and accelerators, it suffices there to consider a coarser scheduling scenario in which treatments have to be assigned only to days but do not have to be sequenced within the day. In a recent work [2] we studied a simplified problem formulation addressing the midterm planning of the particle therapy treatment center MedAustron in Wiener Neustadt, Austria, which offers three treatment rooms. Our approach consisted in decomposing the problem into a day assignment and a sequencing part, and we provided a construction heuristic, a GRASP, and an Iterated Greedy (IG) metaheuristic. The aim of the current work is to extend the proposed model and to provide and utilize a mechanism that quickly predicts the behavior of the sequencing part with reasonable precision, allowing in particular an improved day assignment.


GRASP and VNS for a periodic VRP with time windows to deal with milk collection
Exposit, Airam, Raidl, Günther, Brito, Julio, Moreno-Perez, Jose
Type: Inproceedings; In: Computer Aided Systems Theory - EUROCAST 2017; Pages: 90-91


Using Layered Graphs to solve the Directed Network Design Problem with Relays
Riedler, Martin, Leitner, Markus, Ljubic, Ivana, Ruthmair, Mario
Type: Presentation
We consider mixed integer linear programming models for the directed network design problem with relays (DNDPR) based on layered graphs. DNDPR originates from telecommunication network design but also has applications in hub location and electric mobility. The problem is based on a family of origin-destination pairs and a set of arcs that can be established in the network. A subset of arcs has to be selected in order to allow communication between all these pairs but communication paths must not exceed a certain distance limit. To transmit the signal farther, regeneration devices (relays) have to be installed. The goal is to allow all pairs to communicate while minimizing the costs for establishing arcs and relays. Previous work in the area involves a node-arc formulation and a branch-and-price approach. We propose two compact formulations and a model based on an exponential number of constraints. The latter is solved using a branch-and-cut algorithm. An experimental study demonstrates the effectiveness of our novel formulations on a diverse set of benchmark instances.


Going Beyond Primal Treewidth for (M)ILP
Ganian, Robert, Ordyniak, Sebastian, Ramanujan, M.S.
Type: Inproceedings; In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence; Pages: 815-821
Integer Linear Programming (ILP) and its mixed variant (MILP) are archetypical examples of NP-complete optimization problems which have a wide range of applications in various areas of artificial intelligence. However, we still lack a thorough understanding of which structural restrictions make these problems tractable. Here we focus on structure captured via so-called decompositional parameters, which have been highly successful in fields such as Boolean satisfiability and constraint satisfaction but have not yet reached their full potential in the ILP setting. In particular, primal treewidth (an established decompositional parameter) can only be algorithmically exploited to solve ILP under restricted circumstances. Our main contribution is the introduction and algorithmic exploitation of two new decompositional parameters for ILP and MILP. The first, torso-width, is specifically tailored to the linear programming setting and is the first decompositional parameter which can also be used for MILP. The second, incidence treewidth, is a concept which originates from Boolean satisfiability but has not yet been used in the ILP setting; here we obtain a full complexity landscape mapping the precise conditions under which incidence treewidth can be used to obtain efficient algorithms. Both of these parameters overcome previous shortcomings of primal treewidth for ILP in unique ways, and consequently push the frontiers of tractability for these important problems.


SAT-Based Local Improvement for Finding Tree Decompositions of Small Width
Fichte, Johannes K., Lodha, Neha, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2017; Pages: 401-411
Many hard problems can be solved efficiently for problem instances that can be decomposed by tree decompositions of small width. For problems beyond NP, such as #P-complete counting problems, tree decomposition-based methods are particularly attractive. However, finding an optimal tree decomposition is itself an NP-hard problem. Existing methods for finding tree decompositions of small width either (a) yield optimal tree decompositions but are applicable only to small instances, or (b) are based on greedy heuristics which often yield tree decompositions that are far from optimal. In this paper, we propose a new method that combines (a) and (b), where a heuristically obtained tree decomposition is improved locally by means of a SAT encoding. We provide an experimental evaluation of our new method.


Rigging Nearly Acyclic Tournaments Is Fixed-Parameter Tractable
Ramanujan, M. Sridharan, Szeider, Stefan
Type: Inproceedings; In: Thirty-First AAAI Conference on Artificial Intelligence; Pages: 3929-3935
Single-elimination tournaments (or knockout tournaments) are a popular format in sports competitions that is also widely used for decision making and elections. In this paper we study the algorithmic problem of manipulating the outcome of a tournament. More specifically, we study the problem of finding a seeding of the players such that a certain player wins the resulting tournament. The problem is known to be NP-hard in general. In this paper we present an algorithm for this problem that exploits structural restrictions on the tournament. More specifically, we establish that the problem is fixed-parameter tractable when parameterized by the size of a smallest feedback arc set of the tournament (interpreting the tournament as an oriented complete graph). This is a natural parameter because most problems on tournaments (including this one) are either trivial or easily solvable on acyclic tournaments, leading to the question: what about nearly acyclic tournaments, or tournaments with a small feedback arc set? Our result significantly improves upon a recent algorithm by Aziz et al. (2014) whose running time is bounded by an exponential function where the size of a smallest feedback arc set appears in the exponent and the base is the number of players.


A SAT Approach to Branchwidth
Lodha, Neha, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017); Pages: 4894-4898
Branch decomposition is a prominent method for structurally decomposing a graph, hypergraph or CNF formula. The width of a branch decomposition provides a measure of how well the object is decomposed. For many applications it is crucial to compute a branch decomposition whose width is as small as possible. We propose a SAT approach to finding branch decompositions of small width. The core of our approach is an efficient SAT encoding which determines with a single SAT call whether a given hypergraph admits a branch decomposition of certain width. For our encoding we develop a novel partition-based characterization of branch decompositions. The encoding size imposes a limit on the size of the given hypergraph. In order to break through this barrier and to scale the SAT approach to larger instances, we develop a new heuristic approach where the SAT encoding is used to locally improve a given candidate decomposition until a fixed point is reached. This new method scales to instances with several thousands of vertices and edges.


Hierarchical Clustering and Multilevel Refinement for the Bike-Sharing Station Planning Problem
Kloimüllner, Christian, Raidl, Günther
Type: Inproceedings; In: Conference Proceedings of Learning and Intelligent Optimization Conference; Pages: 1-16
We investigate the Bike-Sharing Station Planning Problem (BSSPP). A bike-sharing system consists of a set of rental stations, each with a certain number of parking slots, distributed over a geographical region. Customers can rent available bikes at any station and return them at any other station with free parking slots. The initial decision process of where to build stations of which size, or how to extend an existing system by new stations and/or changes to existing station configurations, is crucial, as it actually determines the satisfiable customer demand, the costs, as well as the rebalancing effort arising from the need to regularly move bikes from stations tending to run full to stations tending to run empty. We consider as objective the maximization of the satisfied customer demand under budget constraints for fixed and variable costs, including the costs for rebalancing. As bike-sharing stations are usually implemented within larger cities and the potential station locations are manifold, the size of practical instances of the underlying optimization problem is rather large, which makes a manual decision process hardly comprehensible and a computational optimization very challenging. We therefore propose to state the BSSPP on the basis of a hierarchical clustering of the considered underlying geographical cells with potential customers and possible stations. In this way the estimated existing demand can be expressed more compactly by a relatively sparse weighted graph instead of a complete matrix with mostly small non-zero entries. For this advanced problem formulation we describe an efficient linear programming approach for evaluating candidate solutions, and for solving the problem a first multilevel refinement heuristic based on mixed integer linear programming. Our experiments show that it is possible to approach instances with up to 2000 geographical cells in reasonable computation times.


2016
Integrating Algebraic Dynamic Programming in Combinatorial Optimization
Bacher, Christopher, Raidl, Günther
Type: Presentation
Although Dynamic Programming is a fundamental technique in the field of combinatorial optimization, the traditional take on the method is example-driven: whereas other techniques like Integer Programming or Constraint Programming possess clearly defined semantics, frameworks, and solvers, Dynamic Programming code is usually handcrafted for a single problem. Algebraic Dynamic Programming (ADP) (Giegerich et al., 2002) is a promising approach to provide these things for Dynamic Programming. In ADP, dynamic programs are expressed as context-free grammars that describe the search space of the problem as a form of object composition/decomposition, and separate evaluation algebras are used to evaluate objects of the search space. Originally developed for the bioinformatics community, ADP was restricted to sequence data (strings) and has since been extended to sets and general data structures in (Siederdissen et al., 2014) and (Siederdissen et al., 2015). Nevertheless, using ADP for day-to-day combinatorial problems often requires a more expressive form of modelling. We therefore develop a new Algebraic Dynamic Programming framework called Whistle, which intends to make ADP more useful to the general combinatorial optimization and operations research communities. The presented work focuses on modelling classical DP problems by means of ADP, and introduces new useful features like explicit indexed grammars, index propagators, index amalgamation, search engines, and partial invalidation to broaden the applicability of ADP.

Link to Repositum

Large Neighborhood Search for the Most Strings with Few Bad Columns Problem
Lizárraga, Evelia, Blesa, Maria J., Blum, Christian, Raidl, Günther R.
Type: Article; In: Journal of Heuristics; Vol: 21; Issue: 17; Pages: 4901-4915
Show Abstract
In this work, we consider the following NP-hard combinatorial optimization problem from computational biology. Given a set of input strings of equal length, the goal is to identify a maximum cardinality subset of strings that differ in at most a pre-defined number of positions. First, we introduce an integer linear programming model for this problem. Second, two variants of a rather simple greedy strategy are proposed. Finally, a large neighborhood search algorithm is presented. A comprehensive experimental comparison among the proposed techniques shows, first, that large neighborhood search generally outperforms both greedy strategies. Second, while large neighborhood search is competitive with the stand-alone application of CPLEX for small- and medium-sized problem instances, it outperforms CPLEX on larger instances.
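The two greedy variants are not detailed in the abstract; the following is a minimal sketch of one plausible greedy strategy for the underlying problem (select a maximum-cardinality subset of strings whose "bad" columns, i.e. columns in which the selected strings disagree, number at most k). The function names and the seeding rule are illustrative assumptions, not the authors' algorithm.

```python
def bad_columns(strings):
    # Columns in which the chosen strings do not all agree.
    return sum(1 for col in zip(*strings) if len(set(col)) > 1)

def greedy_msfbc(strings, k):
    # Seed with each string in turn; repeatedly add the string whose
    # inclusion increases the bad-column count least, as long as the
    # bound of at most k bad columns holds. Return the largest subset found.
    best = []
    for seed in strings:
        chosen, rest = [seed], [s for s in strings if s != seed]
        while rest:
            cand = min(rest, key=lambda s: bad_columns(chosen + [s]))
            if bad_columns(chosen + [cand]) > k:
                break
            chosen.append(cand)
            rest.remove(cand)
        if len(chosen) > len(best):
            best = chosen
    return best

# "abcd", "abce", "abcf" agree in all but the last column: 3 strings, 1 bad column
subset = greedy_msfbc(["abcd", "abce", "abzz", "abcf"], k=1)
```

The large neighborhood search of the paper would then repeatedly destroy part of such an incumbent subset and rebuild it, which this sketch does not cover.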

Link to Repositum

A Parameterized Study of Maximum Generalized Pattern Matching Problems
Ordyniak, Sebastian, Popa, Alexandru
Type: Article; In: Algorithmica; Vol: 75; Issue: 1; Pages: 1-26
Show Abstract
The generalized function matching (GFM) problem has been intensively studied starting with Ehrenfeucht and Rozenberg (Inf Process Lett 9(2):86-88, 1979). Given a pattern p and a text t, the goal is to find a mapping from the letters of p to nonempty substrings of t, such that applying the mapping to p results in t. Very recently, the problem has been investigated within the framework of parameterized complexity (Fernau et al. in FSTTCS, 2013). In this paper we study the parameterized complexity of the optimization variant of GFM (called Max-GFM), which was introduced by Amir and Nor (J Discrete Algorithms 5(3):514-523, 2007). Here, one is allowed to replace some of the pattern letters with special symbols "?", termed wildcards or don't cares, which can be mapped to an arbitrary substring of the text. The goal is to minimize the number of wildcards used. We give a complete classification of the parameterized complexity of Max-GFM and its variants under a wide range of parameterizations, such as the number of occurrences of a letter in the text, the size of the text alphabet, the number of occurrences of a letter in the pattern, the size of the pattern alphabet, the maximum length of a string matched to any pattern letter, the number of wildcards, and the maximum size of a string that a wildcard can be mapped to.

Link to Repositum

Complexity and monotonicity results for domination games
Kreutzer, Stephan, Ordyniak, Sebastian
Type: Article; In: Theoretical Computer Science; Vol: 628; Pages: 1-29
Show Abstract
In this paper we study Domination Games, a class of games introduced by Fomin, Kratsch, and Müller in [8]. Domination games are a variant of the well-known graph searching games (also called cops and robber games), in which a number of cops try to capture a robber hiding on the vertices of a graph. Variants of these games are often used to provide a game-theoretic characterization of important graph parameters such as pathwidth, treewidth, and hypertreewidth. We are primarily interested in questions concerning the complexity and monotonicity of these games. We show that domination games are computationally much harder than standard cops and robber games and establish strong non-monotonicity results for various notions of monotonicity that arise naturally in the context of domination games. Answering a question of [8], we show that there are graphs where the shortest winning strategy for a minimal number of cops must necessarily be of exponential length.

Link to Repositum

Tree-depth and vertex-minors
Hlinený, Petr, Kwon, O-Joung, Obdrzalek, Jan, Ordyniak, Sebastian
Type: Article; In: European Journal of Combinatorics; Vol: 56; Pages: 46-56
Show Abstract
In a recent paper, Kwon and Oum (2014) claim that every graph of bounded rank-width is a pivot-minor of a graph of bounded tree-width (while the converse was already known to be true). We study the analogous questions for "depth" parameters of graphs, namely for the tree-depth and the related new shrub-depth. We show how a suitable adaptation of known results implies that shrub-depth is monotone under taking vertex-minors, and we prove that every graph class of bounded shrub-depth can be obtained via vertex-minors of graphs of bounded tree-depth. While we exhibit an example showing that pivot-minors are generally not sufficient (unlike in Kwon and Oum (2014)) in the latter statement, we then prove that the bipartite graphs in every class of bounded shrub-depth can be obtained as pivot-minors of graphs of bounded tree-depth.

Link to Repositum

Directed elimination games
Engelmann, Viktor, Ordyniak, Sebastian, Kreutzer, Stephan
Type: Article; In: Discrete Applied Mathematics; Vol: 199; Pages: 187-198
Show Abstract
While tools from structural graph theory such as tree- or path-width have proved to be highly successful in coping with computational intractability on undirected graphs, corresponding width measures for directed graphs have not yet fulfilled their promise for broad algorithmic applications on directed graphs. One reason for this is that in most existing digraph width measures the class of acyclic digraphs has small width which immediately implies hardness of problems such as computing directed dominating sets. Fernau and Meister (2014) introduce the concept of elimination width and a corresponding graph searching game which overcomes this problem with acyclic digraphs. In their paper, the focus was on structural characterizations of classes of digraphs of bounded elimination width. In this paper we study elimination width from an algorithmic and graph searching perspective. We analyse variations of the elimination width game and show that this leads to width measures on which the directed dominating set problem and some variants of it become tractable.

Link to Repositum

Mixed map labeling
Löffler, Maarten, Nöllenburg, Martin, Staals, Frank
Type: Article; In: Journal of Spatial Information Science; Issue: 13
Show Abstract
Point feature map labeling is a geometric visualization problem, in which a set of input points must be labeled with a set of disjoint rectangles (the bounding boxes of the label texts). It is predominantly motivated by label placement in maps but it also has other visualization applications. Typically, labeling models either use internal labels, which must touch their feature point, or external (boundary) labels, which are placed outside the input image and which are connected to their feature points by crossing-free leader lines. In this paper we study polynomial-time algorithms for maximizing the number of internal labels in a mixed labeling model that combines internal and external labels. The model requires that all leaders are parallel to a given orientation in [0, 2π), the value of which influences the geometric properties and hence the running times of our algorithms.

Link to Repositum

Extending Convex Partial Drawings of Graphs
Mchedlidze, Tamara, Nöllenburg, Martin, Rutter, Ignaz
Type: Article; In: Algorithmica; Vol: 76; Issue: 1; Pages: 47-67
Show Abstract
Given a plane graph G (i.e., a planar graph with a fixed planar embedding and outer face) and a biconnected subgraph H of G with a fixed planar straight-line convex drawing, we consider the question of whether this drawing can be extended to a planar straight-line drawing of G. We characterize when this is possible in terms of simple necessary conditions, which we prove to be sufficient. This also leads to a linear-time testing algorithm. If a drawing extension exists, one can be computed in the same running time.

Link to Repositum

Supereulerian graphs with width s and s-collapsible graphs
Li, Ping, Li, Hao, Chen, Ye, Fleischner, Herbert, Lai, Hong-Jian
Type: Article; In: Discrete Applied Mathematics; Vol: 200; Pages: 79-94
Show Abstract
For an integer s > 0 and for u, v ∈ V(G) with u ≠ v, an (s; u, v)-trail-system of G is a subgraph H consisting of s edge-disjoint (u, v)-trails. A graph is supereulerian with width s if for any u, v ∈ V(G) with u ≠ v, G has a spanning (s; u, v)-trail-system. The supereulerian width μ′(G) of a graph G is the largest integer s such that G is supereulerian with width k for every integer k with 0 ≤ k ≤ s. Thus a graph G with μ′(G) ≥ 2 has a spanning Eulerian subgraph. Catlin (1988) introduced collapsible graphs to study graphs with spanning Eulerian subgraphs, and showed that every collapsible graph G satisfies μ′(G) ≥ 2 (Catlin, 1988; Lai et al., 2009). Graphs G with μ′(G) ≥ 2 have also been investigated by Luo et al. (2006) as Eulerian-connected graphs. In this paper, we extend collapsible graphs to s-collapsible graphs and develop a new related reduction method to study μ′(G) for a graph G. In particular, we prove that K3,3 is the smallest 3-edge-connected graph with μ′ < 3. These results and the reduction method will be applied to determine a best possible degree condition for graphs with supereulerian width at least 3, which extends former results in Catlin (1988) and Lai (1988).

Link to Repositum

Strict Confluent Drawing
Eppstein, David, Holten, Danny, Löffler, Maarten, Nöllenburg, Martin, Speckmann, Bettina, Verbeek, Kevin
Type: Article; In: Journal of Computational Geometry; Vol: 7; Issue: 1; Pages: 22-46
Show Abstract
We define strict confluent drawing, a form of confluent drawing in which the existence of an edge is indicated by the presence of a smooth path through a system of arcs and junctions (without crossings), and in which such a path, if it exists, must be unique. We prove that it is NP-complete to determine whether a given graph has a strict confluent drawing but polynomial to determine whether it has an outerplanar strict confluent drawing with a fixed vertex ordering (a drawing within a disk, with the vertices placed in a given order on the boundary).

Link to Repositum

Evaluation of Labeling Strategies for Rotating Maps
Gemsa, Andreas, Nöllenburg, Martin, Rutter, Ignaz
Type: Article; In: ACM Journal on Experimental Algorithmics; Vol: 21; Pages: 1-21
Show Abstract
We consider the following problem of labeling points in a dynamic map that allows rotation. We are given a set of feature points in the plane labeled by a set of mutually disjoint labels, where each label is an axis-aligned rectangle attached with one corner to its respective point. We require that each label remains horizontally aligned during the map rotation, and our goal is to find a set of mutually nonoverlapping visible labels for every rotation angle α ∈ [0, 2π) so that the number of visible labels over a full map rotation of 2π is maximized. We discuss and experimentally evaluate several labeling strategies that define additional consistency constraints on label visibility to reduce flickering effects during monotone map rotation. We introduce three heuristic algorithms and compare them experimentally to an existing approximation algorithm and exact solutions obtained from an integer linear program. Our results show that, on the one hand, low flickering can be achieved at the expense of only a small reduction in the objective value, and on the other hand, the proposed heuristics achieve a high labeling quality significantly faster than the other methods.

Link to Repositum

On Self-Approaching And Increasing-Chord Drawings Of 3-Connected Planar Graphs
Nöllenburg, Martin, Prutkin, Roman, Rutter, Ignaz
Type: Article; In: Journal of Computational Geometry; Vol: 7; Issue: 1; Pages: 47-69
Show Abstract
An st-path in a drawing of a graph is self-approaching if during the traversal of the corresponding curve from s to any point t′ on the curve the distance to t′ is nonincreasing. A path is increasing-chord if it is self-approaching in both directions. A drawing is self-approaching (increasing-chord) if any pair of vertices is connected by a self-approaching (increasing-chord) path. We study self-approaching and increasing-chord drawings of triangulations and 3-connected planar graphs. We show that in the Euclidean plane, triangulations admit increasing-chord drawings, and for planar 3-trees we can ensure planarity. We prove that strongly monotone (and thus increasing-chord) drawings of trees and binary cactuses require exponential resolution in the worst case, answering an open question by Kindermann et al. [14]. Moreover, we provide a binary cactus that does not admit a self-approaching drawing. Finally, we show that 3-connected planar graphs admit increasing-chord drawings in the hyperbolic plane and characterize the trees that admit such drawings.

Link to Repositum

New developments in metaheuristics and their applications
Lau, Hoong Chuin, Raidl, Günther, Van Hentenryck, Pascal
Type: Article; In: Journal of Heuristics; Vol: 22; Issue: 4; Pages: 359-363

Link to Repositum

Heuristic Approaches for Finding Uniquely Hamiltonian Graphs of Minimum Degree Three with Small Crossing Numbers
Klocker, Benedikt, Raidl, Günther
Type: Presentation
Show Abstract
In graph theory, a prominent conjecture of Bondy and Jackson states that every uniquely hamiltonian planar graph must have a vertex of degree two. We formulate an optimization problem for constructing a uniquely hamiltonian graph and an embedding, such that the number of crossings of the embedding and the number of degree two vertices are minimal. For solving the problem we propose a general variable neighborhood search (GVNS). To check feasibility of neighbors we need to solve hamiltonian cycle problems, which is done in a delayed manner to minimize the computation effort. In the experiments we were able to find uniquely hamiltonian graphs with no degree two vertices and four crossings, but no planar graphs. Furthermore we will discuss a more general approach to solving similar construction problems or, more generally, bilevel optimization problems with an evolutionary algorithm. Many construction problems arising in graph theory have a bilevel structure in which the lower-level optimization problem is by itself already NP-hard. Using an evolutionary algorithm we want to address a broad range of problems with a similar structure.

Link to Repositum

Quantified conjunctive queries on partially ordered sets
Bova, Simone, Ganian, Robert, Szeider, Stefan
Type: Article; In: Theoretical Computer Science; Vol: 618; Pages: 72-84
Show Abstract
We study the computational problem of checking whether a quantified conjunctive query (a first-order sentence built using only conjunction as Boolean connective) is true in a finite poset (a reflexive, antisymmetric, and transitive directed graph). We prove that the problem is already NP-hard on a certain fixed poset, and investigate structural properties of posets yielding fixed-parameter tractability when the problem is parameterized by the query. Our main algorithmic result is that model checking quantified conjunctive queries on posets is fixed-parameter tractable when parameterized by the sentence and the width of the poset (the maximum size of a subset of pairwise incomparable elements). We complement our algorithmic result by complexity results with respect to classes of finite posets in a hierarchy of natural poset invariants, establishing its tightness in this sense.
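The width parameter used above is itself polynomial-time computable via Dilworth's theorem: a maximum antichain corresponds to a minimum chain cover, which can be obtained from a maximum bipartite matching. A minimal sketch, assuming the comparability relation is given transitively closed (function and variable names are illustrative, not from the paper):

```python
def poset_width(n, less):
    """Width (size of a maximum antichain) of a poset on 0..n-1.
    `less` is the set of comparable pairs (a, b) with a < b, assumed
    transitively closed. By Dilworth's theorem the width equals the minimum
    number of chains covering the poset, which is n minus the size of a
    maximum matching in the bipartite comparability graph."""
    match = [-1] * n  # match[v] = predecessor of v in its chain

    def augment(u, seen):
        # Standard augmenting-path search for bipartite matching.
        for v in range(n):
            if (u, v) in less and v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matching = sum(augment(u, set()) for u in range(n))
    return n - matching

# The poset 0 < 1, 0 < 2 has width 2 ({1, 2} is a maximum antichain)
w = poset_width(3, {(0, 1), (0, 2)})
```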

Link to Repositum

Soundness of Q-resolution with dependency schemes
Slivovsky, Friedrich, Szeider, Stefan
Type: Article; In: Theoretical Computer Science; Vol: 612; Pages: 83-101
Show Abstract
Q-resolution and Q-term resolution are proof systems for quantified Boolean formulas (QBFs). We introduce generalizations of these proof systems named Q(D)-resolution and Q(D)-term resolution. Q(D)-resolution and Q(D)-term resolution are parameterized by a dependency scheme D and use more powerful ∀-reduction and ∃-reduction rules, respectively. We show soundness of these systems for particular dependency schemes: we prove (1) soundness of Q(D)-resolution parameterized by the reflexive resolution-path dependency scheme, and (2) soundness of Q(D)-term resolution parameterized by the resolution-path dependency scheme. These results entail soundness of the proof systems used for certificate generation in the state-of-the-art solver DepQBF.

Link to Repositum

Backdoors to q-Horn
Gaspers, Serge, Ordyniak, Sebastian, Ramanujan, M. S., Saurabh, Saket, Szeider, Stefan
Type: Article; In: Algorithmica; Vol: 74; Issue: 1; Pages: 540-557
Show Abstract
The class q-Horn, introduced by Boros, Crama and Hammer in 1990, is one of the largest known classes of propositional CNF formulas for which satisfiability can be decided in polynomial time. This class properly contains the fundamental classes of Horn and 2-CNF formulas as well as the class of renamable (or disguised) Horn formulas. In this paper we extend this class so that its favorable algorithmic properties can be made accessible to formulas that are outside but "close" to this class. We show that deciding satisfiability is fixed-parameter tractable parameterized by the distance of the given formula from q-Horn. The distance is measured by the smallest number of variables that we need to delete from the formula in order to get a q-Horn formula, i.e., the size of a smallest deletion backdoor set into the class q-Horn. This result generalizes known fixed-parameter tractability results for satisfiability decision with respect to the parameters distance from Horn, 2-CNF, and renamable Horn.

Link to Repositum

Meta-kernelization with structural parameters
Ganian, Robert, Slivovsky, Friedrich, Szeider, Stefan
Type: Article; In: Journal of Computer and System Sciences; Vol: 82; Issue: 2; Pages: 333-346
Show Abstract
Kernelization is a polynomial-time algorithm that reduces an instance of a parameterized problem to a decision-equivalent instance, the kernel, whose size is bounded by a function of the parameter. In this paper we present meta-theorems that provide polynomial kernels for large classes of graph problems parameterized by a structural parameter of the input graph. Let C be an arbitrary but fixed class of graphs of bounded rank-width (or, equivalently, of bounded clique-width). We define the C-cover number of a graph to be the smallest number of modules its vertex set can be partitioned into, such that each module induces a subgraph that belongs to C. We show that each decision problem on graphs which is expressible in Monadic Second Order (MSO) logic has a polynomial kernel with a linear number of vertices when parameterized by the C-cover number. We provide similar results for MSO expressible optimization and modulo-counting problems.

Link to Repositum

A Logic-based Benders Decomposition Approach for the 3-Staged Strip Packing Problem
Maschler, Johannes, Raidl, Günther
Type: Inproceedings; In: Operations Research Proceedings 2015 Selected Papers of the International Conference of the German, Austrian and Swiss Operations Research Societies; Pages: 85-102
Show Abstract
We consider the 3-staged Strip Packing Problem, in which rectangular items have to be arranged onto a rectangular strip of fixed width, such that the items can be obtained by three stages of guillotine cuts while the required strip height is to be minimized. We propose a new logic-based Benders decomposition with two kinds of Benders cuts and compare it with a compact integer linear programming formulation.

Link to Repositum

Models and algorithms for competitive facility location problems with different customer behavior.
Biesinger, Benjamin, Hu, Bin, Raidl, Günther
Type: Article; In: Annals of Mathematics and Artificial Intelligence; Vol: 76; Issue: 1-2; Pages: 93-119
Show Abstract
Competitive facility location problems arise in the context of two non-cooperating companies, a leader and a follower, competing for market share from a given set of customers. We assume that the firms place a given number of facilities on locations taken from a discrete set of possible points. For this bi-level optimization problem we consider six different customer behavior scenarios from the literature: binary, proportional and partially binary, each combined with essential and unessential demand. The decision making for the leader and the follower depends on these scenarios. In this work we present mixed integer linear programming models for the follower problem of each scenario and use them in combination with an evolutionary algorithm to optimize the location selection for the leader. A complete solution archive is used to detect already visited candidate solutions and convert them efficiently into similar, not yet considered ones. We present numerical results of our algorithm and compare them to previous state-of-the-art approaches from the literature. Our method shows good performance in all customer behavior scenarios and is able to outperform previous solution procedures on many occasions.

Link to Repositum

Temporal map labeling
Barth, Lukas, Niedermann, Benjamin, Nöllenburg, Martin, Strash, Darren
Type: Inproceedings; In: Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems
Show Abstract
The increased availability of interactive maps on the Internet and on personal mobile devices has created new challenges in computational cartography and, in particular, for label placement in maps. Operations like rotation, zoom, and translation dynamically change the map over time and make a consistent adaptation of the map labeling necessary. In this paper, we consider map labeling for the case that a map undergoes a sequence of operations over a specified time span. We unify and generalize several preceding models for dynamic map labeling into one versatile and flexible model. In contrast to previous research, we completely abstract from the particular operations (e.g., zoom, rotation, etc.) and express the labeling problem as a set of time intervals representing the labels' presences, activities, and conflicts. The model's strength is manifested in its simplicity and broad range of applications. In particular, it supports label selection both for map features with fixed position as well as for moving entities (e.g., for tracking vehicles in logistics or air traffic control). Through extensive experiments on OpenStreetMap data, we evaluate our model using algorithms of varying complexity as a case study for navigation systems. Our experiments show that even simple (and thus, fast) algorithms achieve near-optimal solutions in our model with respect to an intuitive objective function.
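As a toy illustration of the interval-based model (not the paper's algorithms), suppose each label has a single presence interval and conflicts are given as pairs of labels that must not be active at overlapping times; a simple longest-first greedy, of the kind the experiments compare against optimal solutions, then reads (all names and the data shapes are assumptions):

```python
def greedy_active_labels(presence, conflicts):
    # presence: label -> (start, end) presence interval.
    # conflicts: set of frozenset({a, b}) meaning labels a and b must not
    # be active at overlapping times.
    # Activate labels for their whole presence interval, longest first,
    # skipping any label that clashes with an already active conflicting one.
    def overlaps(i, j):
        return i[0] < j[1] and j[0] < i[1]

    active = set()
    for lab in sorted(presence, key=lambda l: presence[l][1] - presence[l][0],
                      reverse=True):
        if all(frozenset((lab, a)) not in conflicts
               or not overlaps(presence[lab], presence[a]) for a in active):
            active.add(lab)
    return active

# "B" conflicts with the longer "A" and is skipped; "C" fits alongside "A"
chosen = greedy_active_labels({"A": (0, 10), "B": (2, 5), "C": (6, 9)},
                              {frozenset(("A", "B"))})
```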

Link to Repositum

Graph Drawing and Network Visualization
Authors not available
Type: Proceedings; Vol: 9801

Link to Repositum

Hybrid Metaheuristics
Blum, Christian, Raidl, Günther
Type: Book

Link to Repositum

Proceedings of EmoVis 2016, ACM IUI 2016 Workshop on Emotion and Visualization, Sonoma, CA, USA, March 10, 2016
Authors not available
Type: Proceedings; Vol: 103
Show Abstract
This year, we are happy to announce the proceedings for the first Workshop on Emotion and Visualization (EmoVis 2016) that takes place as part of the ACM Intelligent User Interfaces (IUI 2016) conference in Sonoma, CA, USA. Planned as a bi-annual event, the Workshop on Emotion and Visualization welcomes researchers, practitioners and experts from a variety of scientific domains, including visualization, human-computer interaction, artificial intelligence, cognitive psychology, and multimedia. EmoVis 2016 acts as a forum where people with diverse backgrounds can present design principles and introduce novel techniques for affect measurement and visualization. The papers accepted at this year's workshop focus on topics like emotion measurement through wearable technologies, sound and emotion, real-time emotion visualization, emotion visualization in different contexts, as well as the challenges faced when detecting and visualizing emotions. All papers in this proceedings book have been peer-reviewed by at least three reviewers from the international program committee consisting of 12 experts listed below. Many have contributed to making the workshop an enjoyable and enlightening experience. We would like to express our gratitude to the invited speaker, Michelle X. Zhou, and the international program committee for their commitment and reviewing efforts. Without all those great people, this workshop would not have been possible. Finally, we thank the ACM IUI 2016 organization committee for the acceptance of our workshop including local support during the conference and our sponsor for providing financial aid. Welcome to EmoVis 2016. We hope that you enjoy the workshop and use this event to share your experiences and be inspired. Andreas Kerren, Daniel Cernea, and Margit Pohl

Link to Repositum

Robust Genealogy Drawings
Klute, Fabian
Type: Inproceedings; In: Graph Drawing and Network Visualization. GD 2016; Vol: 9801; Pages: 637-639
Show Abstract
Inspired by the GD2016 challenge to draw a subset of the Greek gods' ancestry graph, we looked into the problem of drawing complex genealogies. Such graphs still have a hierarchical structure, but intermarriages and cross-layer edges make it hard to use existing methods. We present a three-step approach that is robust against these features.

Link to Repositum

Backdoor Trees for Answer Set Programming
Fichte, Johannes, Szeider, Stefan
Type: Report
Show Abstract
Answer set programming (ASP) is a popular framework for declarative modelling and problem solving. It has successfully been used to solve a wide variety of problems in artificial intelligence and reasoning. Many problems in propositional disjunctive ASP are of high computational complexity, such as reasoning, counting, and enumeration; in particular, the reasoning problems are complete for the second level of the Polynomial Hierarchy and thus even harder than NP. In this paper, we introduce backdoor trees for ASP and present a parameterized complexity analysis that takes the input size of an instance along with a composed parameter, which is based on backdoor trees, into account. When using backdoors for a parameterized complexity analysis one only considers the size k of a backdoor as a parameter. Evaluating a given backdoor results in 2^k assignments and thus 2^k programs under these assignments. However, an assignment to fewer than k atoms can already yield a program under assignment that belongs to the fixed target class. Therefore, we consider binary decision trees, which make gradually assigning truth values to backdoor atoms in a program precise and lead us to the notion of backdoor trees, originally defined for propositional satisfiability. In this way, backdoor trees provide a more precise approach to the evaluation of backdoors, where we take the interaction of the assignments in the evaluation into account.
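The gradual-assignment idea can be illustrated on plain CNF with Horn as the target class (the paper's setting is ASP; the clause encoding and class here are simplifying assumptions): the decision tree stops branching as soon as the reduced formula falls into the class, so it can have far fewer than 2^k leaves.

```python
def reduce_cnf(cnf, lit):
    # Apply a literal: drop satisfied clauses, remove the falsified literal.
    # Clauses are sets of nonzero ints; -x is the negation of x.
    return [c - {-lit} for c in cnf if lit not in c]

def is_horn(cnf):
    # Target class: every clause has at most one positive literal.
    return all(sum(1 for l in c if l > 0) <= 1 for c in cnf)

def backdoor_tree_leaves(cnf, backdoor):
    # Evaluate a backdoor as a decision tree: stop branching as soon as the
    # reduced formula is in the target class. Returns the number of leaves,
    # which can be well below 2**len(backdoor).
    if is_horn(cnf) or not backdoor:
        return 1
    v, rest = backdoor[0], backdoor[1:]
    return (backdoor_tree_leaves(reduce_cnf(cnf, v), rest) +
            backdoor_tree_leaves(reduce_cnf(cnf, -v), rest))

# Setting variable 1 true already yields a Horn formula, so one branch
# closes early: 3 leaves instead of 2**2 = 4
leaves = backdoor_tree_leaves([{1, 2, 3}, {-1, 2}], [1, 2])
```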

Link to Repositum

Long Distance Q-Resolution with Dependency Schemes
Peitl, Tomáš, Slivovsky, Friedrich, Szeider, Stefan
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2016 : 19th International Conference, Bordeaux, France, July 5-8, 2016, Proceedings; Vol: 9710; Pages: 500-518
Show Abstract
Resolution proof systems for quantified Boolean formulas (QBFs) provide a formal model for studying the limitations of state-of-the-art search-based QBF solvers, which use these systems to generate proofs. In this paper, we define a new proof system that combines two such proof systems: Q-resolution with generalized universal reduction according to a dependency scheme and long distance Q-resolution. We show that the resulting proof system is sound for the reflexive resolution-path dependency scheme—in fact, we prove that it admits strategy extraction in polynomial time. As a special case, we obtain soundness and polynomial-time strategy extraction for long distance Q-resolution with universal reduction according to the standard dependency scheme. We report on experiments with a configuration of DepQBF that generates proofs in this system.

Link to Repositum

Particle Therapy Patient Scheduling: First Heuristic Approaches
Maschler, Johannes, Riedler, Martin, Stock, Markus, Raidl, Günther
Type: Presentation
Show Abstract
The Particle Therapy Patient Scheduling Problem arises in radiotherapy used for cancer treatment. Previous contributions in the literature primarily dealt with photon and electron therapy with a one-to-one correspondence of treatment rooms and accelerators. In particle therapy, however, a single accelerator serves multiple rooms in an interleaved way. This leads to a novel scenario in which the main challenge is to utilize the particle beam as well as possible. Switching between rooms makes it possible to reduce the beam idle time that arises from preparation steps. In this work we present first algorithms for solving this problem. In particular, we address the midterm planning variant which involves a time horizon of a few months but also requires detailed scheduling within each day. We formalize the problem via a mixed integer linear programming model, which, however, turns out to be intractable in practice. Consequently, we start with a construction heuristic featuring a forward-looking mechanism. Based upon this fast method we further study a Greedy Randomized Adaptive Search Procedure as well as an Iterated Greedy metaheuristic. A computational comparison of these algorithms is performed on benchmark instances created in a way to reflect the most important aspects of a real-world scenario.

Link to Repositum

Algorithms for Vehicle Routing
Raidl, Günther
Type: Presentation

Link to Repositum

Capturing Structure in SAT and Related Problems
Szeider, Stefan
Type: Presentation
Show Abstract
Propositional satisfiability (SAT) and related problems (like Model Counting, QBF-SAT, and CSP) are in practice often much easier to solve than suggested by their respective theoretical worst-case complexities. This "friendliness" of the real world is often explained by the presence of some kind of "hidden structure" in practical problem instances. In this talk we will review some mathematical concepts that attempt to capture the structure in problem instances and discuss their virtues and limits. We will focus on general questions rather than on technical details.

Link to Repositum

Counting Linear Extensions: Parameterizations by Treewidth
Eiben, Eduard, Ganian, Robert, Kangas, Kustaa, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the 24th Annual European Symposium on Algorithms; Pages: 1-18
Show Abstract
We consider the #P-complete problem of counting the number of linear extensions of a poset (#LE), a fundamental problem in order theory with applications in a variety of distinct areas. In particular, we study the complexity of #LE parameterized by the well-known decompositional parameter treewidth for two natural graphical representations of the input poset, i.e., the cover and the incomparability graph. Our main result shows that #LE is fixed-parameter intractable parameterized by the treewidth of the cover graph. This resolves an open problem recently posed in the Dagstuhl seminar on Exact Algorithms. On the positive side we show that #LE becomes fixed-parameter tractable parameterized by the treewidth of the incomparability graph.
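For intuition about #LE itself (not about the treewidth-based algorithms of the paper), the standard exponential-time dynamic program over downsets can be sketched as follows: a linear extension is built by repeatedly appending a minimal element of the remaining poset, and states are the sets of already-placed elements.

```python
from functools import lru_cache

def count_linear_extensions(n, covers):
    # Count linear extensions of a poset on 0..n-1 given its cover relation
    # as pairs (a, b) meaning a < b. DP over downsets encoded as bitmasks.
    preds = [0] * n
    for a, b in covers:
        preds[b] |= 1 << a  # bitmask of cover-predecessors of b

    @lru_cache(maxsize=None)
    def count(placed):
        if placed == (1 << n) - 1:
            return 1
        total = 0
        for v in range(n):
            # v may come next iff unplaced and all its predecessors are placed
            if not placed >> v & 1 and preds[v] & ~placed == 0:
                total += count(placed | 1 << v)
        return total

    return count(0)

# A 3-element antichain has 3! = 6 linear extensions
six = count_linear_extensions(3, [])
```

Checking only cover-predecessors suffices because the placed set is always a downset, so the transitive closure is respected automatically.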

Link to Repositum

Stable Matching with Uncertain Linear Preferences
Aziz, Haris, Biro, Peter, Gaspers, Serge, de Haan, Ronald, Mattei, Nicholas, Rastegari, Baharak
Type: Inproceedings; In: Proceedings of the 9th International Symposium on Algorithmic Game Theory - SAGT 2016; Pages: 195-206
Show Abstract
We consider the two-sided stable matching setting in which there may be uncertainty about the agents' preferences due to limited information or communication. We consider three models of uncertainty: (1) lottery model - in which for each agent, there is a probability distribution over linear preferences, (2) compact indifference model - for each agent, a weak preference order is specified and each linear order compatible with the weak order is equally likely and (3) joint probability model - there is a lottery over preference profiles. For each of the models, we study the computational complexity of computing the stability probability of a given matching as well as finding a matching with the highest probability of being stable. We also examine more restricted problems such as deciding whether a certainly stable matching exists. We find a rich complexity landscape for these problems, indicating that the form uncertainty takes is significant.
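For the lottery model, the stability probability of a matching can at least be estimated by sampling, given an exact blocking-pair check per sampled profile. A sketch under assumed data shapes (dicts of ranked lists; the names are illustrative, and the paper's exact and hardness results are not reproduced here):

```python
import random

def is_stable(matching, men_pref, women_pref):
    # matching: man -> woman (perfect); prefs: agent -> ranked list.
    # Stable iff no pair (m, w) prefer each other to their partners.
    husband = {w: m for m, w in matching.items()}
    for m, ranking in men_pref.items():
        for w in ranking:
            if w == matching[m]:
                break  # m prefers no one ranked below his partner
            if women_pref[w].index(m) < women_pref[w].index(husband[w]):
                return False  # (m, w) is a blocking pair
    return True

def stability_probability(matching, men_lottery, women_lottery,
                          samples=2000, seed=0):
    # Lottery model: each agent has a list of (probability, ranking) pairs;
    # sample one ranking per agent independently and count stable profiles.
    rng = random.Random(seed)

    def draw(lottery):
        return {a: rng.choices([r for _, r in opts],
                               weights=[p for p, _ in opts])[0]
                for a, opts in lottery.items()}

    hits = sum(is_stable(matching, draw(men_lottery), draw(women_lottery))
               for _ in range(samples))
    return hits / samples
```

With degenerate (deterministic) lotteries this reduces to a single exact stability check.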

Link to Repositum

Clique-Width and Directed Width Measures for Answer-Set Programming
Bliem, Bernhard, Ordyniak, Sebastian, Woltran, Stefan
Type: Inproceedings; In: ECAI 2016 - 22nd European Conference on Artificial Intelligence; Pages: 1105-1113
Show Abstract
Disjunctive Answer Set Programming (ASP) is a powerful declarative programming paradigm whose main decision problems are located on the second level of the polynomial hierarchy. Identifying tractable fragments and developing efficient algorithms for such fragments are thus important objectives in order to complement the sophisticated ASP systems available to date. Hard problems can become tractable if some problem parameter is bounded by a fixed constant; such problems are then called fixed-parameter tractable (FPT). While several FPT results for ASP exist, parameters that relate to directed or signed graphs representing the program at hand have been neglected so far. In this paper, we first give some negative observations showing that directed width measures on the dependency graph of a program do not lead to FPT results. We then consider the graph parameter of signed clique-width and present a novel dynamic programming algorithm that is FPT w.r.t. this parameter. Clique-width is more general than the well-known treewidth, and, to the best of our knowledge, ours is the first FPT algorithm for bounded clique-width for reasoning problems beyond SAT.

Link to Repositum

Districting and Routing for Security Control
Prischink, Michael, Kloimüllner, Christian, Biesinger, Benjamin, Raidl, Günther R.
Type: Inproceedings; In: Hybrid Metaheuristics; Pages: 87-103
Show Abstract
Regular security controls on a day-by-day basis are an essential and important mechanism to prevent theft and vandalism in business buildings. Typically, security workers patrol through a set of objects where each object requires a particular number of visits on all or some days within a given planning horizon, and each of these visits has to be performed in a specific time window. An important goal of the security company is to partition all objects into a minimum number of disjoint clusters such that for each cluster and each day of the planning horizon a feasible route for performing all the requested visits exists. Each route is limited by a maximum working time, must satisfy the visits' time window constraints, and any two visits of one object must be separated by a minimum time difference. We call this problem the Districting and Routing Problem for Security Control. In our heuristic approach we split the problem into a districting part where objects have to be assigned to districts and a routing part where feasible routes for each combination of district and period have to be found. These parts cannot be solved independently though. We propose an exact mixed integer linear programming model and a greedy-like routing construction heuristic with variable neighborhood descent for the routing part as well as a districting construction heuristic and an iterative destroy & recreate algorithm for the districting part. Computational results show that the exact algorithm is only able to solve small routing instances and the iterative destroy & recreate algorithm is able to reduce the number of districts significantly from the starting solutions.

Link to Repositum

An Integer L-shaped Method for the Generalized Vehicle Routing Problem with Stochastic Demands
Biesinger, Benjamin, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Electronic Notes in Discrete Mathematics; Pages: 245-252
Show Abstract
In this work we consider the generalized vehicle routing problem with stochastic demands (GVRPSD). This NP-hard problem combines the clustering aspect of the generalized vehicle routing problem with the uncertainty aspect of the vehicle routing problem with stochastic demands. We propose an integer L-shaped method based on decomposition and branch-and-cut. The subproblem of computing the restocking costs is based on dynamic programming. We consider the preventive restocking strategy, which is substantially harder than the standard restocking strategy used by the majority of the published articles on stochastic vehicle routing problems. Using this strategy, the vehicle can make a return trip to the depot even before an actual stockout occurs and thereby save travel time. The GVRPSD has not been considered in the literature so far, and this first exact solution approach proves able to solve small- to medium-sized instances.

Link to Repositum

Backdoors to Tractable Valued CSP
Ganian, Robert, Ramanujan, M. Sridharan, Szeider, Stefan
Type: Inproceedings; In: Principles and Practice of Constraint Programming (Proceedings of 22nd CP); Pages: 233-250
Show Abstract
We extend the notion of a strong backdoor from the CSP setting to the Valued CSP setting (VCSP, for short). This provides a means for augmenting a class of tractable VCSP instances to instances that are outside the class but of small distance to the class, where the distance is measured in terms of the size of a smallest backdoor. We establish that VCSP is fixed-parameter tractable when parameterized by the size of a smallest backdoor into every tractable class of VCSP instances characterized by a (possibly infinite) tractable valued constraint language of finite arity and finite domain. We further extend this fixed-parameter tractability result to so-called "scattered classes" of VCSP instances where each connected component may belong to a different tractable class.

Link to Repositum

Particle Therapy Patient Scheduling: First Heuristic Approaches
Maschler, Johannes, Riedler, Martin, Stock, Markus, Raidl, Günther
Type: Inproceedings; In: PATAT 2016: Proceedings of the 11th International Conference of the Practice and Theory of Automated Timetabling; Pages: 223-244
Show Abstract
The Particle Therapy Patient Scheduling Problem arises in radiotherapy used for cancer treatment. Previous contributions in the existing literature primarily dealt with photon and electron therapy with a one-to-one correspondence of treatment rooms and accelerators. In particle therapy, however, a single accelerator serves multiple rooms in an interleaved way. This leads to a novel scenario in which the main challenge is to utilize the particle beam as well as possible. Switching between rooms makes it possible to reduce the idle time of the beam that arises as a consequence of preparation steps. In this work we present first algorithms for solving this problem. In particular, we address the midterm planning variant, which involves a time horizon of a few months but also requires detailed scheduling within each day. We formalize the problem via a mixed integer linear programming model, which, however, turns out to be intractable in practice. Consequently, we start with a construction heuristic featuring a forward-looking mechanism. Based upon this fast method we further study a Greedy Randomized Adaptive Search Procedure as well as an Iterated Greedy metaheuristic. A computational comparison of these algorithms is performed on benchmark instances created in a way to reflect the most important aspects of a real-world scenario.

Link to Repositum

Lifting QBF Resolution Calculi to DQBF
Beyersdorff, Olaf, Chew, Leroy, Schmidt, Renate A., Suda, Martin
Type: Inproceedings; In: Theory and Applications of Satisfiability Testing – SAT 2016; Pages: 490-499
Show Abstract
We examine existing resolution systems for quantified Boolean formulas (QBF) and answer the question of which of these calculi can be lifted to the more powerful Dependency QBFs (DQBF). An interesting picture emerges: for QBF we have the strict chain of proof systems Q-Res < IR-calc < IRM-calc.

Link to Repositum

Parameterized Complexity Results for the Kemeny Rule in Judgment Aggregation
de Haan, Ronald
Type: Inproceedings; In: Proceedings of the 22nd European Conference on Artificial Intelligence - ECAI 2016; Pages: 1502-1510
Show Abstract
We investigate the parameterized complexity of computing an outcome of the Kemeny rule in judgment aggregation, providing the first parameterized complexity results for this problem for any judgment aggregation procedure. As parameters, we consider (i) the number of issues, (ii) the maximum size of formulas used to represent issues, (iii) the size of the integrity constraint used to restrict the set of feasible opinions, (iv) the number of individuals, and (v) the maximum Hamming distance between any two individual opinions, as well as all possible combinations of these parameters. We provide parameterized complexity results for two judgment aggregation frameworks: formula-based judgment aggregation and constraint-based judgment aggregation. Whereas the classical complexity of computing an outcome of the Kemeny rule in these two frameworks coincides, the parameterized complexity results differ.

Link to Repositum

Polynomial-Time Construction of Optimal MPI Derived Datatype Trees
Ganian, Robert, Kalany, Martin, Szeider, Stefan, Träff, Jesper Larsson
Type: Inproceedings; In: 2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
Show Abstract
The derived datatype mechanism is a powerful, integral feature of the Message-Passing Interface (MPI) for communicating arbitrarily structured, possibly non-consecutive and non-homogeneous application data. MPI defines a set of derived datatype constructors of increasing generality, which makes it possible to describe arbitrary data layouts in a reasonably compact fashion. The constructors may be applied recursively, leading to tree-like representations of the application data layouts. Efficient derived datatype representations are required for MPI implementations to efficiently access and process structured application data. We study the problem of finding tree-like representations of MPI derived datatypes that are optimal in terms of space and processing cost. More precisely, we consider the so-called MPI Type Reconstruction Problem of determining a least-cost tree-like representation of a given data layout for a given set of constructors. In an additive cost model that accounts for the space consumption of the constructors and lower-bounds the processing costs, we show that the problem can be solved in polynomial time for the full set of MPI datatype constructors. Our algorithm uses dynamic programming and requires the solution of a series of shortest path problems on an incrementally built, directed, acyclic graph.

Link to Repositum

Succinctness of Languages for Judgment Aggregation
Endriss, Ulle, Grandi, Umberto, de Haan, Ronald, Lang, Jérôme
Type: Inproceedings; In: Proceedings of the 2016 International Conference on Principles of Knowledge Representation and Reasoning - KR 2016; Pages: 176-186
Show Abstract
We review several different languages for collective decision making problems, in which agents express their judgments, opinions, or beliefs over elements of a logically structured domain. Several such languages have been proposed in the literature to compactly represent the questions on which the agents are asked to give their views. In particular, the framework of judgment aggregation allows agents to vote directly on complex, logically related formulas, whereas the setting of binary aggregation asks agents to vote on propositional variables, over which dependencies are expressed by means of an integrity constraint. We compare these two languages and some of their variants according to their relative succinctness and according to the computational complexity of aggregating several individual views expressed in such languages into a collective judgment. Our main finding is that the formula-based language of judgment aggregation is more succinct than the constraint-based language of binary aggregation. In many (but not all) practically relevant cases, this increase in succinctness does not entail an increase in complexity of the corresponding problem of computing the outcome of an aggregation rule.

Link to Repositum

Parameterized Complexity Results for Symbolic Model Checking of Temporal Logics
de Haan, Ronald, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 15th International Conference on Principles of Knowledge Representation and Reasoning - KR 2016; Pages: 453-462
Show Abstract
Reasoning about temporal knowledge is a fundamental task in the area of artificial intelligence and knowledge representation. A key problem in this area is model checking, and indispensable for the state-of-the-art in solving this problem in large-scale settings is the technique of bounded model checking. We investigate the theoretical possibilities of this technique using parameterized complexity theory. In particular, we provide a complete parameterized complexity classification for the model checking problem for symbolically represented Kripke structures for various fragments of the temporal logics LTL, CTL, and CTL*. We argue that a known result from the literature for a restricted fragment of LTL can be seen as an fpt-reduction to SAT, and show that such reductions are not possible for any of the other fragments of the temporal logics that we consider. As a by-product of our investigation, we develop a novel parameterized complexity class that can be seen as a parameterized variant of the Polynomial Hierarchy.

Link to Repositum

A SAT Approach to Branchwidth
Lodha, Neha, Ordyniak, Sebastian, Szeider, Stefan
Type: Inproceedings; In: Proceedings of SAT 2016: Theory and Applications of Satisfiability Testing - SAT 2016; Pages: 179-195
Show Abstract
Branch decomposition is a prominent method for structurally decomposing a graph, hypergraph or CNF formula. The width of a branch decomposition provides a measure of how well the object is decomposed. For many applications it is crucial to compute a branch decomposition whose width is as small as possible. We propose a SAT approach to finding branch decompositions of small width. The core of our approach is an efficient SAT encoding which determines with a single SAT-call whether a given hypergraph admits a branch decomposition of certain width. For our encoding we developed a novel partition-based characterization of branch decomposition. The encoding size imposes a limit on the size of the given hypergraph. In order to break through this barrier and to scale the SAT approach to larger instances, we developed a new heuristic approach where the SAT encoding is used to locally improve a given candidate decomposition until a fixed-point is reached. This new method now scales to instances with several thousand vertices and edges.

Link to Repositum

An Algorithmic Framework for Labeling Road Maps
Niedermann, Benjamin, Nöllenburg, Martin
Type: Inproceedings; In: Geographic Information Science 9th International Conference; Pages: 308-322
Show Abstract
Given an unlabeled road map, we consider, from an algorithmic perspective, the cartographic problem of placing non-overlapping road labels embedded in the roads. We first decompose the road network into logically coherent road sections, i.e., parts of roads between two junctions. Based on this decomposition, we present and implement a new and versatile framework for placing labels in road maps such that the number of labeled road sections is maximized. In an experimental evaluation with road maps of 11 major cities we show that our proposed labeling algorithm is fast in practice and reaches near-optimal solution quality, where optimal solutions are obtained by mixed-integer linear programming. In direct comparison, our algorithm consistently outperforms the standard OpenStreetMap renderer Mapnik.

Link to Repositum

Capturing Structure in SAT and Related Problems
Szeider, Stefan
Type: Presentation
Show Abstract
Propositional satisfiability (SAT) and related problems (like Model Counting, QBF-SAT, and CSP) are in practice often much easier to solve than suggested by their respective theoretical worst-case complexities. This "friendliness" of the real world is often explained by the presence of some kind of "hidden structure" in practical problem instances. In this talk we will review some mathematical concepts that attempt to capture the structure in problem instances and discuss their virtues and limits. We will focus on general questions rather than on technical details.

Link to Repositum

On Existential MSO and its Relation to ETH
Ganian, Robert, de Haan, Ronald, Kanj, Iyad, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 41st International Symposium on Mathematical Foundations of Computer Science; Pages: 1-14
Show Abstract
Impagliazzo et al. proposed a framework, based on the logic fragment defining the complexity class SNP, to identify problems that are equivalent to k-CNF-Sat modulo subexponential-time reducibility (serf-reducibility). The subexponential-time solvability of any of these problems implies the failure of the Exponential Time Hypothesis (ETH). In this paper, we extend the framework of Impagliazzo et al., and identify a larger set of problems that are equivalent to k-CNF-Sat modulo serf-reducibility. We propose a complexity class, referred to as Linear Monadic NP, that consists of all problems expressible in existential monadic second order logic whose expressions have a linear measure in terms of a complexity parameter, which is usually the universe size of the problem. This research direction can be traced back to Fagin's celebrated theorem stating that NP coincides with the class of problems expressible in existential second order logic. Monadic NP, a well-studied class in the literature, is the restriction of the aforementioned logic fragment to existential monadic second order logic. The proposed class Linear Monadic NP is then the restriction of Monadic NP to problems whose expressions have linear measure in the complexity parameter. We show that Linear Monadic NP includes many natural complete problems such as the satisfiability of linear-size circuits, dominating set, independent dominating set, and perfect code. Therefore, for any of these problems, its subexponential-time solvability is equivalent to the failure of ETH. We prove, using logic games, that the aforementioned problems are inexpressible in the monadic fragment of SNP, and hence, are not captured by the framework of Impagliazzo et al. Finally, we show that Feedback Vertex Set is inexpressible in existential monadic second order logic, and hence is not in Linear Monadic NP, and investigate the existence of certain reductions between Feedback Vertex Set (and variants of it) and 3-CNF-Sat.

Link to Repositum

On the Complexity Landscape of Connected f-Factor Problems
Ganian, Robert, Narayanaswamy, N. S., Ordyniak, Sebastian, Rahul, C. S., Ramanujan, M. Sridharan
Type: Inproceedings; In: Proceedings of the 41st International Symposium on Mathematical Foundations of Computer Science; Pages: 1-14
Show Abstract
Given an n-vertex graph G and a function f from V(G) to {0,...,n-1}, an f-factor is a subgraph H of G such that deg_H(v) = f(v) for every vertex v in V(G); we say that H is a connected f-factor if, in addition, the subgraph H is connected. A classical result of Tutte (1954) is the polynomial time algorithm to check whether a given graph has a specified f-factor. However, checking for the presence of a connected f-factor is easily seen to generalize Hamiltonian Cycle and hence is NP-complete. In fact, the Connected f-Factor problem remains NP-complete even when f(v) ≥ n^ε for each vertex v and ε < 1; on the other side of the spectrum, the problem was known to be polynomial-time solvable when f(v) ≥ n/3 for every vertex v. In this paper, we extend this line of work and obtain new complexity results based on restricting the function f. In particular, we show that when f(v) is required to be at least n/(log n)^c, the problem can be solved in quasi-polynomial time in general and in randomized polynomial time if c ≤ 1. We also show that when c > 1, the problem is NP-intermediate.

Link to Repositum

Knowledge Compilation Meets Communication Complexity
Bova, Simone Maria, Capelli, Florent, Mengel, Stefan, Slivovsky, Friedrich
Type: Inproceedings; In: Proceedings of the 25th International Joint Conference on Artificial Intelligence - IJCAI 2016; Pages: 1008-1014
Show Abstract
Choosing a language for knowledge representation and reasoning involves a trade-off between two competing desiderata: succinctness (the encoding should be small) and tractability (the language should support efficient reasoning algorithms). The area of knowledge compilation is devoted to the systematic study of representation languages along these two dimensions; in particular, it aims to determine the relative succinctness of languages. Showing that one language is more succinct than another typically involves proving a nontrivial lower bound on the encoding size of a carefully chosen function, and the corresponding arguments increase in difficulty with the succinctness of the target language. In this paper, we introduce a general technique for obtaining lower bounds on Decomposable Negation Normal Form (DNNFs), one of the most widely studied and succinct representation languages, by relating the size of DNNFs to multi-partition communication complexity. This allows us to directly translate lower bounds from the communication complexity literature into lower bounds on the size of DNNF representations. We use this approach to prove exponential separations of DNNFs from deterministic DNNFs and of CNF formulas from DNNFs.

Link to Repositum

SOBRA - Shielding Optimization for BRAchytherapy
Blin, Guillaume, Gasparoux, Marie, Ordyniak, Sebastian, Popa, Alexandru
Type: Inproceedings; In: Combinatorial Algorithms - 27th International Workshop; Pages: 309-320
Show Abstract
In this paper, we study a combinatorial problem arising in the development of innovative treatment strategies and equipment using tunable shields in internal radiotherapy. From an algorithmic point of view, this problem is related to circular integer word decomposition into circular binary words under constraints. We consider several variants of the problem, depending on constraints and parameters and present exact algorithms, polynomial time approximation algorithms and NP-hardness results.

Link to Repositum

Time-Bucket Relaxation Based Mixed Integer Programming Models for Scheduling Problems: A Promising Starting Point for Matheuristics
Raidl, Günther, Jatschka, Thomas, Riedler, Martin, Maschler, Johannes
Type: Inproceedings; In: Proceedings of the Sixth International Workshop on Model-based Metaheuristics; Pages: 104-107
Show Abstract
In job shop and project scheduling problems, generally speaking, a set of activities shall be scheduled over time. The execution of the activities typically depends on certain resources of limited availability and diverse other restrictions like precedence constraints. A feasible schedule is sought that minimizes some objective function like the makespan. For such problems, mixed integer linear programming (MIP) techniques are frequently considered, but also known to have severe limitations. Here, we consider a relaxation of a potentially very fine-grained time-indexed (TI) model in which the set of possible starting times is partitioned into so-called time-buckets (TB). This TB relaxation is typically much smaller than the original TI model and can be solved relatively quickly. An obtained solution provides a lower bound for the TI model's solution value but in general does not directly represent a feasible schedule, as activity start times are only restricted to certain time intervals. This solution, however, provides a promising starting point for matheuristics. On the one hand, we may try to derive a feasible schedule by heuristically fixing the start times to specific values, trying to fulfill all constraints. On the other hand, we can further subdivide some time-buckets and re-solve the resulting refined model to obtain an improved bound and a model that comes closer to the TI model. Doing this refinement iteratively yields a matheuristic that in principle converges to a provably optimal solution. In practice, it is crucial to subdivide the time-buckets in a sensible way in order to increase the model's size only slowly while hopefully obtaining significantly stronger bounds. (Meta-)heuristic techniques and dual variable information may provide strong guidance.

Link to Repositum

The Complexity Landscape of Decompositional Parameters for ILP
Ganian, Robert, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence; Pages: 710-716
Show Abstract
Integer Linear Programming (ILP) can be seen as the archetypical problem for NP-complete optimization problems, and a wide range of problems in artificial intelligence are solved in practice via a translation to ILP. Despite its huge range of applications, only a few tractable fragments of ILP are known, probably the most prominent of which is based on the notion of total unimodularity. Using entirely different techniques, we identify new tractable fragments of ILP by studying structural parameterizations of the constraint matrix within the framework of parameterized complexity. In particular, we show that ILP is fixed-parameter tractable when parameterized by the treedepth of the constraint matrix and the maximum absolute value of any coefficient occurring in the ILP instance. Together with matching hardness results for the more general parameter treewidth, we draw a detailed complexity landscape of ILP w.r.t. decompositional parameters defined on the constraint matrix.

Link to Repositum

A Faster Parameterized Algorithm for Group Feedback Edge Set
Ramanujan, M. Sridharan
Type: Inproceedings; In: Graph-Theoretic Concepts in Computer Science; Pages: 269-281
Show Abstract
In the Group Feedback Edge Set (l) (GFES(l)) problem, the input is a group-labeled graph G over a group of order l and an integer k and the objective is to test whether there exists a set of at most k edges intersecting every non-null cycle in G. The study of the parameterized complexity of GFES(l) was motivated by the fact that it generalizes the classical Edge Bipartization problem when l=2. Guillemot [IWPEC 2008, Discrete Optimization 2011] initiated the study of the parameterized complexity of this problem and proved that it is fixed-parameter tractable (FPT) parameterized by k. Subsequently, Wahlström [SODA 2014] and Iwata et al. [2014] presented algorithms running in time O(4^k n^{O(1)}) (even in the oracle access model) and O(l^{2k} m), respectively. In this paper, we give an algorithm for GFES(l) running in time O(4^k k^3 l (m+n)). Our algorithm matches that of Iwata et al. when l=2 (up to a multiplicative factor of k^3) and gives an improvement for l>2.

Link to Repositum

Using Decomposition-Parameters for QBF: Mind the Prefix!
Eiben, Eduard, Ganian, Robert, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence; Pages: 964-970
Show Abstract
Similar to the satisfiability (SAT) problem, which can be seen to be the archetypical problem for NP, the quantified Boolean formula problem (QBF) is the archetypical problem for PSPACE. Recently, Atserias and Oliva (2014) showed that, unlike for SAT, many of the well-known decompositional parameters (such as treewidth and pathwidth) do not allow efficient algorithms for QBF. The main reason for this seems to be that these parameters do not take into account the dependencies between variables of a QBF formula. In this paper we extend the ordinary pathwidth to the QBF setting by introducing prefix pathwidth, which takes into account the dependencies between variables in a QBF, and show that it leads to an efficient algorithm for QBF. We hope that our approach will help to initiate the study of novel tailor-made decompositional parameters for QBF and thereby help to lift the success of these decompositional parameters from SAT to QBF.

Link to Repositum

SDDs Are Exponentially More Succinct than OBDDs
Bova, Simone Maria
Type: Inproceedings; In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence; Pages: 929-935
Show Abstract
Introduced by Darwiche (2011), sentential decision diagrams (SDDs) are essentially as tractable as ordered binary decision diagrams (OBDDs), but tend to be more succinct in practice. This makes SDDs a prominent representation language, with many applications in artificial intelligence and knowledge compilation. We prove that SDDs are more succinct than OBDDs also in theory, by constructing a family of Boolean functions where each member has polynomial SDD size but exponential OBDD size. This exponential separation improves a quasipolynomial separation recently established by Razgon (2014a), and settles an open problem in knowledge compilation (Darwiche 2011).

Link to Repositum

A Parameterized Algorithm for Mixed-Cut
Rai, Ashutosh, Ramanujan, M. Sridharan, Saurabh, Saket
Type: Inproceedings; In: Proceedings of LATIN 2016: Theoretical Informatics - 12th Latin American Symposium; Pages: 672-685
Show Abstract
The classical Menger's theorem states that in any undirected (or directed) graph G, given a pair of vertices s and t, the maximum number of vertex (edge) disjoint paths is equal to the minimum number of vertices (edges) needed to disconnect s from t. This min-max result can be turned into a polynomial time algorithm to find the maximum number of vertex (edge) disjoint paths as well as the minimum number of vertices (edges) needed to disconnect s from t. In this paper we study a mixed version of this problem, called Mixed-Cut, where we are given an undirected graph G, vertices s and t, positive integers k and l, and the objective is to test whether there exist a vertex set S ⊆ V(G) of size k and an edge set F ⊆ E(G) of size l such that the deletion of S and F from G disconnects s and t. Apart from studying a generalization of a classical problem, one of our main motivations for studying this problem comes from the fact that it naturally arises as a subproblem in the study of several graph editing (modification) problems. We start with a small observation that this problem is NP-complete and then study this problem, in fact a much stronger generalization of it, in the realm of parameterized complexity. In particular we study the Mixed Multiway Cut-Uncut problem where along with a set of terminals T, we are also given an equivalence relation R on T, and the question is whether we can delete at most k vertices and at most l edges such that connectivity of the terminals in the resulting graph respects R. Our main result is a fixed-parameter algorithm for Mixed Multiway Cut-Uncut using the method of recursive understanding introduced by Chitnis et al. (FOCS 2012).

Link to Repositum

Edge-Editing to a Dense and a Sparse Graph Class
Kotrbcik, Michal, Kralovic, Rastislav, Ordyniak, Sebastian
Type: Inproceedings; In: Proceedings of LATIN 2016: Theoretical Informatics - 12th Latin American Symposium; Pages: 562-575
Show Abstract
We consider a graph edge-editing problem, where the goal is to transform a given graph G into a disjoint union of two graphs from a pair of given graph classes, investigating what properties of the classes make the problem fixed-parameter tractable. We focus on the case when the first class is dense, i.e. every such graph G has minimum degree at least |V(G)| − δ for a constant δ, and assume that the cost of editing to this class is fixed-parameter tractable parameterized by the cost. Under the assumptions that the second class either has bounded maximum degree, or is edge-monotone, can be defined in MSO2, and has bounded treewidth, we prove that the problem is fixed-parameter tractable parameterized by the cost. We also show that the problem is fixed-parameter tractable parameterized by degeneracy if the second class consists of independent sets and Subgraph Isomorphism is fixed-parameter tractable for the input graphs. On the other hand, we prove that parameterization by degeneracy is in general W[1]-hard even for editing to cliques and independent sets.

Link to Repositum

A New Perspective on FO Model Checking of Dense Graph Classes
Gajarsky, Jakub, Hlinený, Petr, Obdrzalek, Jan, Lokshtanov, Daniel, Ramanujan, M. Sridharan
Type: Inproceedings; In: Proceedings of the 31st Annual Symposium on Logic in Computer Science; Pages: 176-184
Show Abstract
We study the FO model checking problem of dense graph classes, namely those which are FO-interpretable in some sparse graph classes. Note that if an input dense graph is given together with the corresponding FO interpretation in a sparse graph, one can easily solve the model checking problem using the existing algorithms for sparse graph classes. However, if the assumed interpretation is not given, then the situation is markedly harder. In this paper we give a structural characterization of graph classes which are FO interpretable in graph classes of bounded degree. This characterization allows us to efficiently compute such an interpretation for an input graph. As a consequence, we obtain an FPT algorithm for FO model checking of graph classes FO interpretable in graph classes of bounded degree. The approach we use to obtain these results may also be of independent interest.

Link to Repositum

A Single-Exponential Fixed-Parameter Algorithm for Distance-Hereditary Vertex Deletion
Eiben, Eduard, Ganian, Robert, Kwon, O-Joung
Type: Inproceedings; In: Proceedings of the 41st International Symposium on Mathematical Foundations of Computer Science; Pages: 1-14
Show Abstract
Vertex deletion problems ask whether it is possible to delete at most k vertices from a graph so that the resulting graph belongs to a specified graph class. Over the past years, the parameterized complexity of vertex deletion to a plethora of graph classes has been systematically researched. Here we present the first single-exponential fixed-parameter algorithm for vertex deletion to distance-hereditary graphs, a well-studied graph class which is particularly important in the context of vertex deletion due to its connection to the graph parameter rank-width. We complement our result with matching asymptotic lower bounds based on the exponential time hypothesis.

Link to Repositum

Discovering Archipelagos of Tractability for Constraint Satisfaction and Counting
Ganian, Robert, Ramanujan, M. Sridharan, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms; Pages: 1670-1681
Show Abstract
The Constraint Satisfaction Problem (CSP) is a central and generic computational problem which provides a common framework for many theoretical and practical applications. A central line of research is concerned with the identification of classes of instances for which CSP can be solved in polynomial time; such classes are often called "islands of tractability." A prominent way of defining islands of tractability for CSP is to restrict the relations that may occur in the constraints to a fixed set, called a constraint language; a constraint language is conservative if it contains all unary relations. Schaefer's famous Dichotomy Theorem (STOC 1978) identifies all islands of tractability in terms of tractable constraint languages over a Boolean domain of values. Since then many extensions and generalizations of this result have been obtained. Recently, Bulatov (TOCL 2011, JACM 2013) gave a full characterization of all islands of tractability for CSP and the counting version #CSP that are defined in terms of conservative constraint languages. This paper addresses the general limitation of the mentioned tractability results for CSP and #CSP: they only apply to instances where all constraints belong to a single tractable language (in general, the union of two tractable languages is not tractable). We show that this limitation can be overcome as long as we keep some control over how constraints from the various considered tractable languages interact with each other. For this purpose we utilize the notion of a strong backdoor of a CSP instance, as introduced by Williams et al. (IJCAI 2003), which is a set of variables whose instantiation moves the instance to an island of tractability, i.e., to a tractable class of instances. We consider strong backdoors into scattered classes, consisting of CSP instances where each connected component belongs entirely to some class from a list of tractable classes.
Figuratively speaking, a scattered class constitutes an archipelago of tractability. The main difficulty lies in finding a strong backdoor of given size k; once it is found, we can try all possible instantiations of the backdoor variables and apply the polynomial-time algorithms associated with the islands of tractability on the list componentwise. Our main result is an algorithm that, given a CSP instance with n variables, finds in time f(k)·n^O(1) a strong backdoor of size k into a scattered class (associated with a list of finite conservative constraint languages) or correctly decides that there is no such backdoor. This also gives the running time for solving (#)CSP, provided that (#)CSP is polynomial-time tractable for the considered constraint languages. Our result makes significant progress towards the main goal of the backdoor-based approach to CSPs: the identification of maximal base classes for which small backdoors can be detected efficiently.

Link to Repositum

A Multi-Commodity Flow Based Model for Multi Layer Hierarchical Ring Network Design
Schauer, Christian, Raidl, Günther
Type: Inproceedings; In: Proceedings of INOC 2015 - 7th International Network Optimization Conference; Pages: 189-196
Show Abstract
We address the Multi Layer Hierarchical Ring Network Design Problem. The aim of this problem is to connect nodes that are assigned to different layers using a hierarchy of rings of bounded length. We present a multi-commodity flow based mixed integer linear programming formulation and experimentally evaluate it on various graphs. Instances up to 76 nodes and 281 edges could be solved to optimality.

Link to Repositum

Finding Uniquely Hamiltonian Graphs of Minimum Degree Three with Small Crossing Numbers
Klocker, Benedikt, Fleischner, Herbert, Raidl, Günther
Type: Inproceedings; In: Hybrid Metaheuristics - 10th International Workshop, HM 2016, Plymouth, UK, June 8-10, 2016, Proceedings; Pages: 1-16
Show Abstract
In graph theory, a prominent conjecture of Bondy and Jackson states that every uniquely hamiltonian planar graph must have a vertex of degree two. In this work we try to find uniquely hamiltonian graphs with minimum degree three and a small crossing number by minimizing the number of crossings in an embedding and the number of degree-two vertices. We formalize an optimization problem for this purpose and propose a general variable neighborhood search (GVNS) for solving it heuristically. The several different types of neighborhoods used also include an exponentially large neighborhood that is effectively searched by means of branch and bound. To check the feasibility of neighbors we need to solve hamiltonian cycle problems, which is done in a delayed manner to minimize the computation effort. We compare three different configurations of the GVNS. Although our implementation could not find a uniquely hamiltonian planar graph with minimum degree three disproving Bondy and Jackson's conjecture, we were able to find uniquely hamiltonian graphs of minimum degree three with crossing number four for all numbers of vertices from 10 to 100.

Link to Repositum

Parameterized Complexity Results for the Kemeny Rule in Judgment Aggregation
de Haan, Ronald
Type: Inproceedings; In: Proceedings of the Sixth International Workshop on Computational Social Choice - COMSOC 2016; Pages: 19
Show Abstract
We investigate the parameterized complexity of computing an outcome of the Kemeny rule in judgment aggregation, providing the first parameterized complexity results for this problem for any judgment aggregation procedure. As parameters, we consider (i) the number of issues, (ii) the maximum size of formulas used to represent issues, (iii) the size of the integrity constraint used to restrict the set of feasible opinions, (iv) the number of individuals, and (v) the maximum Hamming distance between any two individual opinions, as well as all possible combinations of these parameters. We provide parameterized complexity results for two judgment aggregation frameworks: formula-based judgment aggregation and constraint-based judgment aggregation. Whereas the classical complexity of computing an outcome of the Kemeny rule in these two frameworks coincides, the parameterized complexity results differ.

Link to Repositum
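
For the constraint-based framework described above, the Kemeny rule can be stated as: pick a feasible opinion minimizing the total Hamming distance to the individual opinions. A brute-force sketch, exponential in the number of issues (parameter (i) in the abstract); all names are illustrative, not from the paper:

```python
from itertools import product

def kemeny_outcomes(opinions, constraint, m):
    """Brute-force Kemeny rule for constraint-based judgment aggregation:
    among all feasible opinions (0/1 vectors over m issues satisfying the
    integrity constraint), return those minimizing the total Hamming
    distance to the individual opinions."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    feasible = [v for v in product((0, 1), repeat=m) if constraint(v)]
    best = min(sum(hamming(v, o) for o in opinions) for v in feasible)
    return [v for v in feasible
            if sum(hamming(v, o) for o in opinions) == best]
```

With the doctrinal-paradox-style constraint v2 = v0 ∧ v1 and opinions (1,1,1), (1,0,0), (0,1,0), the issue-wise majority (1,1,0) is infeasible, while the Kemeny rule returns the three feasible opinions tied at total distance 4.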

Cyclic Giant Tour Decoding for the EVRPTW
Bacher, Christopher, Raidl, Günther
Type: Presentation
Show Abstract
The scope of our work is the development of efficient route-first-cluster-second decoders for the Electric Vehicle Routing Problem with Time Windows (EVRPTW). Such a decoder has to fulfill two purposes: splitting a permutation of all customer nodes into several feasible routes and inserting recharging stations into the giant tour. We contribute the following to the field: First, a giant tour decoder is provided for the EVRPTW. Second, the presented approach relies on Dynamic Programming (DP) for all steps of the procedure. In contrast, the adapted Split algorithm of Montoya et al. (2015) for the related Green Vehicle Routing Problem (G-VRP) recomputes the optimal insertion of recharging stations for each evaluated split. Further, the pervasive use of DP allows us to efficiently treat giant tours as cycles, determining the optimal starting position of the first vehicle automatically while avoiding costly recomputations. As the decoder is intended for future use in heuristic approaches, the DP also facilitates partial (re-)evaluation. Lastly, as a result of our work, we will provide an open-source DP framework for declaratively specifying DPs. The framework borrows the tree grammar analogy of Algebraic Dynamic Programming (Giegerich et al., 2002) and features separation of search tree traversal and evaluation, dominance conditions, search space constraints, and partial invalidation and recomputation.

Link to Repositum
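
The route-first-cluster-second idea underlying such decoders can be illustrated with the classic Split dynamic program, which the EVRPTW decoder above extends with recharging-station insertion and cyclic tours. A hedged sketch under a plain capacity constraint (not the paper's decoder; all names are assumptions):

```python
def split_giant_tour(tour, demand, dist, depot, capacity):
    """Minimal Prins-style Split DP: optimally partition a giant tour
    (a permutation of customers) into consecutive capacity-feasible
    routes, each starting and ending at the depot. dist(a, b) is a
    symmetric travel cost; demand maps customers to loads."""
    n = len(tour)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = min cost to serve tour[:i]
    best[0] = 0.0
    pred = [0] * (n + 1)
    for i in range(n):
        load, inner = 0, 0.0
        for j in range(i, n):          # candidate route serving tour[i..j]
            load += demand[tour[j]]
            if load > capacity:
                break
            if j > i:
                inner += dist(tour[j - 1], tour[j])
            route_cost = dist(depot, tour[i]) + inner + dist(tour[j], depot)
            if best[i] + route_cost < best[j + 1]:
                best[j + 1] = best[i] + route_cost
                pred[j + 1] = i
    # Recover the routes from the predecessor labels.
    routes, j = [], n
    while j > 0:
        i = pred[j]
        routes.append(tour[i:j])
        j = i
    return best[n], routes[::-1]
```

The cyclic variant described in the abstract additionally optimizes the starting position of the first vehicle and interleaves recharging-station insertion, both of which this sketch omits.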

Software Visualization via Hierarchic Micro/Macro Layouts
Nöllenburg, Martin, Rutter, Ignaz, Schuhmacher, Alfred
Type: Inproceedings; In: Information Visualization Theory and Applications Conf IVAPP 2016; Pages: 153-160
Show Abstract
We propose a system for visualizing the structure of software in a single drawing. In contrast to previous work we consider both the dependencies between different entities of the software and the hierarchy imposed by the nesting of classes and packages. To achieve this, we generalize the concept of micro/macro layouts introduced by Brandes and Baur (Baur and Brandes, 2008) to graphs that have more than two hierarchy levels. All entities of the software (e.g., attributes, methods, classes, packages) are represented as disk-shaped regions of the plane. The hierarchy is expressed by containment; all other relations, e.g., inheritance, function calls, and data access, are expressed by directed edges. As in the micro/macro layouts of Brandes and Baur, edges that "traverse" the hierarchy are routed together in channels to enhance the clarity of the layout. The resulting drawings provide an overview of the coarse structure of the software as well as detailed information about individual components.

Link to Repositum

Adjacency-preserving spatial treemaps
Buchin, Kevin, Eppstein, David, Löffler, Maarten, Nöllenburg, Martin, Silveira, Rodrigo I.
Type: Article; In: Journal of Computational Geometry; Vol: 7; Issue: 1; Pages: 100-122
Show Abstract
Rectangular layouts, subdivisions of an outer rectangle into smaller rectangles, have many applications in visualizing spatial information, for instance in rectangular cartograms in which the rectangles represent geographic or political regions. A spatial treemap is a rectangular layout with a hierarchical structure: the outer rectangle is subdivided into rectangles that are in turn subdivided into smaller rectangles. We describe algorithms for transforming a rectangular layout that does not have this hierarchical structure, together with a clustering of the rectangles of the layout, into a spatial treemap that respects the clustering and also respects, to the extent possible, the adjacencies of the input layout.

Link to Repositum

Consistent labeling of rotating maps
Gemsa, Andreas, Nöllenburg, Martin, Rutter, Ignaz
Type: Article; In: Journal of Computational Geometry; Vol: 7; Issue: 1; Pages: 308-331
Show Abstract
Dynamic maps that allow continuous map rotations, for example, on mobile devices, encounter new geometric labeling issues unseen in static maps before. We study the following dynamic map labeling problem: The input is an abstract map consisting of a set P of points in the plane with attached horizontally aligned rectangular labels. While the map with the point set P is rotated, all labels remain horizontally aligned. We are interested in a consistent labeling of P under rotation, i.e., an assignment of a single (possibly empty) active interval of angles for each label that determines its visibility under rotations such that visible labels neither intersect each other (soft conflicts) nor occlude points in P at any rotation angle (hard conflicts). Our goal is to find a consistent labeling that maximizes the number of visible labels integrated over all rotation angles. We first introduce a general model for labeling rotating maps and derive basic geometric properties of consistent solutions. We show NP-hardness of the above optimization problem even for unit-square labels. We then present a constant-factor approximation for this problem based on line stabbing, and refine it further into an efficient polynomial-time approximation scheme (EPTAS).

Link to Repositum

2015
A hybrid genetic algorithm with solution archive for the discrete (r|p)-centroid problem
Biesinger, Benjamin, Hu, Bin, Raidl, Günther
Type: Article; In: Journal of Heuristics; Vol: 21; Issue: 3; Pages: 391-431
Show Abstract
In this article we propose a hybrid genetic algorithm for the discrete (r|p)-centroid problem. We consider the competitive facility location problem where two non-cooperating companies enter a market sequentially and compete for market share. The first decision maker, called the leader, wants to maximize his market share knowing that a follower will enter the same market. Thus, for evaluating a leader's candidate solution, a corresponding follower's subproblem needs to be solved, and the overall problem therefore is a bi-level optimization problem. This problem is Σ₂ᵖ-hard, i.e., harder than any problem in NP. A heuristic approach is employed which is based on a genetic algorithm with tabu search as local improvement procedure and a complete solution archive. The archive is used to store and convert already visited solutions in order to avoid costly unnecessary re-evaluations. Different solution evaluation methods are combined into an effective multi-level evaluation scheme. The algorithm is tested on a well-known benchmark set as well as on larger newly created instances. For most of the instances we are able to outperform previous state-of-the-art heuristic approaches in solution quality and running time.

Link to Repositum

Decomposition based hybrid metaheuristics
Raidl, Günther R.
Type: Article; In: European Journal of Operational Research; Vol: 244; Issue: 1; Pages: 66-76
Show Abstract
Difficult combinatorial optimization problems coming from practice are nowadays often approached by hybrid metaheuristics that combine principles of classical metaheuristic techniques with advanced methods from fields like mathematical programming, dynamic programming, and constraint programming. If designed appropriately, such hybrids frequently outperform simpler "pure" approaches as they are able to exploit the underlying methods' individual advantages and benefit from synergy. This article starts with a general review of design patterns for hybrid approaches that have been successful on many occasions. More complex practical problems frequently have some special structure that might be exploited. In the field of mixed integer linear programming, three decomposition techniques are particularly well known for taking advantage of special structures: Lagrangian decomposition, Dantzig-Wolfe decomposition (column generation), and Benders' decomposition. It has been recognized that these concepts may also provide a very fruitful basis for effective hybrid metaheuristics. We review the basic principles of these decomposition techniques and discuss for each promising possibilities for combinations with metaheuristics. The approaches are illustrated with successful examples from literature.

Link to Repositum

Numerical Optimisation Approach for a Large Gas Engine Considering Different Fuel Gas Qualities
Holly, Werner, Lauer, Thomas, Pachler, Robert, Winter, Franz, Bacher, Christopher, Schuemie, Henrik A., Murakami, Shinsuke
Type: Inproceedings; In: Proceedings; Pages: 16

Link to Repositum

Recognizing Weighted Disk Contact Graphs
Klemz, Boris, Nöllenburg, Martin, Prutkin, Roman
Type: Inproceedings; In: Graph Drawing and Network Visualization; Pages: 433-446
Show Abstract
Disk contact representations realize graphs by mapping vertices bijectively to interior-disjoint disks in the plane such that two disks touch each other if and only if the corresponding vertices are adjacent in the graph. Deciding whether a vertex-weighted planar graph can be realized such that the disks' radii coincide with the vertex weights is known to be NP-hard. In this work, we reduce the gap between hardness and tractability by analyzing the problem for special graph classes. We show that it remains NP-hard for outerplanar graphs with unit weights and for stars with arbitrary weights, strengthening the previous hardness results. On the positive side, we present constructive linear-time recognition algorithms for caterpillars with unit weights and for embedded stars with arbitrary weights.

Link to Repositum

Analyzing Decoding Strategies in a Memetic Algorithm for the Multi-Layer Hierarchical Ring Network Design Problem
Schauer, Christian, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 15th International Conference on Computer Aided Systems Theory; Pages: 81-82

Link to Repositum

On finding optimal polytrees
Gaspers, Serge, Koivisto, Mikko, Liedloff, Mathieu, Ordyniak, Sebastian, Szeider, Stefan
Type: Article; In: Theoretical Computer Science; Vol: 592; Pages: 49-58
Show Abstract
We study the NP-hard problem of finding a directed acyclic graph (DAG) on a given set of nodes so as to maximize a given scoring function. The problem models the task of inferring a probabilistic network from data, which has been studied extensively in the fields of artificial intelligence and machine learning. Several variants of the problem, where the output DAG is constrained in several ways, are NP-hard as well, for example when the DAG is required to have bounded in-degree, or when it is required to be a polytree. Polynomial-time algorithms are known only for rare special cases, perhaps most notably for branchings, that is, polytrees in which the in-degree of every node is at most one. In this paper, we generalize this polynomial-time result to polytrees that can be turned into a branching by deleting a constant number of arcs. Our algorithm stems from a matroid intersection formulation. As the order of the polynomial time bound depends on the number of deleted arcs, the algorithm does not establish fixed-parameter tractability when parameterized by that number. We show that certain additional constraints on the sought polytree render the problem fixed-parameter tractable. We contrast this positive result by showing that if we parameterize by the number of deleted nodes, a somewhat more powerful parameter, the problem is not fixed-parameter tractable, subject to a complexity-theoretic assumption.

Link to Repositum

Mixed Integer Programming Models for Hybrid Electric Vehicle Routing
Bacher, Christopher, Raidl, Günther
Type: Presentation

Link to Repositum

A Strongly Exponential Separation of DNNFs from CNFs
Bova, Simone Maria
Type: Presentation

Link to Repositum

Numerische Optimierung elektrifizierter Antriebsstränge
Krenek, Thorsten, Bacher, Christopher, Raidl, Günther, Lauer, Thomas
Type: Special Contribution; Vol: 76; Issue: 3; Pages: 66-74
Show Abstract
To meet the CO2 targets mandated by legislation, the number of plug-in hybrid electric vehicles will continue to rise. Due to the higher number of degrees of freedom, the complexity of the electrified powertrain makes it difficult to optimize the vehicle components, the operating strategy, and the thermal management. Therefore, dedicated optimization software was developed at TU Wien to optimize the efficiency of an electrified powertrain while taking the heat-up behavior of the driver cabin into account.

Link to Repositum

Numerical Optimization of Electro Hybrid Powertrains
Krenek, Thorsten, Bacher, Christopher, Raidl, Günther, Lauer, Thomas
Type: Article; In: MTZ worldwide; Vol: 76; Issue: 3; Pages: 46-52
Show Abstract
In the future, the number of plug-in hybrid electric vehicles will further increase in order to fulfil CO2 legislation. The complexity of electrified powertrains makes it difficult to optimise the vehicle performance and the thermal management efficiently due to the high number of degrees-of-freedom. Therefore, a tailor-made optimisation tool was developed at the Vienna University of Technology to optimise the efficiency of an electrified powertrain while considering comfort functions like the heating of the driver cabin.

Link to Repositum

FO Model Checking of Interval Graphs
Ganian, Robert, Hlineny, Petr, Kral, Daniel, Obdrzalek, Jan, Schwartz, Jarett, Teska, Jakub
Type: Article; In: Logical Methods in Computer Science; Vol: 11; Issue: 4
Show Abstract
We study the computational complexity of the FO model checking problem on interval graphs, i.e., intersection graphs of intervals on the real line. The main positive result is that FO model checking and successor-invariant FO model checking can be solved in time O(n log n) for n-vertex interval graphs with representations containing only intervals with lengths from a prescribed finite set. We complement this result by showing that the same is not true if the lengths are restricted to any set that is dense in an open subset, e.g., in the set (1, 1 + ε).

Link to Repositum

Model Checking Existential Logic on Partially Ordered Sets
Bova, Simone, Ganian, Robert, Szeider, Stefan
Type: Article; In: ACM Transactions on Computational Logic; Vol: 17; Issue: 2; Pages: 1-35
Show Abstract
We study the problem of checking whether an existential sentence (i.e., a first-order sentence in prefix form built using existential quantifiers and all Boolean connectives) is true in a finite partially ordered set (a poset). A poset is a reflexive, antisymmetric, and transitive digraph. The problem encompasses the fundamental embedding problem of finding an isomorphic copy of a poset as an induced substructure of another poset. Model checking existential logic is already NP-hard on a fixed poset; thus, we investigate structural properties of posets yielding conditions for fixed-parameter tractability when the problem is parameterized by the sentence. We identify width as a central structural property (the width of a poset is the maximum size of a subset of pairwise incomparable elements); our main algorithmic result is that model checking existential logic on classes of finite posets of bounded width is fixed-parameter tractable. We observe a similar phenomenon in classical complexity, in which we prove that the isomorphism problem is polynomial-time tractable on classes of posets of bounded width; this settles an open problem in order theory. We surround our main algorithmic result with complexity results on less restricted, natural neighboring classes of finite posets, establishing its tightness in this sense. We also relate our work with (and demonstrate its independence of) fundamental fixed-parameter tractability results for model checking on digraphs of bounded degree and bounded clique-width.

Link to Repositum
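
Width, the central parameter in this work, is itself efficiently computable: by Dilworth's theorem the maximum antichain size equals the minimum number of chains covering the poset, which Fulkerson's reduction obtains as n minus a maximum bipartite matching. A small sketch (function names are illustrative):

```python
def poset_width(elements, leq):
    """Width (maximum antichain size) of a finite poset via Dilworth's
    theorem: width = minimum chain cover = n - maximum matching in the
    bipartite comparability graph. leq(a, b) is the partial order."""
    n = len(elements)
    # u may precede v in a chain iff u < v strictly
    succ = {u: [v for v in elements if u != v and leq(u, v)]
            for u in elements}
    match = {}  # right-side vertex -> matched left-side vertex

    def augment(u, seen):
        # Kuhn's augmenting-path search for one more matched pair.
        for v in succ[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    matching = sum(augment(u, set()) for u in elements)
    return n - matching
```

For the divisibility order on {1, 2, 3, 4, 6, 12} the width is 2 (a largest antichain is {4, 6}), while a chain such as {1, 2, 4, 8} has width 1.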

Model Counting for CNF Formulas of Bounded Modular Treewidth
Paulusma, Daniel, Slivovsky, Friedrich, Szeider, Stefan
Type: Article; In: Algorithmica; Vol: 76; Issue: 1; Pages: 168-194
Show Abstract
We define the modular treewidth of a graph as its treewidth after contraction of modules. This parameter properly generalizes treewidth and is itself properly generalized by clique-width. We show that the number of satisfying assignments can be computed in polynomial time for CNF formulas whose incidence graphs have bounded modular treewidth. Our result generalizes known results for the treewidth of incidence graphs and is incomparable with known results for clique-width (or rank-width) of signed incidence graphs. The contraction of modules is an effective data reduction procedure. Our algorithm is the first one to harness this technique for #SAT. The order of the polynomial bounding the runtime of our algorithm depends on the modular treewidth of the input formula. We show that it is unlikely that this dependency can be avoided by proving that SAT is W[1]-hard when parameterized by the modular incidence treewidth of the given CNF formula.

Link to Repositum

Parameterized and subexponential-time complexity of satisfiability problems and applications
Kanj, Iyad, Szeider, Stefan
Type: Article; In: Theoretical Computer Science; Vol: 607; Pages: 282-295
Show Abstract
We study the parameterized and the subexponential-time complexity of the weighted and the unweighted satisfiability problems on bounded-depth normalized Boolean circuits. We establish relations between the subexponential-time complexity of the weighted and the unweighted satisfiability problems, and use them to derive relations among the subexponential-time complexity of several NP-hard problems. We then study the role of certain natural structural parameters of the circuit in characterizing the parameterized and the subexponential-time complexity of the circuit satisfiability problems under consideration. We obtain threshold functions on some circuit structural parameters, including the depth, the number of gates, the fan-in, and the maximum number of (variable) occurrences, that lead to tight characterizations of the parameterized and the subexponential-time complexity of the circuit satisfiability problems under consideration.

Link to Repositum

Quantifier Reordering for QBF
Slivovsky, Friedrich, Szeider, Stefan
Type: Article; In: Journal of Automated Reasoning; Vol: 56; Issue: 4; Pages: 459-477
Show Abstract
State-of-the-art procedures for evaluating quantified Boolean formulas often expect input formulas in prenex conjunctive normal form (PCNF). We study dependency schemes as a means of reordering the quantifier prefix of a PCNF formula while preserving its truth value. Dependency schemes associate each formula with a binary relation on its variables (the dependency relation) that imposes constraints on certain operations manipulating the formula's quantifier prefix. We prove that known dependency schemes support a stronger reordering operation than was previously known. We present an algorithm that, given a formula and its dependency relation, computes a compatible reordering with a minimum number of quantifier alternations. In combination with a dependency scheme that can be computed in polynomial time, this yields a polynomial time heuristic for reducing the number of quantifier alternations of an input formula. The resolution-path dependency scheme is the most general dependency scheme introduced so far. Using an interpretation of resolution paths as directed paths in a formula's implication graph, we prove that the resolution-path dependency relation can be computed in polynomial time.

Link to Repositum

Backdoors to tractable answer-set programming
Fichte, Johannes Klaus, Szeider, Stefan
Type: Article; In: Artificial Intelligence; Vol: 220; Pages: 64-103
Show Abstract
Answer Set Programming (ASP) is an increasingly popular framework for declarative programming that admits the description of problems by means of rules and constraints that form a disjunctive logic program. In particular, many AI problems such as reasoning in a nonmonotonic setting can be directly formulated in ASP. Although the main problems of ASP are of high computational complexity, complete for the second level of the Polynomial Hierarchy, several restrictions of ASP have been identified in the literature, under which ASP problems become tractable. In this paper we use the concept of backdoors to identify new restrictions that make ASP problems tractable. Small backdoors are sets of atoms that represent "clever reasoning shortcuts" through the search space and represent a hidden structure in the problem input. The concept of backdoors is widely used in theoretical investigations in the areas of propositional satisfiability and constraint satisfaction. We show that it can be fruitfully adapted to ASP. We demonstrate how backdoors can serve as a unifying framework that accommodates several tractable restrictions of ASP known from the literature. Furthermore, we show how backdoors allow us to deploy recent algorithmic results from parameterized complexity theory to the domain of answer set programming.

Link to Repositum

Backdoors to Normality for Disjunctive Logic Programs
Fichte, Johannes K., Szeider, Stefan
Type: Article; In: ACM Transactions on Computational Logic; Vol: 17; Issue: 1; Pages: 1-23
Show Abstract
The main reasoning problems for disjunctive logic programs are complete for the second level of the polynomial hierarchy and hence considered harder than the same problems for normal (i.e., disjunction-free) programs, which are on the first level. We propose a new exact method for solving the disjunctive problems which exploits the small distance of a disjunctive program from being normal. The distance is measured in terms of the size of a smallest "backdoor to normality", which is the smallest number of atoms whose deletion makes the program normal. Our method consists of three phases. In the first phase, a smallest backdoor is computed. We show that this can be done using an efficient algorithm for computing a smallest vertex cover of a graph. In the second phase, the backdoor is used to transform the logic program into a quantified Boolean formula (QBF) where the number of universally quantified variables equals the size of the backdoor and where the total size of the quantified Boolean formula is quasilinear in the size of the given logic program. The quasilinearity is achieved by means of a characterization of the least model of a Horn program in terms of level numberings. In a third phase, the universal variables are eliminated using universal expansion, yielding a propositional formula. The blowup in the last phase is confined to a factor that is exponential in the size of the backdoor but linear in the size of the quantified Boolean formula. By checking the satisfiability of the resulting formula with a Sat solver (or by checking the satisfiability of the quantified Boolean formula by a Qbf-Sat solver), we can decide the Asp reasoning problems on the input program.
In consequence, we have a transformation from Asp problems to propositional satisfiability where the combinatorial explosion, which is expected when transforming a problem from the second level of the polynomial hierarchy to the first level, is confined to a function of the distance to normality of the input program. In terms of parameterized complexity, the transformation is fixed-parameter tractable. We complement this result by showing that (under plausible complexity-theoretic assumptions) such a fixed-parameter tractable transformation is not possible if we consider the distance to tightness instead of distance to normality.

Link to Repositum

Computational Performance Evaluation of Two Integer Linear Programming Models for the Minimum Common String Partition Problem
Blum, Christian, Raidl, Günther R.
Type: Article; In: Optimization Letters; Vol: 10; Issue: 1; Pages: 189-205
Show Abstract
In the minimum common string partition (MCSP) problem two related input strings are given. "Related" refers to the property that both strings consist of the same set of letters appearing the same number of times in each of the two strings. The MCSP seeks a minimum cardinality partitioning of one string into non-overlapping substrings that is also a valid partitioning for the second string. This problem has applications in bioinformatics, e.g., in analyzing related DNA or protein sequences. For strings with lengths less than about 1000 letters, a previously published integer linear programming (ILP) formulation yields, when solved with a state-of-the-art solver such as CPLEX, satisfactory results. In this work, we propose a new, alternative ILP model that is compared to the former one. While a polyhedral study shows the linear programming relaxations of the two models to be equally strong, a comprehensive experimental comparison using real-world as well as artificially created benchmark instances indicates substantial computational advantages of the new formulation.
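The validity condition on a candidate solution can be made concrete: the same multiset of substring blocks must tile both related strings. A small backtracking checker (illustrative only, not from the paper) could look like this:

```python
from collections import Counter

def tiles(blocks, s):
    """Backtracking check: can the multiset `blocks` be concatenated,
    in some order, to spell out s exactly? Fine for small instances."""
    if not s:
        return not blocks
    for i, b in enumerate(blocks):
        if s.startswith(b) and tiles(blocks[:i] + blocks[i + 1:], s[len(b):]):
            return True
    return False

def is_common_partition(blocks, s1, s2):
    """Valid MCSP solution: the strings are related and the same
    multiset of blocks tiles both of them."""
    return (Counter(s1) == Counter(s2)
            and tiles(list(blocks), s1)
            and tiles(list(blocks), s2))

# "ababcab" and "abcabab" are related; three blocks suffice here.
assert is_common_partition(["ab", "abc", "ab"], "ababcab", "abcabab")
```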

Link to Repositum

A SAT Approach to Clique-Width
Heule, Marijn J. H., Szeider, Stefan
Type: Article; In: ACM Transactions on Computational Logic; Vol: 16; Issue: 3; Pages: 1-27
Show Abstract
Clique-width is a graph invariant that has been widely studied in combinatorics and computational logic. Computing the clique-width of a graph is an intricate problem, because the exact clique-width is not known even for very small graphs. We present a new method for computing clique-width via an encoding to propositional satisfiability (SAT), which is then evaluated by a SAT solver. Our encoding is based on a reformulation of clique-width in terms of partitions that utilizes an efficient encoding of cardinality constraints. Our SAT-based method is the first to discover the exact clique-width of various small graphs, including famous named graphs from the literature as well as random graphs of various density. With our method, we determined the smallest graphs that require a small prescribed clique-width. We further show how our method can be modified to compute the linear clique-width of graphs, a variant of clique-width that has recently received considerable attention. In an appendix, we provide certificates for tight upper bounds for the clique-width and linear clique-width of famous named graphs.

Link to Repositum

Improving Vertex Cover as a Graph Parameter
Ganian, Robert
Type: Article; In: DISCRETE MATHEMATICS AND THEORETICAL COMPUTER SCIENCE; Vol: 17; Issue: 2; Pages: 77-100
Show Abstract
Parameterized algorithms are often used to efficiently solve NP-hard problems on graphs. In this context, vertex cover is used as a powerful parameter for dealing with graph problems which are hard to solve even when parameterized by tree-width; however, the drawback of vertex cover is that bounding it severely restricts admissible graph classes. We introduce a generalization of vertex cover called twin-cover and show that FPT algorithms exist for a wide range of difficult problems when parameterized by twin-cover. The advantage of twin-cover over vertex cover is that it imposes a lesser restriction on the graph structure and attains low values even on dense graphs. Apart from introducing the parameter itself, this article provides a number of new FPT algorithms parameterized by twin-cover with a special emphasis on solving problems which are not in FPT even when parameterized by tree-width. It also shows that MS1 model checking can be done in elementary FPT time parameterized by twin-cover and discusses the field of kernelization.
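The defining condition of twin-cover is easy to check directly: every edge must either have an endpoint in the cover or join twins (adjacent vertices with the same neighborhood outside the pair). A small verifier sketch (illustrative, with the graph given as a dict of neighbor sets) shows why dense graphs can have tiny twin-covers:

```python
def is_twin_cover(adj, cover):
    """Check the twin-cover condition: every edge {u, v} either has an
    endpoint in `cover`, or u and v are twins, i.e., adjacent with
    N(u) - {v} == N(v) - {u}.  adj: dict vertex -> set of neighbors."""
    for u in adj:
        for v in adj[u]:
            if u < v and u not in cover and v not in cover:
                if adj[u] - {v} != adj[v] - {u}:
                    return False
    return True

# A complete graph consists entirely of twins: the empty set is already
# a twin-cover, while its smallest vertex cover has n - 1 vertices.
k3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert is_twin_cover(k3, set())

# On a path 0-1-2 the middle vertex is needed.
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
assert not is_twin_cover(p3, set()) and is_twin_cover(p3, {1})
```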

Link to Repositum

On the Subexponential-Time Complexity of CSP
De Haan, Ronald, Kanj, Iyad, Szeider, Stefan
Type: Article; In: Journal of Artificial Intelligence Research; Vol: 52; Pages: 203-234
Show Abstract
Not all NP-complete problems share the same practical hardness with respect to exact computation. Whereas some NP-complete problems are amenable to efficient computational methods, others are yet to show any such sign. It becomes a major challenge to develop a theoretical framework that is more fine-grained than the theory of NP-completeness, and that can explain the distinction between the exact complexities of various NP-complete problems. This distinction is highly relevant for constraint satisfaction problems under natural restrictions, where various shades of hardness can be observed in practice. Acknowledging the NP-hardness of such problems, one has to look beyond polynomial time computation. The theory of subexponential-time complexity provides such a framework, and has been enjoying increasing popularity in complexity theory. An instance of the constraint satisfaction problem with n variables over a domain of d values can be solved by brute-force in d^n steps (omitting a polynomial factor). In this paper we study the existence of subexponential-time algorithms, that is, algorithms running in d^o(n) steps, for various natural restrictions of the constraint satisfaction problem. We consider both the constraint satisfaction problem in which all the constraints are given extensionally as tables, and that in which all the constraints are given intensionally in the form of global constraints. We provide tight characterizations of the subexponential-time complexity of the aforementioned problems with respect to several natural structural parameters, which allows us to draw a detailed landscape of the subexponential-time complexity of the constraint satisfaction problem. Our analysis provides fundamental results indicating whether and when one can significantly improve on the brute-force search approach for solving the constraint satisfaction problem.
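The d^n brute-force baseline that the subexponential-time question is measured against can be sketched in a few lines for extensionally given constraints (an illustration, not an algorithm from the paper):

```python
from itertools import product

def solve_csp(n, d, constraints):
    """Brute-force CSP search in d**n steps (up to a polynomial factor).

    constraints: list of (scope, relation) pairs, where scope is a tuple
    of variable indices and relation is a set of allowed value tuples
    (the extensional, table representation)."""
    for assignment in product(range(d), repeat=n):
        if all(tuple(assignment[v] for v in scope) in rel
               for scope, rel in constraints):
            return assignment
    return None

# 2-coloring as a CSP: a triangle is infeasible, a path is not.
neq = {(0, 1), (1, 0)}
assert solve_csp(3, 2, [((0, 1), neq), ((1, 2), neq), ((0, 2), neq)]) is None
assert solve_csp(3, 2, [((0, 1), neq), ((1, 2), neq)]) == (0, 1, 0)
```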

Link to Repositum

A Survey on Parameterized Complexity and SAT
Szeider, Stefan
Type: Presentation
Show Abstract
In this talk I will discuss basic concepts of parameterized complexity (such as fixed-parameter tractability, reductions, hardness, and kernelization) and survey parameterized complexity results related to satisfiability (SAT). The focus will be on laying out what kind of questions can be asked and not on technical details.

Link to Repositum

Dependency Schemes for Quantified Boolean Formulas
Slivovsky, Friedrich
Type: Presentation
Show Abstract
The nesting of existential and universal quantifiers in Quantified Boolean Formulas causes dependencies among variables that have to be respected by solvers and preprocessing techniques. Given formulas in prenex normal form, standard algorithms implicitly make the most conservative assumption about variable dependencies: variable y depends on variable x whenever x and y are associated with different quantifiers and x precedes y in the quantifier prefix. The resulting set of dependencies is often a coarse overapproximation containing many "spurious" dependencies which lead to unnecessary restrictions that, in turn, inhibit performance.

Link to Repositum

Generalized Basic Logic, Polynomial Space, and Disjunction Property
Bova, Simone Maria
Type: Presentation
Show Abstract
We report on research by Montagna and collaborators on the combinatorial and computational aspects of generalized basic logic. In the first part of the talk, we focus on the PSPACE-completeness of the tautology and entailment problems. In the second part of the talk, we discuss a syntactic relaxation of the disjunction property leading to an uncountable family of substructural logics with a PSPACE-hard tautology problem (it is known that substructural logics enjoying the full disjunction property have a PSPACE-hard tautology problem).

Link to Repositum

Succinctness in Knowledge Representation
Bova, Simone Maria
Type: Presentation

Link to Repositum

Discovering Archipelagos of Tractability for Constraint Satisfaction and Counting
Ganian, Robert
Type: Presentation
Show Abstract
The Constraint Satisfaction Problem (CSP) is a central and generic computational problem which provides a common framework for many theoretical and practical applications. A central line of research is concerned with the identification of classes of instances for which CSP can be solved in polynomial time; such classes are often called "islands of tractability". A prominent way of defining islands of tractability for CSP is to restrict the relations that may occur in the constraints to a fixed set, called a constraint language, where a constraint language is conservative if it contains all unary relations. Schaefer's famous Dichotomy Theorem (STOC 1978) identifies all islands of tractability in terms of tractable constraint languages over a Boolean domain of values. Since then many extensions and generalizations of this result have been obtained. Recently, Bulatov (TOCL 2011, JACM 2013) gave a full characterization of all islands of tractability for CSP and the counting version #CSP that are defined in terms of conservative constraint languages. This paper addresses a general limitation of the mentioned tractability results for CSP and #CSP: they only apply to instances where all constraints belong to a single tractable language (in general, the union of two tractable languages is not tractable). We show that we can overcome this limitation as long as we keep some control of how constraints over the various considered tractable languages interact with each other. For this purpose we utilize the notion of a strong backdoor of a CSP instance, as introduced by Williams et al. (IJCAI 2003), which is a set of variables that when instantiated moves the instance to an island of tractability, i.e., to a tractable class of instances. We consider strong backdoors into scattered classes, consisting of CSP instances where each connected component belongs entirely to some class from a list of tractable classes.
Figuratively speaking, a scattered class constitutes an archipelago of tractability. The main difficulty lies in finding a strong backdoor of given size k; once it is found, we can try all possible instantiations of the backdoor variables and apply the polynomial-time algorithms associated with the islands of tractability on the list component-wise. Our main result is an algorithm that, given a CSP instance with n variables, finds in FPT time a strong backdoor into a scattered class (associated with a list of finite conservative constraint languages) of size k or correctly decides that no such backdoor exists. This also gives the running time for solving #CSP, provided that #CSP is polynomial-time tractable for the considered constraint languages. Our result makes significant progress towards the main goal of the backdoor-based approach to CSPs - the identification of maximal base classes for which small backdoors can be detected efficiently.

Link to Repositum

Algorithmic Applications of Large Well-Structured Modulators
Ganian, Robert
Type: Presentation
Show Abstract
A modulator of a graph G to a specified base class C is a set of vertices whose deletion puts G in C. The cardinality of a modulator to various tractable graph classes has long been used as a form of structure which can be exploited to obtain efficient algorithms for a range of important problems, and various popular notions such as vertex cover and feedback vertex set form special cases of modulators (see for instance the work of Fellows et al. or Fomin et al.). Here we investigate what happens when a graph contains a modulator to some tractable class which is large but "well-structured" (in the sense of having bounded rank-width). Can such modulators still be exploited to obtain efficient algorithms? And is it even possible to find such modulators efficiently? We show that the parameters derived from such well-structured modulators are not only more general than the cardinality of modulators, but are in fact more general than rank-width itself. We provide an FPT algorithm for finding such well-structured modulators to any graph class which can be characterized by a finite set of obstructions. Aside from developing algorithms utilizing well-structured modulators for individual problems, we also show that well-structured modulators can be used for the efficient model checking of any Monadic Second Order sentence, as long as certain necessary conditions are met.

Link to Repositum

Parameterized Compilability of Clause Entailment
Bova, Simone Maria
Type: Presentation

Link to Repositum

Variable-Deletion Backdoors to Planning
Ordyniak, Sebastian
Type: Presentation

Link to Repositum

A Complete Parameterized Complexity Analysis of Bounded Planning
Bäckström, Christer, Jonsson, Peter, Ordyniak, Sebastian, Szeider, Stefan
Type: Article; In: Journal of Computer and System Sciences; Vol: 81; Issue: 7; Pages: 1311-1332
Show Abstract
The propositional planning problem is a notoriously difficult computational problem, which remains hard even under strong syntactical and structural restrictions. Given its difficulty it becomes natural to study planning in the context of parameterized complexity. In this paper we continue the work initiated by Downey, Fellows and Stege on the parameterized complexity of planning with respect to the parameter "length of the solution plan." We provide a complete classification of the parameterized complexity of the planning problem under two of the most prominent syntactical restrictions, i.e., the so called PUBS restrictions introduced by Bäckström and Nebel and restrictions on the number of preconditions and effects as introduced by Bylander. We also determine which of the considered fixed-parameter tractable problems admit a polynomial kernel and which do not.

Link to Repositum

A New Type of Metamodel for Longitudinal Dynamics Optimization of Hybrid Electric Vehicles
Bacher, Christopher, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 15th International Conference on Computer Aided Systems Theory; Pages: 119-120

Link to Repositum

Heuristic Approaches for the Probabilistic Traveling Salesman Problem
Biesinger, Benjamin, Weiler, Christoph, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 15th International Conference on Computer Aided Systems Theory; Pages: 99-100

Link to Repositum

A New Solution Representation for the Firefighter Problem
Hu, Bin, Windbichler, Andreas, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization - EvoCOP 2015; Pages: 25-35
Show Abstract
The firefighter problem (FFP) is used as a model to simulate how a fire breaks out and spreads to its surroundings over a discrete time period. The goal is to deploy a given number of firefighters on strategic points at each time step to contain the fire in a most efficient way, so that as many areas are saved from the fire as possible. In this paper we introduce a new solution representation for the FFP which can be applied in metaheuristic approaches. Compared to the existing approach in the literature, it is more compact in a sense that the solution space is smaller although the complexity for evaluating a solution remains unchanged. We use this representation in conjunction with a variable neighborhood search (VNS) approach to tackle the FFP. To speed up the optimization process, we propose an incremental evaluation technique that omits unnecessary re-calculations. Computational tests were performed on a benchmark instance set containing 120 random graphs of different size and density. Results indicate that our VNS approach is highly competitive with existing state-of-the-art approaches.
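The FFP dynamics underlying any solution evaluation can be simulated directly: per time step, deploy that step's firefighters, then let the fire spread to every undefended neighbor, and once the deployment schedule is exhausted let the fire spread until it stops. A toy sketch (the representation and graph are illustrative, not the paper's):

```python
def simulate(adj, source, defenses):
    """Simulate the FFP: per time step, deploy that step's firefighters
    on not-yet-burned vertices, then spread the fire to every undefended
    neighbor; continue until the fire can no longer spread.
    Returns the set of burned vertices.
    adj: dict vertex -> set of neighbors; defenses: one vertex list per step."""
    burned, defended = {source}, set()
    steps = iter(defenses)
    spreading = True
    while spreading:
        defended.update(v for v in next(steps, []) if v not in burned)
        frontier = {w for v in burned for w in adj[v]} - burned - defended
        spreading = bool(frontier)
        burned |= frontier
    return burned

# On a path 0-1-2-3 with the fire starting at 0, defending vertex 1 at
# the first step saves three of the four vertices.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert simulate(path, 0, [[1]]) == {0}
assert simulate(path, 0, []) == {0, 1, 2, 3}
```

An incremental evaluation as used in the paper would avoid re-running this simulation from scratch after each neighborhood move.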

Link to Repositum

A Cluster-First Route-Second Approach for Balancing Bicycle Sharing Systems
Kloimüllner, Christian, Papazek, Petrina, Raidl, Günther, Hu, Bin
Type: Inproceedings; In: Extended Abstracts of the 15th International Conference on Computer Aided Systems Theory; Pages: 125-126

Link to Repositum

On Solving the Most Strings With Few Bad Columns Problem: An ILP Model and Heuristics
Lizarraga, Evelia, Blesa, Maria José, Blum, Christian, Raidl, Günther
Type: Inproceedings; In: Innovations in Intelligent Systems and Applications (INISTA), 2015 International Symposium on; Pages: 1-8
Show Abstract
The most strings with few bad columns problem is an NP-hard combinatorial optimization problem from the bioinformatics field. This paper presents the first integer linear programming model for this problem. Moreover, a simple greedy heuristic and a more sophisticated extension, namely a greedy-based pilot method, are proposed. Experiments show that, as expected, the greedy-based pilot method improves over the greedy strategy. For problem instances of small and medium size the best results were obtained by solving the integer linear programming model by CPLEX, while the greedy-based pilot method scales much better to large problem instances.
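The pilot method's look-ahead principle is generic: evaluate each candidate extension by the score of its full greedy completion, then commit to the best one. The sketch below illustrates this on a toy knapsack-style instance (not the paper's problem), where the plain greedy is misled but the pilot look-ahead is not:

```python
def greedy_complete(partial, candidates, score):
    """Finish a partial solution by always taking the best-looking item."""
    sol = list(partial)
    while candidates(sol):
        sol.append(max(candidates(sol), key=lambda c: score(sol + [c])))
    return sol

def pilot(candidates, score):
    """Pilot method: rate each candidate extension by the score of its
    full greedy completion, then commit to the best-rated candidate."""
    sol = []
    while candidates(sol):
        sol.append(max(candidates(sol),
                       key=lambda c: score(greedy_complete(sol + [c],
                                                           candidates, score))))
    return sol

# Toy knapsack, capacity 10: greedy grabs "a" (value 10) and is stuck;
# the pilot look-ahead discovers that "b" + "c" pay 12 together.
items = {"a": (10, 10.0), "b": (5, 6.0), "c": (5, 6.0)}  # weight, value

def candidates(sol):
    free = 10 - sum(items[i][0] for i in sol)
    return [i for i in items if i not in sol and items[i][0] <= free]

def score(sol):
    return sum(items[i][1] for i in sol)

assert greedy_complete([], candidates, score) == ["a"]
assert pilot(candidates, score) == ["b", "c"]
```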

Link to Repositum

A Variable Neighborhood Search for the Generalized Vehicle Routing Problem with Stochastic Demands
Biesinger, Benjamin, Hu, Bin, Raidl, Günther R.
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization; Pages: 48-60
Show Abstract
In this work we consider the generalized vehicle routing problem with stochastic demands (GVRPSD) being a combination of the generalized vehicle routing problem, in which the nodes are partitioned into clusters, and the vehicle routing problem with stochastic demands, where the exact demands of the nodes are not known beforehand. It is an NP-hard problem for which we propose a variable neighborhood search (VNS) approach to minimize the expected tour length through all clusters. We use a permutation encoding for the cluster sequence and consider the preventive restocking strategy where the vehicle restocks before it potentially runs out of goods. The exact solution evaluation is based on dynamic programming and is very time-consuming. Therefore we propose a multi-level evaluation scheme to significantly reduce the time needed for solution evaluations. Two different algorithms for finding an initial solution and three well-known neighborhood structures for permutations are used within the VNS. Results show that the multi-level evaluation scheme is able to drastically reduce the overall run-time of the algorithm and that it is essential for tackling larger instances. A comparison to an exact approach shows that the VNS is able to find an optimal or near-optimal solution in much shorter time.
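The abstract does not name the three permutation neighborhoods; swap, insertion, and segment reversal (2-opt style) are the usual choices, so the sketch below uses those together with a first-improvement descent on a toy objective. Inside a VNS, such a descent would alternate with random "shaking" moves drawn from the other neighborhoods:

```python
def swap(p, i, j):
    q = list(p); q[i], q[j] = q[j], q[i]; return tuple(q)

def insertion(p, i, j):
    q = list(p); q.insert(j, q.pop(i)); return tuple(q)

def reversal(p, i, j):  # 2-opt-style segment reversal
    q = list(p); q[i:j + 1] = reversed(q[i:j + 1]); return tuple(q)

def neighbors(p, move):
    n = len(p)
    return (move(p, i, j) for i in range(n) for j in range(n) if i != j)

def local_search(p, objective, move):
    """First-improvement descent in one neighborhood structure."""
    improved = True
    while improved:
        improved = False
        for q in neighbors(p, move):
            if objective(q) < objective(p):
                p, improved = q, True
                break
    return p

# Toy objective: out-of-order pairs; descent in the swap neighborhood
# sorts the permutation.
def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

assert local_search((2, 0, 3, 1), inversions, swap) == (0, 1, 2, 3)
```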

Link to Repositum

Parameterized Complexity Results for Agenda Safety in Judgment Aggregation
de Haan, Ronald, Endriss, Ulle, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems - AAMAS 2015; Pages: 127-136
Show Abstract
Many problems arising in computational social choice are of high computational complexity, and some are located at higher levels of the Polynomial Hierarchy. We argue that a parameterized complexity analysis provides valuable insight into the factors contributing to the complexity of these problems, and can lead to practically useful algorithms. As a case study, we consider the problem of agenda safety for the majority rule in judgment aggregation, consider several natural parameters for this problem, and determine the parameterized complexity for each of these. Our analysis is aimed at obtaining fixed-parameter tractable (fpt) algorithms that use a small number of calls to a SAT solver. We identify several positive results, including several results where the problem can be fpt-reduced to a single SAT instance. In addition, we identify several negative results. We hope that this work may help initiate a structured parameterized complexity investigation of problems arising in the field of computational social choice that are located at higher levels of the Polynomial Hierarchy.

Link to Repositum

A Value-Correction Construction Heuristic for the Two-Dimensional Cutting Stock Problem with Variable Sheet Size
Dusberger, Frederico, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 15th International Conference on Computer Aided Systems Theory; Pages: 109-110

Link to Repositum

Metaheuristics for the Two-Dimensional Container-Pre-Marshalling-Problem
Tus, Alan, Rendl, Andrea, Raidl, Günther
Type: Inproceedings; In: Learning and Intelligent Optimization; Pages: 186-201
Show Abstract
We introduce a new problem arising in small and medium-sized container terminals: the Two-Dimensional Pre-Marshalling Problem (2D-PMP). It is an extension of the well-studied Pre-Marshalling Problem (PMP) that is crucial in container storage. The 2D-PMP is particularly challenging due to its complex side constraints that are challenging to express and difficult to consider with standard techniques for the PMP. We present three different heuristic approaches for the 2D-PMP. First, we adapt an existing construction heuristic that was designed for the classical PMP. We then apply this heuristic within two metaheuristics: a Pilot method and a Max-Min Ant System that incorporates a special pheromone model. In our empirical evaluation we observe that the Max-Min Ant System outperforms the other approaches by yielding better solutions in almost all cases.

Link to Repositum

Solving the Longest Common Subsequence Problem Using a Parallel Ant Colony Optimization Algorithm
Markvica, David, Schauer, Christian, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 15th International Conference on Computer Aided Systems Theory; Pages: 113-114

Link to Repositum

Community Structure Inspired Algorithms for SAT and #SAT
Ganian, Robert, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 18th International Conference on Theory and Applications of Satisfiability Testing; Pages: 223-238
Show Abstract
We introduce h-modularity, a structural parameter of CNF formulas, and present algorithms that render the decision problem SAT and the model counting problem #SAT fixed-parameter tractable when parameterized by h-modularity. The new parameter is defined in terms of a partition of clauses of the given CNF formula into strongly interconnected communities which are sparsely interconnected with each other. Each community forms a hitting formula, whereas the interconnections between communities form a graph of small treewidth. Our algorithms first identify the community structure and then use it for an efficient solution of SAT and #SAT, respectively. We further show that h-modularity is incomparable with known parameters under which SAT or #SAT is fixed-parameter tractable.
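The reason hitting formulas make good communities is that #SAT collapses to a closed formula on them: in a hitting formula the falsifying assignments of distinct clauses are pairwise disjoint (two clashing clauses cannot be falsified simultaneously), so the model count is a simple sum. A small sketch of this standard fact:

```python
def is_hitting(clauses):
    """Every pair of clauses clashes: one contains a literal, the other
    its negation. Literals are ints, with -v the negation of v."""
    return all(any(-lit in d for lit in c)
               for i, c in enumerate(clauses) for d in clauses[i + 1:])

def count_models_hitting(clauses, n):
    """#SAT for a hitting formula over n variables: a clause with |C|
    literals is falsified by exactly 2**(n - |C|) assignments, and for a
    hitting formula these falsifying sets are pairwise disjoint."""
    assert is_hitting(clauses)
    return 2 ** n - sum(2 ** (n - len(c)) for c in clauses)

# (x1 or x2) and (not x1 or not x2) is "exactly one": 2 models over n = 2.
assert count_models_hitting([{1, 2}, {-1, -2}], 2) == 2
```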

Link to Repositum

First-Order Queries on Finite Abelian Groups
Bova, Simone Maria, Martin, Barnaby
Type: Inproceedings; In: Proceedings of the 24th EACSL Annual Conference on Computer Science Logic; Pages: 19
Show Abstract
We study the computational problem of checking whether a logical sentence is true in a finite abelian group. We prove that model checking first-order sentences on finite abelian groups is fixed-parameter tractable, when parameterized by the size of the sentence. We also prove that model checking monadic second-order sentences on finite abelian groups finitely presented by integer matrices is not fixed-parameter tractable (under standard assumptions in parameterized complexity).

Link to Repositum

A Dichotomy Result for Ramsey Quantifiers
de Haan, Ronald, Szymanik, Jakub
Type: Inproceedings; In: Logic, Language, Information, and Computation; Pages: 69-80
Show Abstract
Ramsey quantifiers are a natural object of study not only for logic and computer science, but also for formal semantics of natural language. Restricting attention to finite models leads to the natural question whether all Ramsey quantifiers are either polynomial-time computable or NP-hard, and whether we can give a natural characterization of the polynomial-time computable quantifiers. In this paper, we first show that there exist intermediate Ramsey quantifiers and then we prove a dichotomy result for a large and natural class of Ramsey quantifiers, based on a reasonable and widely-believed complexity assumption. We show that the polynomial-time computable quantifiers in this class are exactly the constant-log-bounded Ramsey quantifiers.

Link to Repositum

Fixed-parameter Tractable Reductions to SAT for Planning
de Haan, Ronald, Kronegger, Martin, Pfandler, Andreas
Type: Inproceedings; In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence - IJCAI 2015; Pages: 2897-2903
Show Abstract
Planning is an important AI task that gives rise to many hard problems. In order to come up with efficient algorithms for this setting, it is important to understand the sources of complexity. For planning problems that are beyond NP, identifying fragments that allow an efficient reduction to SAT can be a feasible approach due to the great performance of modern SAT solvers. In this paper, we use the framework of parameterized complexity theory to obtain a more fine-grained complexity analysis of natural planning problems beyond NP. With this analysis we are able to point out several variants of planning where the structure in the input makes encodings into SAT feasible. We complement these positive results with some hardness results and a new machine characterization for the intractability class ∃*∀k-W[P].

Link to Repositum

On Minimizing Crossings in Storyline Visualizations
Nöllenburg, Martin, Kostitsyna, Irina, Polishchuk, Valentin, Schulz, André, Strash, Darren
Type: Inproceedings; In: Graph Drawing and Network Visualization 23rd International Symposium, GD 2015, Los Angeles, CA, USA, September 24-26, 2015, Revised Selected Papers; Pages: 192-198
Show Abstract
In a storyline visualization, we visualize a collection of interacting characters (e.g., in a movie, play, etc.) by x-monotone curves that converge for each interaction, and diverge otherwise. Given a storyline with n characters, we show tight lower and upper bounds on the number of crossings required in any storyline visualization for a restricted case. In particular, we show that if (1) each meeting consists of exactly two characters and (2) the meetings can be modeled as a tree, then we can always find a storyline visualization with O(n log n) crossings. Furthermore, we show that there exist storylines in this restricted case that require Ω(n log n) crossings. Lastly, we show that, in the general case, minimizing the number of crossings in a storyline visualization is fixed-parameter tractable, when parameterized on the number of characters k. Our algorithm runs in time O(k!^2 k log k + k!^2 m), where m is the number of meetings.

Link to Repositum

Drawing Large Graphs by Multilevel Maxent-Stress Optimization
Nöllenburg, Martin, Meyerhenke, Henning, Schulz, Christian
Type: Inproceedings; In: Graph Drawing and Network Visualization 23rd International Symposium, GD 2015, Los Angeles, CA, USA, September 24-26, 2015, Revised Selected Papers; Pages: 30-43
Show Abstract
Drawing large graphs appropriately is an important step for the visual analysis of data from real-world networks. Here we present a novel multilevel algorithm to compute a graph layout with respect to a recently proposed metric that combines layout stress and entropy. As opposed to previous work, we do not solve the linear systems of the maxent-stress metric with a typical numerical solver. Instead we use a simple local iterative scheme within a multilevel approach. To accelerate local optimization, we approximate long-range forces and use shared-memory parallelism. Our experiments validate the high potential of our approach, which is particularly appealing for dynamic graphs. In comparison to the previously best maxent-stress optimizer, which is sequential, our parallel implementation is on average 30 times faster already for static graphs (and still faster if executed on one thread) while producing a comparable solution quality.

Link to Repositum

On Compiling Structured CNFs to OBDDs
Bova, Simone Maria, Slivovsky, Friedrich
Type: Inproceedings; In: Proceedings of the 10th International Computer Science Symposium; Pages: 80-94
Show Abstract
We present new results on the size of OBDD representations of structurally characterized classes of CNF formulas. First, we prove that variable convex formulas (that is, formulas with incidence graphs that are convex with respect to the set of variables) have polynomial OBDD size. Second, we prove an exponential lower bound on the OBDD size of a family of CNF formulas with incidence graphs of bounded degree. We obtain the first result by identifying a simple sufficient condition, which we call the few subterms property, for a class of CNF formulas to have polynomial OBDD size, and show that variable convex formulas satisfy this condition. To prove the second result, we exploit the combinatorial properties of expander graphs; this approach allows us to establish an exponential lower bound on the OBDD size of formulas satisfying strong syntactic restrictions.

Link to Repositum

Solving Problems on Graphs of High Rank-Width
Eiben, Eduard, Ganian, Robert, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 14th International Symposium on Algorithms and Data Structures; Pages: 314-326
Show Abstract
A modulator of a graph G to a specified graph class H is a set of vertices whose deletion puts G into H. The cardinality of a modulator to various graph classes has long been used as a structural parameter which can be exploited to obtain FPT algorithms for a range of hard problems. Here we investigate what happens when a graph contains a modulator which is large but "well-structured" (in the sense of having bounded rank-width). Can such modulators still be exploited to obtain efficient algorithms? And is it even possible to find such modulators efficiently? We first show that the parameters derived from such well-structured modulators are strictly more general than the cardinality of modulators and rank-width itself. Then, we develop an FPT algorithm for finding such well-structured modulators to any graph class which can be characterized by a finite set of forbidden induced subgraphs. We proceed by showing how well-structured modulators can be used to obtain efficient parameterized algorithms for Minimum Vertex Cover and Maximum Clique. Finally, we use the concept of well-structured modulators to develop an algorithmic meta-theorem for efficiently deciding problems expressible in Monadic Second Order (MSO) logic, and prove that this result is tight in the sense that it cannot be generalized to LinEMSO problems.

Link to Repositum

Partitioning Graph Drawings and Triangulated Simple Polygons into Greedily Routable Regions
Nöllenburg, Martin, Prutkin, Roman, Rutter, Ignaz
Type: Inproceedings; In: Algorithms and Computation; Pages: 637-649
Show Abstract
A greedily routable region (GRR) is a closed subset of R^2, in which each destination point can be reached from each starting point by choosing the direction with maximum reduction of the distance to the destination in each point of the path. Recently, Tan and Kermarrec proposed a geographic routing protocol for dense wireless sensor networks based on decomposing the network area into a small number of interior-disjoint GRRs. They showed that minimum decomposition is NP-hard for polygons with holes. We consider minimum GRR decomposition for plane straight-line drawings of graphs. Here, GRRs coincide with self-approaching drawings of trees, a drawing style which has become a popular research topic in graph drawing. We show that minimum decomposition is still NP-hard for graphs with cycles, but can be solved optimally for trees in polynomial time. Additionally, we give a 2-approximation for simple polygons, if a given triangulation has to be respected.

Link to Repositum

On the Readability of Boundary Labeling
Barth, Lukas, Gemsa, Andreas, Niedermann, Benjamin, Nöllenburg, Martin
Type: Inproceedings; In: Graph Drawing and Network Visualization; Pages: 515-527
Show Abstract
Boundary labeling deals with annotating features in images such that labels are placed outside of the image and are connected by curves (so-called leaders) to the corresponding features. While boundary labeling has been extensively investigated from an algorithmic perspective, research on its readability has been neglected. In this paper we present the first formal user study on the readability of boundary labeling. We consider the four most studied leader types with respect to their performance, i.e., whether and how fast a viewer can assign a feature to its label and vice versa. We give a detailed analysis of the results regarding the readability of the four models and discuss their aesthetic qualities based on the users' preference judgments and interviews.

Link to Repositum

Combinatorial Properties of Triangle-Free Rectangle Arrangements and the Squarability Problem
Nöllenburg, Martin, Klawitter, Jonathan, Ueckerdt, Thorsten
Type: Inproceedings; In: Graph Drawing and Network Visualization (GD'15); Pages: 231-244
Show Abstract
We consider arrangements of axis-aligned rectangles in the plane. A geometric arrangement specifies the coordinates of all rectangles, while a combinatorial arrangement specifies only the respective intersection type in which each pair of rectangles intersects. First, we investigate combinatorial contact arrangements, i.e., arrangements of interior-disjoint rectangles, with a triangle-free intersection graph. We show that such rectangle arrangements are in bijection with the 4-orientations of an underlying planar multigraph and prove that there is a corresponding geometric rectangle contact arrangement. Using this, we give a new proof that every triangle-free planar graph is the contact graph of such an arrangement. Secondly, we introduce the question whether a given rectangle arrangement has a combinatorially equivalent square arrangement. In addition to some necessary conditions and counterexamples, we show that rectangle arrangements pierced by a horizontal line are squarable under certain sufficient conditions.

Link to Repositum

Variable Neighborhood Search for Integrated Timetable Based Design of Railway Infrastructure
Grujicic, Igor, Raidl, Günther, Schöbel, Andreas
Type: Article; In: Electronic Notes in Discrete Mathematics; Vol: 47; Pages: 141-148
Show Abstract
In this paper we deal with the problem of building new or extending an existing railway infrastructure. The goal is to determine a minimum cost infrastructure fulfilling the requirements defined by an integrated timetable and the operation of the railway system. We first model this planning task as a combinatorial network optimization problem, capturing the essential aspects. We then present a metaheuristic solution method based on general variable neighborhood search that makes use of a dynamic programming procedure for realizing individual connections. Computational experiments indicate that the suggested approach is promising and the analysis of obtained results gives useful hints for future work in this area.

Link to Repositum

Complexity of the Winner Determination Problem in Judgment Aggregation: Kemeny, Slater, Tideman, Young
de Haan, Ronald, Endriss, Ulle
Type: Inproceedings; In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems - AAMAS 2015; Pages: 117-125
Show Abstract
Judgment aggregation is a collective decision making framework where the opinions of a group of agents are combined into a collective opinion. This can be done using many different judgment aggregation procedures. We study the computational complexity of computing the group opinion for several of the most prominent judgment aggregation procedures. In particular, we show that the complexity of this winner determination problem for analogues of the Kemeny rule, the Slater rule and the Young rule lies at the Θ^p_2 level of the Polynomial Hierarchy (PH). Moreover, we show that the problem has a complexity at the Δ^p_2 level of the PH for the analogue of Tideman's procedure with a fixed tie-breaking rule, and at the Σ^p_2 level of the PH for the analogue of Tideman's procedure without a fixed tie-breaking rule.

Link to Repositum

Parameterized Algorithms for Parity Games
Gajarsky, Jakub, Lampis, Michael, Makino, Kazuhisa, Mitsou, Valia, Ordyniak, Sebastian
Type: Inproceedings; In: Mathematical Foundations of Computer Science 2015 - 40th International Symposium, MFCS 2015, Milan, Italy, August 24-28, 2015, Proceedings, Part II; Pages: 336-347

Link to Repositum

Solving the 3-Staged 2-Dimensional Cutting Stock Problem by Dynamic Programming and Variable Neighborhood Search
Dusberger, Frederico, Raidl, Günther R.
Type: Inproceedings; In: Electronic Notes in Discrete Mathematics; Pages: 133-140
Show Abstract
We present a variable neighborhood search (VNS) for the 3-staged 2-dimensional cutting stock problem employing "ruin-and-recreate"-based very large neighborhood search in which parts of the incumbent solution are destroyed and rebuilt using construction heuristics and dynamic programming. Experimental results show that for instances where the sizes of the elements are not too small compared to the sheet size, the hybridization of VNS with dynamic programming significantly outperforms a VNS relying solely on construction heuristics.

Link to Repositum

Meta-kernelization using Well-structured Modulators
Eiben, Eduard, Ganian, Robert, Szeider, Stefan
Type: Inproceedings; In: 10th International Symposium on Parameterized and Exact Computation (IPEC 2015); Vol: 43; Pages: 114-126
Show Abstract
Kernelization investigates exact preprocessing algorithms with performance guarantees. The most prevalent type of parameters used in kernelization is the solution size for optimization problems; however, also structural parameters have been successfully used to obtain polynomial kernels for a wide range of problems. Many of these parameters can be defined as the size of a smallest modulator of the given graph into a fixed graph class (i.e., a set of vertices whose deletion puts the graph into the graph class). Such parameters admit the construction of polynomial kernels even when the solution size is large or not applicable. This work follows up on the research on meta-kernelization frameworks in terms of structural parameters. We develop a class of parameters which are based on a more general view on modulators: instead of size, the parameters employ a combination of rank-width and split decompositions to measure structure inside the modulator. This allows us to lift kernelization results from modulator-size to more general parameters, hence providing smaller kernels. We show (i) how such large but well-structured modulators can be efficiently approximated, (ii) how they can be used to obtain polynomial kernels for any graph problem expressible in Monadic Second Order logic, and (iii) how they allow the extension of previous results in the area of structural meta-kernelization.

Link to Repositum

Well-Structured Modulators: FPT Algorithms and Kernels
Ganian, Robert
Type: Presentation
Show Abstract
A modulator of a graph G to a specified graph class H is a set of vertices whose deletion puts G into H. The cardinality of a modulator to various tractable graph classes has long been used as a structural parameter which can be exploited to obtain both FPT algorithms and polynomial kernels for a range of hard problems. Here we investigate what happens when a graph contains a modulator which is large but "well-structured" (in the sense of having bounded rank-width). Can such modulators still be exploited to obtain efficient algorithms? And is it even possible to find such modulators efficiently? We first show that the parameters derived from such well-structured modulators are more general (and hence applicable on broader graph classes) than modulators as well as other established parameters used for kernelization and fixed parameter tractability. Then, we develop algorithms for finding such well-structured modulators to a range of graph classes. Finally, we use the concept of well-structured modulators to develop algorithmic meta-theorems for deciding problems expressible in Monadic Second Order (MSO) logic, and prove that this result is tight in the sense that it cannot be generalized to LinEMSO problems.

Link to Repositum

Machine Characterizations for Parameterized Complexity Classes Beyond Para-NP
de Haan, Ronald, Szeider, Stefan
Type: Inproceedings; In: SOFSEM 2015: Theory and Practice of Computer Science 41st International Conference on Current Trends in Theory and Practice of Computer Science, Pec pod Sněžkou, Czech Republic, January 24-29, 2015, Proceedings
Show Abstract
Due to the remarkable power of modern SAT solvers, one can efficiently solve NP-complete problems in many practical settings by encoding them into SAT. However, many important problems in various areas of computer science lie beyond NP, and thus we cannot hope for polynomial-time encodings into SAT. Recent research proposed the use of fixed-parameter tractable (fpt) reductions to provide efficient SAT encodings for these harder problems. The parameterized complexity classes ∃k∀* and ∀k∃* provide strong theoretical evidence that certain parameterized problems are not fpt-reducible to SAT. Originally, these complexity classes were defined via weighted satisfiability problems for quantified Boolean formulas, extending the general idea for the canonical problems for the Weft Hierarchy. In this paper, we provide alternative characterizations of ∃k∀* and ∀k∃* in terms of first-order logic model checking problems and problems involving alternating Turing machines with appropriate time bounds and bounds on the number of alternations. We also identify parameterized Halting Problems for alternating Turing machines that are complete for these classes. The alternative characterizations provide evidence for the robustness of the new complexity classes and extend the toolbox for establishing membership results. As an illustration, we consider various parameterizations of the 3-coloring extension problem.

Link to Repositum

Boosting an exact logic-based benders decomposition approach by variable neighborhood search.
Raidl, Günther, Baumhauer, Thomas, Hu, Bin
Type: Inproceedings; In: Electronic Notes in Discrete Mathematics; Pages: 1-4
Show Abstract
Logic-based Benders decomposition (BD) extends classic BD by allowing more complex subproblems with integral variables. Metaheuristics like variable neighborhood search are becoming useful here for solving the subproblems' inference duals faster in order to separate approximate Benders cuts. After performing such a purely heuristic BD approach, we continue by exactly verifying and possibly correcting each heuristic cut to finally obtain a proven optimal solution. On a bi-level vehicle routing problem, this new hybrid approach exhibits shorter overall runtimes and yields excellent intermediate solutions much earlier than the classical exact method.

Link to Repositum

Algorithmic Applications of Tree-Cut Width
Ganian, Robert, Kim, Eun Jung, Szeider, Stefan
Type: Inproceedings; In: Proceedings of the 40th International Symposium Mathematical Foundations of Computer Science 2015; Pages: 348-361
Show Abstract
The recently introduced graph parameter tree-cut width plays a similar role with respect to immersions as the graph parameter treewidth plays with respect to minors. In this paper we provide the first algorithmic applications of tree-cut width to hard combinatorial problems. Tree-cut width is known to be lower-bounded by a function of treewidth, but it can be much larger and hence has the potential to facilitate the efficient solution of problems which are not known to be fixed-parameter tractable (FPT) when parameterized by treewidth. We introduce the notion of nice tree-cut decompositions and provide FPT algorithms for the showcase problems Capacitated Vertex Cover, Capacitated Dominating Set and Imbalance parameterized by the tree-cut width of an input graph G. On the other hand, we show that List Coloring, Precoloring Extension and Boolean CSP (the latter parameterized by the tree-cut width of the incidence graph) are W[1]-hard and hence unlikely to be fixed-parameter tractable when parameterized by tree-cut width.

Link to Repositum

On Compiling CNFs into Structured Deterministic DNNFs
Bova, Simone Maria, Capelli, Florent, Mengel, Stefan, Slivovsky, Friedrich
Type: Inproceedings; In: Proceedings of the 18th International Conference on Theory and Applications of Satisfiability Testing; Pages: 199-214
Show Abstract
We show that the traces of recently introduced dynamic programming algorithms for #SAT can be used to construct structured deterministic DNNF (decomposable negation normal form) representations of propositional formulas in CNF (conjunctive normal form). This allows us to prove new upper bounds on the complexity of compiling CNF formulas into structured deterministic DNNFs in terms of parameters such as the treewidth and the clique-width of the incidence graph.

Link to Repositum

2014
PILOT, GRASP, and VNS approaches for the static balancing of bicycle sharing systems.
Rainer-Harbach, Marian, Papazek, Petrina, Raidl, Günther R., Hu, Bin, Kloimüllner, Christian
Type: Article; In: Journal of Global Optimization; Vol: 63; Issue: 3; Pages: 597-629
Show Abstract
We consider a transportation problem arising in public bicycle sharing systems: To prevent rental stations from running entirely empty or full, a fleet of vehicles continuously performs tours moving bikes among stations. In the static problem variant considered in this paper, we are given initial and target fill levels for all stations, and the goal is primarily to find vehicle tours including corresponding loading instructions in order to minimize the deviations from the target fill levels. As secondary objectives we are further interested in minimizing the tours' total duration and the overall number of loading actions. For this purpose we first propose a fast greedy construction heuristic and extend it to a PILOT method that evaluates each candidate station considered for addition to the current partial tour in a refined way by looking forward via a recursive call. Next we describe a Variable Neighborhood Descent (VND) that exploits a set of specifically designed neighborhood structures in a deterministic way to locally improve the solutions. While the VND is processing the search space of candidate routes to determine the stops for vehicles at unbalanced rental stations, the number of bikes to be loaded or unloaded at each stop is derived by an efficient method. Four alternatives are considered for this embedded procedure based on a greedy heuristic, two variants of maximum flow calculations, and linear programming. Last but not least, we investigate a general Variable Neighborhood Search (VNS) and variants of a Greedy Randomized Adaptive Search Procedure (GRASP) for further diversification and extended runs. Rigorous experiments using benchmark instances derived from a real-world scenario in Vienna with up to 700 stations document the performance of the suggested approaches and their individual pros and cons. While the VNS yields the best results on instances of moderate size, a PILOT/GRASP hybrid turns out to be superior on very large instances. If solutions are required in short time, the construction heuristic or PILOT method, optionally followed by VND, still yields reasonable results.

Link to Repositum

Balancing bicycle sharing systems: An analysis of path relinking and recombination within a GRASP hybrid.
Papazek, Petrina, Kloimüllner, Christian, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Parallel Problem Solving from Nature -- PPSN XIII : 13th International Conference, Ljubljana, Slovenia, September 13-17,2014, Proceedings; Vol: 8672; Pages: 792-801
Show Abstract
In bike sharing systems, a vehicle fleet rebalances the system by continuously moving bikes among stations in order to prevent rental stations from running entirely empty or full. We address the static problem variant assuming initial fill levels for each station and seek vehicle tours with corresponding loading instructions to reach given target fill levels as far as possible. Our primary objective is to minimize the absolute deviation between target and final fill levels for all rental stations. Building upon a previously suggested GRASP hybrid, we investigate different approaches for hybridizing it with Path Relinking (PR) and simpler recombination operators. Computational tests on benchmark instances derived from a real-world scenario in Vienna give insight into the impacts of the PR and recombination techniques and show that certain PR extensions improve the results significantly. Ultimately, a hybrid exclusively searching a partial PR path in the neighborhood of the guiding solutions turns out to be most fruitful.

Link to Repositum

Balancing Bicycle Sharing Systems
Kloimüllner, Christian, Papazek, Petrina, Hu, Bin, Raidl, Günther
Type: Presentation

Link to Repositum

Exact Approaches to the Network Design Problem with Relays
Leitner, Markus, Ljubic, Ivana, Riedler, Martin, Ruthmair, Mario
Type: Presentation

Link to Repositum

A Memetic Algorithm for the Virtual Network Mapping Problem.
Inführ, Johannes, Raidl, Günther
Type: Article; In: Journal of Heuristics; Vol: 22; Issue: 4; Pages: 475-505
Show Abstract
The Internet has ossified. It has lost its capability to adapt as requirements change. A promising technique to solve this problem is the introduction of network virtualization. Instead of directly using a single physical network, working just well enough for a limited range of applications, multiple virtual networks are embedded on demand into the physical network, each of them perfectly adapted to a specific application class. The challenge lies in mapping the different virtual networks with all the resources they require into the available physical network, which is the core of the virtual network mapping problem. In this work, we introduce a memetic algorithm that significantly outperforms the previously best algorithms for this problem. We also offer an analysis of the influence of different problem representations and in particular the implementation of a uniform crossover for the grouping genetic algorithm that may also be interesting outside of the virtual network mapping domain. Furthermore, we study the influence of different hybridization techniques and the behaviour of the developed algorithm in an online setting.

Link to Repositum

Spanning trees with variable degree bounds.
Gouveia, L., Moura, P., Ruthmair, M., Sousa, A.
Type: Article; In: European Journal of Operational Research; Vol: 239; Issue: 3; Pages: 830-841
Show Abstract
In this paper, we introduce and study a generalization of the degree constrained minimum spanning tree problem where we may install one of several available transmission systems (each with a different cost value) in each edge. The degree of the endnodes of each edge depends on the system installed on the edge. We also discuss a particular case that arises in the design of wireless mesh networks (in this variant the degree of the endnodes of each edge depends on the transmission system installed on it as well as on the length of the edge). We propose three classes of models using different sets of variables and compare, from a theoretical as well as a computational point of view, the models and the corresponding linear programming relaxations. The computational results show that some of the proposed models are able to solve to optimality instances with 100 nodes and different scenarios.

Link to Repositum

An SME Transition from Plan-Driven to Hybrid Project Management with Agile Software Development Methods
Biffl, Stefan, Mordinyi, Richard, Raidl, Günther, Steininger, Heinrich, Winkler, Dietmar
Type: Inproceedings; In: Proceedings of the 21th EuroSPI Conference on Systems Software and Service Process Improvement, Industrial Track
Show Abstract
In critical software development contexts, such as industrial software product development, plan-driven methods for project management are established to achieve well-defined goals. However, for new product development agile software development and management methods are attractive due to the flexibility they provide as new requirements or findings emerge. In this paper we report on the process improvement experience of a 25-person small-to-medium enterprise (SME) with new software product development of programming tools for large-scale industrial automation systems. Plan-driven project management methods were well aligned to contracts with customers but not to the challenges of market-driven new product development. Therefore, the SME adopted agile methods and developed a hybrid project management approach with agile software development methods and the tool support to optimize the benefits from these process improvements. We discuss lessons learned from the case study, risks, and success factors regarding the transition from plan-driven to hybrid project management. Major effects from the process improvement to hybrid project management with agile software development methods were (a) better awareness of development needs and progress on the team and management levels, (b) more efficient controlling of resources and cost, and (c) the innovative integration of research and development partners into agile sprint management.

Link to Repositum

A metaheuristic approach for integrated timetable based design of railway infrastructure
Grujicic, Igor, Raidl, Günther, Schöbel, Andreas, Besau, Gerhard
Type: Inproceedings; In: Proceedings of the 3rd International Conference on Road and Rail Infrastructure CETRA 2014; Pages: 691-696
Show Abstract
The design of new railway infrastructure is a complex planning process in most countries today due to a multitude of requirements. From an operational point of view new infrastructure basically has to fulfill the needs defined by customers. To this end passenger traffic is often organized in an integrated timetable with well defined arrival and departure times at major hub stations. So far there is no automated tool available to help in determining a minimum cost infrastructure fulfilling all the requirements defined by a timetable and the operation of the railway system. Instead, this task is typically carried out manually, based on graphical design, human experience, and also intuition. In our work we model this planning task as a combinatorial network optimization problem, capturing the most essential aspects. We then present a constructive heuristic algorithm that makes use of a dynamic programming procedure for realizing individual commercial stops. Computational experiments on instances derived from real scenarios indicate that the suggested approach is promising and the analysis of obtained results gives useful hints for future work in this area.

Link to Repositum

A Variable Neighborhood Search Using Very Large Neighborhood Structures for the 3-Staged 2-Dimensional Cutting Stock Problem.
Dusberger, Frederico, Raidl, Günther
Type: Inproceedings; In: Hybrid Metaheuristics; Pages: 183-197
Show Abstract
In this work we consider the 3-staged 2-dimensional cutting stock problem, which appears in many real-world applications such as glass and wood cutting and various scheduling tasks. We suggest a variable neighborhood search (VNS) employing "ruin-and-recreate"-based very large neighborhood searches (VLNS). We further present a polynomial-sized integer linear programming (ILP) model for solving the subproblem of 2-staged 2-dimensional cutting with variable sheet sizes, which is exploited in an additional neighborhood search within the VNS. On about half of the benchmark instances from the literature, both methods yield significantly better results than previously published.

Link to Repositum

Speeding Up Logic-Based Benders’ Decomposition by a Metaheuristic for a Bi-Level Capacitated Vehicle Routing Problem
Raidl, Günther R., Baumhauer, Thomas, Hu, Bin
Type: Inproceedings; In: Hybrid Metaheuristics; Pages: 183-197
Show Abstract
Benders' Decomposition (BD) is a prominent technique for tackling large mixed integer programming problems having a certain structure by iteratively solving a series of smaller master and subproblem instances. We apply a generalization of this technique called Logic-Based BD, which does not restrict the subproblems to have continuous variables only, to a bi-level vehicle routing problem originating in the timely distribution of printed newspapers to subscribers. When solving all master and subproblem instances exactly by CPLEX, it turns out that the scalability of the approach is quite limited. The situation can be dramatically improved when using a meaningful metaheuristic - in our case a variable neighborhood search - for approximately solving either only the subproblems or both the master as well as the subproblem instances. More generally, it is shown that Logic-Based BD can be a highly promising framework also for hybrid metaheuristics.

Link to Repositum

Variable Neighborhood Search Hybrids
Raidl, Günther
Type: Presentation

Link to Repositum

Reducing the Number of Simulations in Operation Strategy Optimization for Hybrid Electric Vehicles
Bacher, Christopher, Krenek, Thorsten, Raidl, Günther R.
Type: Inproceedings; In: Applications of Evolutionary Computation; Pages: 553-564
Show Abstract
The fuel consumption of a simulation model of a real Hybrid Electric Vehicle is optimized on a standardized driving cycle using metaheuristics (PSO, ES, GA). Search space discretization and metamodels are considered for reducing the number of required, time-expensive simulations. Two hybrid metaheuristics for combining the discussed methods are presented. In experiments it is shown that the use of hybrid metaheuristics with discretization and metamodels can lower the number of required simulations without significant loss in solution quality.

Link to Repositum

An efficient variable neighborhood search for solving a robust dynamic facility location problem in emergency service network
Mišković, Stefan, Stanimirović, Zorica, Grujičić, Igor
Type: Inproceedings; In: Electronic Notes in Discrete Mathematics; Pages: 261-268
Show Abstract
In this study, we propose a robust variant of a dynamic facility location problem that arises from optimizing the emergency service network of Police Special Forces Units (PSFUs) in the Republic of Serbia. We present for the first time a mathematical programming formulation of the problem under consideration. We further propose a Variable Neighborhood Search (VNS) method with an efficient local search procedure for solving real-life problem instances that remained out of reach of CPLEX solver. The results presented in this paper may help in optimizing the network of PSFUs and other security networks as well.

Link to Repositum

An Evolutionary Algorithm for the Leader-Follower Facility Location Problem with Proportional Customer Behavior
Biesinger, Benjamin, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Learning and Intelligent Optimization 8th International Conference, Lion 8, Gainesville, FL, USA, February 16-21, 2014. Revised Selected Papers; Pages: 203-217
Show Abstract
The leader-follower facility location problem arises in the context of two non-cooperating companies, a leader and a follower, competing for market share from a given set of customers. In our work we assume that the firms place a given number of facilities on locations taken from a discrete set of possible points. The customers are assumed to split their demand inversely proportional to their distance to all opened facilities. In this work we present an evolutionary algorithm with an embedded tabu search to optimize the location selection for the leader. A complete solution archive is used to detect already visited candidate solutions and convert them into not yet considered ones. This avoids unnecessary time-consuming re-evaluations, reduces premature convergence and increases the population diversity at the same time. Results show significant advantages of our approach over an existing algorithm from the literature.

Link to Repositum

Balancing Bicycle Sharing Systems: An Approach for the Dynamic Case.
Kloimüllner, Christian, Papazek, Petrina, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization 14th European Conference, EvoCOP 2014, Granada, Spain, April 23-25, 2014, Revised Selected Papers
Show Abstract
Operators of public bicycle sharing systems (BSSs) have to regularly redistribute bikes across their stations in order to avoid them getting overly full or empty. We consider the dynamic case where this is done while the system is in use. There are two main objectives: On the one hand it is desirable to reach particular target fill levels at the end of the process so that the stations are likely to meet user demands for the upcoming day(s). On the other hand operators also want to prevent stations from running empty or full during the rebalancing process which would lead to unsatisfied customers. We extend our previous work on the static variant of the problem by introducing an efficient way to model the dynamic case as well as adapting our previous greedy and PILOT construction heuristic, variable neighborhood search and GRASP. Computational experiments are performed on instances based on real-world data from Citybike Wien, a BSS operator in Vienna, where the model for user demands is derived from historical data.

Link to Repositum

2013
Metaheuristics for solving a multimodal home-healthcare scheduling problem
Hiermann, Gerhard, Prandtstetter, Matthias, Rendl, Andrea, Puchinger, Jakob, Raidl, Günther R.
Type: Article; In: Central European Journal of Operations Research; Vol: 23; Issue: 1; Pages: 89-113
Show Abstract
We present a general framework for solving a real-world multimodal home-healthcare scheduling (MHS) problem from a major Austrian home-healthcare provider. The goal of MHS is to assign home-care staff to customers and determine efficient multimodal tours while considering staff and customer satisfaction. Our approach is designed to be as problem-independent as possible, such that the resulting methods can be easily adapted to MHS setups of other home-healthcare providers. We chose a two-stage approach: in the first stage, we generate initial solutions either via constraint programming techniques or by a random procedure. During the second stage, the initial solutions are (iteratively) improved by applying one of four metaheuristics: variable neighborhood search, a memetic algorithm, scatter search and a simulated annealing hyper-heuristic. An extensive computational comparison shows that the approach is capable of solving real-world instances in reasonable time and produces valid solutions within only a few seconds.

Link to Repositum

A timeslot-filling based heuristic approach to construct high-school timetables
Pimmer, Michael, Raidl, Günther
Type: Book Contribution; In: Advances in Metaheuristics; Pages: 143-158
Show Abstract
This work describes an approach for creating high-school timetables. To develop and test our algorithm, we used the international, real-world instances of the Benchmarking project for (High) School Timetabling. Contrary to most other heuristic approaches, we do not try to iteratively assign single meetings (events) to timeslots. Instead, we repeatedly choose a not entirely occupied timeslot and aim at simultaneously assigning the most suitable set of meetings. To improve and diversify the solutions, a heuristic that deletes and reassigns certain timeslots, events or resources is applied and combined with a hill-climbing procedure to find suitable parameters for grading constraints. Experimental results indicate the competitiveness of this new approach.

Link to Repositum

Using optimized virtual network embedding for network dimensioning
Inführ, Johannes, Stezenbach, David, Hartmann, Matthias, Tutschku, Kurt, Raidl, Günther
Type: Inproceedings; In: Proceedings of Networked Systems 2013; Pages: 118-125
Show Abstract
Virtual Network Embedding will be one of the key concepts of the Future Internet. For an ISP it is important to know how many additional Virtual Networks (VNs) of a specific application (e.g. web, streaming, P2P, and VoIP) are mappable into the current resource substrate with a certain probability. In this work we calculate this probability with our embedding algorithm which enables us to consider side effects based on remapping of VNs (e.g. due to reduced link delay). Our results show that minimal extra resources can significantly increase embedding probability of additional VNs.

Link to Repositum

Enhancing a Genetic Algorithm with a Solution Archive to Reconstruct Cross Cut Shredded Text Documents
Biesinger, Benjamin, Schauer, Christian, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Computer Aided Systems Theory - EUROCAST 2013 - Part 1; Pages: 380-387
Show Abstract
In this work the concept of a trie-based complete solution archive in combination with a genetic algorithm is applied to the Reconstruction of Cross-Cut Shredded Text Documents (RCCSTD) problem. This archive is able to detect and subsequently convert duplicates into new yet unvisited solutions. Cross-cut shredded documents are documents that are cut into rectangular pieces of equal size and shape. The reconstruction of documents can be of high interest in forensic science. Two types of tries are compared as underlying data structure, an indexed trie and a linked trie. Experiments indicate that the latter needs considerably less memory without affecting the run-time. While the archive-enhanced genetic algorithm yields better results for runs with a fixed number of iterations, advantages diminish due to the additional overhead when considering run-time.

Link to Repositum
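The trie-based solution archive described in the abstract above can be illustrated with a minimal sketch. The class `TrieArchive` below and its conversion rule are hypothetical and invented for illustration (fixed-length solutions over a small integer alphabet, a dict-of-dicts trie, and a simple position-by-position mutation of duplicates); they are not the authors' implementation.

```python
class TrieArchive:
    """Toy trie archive: detects duplicate fixed-length solutions and
    converts duplicates into yet-unvisited solutions (illustrative sketch)."""

    def __init__(self, alphabet_size):
        self.k = alphabet_size  # number of possible values per position
        self.root = {}          # nested dicts form the trie

    def insert(self, solution):
        """Insert a solution; return True iff it was not yet archived."""
        node, new = self.root, False
        for gene in solution:
            if gene not in node:
                node[gene] = {}
                new = True
            node = node[gene]
        return new

    def convert_duplicate(self, solution):
        """Return the solution itself if new; otherwise mutate it minimally,
        position by position, into an unvisited solution (None if the
        archive already contains every possible solution)."""
        if self.insert(solution):
            return list(solution)
        sol = list(solution)
        for i in range(len(sol)):
            for v in range(self.k):
                if v != solution[i]:
                    cand = sol[:i] + [v] + sol[i + 1:]
                    if self.insert(cand):
                        return cand
        return None
```

For example, archiving `[0, 1]` twice flags the second copy as a duplicate, and `convert_duplicate` then returns a different, previously unvisited solution instead.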

A PILOT/VND/GRASP Hybrid for the Static Balancing of Public Bicycle Sharing Systems
Papazek, Petrina, Raidl, Günther, Rainer-Harbach, Marian, Hu, Bin
Type: Inproceedings; In: Computer Aided Systems Theory - EUROCAST 2013 - Part 1; Pages: 372-379
Show Abstract
Due to varying user demands in bicycle sharing systems, operators need to actively shift bikes between stations by a fleet of vehicles. We address the problem of finding efficient vehicle tours by an extended version of an iterated greedy construction heuristic following the concept of the PILOT method and GRASP and applying a variable neighborhood descent (VND) as local improvement. Computational results on benchmark instances derived from the real-world scenario in Vienna with up to 700 stations indicate that our PILOT/GRASP hybrid especially scales significantly better to very large instances than a previously proposed variable neighborhood search (VNS) approach. Applying only one iteration, the PILOT construction heuristic followed by the VND provides good solutions very quickly, which can be potentially useful for urgent requests.

Link to Repositum

A Memetic Algorithm with Two Distinct Solution Representations for the Partition Graph Coloring Problem
Pop, Petrica, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Computer Aided Systems Theory - EUROCAST 2013 - Part 1; Pages: 219-226
Show Abstract
In this paper we propose a memetic algorithm (MA) for the partition graph coloring problem. Given a clustered graph G = (V,E), the goal is to find a subset V∗ ⊂ V that contains exactly one node for each cluster and a coloring for V∗ so that in the graph induced by V∗, two adjacent nodes have different colors and the total number of used colors is minimal. In our MA we use two distinct solution representations, one for the genetic operators and one for the local search procedure, each tailored to its corresponding situation. The algorithm is evaluated on a common set of benchmark instances and the computational results show that compared to a state-of-the-art branch and cut algorithm, our MA achieves solid results in very short run-times.

Link to Repositum

An Optimization Model for Integrated Timetable Based Design of Railway Infrastructure
Schöbel, Andreas, Raidl, Günther, Grujicic, Igor, Besau, Gerhard, Schuster, Gottfried
Type: Inproceedings; In: Proceedings; Pages: 10
Show Abstract
The design of new railway infrastructure is a complex planning process today in most European countries due to several requirements. From an operational point of view, new infrastructure basically has to fulfil the requirements defined by its later customers, the railway undertakings. Hereby, passenger traffic is often organised in a periodic timetable with well-defined arrival and departure times in the hubs. So far there is no automated tool available to help in determining a minimum-cost infrastructure fulfilling all the requirements defined by a timetable and the operation of the railway system. Instead, this task is typically carried out manually based on graphical design, human experience, and also intuition. This paper presents a first formalization of this task as a combinatorial optimization problem trying to capture the most essential aspects. For solving it, promising algorithmic concepts based on mathematical programming techniques and metaheuristics are sketched.

Link to Repositum

Proceedings of the 10th Metaheuristics International Conference
Authors not available
Type: Proceedings

Link to Repositum

Optimization Approaches for Balancing Bicycle Sharing Systems
Raidl, Günther
Type: Presentation

Link to Repositum

Robust variable selection in linear regression with compositional explanatory variables
Schroeder, F, Braumann, Andreas, Filzmoser, Peter, Hron, Karel
Type: Inproceedings; In: Proceedings of the 5th International Workshop on Compositional Data Analysis CoDaWork 2013 June 3-7, 2013, Vorau, Austria; Pages: 55

Link to Repositum

Stabilizing branch-and-price for constrained tree problems
Leitner, Markus, Ruthmair, Mario, Raidl, Günther
Type: Article; In: Networks; Vol: 61; Issue: 2; Pages: 150-170
Show Abstract
We consider a rather generic class of network design problems in which a set or subset of given terminal nodes must be connected to a dedicated root node by simple paths and a variety of resource and/or quality of service constraints must be respected. These extensions of the classical Steiner tree problem on a graph can be well modeled by a path formulation in which individual variables are used for all feasible paths. To solve this formulation in practice, branch-and-price is used. It turns out, however, that a naive implementation of column generation suffers strongly from certain degeneracies of the pricing subproblem, leading to excessive running times. After analyzing these computational problems, we propose two methods for stabilizing column generation by using alternative dual-optimal solutions. This stabilized branch-and-price is practically tested on the rooted delay-constrained Steiner tree problem and a quota-constrained version of it. Results indicate that the new stabilization methods in general speed up the solution process dramatically, far more than a piecewise linear stabilization to which we compare. Furthermore, our stabilized branch-and-price exhibits on most test instances a better performance than a so far leading mixed integer programming approach based on a layered graph model and branch-and-cut. As the new stabilization technique utilizing alternative dual-optimal solutions is generic in the sense that it easily adapts to the inclusion of a large variety of further constraints and different objective functions, the proposed method is highly promising for a large class of network design problems.

Link to Repositum

Clique and independent set based grasp approaches for the regenerator location problem
Jahrmann, Peter, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 10th Metaheuristics International Conference; Pages: 1-10
Show Abstract
We consider the Regenerator Location Problem (RLP) in optical fibre communication networks: As optical signals deteriorate in dependence of the distance from the source, regenerator devices need to be installed at a subset of the network nodes so that no segment of any communication path without an intermediate regenerator exceeds an allowed maximum length. The objective is to place a smallest possible number of regenerators in order to satisfy this condition. We propose two new construction heuristics based on identifying and exploiting cliques and independent sets of the network graph. These strategies are further extended to Greedy Randomized Adaptive Search Procedures (GRASP) that also include new destroy and recreate local search phases. Excellent results are obtained in an experimental comparison with a previously described GRASP.

Link to Repositum

A memetic algorithm for the partition graph coloring problem
Pop, Petrica, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 14th International Conference on Computer Aided Systems Theory; Pages: 167-169

Link to Repositum

A pilot/vnd/grasp hybrid for balancing bicycle sharing systems
Papazek, Petrina, Raidl, Günther, Rainer-Harbach, Marian, Hu, Bin
Type: Inproceedings; In: Extended Abstracts of the 14th International Conference on Computer Aided Systems Theory; Pages: 223-225

Link to Repositum

A mixed integer model for the stamina-aware sightseeing tour problem
Hu, Bin, Ölz, Werner, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 14th International Conference on Computer Aided Systems Theory; Pages: 200-202

Link to Repositum

Reconstructing cross cut shredded documents with a genetic algorithm with solution archive
Biesinger, Benjamin, Schauer, Christian, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the 14th International Conference on Computer Aided Systems Theory; Pages: 226-228

Link to Repositum

Metaheuristics for the Static Balancing of Bicycle Sharing Systems
Raidl, Günther
Type: Presentation

Link to Repositum

Metaheuristics and Hybrid Optimization Approaches - A Unifying View
Raidl, Günther
Type: Presentation

Link to Repositum

Solving the virtual network mapping problem with construction heuristics, local search and variable neighborhood descent
Inführ, Johannes, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimisation - 13th European Conference, EvoCOP 2013; Pages: 250-261
Show Abstract
The Virtual Network Mapping Problem arises in the context of Future Internet research. Multiple virtual networks with different characteristics are defined to suit specific applications. These virtual networks, with all of the resources they require, need to be realized in one physical network in a most cost-effective way. Two properties make this problem challenging: Already finding any valid mapping of all virtual networks into the physical network without exceeding the resource capacities is NP-hard, and the problem consists of two strongly dependent stages as the implementation of a virtual network's connections can only be decided once the locations of the virtual nodes in the physical network are fixed. In this work we introduce various construction heuristics, Local Search and Variable Neighborhood Descent approaches and perform an extensive computational study to evaluate the strengths and weaknesses of each proposed solution method.

Link to Repositum

GRASP and variable neighborhood search for the virtual network mapping problem
Inführ, Johannes, Raidl, Günther
Type: Inproceedings; In: Hybrid Metaheuristics, 8th Int. Workshop, HM 2013; Pages: 159-173
Show Abstract
Virtual network mapping considers the problem of fitting multiple virtual networks into one physical network in a cost-optimal way. This problem arises in Future Internet research. One of the core ideas is to utilize different virtual networks to cater to different application classes, each with customized protocols that deliver the required Quality-of-Service. In this work we introduce a Greedy Randomized Adaptive Search Procedure (GRASP) and Variable Neighborhood Search (VNS) algorithm for solving the Virtual Network Mapping Problem. Both algorithms make use of a Variable Neighborhood Descent with ruin-and-recreate neighborhoods. We show that the VNS approach significantly outperforms the previously best known algorithms for this problem.

Link to Repositum

A memetic algorithm for the virtual network mapping problem
Inführ, Johannes, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 10th Metaheuristics International Conference; Pages: 10
Show Abstract
The Virtual Network Mapping Problem arises in the context of Future Internet research. The core idea is the introduction of virtual networks to the Internet to be able to improve its functionality in a non-disruptive way. This also enables the creation of specialized networks which directly provide functionality required by some application classes. The challenge of fitting all the virtual networks (and the resources they require) into a physical network is the Virtual Network Mapping Problem. In this work, we introduce a Memetic Algorithm that significantly outperforms the previously best algorithms for this problem. We also offer an analysis of the influence of different problem representations and in particular the implementation of a uniform crossover for the Grouping Genetic Algorithm that may also be interesting outside of the Virtual Network Mapping domain.

Link to Repositum

Balancing bicycle sharing systems: Improving a VNS by efficiently determining optimal loading operations
Raidl, Günther, Hu, Bin, Rainer-Harbach, Marian, Papazek, Petrina
Type: Inproceedings; In: Hybrid Metaheuristics, 8th Int. Workshop, HM 2013; Pages: 130-143
Show Abstract
Public bike sharing systems are important alternatives to motorized individual traffic and are gaining popularity in larger cities worldwide. In order to maintain user satisfaction, operators need to actively rebalance the systems so that there are enough bikes available for rental as well as sufficient free slots for returning them at each station. This is done by a vehicle fleet that moves bikes among the stations. In a previous work we presented a variable neighborhood search metaheuristic for finding effective vehicle routes and three different auxiliary procedures to calculate loading operations for each candidate solution. For the most flexible auxiliary procedure based on LP, the current work provides a new, practically more efficient method for calculating proven optimal loading operations based on two maximum flow computations. The different strategies for determining loading operations are further applied in combination controlled by an additional neighborhood structure. Experimental results indicate that this combined approach yields significantly better results than the original variable neighborhood search.

Link to Repositum

Balancing bicycle sharing systems: A variable neighborhood search approach
Rainer-Harbach, Marian, Papazek, Petrina, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimisation - 13th European Conference, EvoCOP 2013; Pages: 121-132
Show Abstract
We consider the necessary redistribution of bicycles in public bicycle sharing systems in order to avoid rental stations to run empty or entirely full. For this purpose we propose a general Variable Neighborhood Search (VNS) with an embedded Variable Neighborhood Descent (VND) that exploits a series of neighborhood structures. While this metaheuristic generates candidate routes for vehicles to visit unbalanced rental stations, the numbers of bikes to be loaded or unloaded at each stop are efficiently derived by one of three alternative methods based on a greedy heuristic, a maximum flow calculation, and linear programming, respectively. Tests are performed on instances derived from real-world data and indicate that the VNS based on a greedy heuristic represents the best compromise for practice. In general the VNS yields good solutions and scales much better to larger instances than two mixed integer programming approaches.

Link to Repositum
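Several entries above are built on the same VNS/VND pattern: shake the incumbent in a neighborhood of size k, locally improve, and either accept and reset k or widen the neighborhood. The sketch below is a generic, hypothetical illustration of that loop in Python, with a toy bit-flip objective and placeholder `shake`/`local_search` neighborhoods; it is unrelated to the authors' implementations.

```python
import random

def variable_neighborhood_search(x0, f, shake, local_search, k_max, iters):
    """Generic VNS skeleton: perturb in neighborhood k, locally improve,
    move and reset k on improvement, otherwise widen the neighborhood."""
    x = local_search(x0, f)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            x2 = local_search(shake(x, k), f)
            if f(x2) < f(x):
                x, k = x2, 1   # improvement: accept and restart at k = 1
            else:
                k += 1         # no improvement: try a larger neighborhood
    return x

# --- toy instantiation: minimize the number of 1-bits in a bit string ---
random.seed(0)
f = lambda s: sum(s)

def shake(s, k):
    """Flip k randomly chosen bits."""
    s = list(s)
    for i in random.sample(range(len(s)), k):
        s[i] ^= 1
    return s

def local_search(s, f):
    """First-improvement single-bit-flip descent."""
    s, improved = list(s), True
    while improved:
        improved = False
        for i in range(len(s)):
            t = s[:]
            t[i] ^= 1
            if f(t) < f(s):
                s, improved = t, True
    return s

best = variable_neighborhood_search([1] * 8, f, shake, local_search,
                                    k_max=3, iters=5)
```

On this trivial objective the single-bit descent alone already reaches the optimum (`f(best) == 0`); in the papers above, the neighborhoods operate on vehicle routes and loading operations instead of bit flips.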

2012
An Adaptive Layers Framework for Vehicle Routing Problems
Ruthmair, Mario, Raidl, Günther
Type: Presentation
Show Abstract
Current exact solution methods for vehicle routing problems are mostly based on set partitioning formulations enhanced by strong valid inequalities. We present a different approach where resources, e.g., capacities or times, are modeled on a layered graph in which the original graph is duplicated for each achievable resource value. MIP models on this layered graph typically yield tight LP bounds. However, as the size of this graph strongly depends on the resource bounds, such models may be huge and impracticable. We propose a framework for approximating the LP bound of such a resource-indexed formulation by a sequence of much smaller models. Based on a strongly reduced node set in the layered graph we redirect arcs in a way to obtain lower and upper bounds to the LP value of the complete model. This reduced layered graph is iteratively extended, decreasing the gap. Moreover, a sequence of improving primal bounds can be provided. The final model extended by inequalities to ensure feasibility is solved by branch-and-cut. Obtained results, e.g., for the vehicle routing problem with time windows, look promising although we currently cannot compete with state-of-the-art methods.

Link to Repositum

Balancing Bicycle Sharing Systems by Variable Neighborhood Search
Rainer-Harbach, Marian, Papazek, Petrina
Type: Presentation

Link to Repositum

Hybrid Metaheuristics and Matheuristics
Raidl, Günther
Type: Presentation

Link to Repositum

An Adaptive Layers Framework for Resource-Constrained Network Design Problems
Ruthmair, Mario, Raidl, Günther
Type: Presentation
Show Abstract
Network design problems with resource constraints defined on paths often can be efficiently modeled by using a layered graph approach, in which the original graph is duplicated for each achievable resource value and a resource-indexed mixed integer programming formulation is used. Such formulations are known to have typically tight linear programming relaxation bounds. However, as the number of variables strongly depends on the actual resource bounds, such models may contain huge numbers of variables. We propose a general framework for approximating the linear programming relaxation value of a resource-indexed formulation by a sequence of usually much smaller models. The framework starts with a strongly reduced node set in the layered graph redirecting arcs in a way to provide lower and upper bounds to the linear programming relaxation value of the complete resource-indexed formulation. According to the obtained solutions the layered graph is iteratively extended, decreasing the gap. Additionally, during this approximation procedure the framework can optionally provide a sequence of improving primal solutions. After certain stopping criteria are met the final formulation extended by valid inequalities necessary for guaranteeing feasibility is solved in the usual branch-and-cut way supported by the best obtained primal bound.

Link to Repositum

Balancing Bicycle Sharing Systems by Variable Neighborhood Search
Raidl, Günther, Causevic, Emir, Hu, Bin, Rainer-Harbach, Marian
Type: Presentation

Link to Repositum

Solving the post enrolment course timetabling problem by ant colony optimization
Nothegger, Clemens, Mayer, Alfred, Chwatal, Andreas, Raidl, Günther R.
Type: Article; In: Annals of Operations Research; Vol: 194; Issue: 1; Pages: 325-339
Show Abstract
In this work we present a new approach to tackle the problem of Post Enrolment Course Timetabling as specified for the International Timetabling Competition 2007 (ITC2007), competition track 2. The heuristic procedure is based on Ant Colony Optimization (ACO) where artificial ants successively construct solutions based on pheromones (stigmergy) and local information. The key feature of our algorithm is the use of two distinct but simplified pheromone matrices in order to improve convergence but still provide enough flexibility for effectively guiding the solution construction process. We show that by parallelizing the algorithm we can improve the solution quality significantly. We applied our algorithm to the instances used for the ITC2007. The results document that our approach is among the leading algorithms for this problem; in all cases the optimal solution could be found. Furthermore we discuss the characteristics of the instances where the algorithm performs especially well.

Link to Repositum

Applying (hybrid) metaheuristics to fuel consumption optimization of hybrid electric vehicles
Krenek, Thorsten, Ruthmair, Mario, Raidl, Günther, Planer, Michael
Type: Inproceedings; In: Applications of Evolutionary Computation - EvoApplications 2012; Pages: 376-385
Show Abstract
This work deals with the application of metaheuristics to the fuel consumption minimization problem of hybrid electric vehicles (HEV) considering exactly specified driving cycles. A genetic algorithm, a downhill-simplex method and an algorithm based on swarm intelligence are used to find appropriate parameter values aiming at fuel consumption minimization. Finally, the individual metaheuristics are combined to a hybrid optimization algorithm taking into account the strengths and weaknesses of the single procedures. Due to the required time-consuming simulations it is crucial to keep the number of candidate solutions to be evaluated low. This is partly achieved by starting the heuristic search with already meaningful solutions identified by a Monte-Carlo procedure. Experimental results indicate that the implemented hybrid algorithm achieves better results than previously existing optimization methods on a simplified HEV model.

Link to Repositum

A hybrid heuristic for multimodal homecare scheduling
Rendl, Andrea, Prandtstetter, Matthias, Hiermann, Gerhard, Puchinger, Jakob, Raidl, Günther
Type: Inproceedings; In: Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems; Pages: 339-355
Show Abstract
We focus on hybrid solution methods for a large-scale real-world multimodal homecare scheduling (MHS) problem, where the objective is to find an optimal roster for nurses who travel in tours from patient to patient, using different modes of transport. In a first step, we generate a valid initial solution using Constraint Programming (CP). In a second step, we improve the solution using one of the following metaheuristic approaches: (1) variable neighborhood descent, (2) variable neighborhood search, (3) an evolutionary algorithm, (4) scatter search and (5) a simulated annealing hyper-heuristic. Our evaluation, based on computational experiments, demonstrates how hybrid approaches are particularly strong in finding promising solutions for large real-world MHS problem instances.

Link to Repositum

Robust variable selection for linear regression models with compositional data
Schroeder, F, Braumann, Andreas, Filzmoser, Peter
Type: Presentation

Link to Repositum

A Multilevel Heuristic for the Rooted Delay-Constrained Minimum Spanning Tree Problem
Berlakovich, Martin, Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 13th International Conference on Computer Aided Systems Theory: Part I; Pages: 256-263
Show Abstract
The rooted delay-constrained minimum spanning tree problem is an NP-hard combinatorial optimization problem. The problem appears in practice for example when designing a distribution network with a guarantee of timely delivery. Another example is a centralized broadcasting network where the delay bound represents a quality of service constraint. We introduce a multilevel-based construction heuristic which uses a new measurement for the suitability of edges to create a solution for the problem. In comparison to existing heuristics the main intention is not to create a minimum cost spanning tree, but a solution with a high potential for further improvement. Experimental results indicate that in most cases our approach produces solutions that after local improvement are of higher quality than those of other existing construction techniques.

Link to Repositum

Variable Neighborhood Search and GRASP for Three-Layer Hierarchical Ring Network Design
Schauer, Christian, Raidl, Günther
Type: Inproceedings; In: Parallel Problem Solving from Nature-PPSN XII; Pages: 458-467
Show Abstract
We introduce the Three-Layer Hierarchical Ring Network Design Problem, which arises especially in the design of large telecommunication networks. The aim is to connect nodes that are assigned to three different layers using rings of bounded length. We present tailored Variable Neighborhood Search (VNS) and GRASP approaches to solve large instances of this problem heuristically, and discuss computational results indicating the superiority of the VNS.

Link to Repositum

Automatic generation of 2-antwars players with genetic programming
Inführ, Johannes, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 13th International Conference on Computer Aided Systems Theory: Part I; Pages: 248-255
Show Abstract
In this work, we show how Genetic Programming can be used to create game playing strategies for 2-AntWars, a deterministic turn-based two-player game with local information. We evaluate the created strategies against fixed, human created strategies as well as in a coevolutionary setting, where both players evolve simultaneously. We show that genetic programming is able to create competent players which can beat the static playing strategies, sometimes even in a creative way. Both mutation and crossover are shown to be essential for creating superior game playing strategies.

Link to Repositum

An evolutionary algorithm with solution archive for the generalized minimum spanning tree problem
Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 13th International Conference on Computer Aided Systems Theory: Part I; Pages: 287-294
Show Abstract
We propose a concept of enhancing an evolutionary algorithm (EA) with a complete solution archive that stores evaluated solutions during the optimization in a trie in order to detect duplicates and to efficiently convert them into yet unconsidered solutions. As an application we consider the generalized minimum spanning tree problem where we are given a graph with nodes partitioned into clusters and exactly one node from each cluster must be connected. For this problem there exist two compact solution representations that can be efficiently decoded, and we use them jointly in our EA. The solution archive contains two tries, each based on one of the representations. We show that these two tries complement each other well. Test results on TSPlib instances document the strength of this concept and that it can match up with the leading state-of-the-art metaheuristic approaches from the literature.

Link to Repositum

An evolutionary algorithm with solution archives and bounding extension for the generalized minimum spanning tree problem
Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the fourteenth international conference on Genetic and evolutionary computation conference - GECCO '12
Show Abstract
We consider the recently proposed concept of enhancing an evolutionary algorithm (EA) with a complete solution archive. It stores evaluated solutions during the optimization in order to detect duplicates and to efficiently transform them into yet unconsidered solutions. For this approach we introduce the so-called bounding extension in order to identify and prune branches in the trie-based archive which only contain inferior solutions. This extension enables the EA to concentrate the search on promising areas of the solution space. Similarly to the classical branch-and-bound technique, bounds are obtained via primal and dual heuristics. As an application we consider the generalized minimum spanning tree problem where we are given a graph with nodes partitioned into clusters and exactly one node from each cluster must be connected in the cheapest way. As the EA uses operators based on two dual representations, we exploit two corresponding tries that complement each other. Test results on TSPlib instances document the strength of this concept and that it can compete with the leading metaheuristics for this problem in the literature.

Link to Repositum

A variable neighborhood search approach for the two-echelon location-routing problem
Schwengerer, Martin, Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimisation - EvoCOP 2012; Pages: 13-24
Show Abstract
We consider the two-echelon location-routing problem (2E-LRP), a well-known problem in freight distribution arising when establishing a two-level transport system with limited capacities. The problem is a generalization of the NP-hard location routing problem (LRP), involving strategic (location), tactical (allocation) and operational (routing) decisions at the same time. We present a variable neighborhood search (VNS) based on a previous successful VNS for the LRP, accordingly adapted as well as extended. The proposed algorithm provides solutions of high quality in short time, making use of seven different basic neighborhood structures parameterized with different perturbation sizes, leading to a total of 21 specific neighborhood structures. For intensification, two consecutive local search methods are applied, optimizing the transport costs efficiently by considering only recently changed solution parts. Experimental results clearly show that our method is at least competitive regarding runtime and solution quality to other leading approaches, also improving upon several best known solutions.

Link to Repositum

Improved packing and routing of vehicles with compartments
Pirkwieser, Sandro, Raidl, Günther, Gottlieb, Jens
Type: Inproceedings; In: Proceedings of the 13th International Conference on Computer Aided Systems Theory: Part I; Pages: 392-399
Show Abstract
We present a variable neighborhood search for the vehicle routing problem with compartments where we incorporate some features specifically aiming at the packing aspect. Among them we use a measure to distinguish packings and favor solutions with a denser packing, propose new neighborhood structures for shaking, and employ best-fit and best-fit-decreasing methods for inserting orders. Our approach yields encouraging results on a large set of test instances, obtaining new best known solutions for almost two-thirds of them.

Link to Repositum

On Solving the Rooted Delay- and Delay-Variation-Constrained Steiner Tree Problem
Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 2nd International Symposium on Combinatorial Optimization; Pages: 225-236
Show Abstract
We present mixed integer programming approaches for optimally solving a combinatorial optimization problem arising in network design with additional quality of service constraints. The rooted delay- and delay-variation-constrained Steiner tree problem asks for a cost-minimal Steiner tree satisfying delay-constraints from source to terminals and a maximal variation-bound between particular terminal path-delays. Our MIP models are based on multi-commodity-flows and a layered graph transformation. For the latter model we propose some new sets of valid inequalities and an efficient separation method. Presented experimental results indicate that our layered graph approaches clearly outperform the flow-based model.

Link to Repositum

2011
Hybrid metaheuristics in combinatorial optimization: A survey
Blum, Christian, Puchinger, Jakob, Raidl, Günther R., Roli, Andrea
Type: Article; In: Applied Soft Computing; Vol: 11; Issue: 6; Pages: 4135-4151
Show Abstract
Research in metaheuristics for combinatorial optimization problems has lately experienced a noteworthy shift towards the hybridization of metaheuristics with other techniques for optimization. At the same time, the focus of research has changed from being rather algorithm-oriented to being more problem-oriented. Nowadays the focus is on solving the problem at hand in the best way possible, rather than promoting a certain metaheuristic. This has led to an enormously fruitful cross-fertilization of different areas of optimization. This cross-fertilization is documented by a multitude of powerful hybrid algorithms that were obtained by combining components from several different optimization techniques. Hereby, hybridization is not restricted to the combination of different metaheuristics but includes, for example, the combination of exact algorithms and metaheuristics. In this work we provide a survey of some of the most important lines of hybridization. The literature review is accompanied by the presentation of illustrative examples.

Link to Repositum

Solving the minimum label spanning tree problem by mathematical programming techniques
Chwatal, Andreas M., Raidl, Günther R.
Type: Article; In: Advances in Operations Research; Vol: 2011; Pages: 1-38
Show Abstract
We present exact mixed integer programming approaches including branch-and-cut and branch-and-cut-and-price for the minimum label spanning tree problem as well as a variant of it having multiple labels assigned to each edge. We compare formulations based on network flows and directed connectivity cuts. Further, we show how to use odd-hole inequalities and additional inequalities to strengthen the formulation. Label variables can be added dynamically to the model in the pricing step. Primal heuristics are incorporated into the framework to speed up the overall solution process. After a polyhedral comparison of the involved formulations, comprehensive computational experiments are presented in order to compare and evaluate the underlying formulations and the particular algorithmic building blocks of the overall branch-and-cut- (and-price) framework.

Link to Repositum

Branch-and-cut-and-price for capacitated connected facility location
Leitner, Markus, Raidl, Günther R.
Type: Article; In: Journal of Mathematical Modelling and Algorithms; Vol: 10; Issue: 3; Pages: 245-267
Show Abstract
We consider a generalization of the Connected Facility Location Problem (ConFL), suitable to model real world network extension scenarios such as fiber-to-the-curb. In addition to choosing a set of facilities and connecting them by a Steiner tree as in ConFL, we aim to maximize the resulting profit by potentially supplying only a subset of all customers. Furthermore, capacity constraints on potential facilities need to be considered. We present two mixed integer programming based approaches which are solved using branch-and-cut and branch-and-cut-and-price, respectively. By studying the corresponding polyhedra we analyze both approaches theoretically and show their advantages over previously presented models. Furthermore, using a computational study we are able to additionally show significant advantages of our models over previously presented ones from a practical point of view.

Link to Repositum

Improved packing and routing of vehicles with compartments
Pirkwieser, Sandro, Raidl, Günther, Gottlieb, Jens
Type: Inproceedings; In: Extended Abstracts of EUROCAST 2011 - 13th International Conference on Computer Aided Systems Theory; Pages: 302-304

Link to Repositum

A branch-and-cut-and-price algorithm for a fingerprint-template compression application
Chwatal, Andreas, Thöni, Corinna, Oberlechner, Karin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 2011 Federated Conference on Computer Science and Information Systems (FedCSIS); Pages: 239-246

Link to Repositum

A Layered Graph Model and an Adaptive Layers Framework to Solve Delay-Constrained Minimum Tree Problems
Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Fifteenth Conference on Integer Programming and Combinatorial Optimization (IPCO XV); Pages: 376-388
Show Abstract
We present a layered graph model for delay-constrained minimum tree problems with a polynomial number of constraints which can be solved well for instances with low- to medium-sized sets of achievable delay values and not too high bounds. Layered graph models have been recently shown to frequently yield tight bounds in the context of hop- or delay-constrained network design problems. However, since the size of the layered graph heavily depends on the size of the set of achievable delay values and the corresponding delay bound, the practical applicability of these models is limited. To overcome this problem we introduce an iterative strategy in which an initially small layered graph is successively extended in order to tighten lower and upper bounds until convergence to the optimal solution. Computational results show the synergetic effectiveness of both approaches, outperforming existing models in nearly all cases.

Link to Repositum

Tackling the loading aspect of the vehicle routing problem with compartments
Pirkwieser, Sandro, Raidl, Günther, Gottlieb, Jens
Type: Inproceedings; In: Proceedings of the 9th Metaheuristics International Conference; Pages: 679-681

Link to Repositum

Stabilized column generation for the rooted delay-constrained Steiner tree problem
Leitner, Markus, Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Proceedings of the VII ALIO/EURO - Workshop on Applied Combinatorial Optimization; Pages: 250-253
Show Abstract
We consider the rooted delay-constrained Steiner tree problem which arises for example in the design of centralized multicasting networks where quality of service constraints are of concern. We present a path based integer linear programming formulation which has already been considered in the literature for the spanning tree variant. Solving its linear relaxation by column generation has so far been regarded as not competitive due to long computational times needed. In this work, we show how to significantly accelerate the column generation process using two different stabilization techniques. Computational results indicate that due to the achieved speed-up our approach outperforms so-far proposed methods.

Link to Repositum

A timeslot-filling based heuristic approach to construct high-school timetables
Pimmer, Michael, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 9th Metaheuristics International Conference; Pages: 349-358
Show Abstract
This work describes an approach for creating high-school timetables. To develop and test our algorithm, we used the international, real-world instances of the Benchmarking project for (High) School Timetabling. Contrary to most other heuristic approaches, we do not try to iteratively assign single meetings (events) to timeslots. Instead, we repeatedly choose a not entirely occupied timeslot and aim at simultaneously assigning the most suitable set of meetings. To improve and diversify the solutions, a heuristic that deletes and reassigns certain timeslots, events or resources is applied and combined with a hill-climbing procedure to find suitable parameters for grading constraints. Experimental results indicate the competitiveness of this new approach.

Link to Repositum

Using a Solution Archive to Enhance Metaheuristics for the Rooted Delay-Constrained Minimum Spanning Tree Problem
Ruthmair, Mario, Hubmer, Andreas, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of EUROCAST 2011 - 13th International Conference on Computer Aided Systems Theory; Pages: 285-287
Show Abstract
When designing a communication network with a central server broadcasting information to all the participants of the network, some applications, such as video conferences, require a limitation of the maximal delay from the server to each client. Besides this delay constraint, minimizing the cost of establishing the network is in most cases an important design criterion. This network design problem can be modeled as an NP-hard combinatorial optimization problem called the rooted delay-constrained minimum spanning tree (RDCMST) problem. The objective is to find a minimum cost spanning tree of a given graph with the additional constraint that the sum of delays along the paths from a specified root node to any other node must not exceed a given delay-bound. More formally, we are given an undirected graph G = (V, E) with a set V of n nodes, a set E of m edges, a cost function c : E → R≥0, a delay function d : E → R+, a fixed root node s ∈ V, and a delay-bound B > 0. An optimal solution to the RDCMST problem is a spanning tree T = (V, E′), E′ ⊆ E, with minimum cost c(T) = Σ_{e∈E′} c(e), satisfying the constraints Σ_{e∈P(s,v)} d(e) ≤ B, ∀v ∈ V, where P(s, v) denotes the unique path from root s to node v. Exact approaches to the RDCMST problem have been examined by Gouveia et al. in [1] based on the concept of constrained shortest paths utilized in column generation, Lagrangian relaxation methods, and a flow-based reformulation of the problem on layered acyclic graphs. All these methods can only solve small instances with significantly less than 100 nodes to proven optimality in reasonable time when considering complete graphs. A constructive heuristic approach based on Prim's algorithm to find a minimum spanning tree is described by Salama et al. in [6]. In [4] we present a more decentralized approach by applying the basic concept of Kruskal's minimum spanning tree algorithm to the RDCMST problem.
Two metaheuristics based on GRASP and variable neighborhood descent (VND) improve the constructed solution. In [5] we reuse this VND as the local search component in a general variable neighborhood search (GVNS) and a MAX-MIN ant system (MMAS). The MMAS mostly obtains the best results in the performed tests.
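The delay constraint above (every root-to-node path delay bounded by B) admits a simple feasibility check. The following is an illustrative sketch, not code from the paper; the edge-list and frozenset-keyed delay representation are our own assumptions:

```python
from collections import defaultdict

def delays_from_root(tree_edges, delay, root):
    """Accumulated delay from `root` to every node along the unique
    tree paths, computed by a depth-first traversal."""
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = {root: 0}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + delay[frozenset((u, v))]
                stack.append(v)
    return dist

def is_feasible(tree_edges, delay, root, bound):
    """A spanning tree is RDCMST-feasible iff no root-to-node
    path delay exceeds the delay-bound B."""
    return all(d <= bound for d in delays_from_root(tree_edges, delay, root).values())
```

A heuristic such as the Kruskal-based construction in [4] would call a check like this when deciding whether an edge may be added.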

Link to Repositum

Automatic generation of 2-AntWars players with Genetic Programming
Inführ, Johannes, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of EUROCAST 2011 - 13th International Conference on Computer Aided Systems Theory; Pages: 244-246
Show Abstract
AntWars is a competitive two-player game with local information that was introduced as part of a competition accompanying the Genetic and Evolutionary Computation Conference 2007 [1,3]. Both players control an ant in a toroidal world and have to collect randomly placed pieces of food. The player who collects more food wins. 2-AntWars is an extension of AntWars. In 2-AntWars, each player controls two ants in a rectangular world four times the size of the AntWars world. Controlling two ants increases the complexity of the problem considerably because now each player has to decide which ant to move in addition to selecting the direction of the move, and he has to keep the location of the ants in mind because moving an ant into the boundary of the world would make it immovable. Furthermore, the decision to battle with an ant of the opponent (by moving an ant to a location that is occupied by an ant of the enemy) requires more finesse than in AntWars. In AntWars, the aggressor wins instantly as the player who is attacked cannot counteract. In 2-AntWars, the defending player has the possibility to move his second ant to the position of the battle to win. The complete description of 2-AntWars can be found in [2]. In this work we studied how Genetic Programming can be used to create competent 2-AntWars players.

Link to Repositum

A Memetic Algorithm and a Solution Archive for the Rooted Delay-Constrained Minimum Spanning Tree Problem
Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 13th International Conference on Computer Aided Systems Theory: Part I; Pages: 351-358
Show Abstract
We present a memetic algorithm for a combinatorial optimization problem called rooted delay-constrained minimum spanning tree problem arising for example in centralized broadcasting networks where quality of service constraints are of concern. The memetic algorithm is based on a specialized solution representation and a simple and effective decoding mechanism. Solutions are locally improved by a variable neighborhood descent in two neighborhood structures. Furthermore, to tackle the problem of repeated examination of already visited solutions we investigate a simple hash-based method to only detect duplicates or, alternatively, a trie-based complete solution archive to additionally derive new unvisited solutions. Experimental results show that our memetic algorithm outperforms existing heuristic approaches for this problem in most cases. Including the hash-based duplicate detection mostly further improves solution quality whereas the solution archive can only rarely obtain better results due to its operational overhead.

Link to Repositum

Stabilized branch-and-price for the rooted delay-constrained Steiner tree problem
Leitner, Markus, Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Network Optimization: 5th International Conference, INOC 2011; Pages: 124-138
Show Abstract
We consider the rooted delay-constrained Steiner tree problem which arises for example in the design of centralized multicasting networks where quality of service constraints are of concern. We present a mixed integer linear programming formulation based on the concept of feasible paths which has already been considered in the literature for the spanning tree variant. Solving its linear relaxation by column generation has, however, been regarded as computationally not competitive. In this work, we study various possibilities to speed-up the solution of our model by stabilization techniques and embed the column generation procedure in a branch-and-price approach in order to compute proven optimal solutions. Computational results show that the best among the resulting stabilized branch-and-price variants outperforms so-far proposed methods.

Link to Repositum

Variable neighborhood search for capacitated connected facility location
Leitner, Markus, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of EUROCAST 2011 - 13th International Conference on Computer Aided Systems Theory; Pages: 261-263
Show Abstract
The Capacitated Connected Facility Location Problem (CConFL) is an NP-hard combinatorial optimization problem which arises in the design of last mile communication networks (fiber-to-the-curb scenarios) [1]. Formally, we are given an undirected, weighted graph G = (V, E), with edge costs c_e ≥ 0, ∀e ∈ E. The node set V = {r} ∪ F ∪ T is the disjoint union of the root node r, potential facility locations F, and possible Steiner nodes T. Each facility i ∈ F has associated opening costs f_i ≥ 0 and a maximum assignable capacity D_i ∈ N. Furthermore, we are given a set of potential customers C, with individual capacity demands d_k ∈ N and prizes p_k ≥ 0, ∀k ∈ C, the latter corresponding to the expected profit when supplying customer k. Each customer k ∈ C may be assigned to one facility of a subset F_k ⊆ F, with assignment costs a_{i,k} ≥ 0, ∀i ∈ F_k. A solution to CConFL S = (R_S, T_S, F_S, C_S, α_S) consists of a Steiner tree (R_S, T_S), R_S ⊆ V, T_S ⊆ E, connecting the set of opened facilities F_S ⊆ F and the root node r. C_S ⊆ C is the set of customers feasibly (i.e. respecting the capacity constraints) assigned to facilities F_S, whereas the actual mapping between customers and facilities is described by α_S : C_S → F_S. The objective value of a feasible solution S is given by c(S) = Σ_{e∈T_S} c_e + Σ_{i∈F_S} f_i + Σ_{k∈C_S} a_{α_S(k),k} + Σ_{k∈C\C_S} p_k, and we aim to identify a most profitable solution minimizing this function. This variant of CConFL has already been tackled by exact methods based on mixed integer programming [2] and hybrid approaches based on Lagrangian relaxation [1]. Here, we present the first pure metaheuristic approach, which computes high-quality solutions faster than existing approaches.
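The objective value c(S) is a direct sum of four terms. A hypothetical evaluation sketch, with a data layout of our own choosing rather than the authors' implementation, could look like this:

```python
def cconfl_objective(edge_cost, open_cost, assign_cost, prize,
                     tree_edges, opened, assignment):
    """Evaluate c(S): Steiner tree edge costs + facility opening costs
    + assignment costs + forfeited prizes of unserved customers.
    `assignment` maps each served customer k to its facility alpha(k)."""
    served = set(assignment)
    return (sum(edge_cost[e] for e in tree_edges)       # edges of the Steiner tree
            + sum(open_cost[i] for i in opened)         # opened facilities
            + sum(assign_cost[(f, k)]                   # customer-to-facility assignments
                  for k, f in assignment.items())
            + sum(p for k, p in prize.items()           # prizes lost for unserved customers
                  if k not in served))
```

Capacity feasibility (Σ d_k ≤ D_i per opened facility) would be checked separately before evaluation.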

Link to Repositum

Introducing the virtual network mapping problem with delay, routing and location constraints
Inführ, Johannes, Raidl, Günther
Type: Inproceedings; In: Network Optimization: 5th International Conference; Pages: 105-117
Show Abstract
Network virtualization is a main paradigm of Future Internet research. It allows for automatic creation of virtual networks with application specific resource management, routing, topology and naming. Since those virtual networks need to be implemented by means of the underlying physical network, the Virtual Network Mapping Problem (VNMP) arises. In this work, we introduce the Virtual Network Mapping Problem with Delay, Routing and Location Constraints (VNMP-DRL), a variant of the VNMP including some practically relevant aspects of Virtual Network Mapping that have not been considered before. We describe the creation of a benchmark set for the VNMP-DRL. The main goal was to include VNMP-DRL instances which are as realistic as possible, a goal we met by using parts of real network topologies to model the physical networks and by using different classes of virtual networks to model possible use-cases, instead of relying on random graphs. As a first approach, we solve the VNMP-DRL benchmark set by means of a multicommodity flow integer linear program.

Link to Repositum

An evolutionary algorithm with solution archive for the generalized minimum spanning tree problem
Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of EUROCAST 2011 - 13th International Conference on Computer Aided Systems Theory; Pages: 256-258
Show Abstract
Attaching a solution archive to a metaheuristic for a combinatorial optimization problem in order to completely avoid evaluating duplicate solutions is a relatively novel approach [7]. When using a classical Evolutionary Algorithm (EA), for example, frequent re-evaluation of duplicate solutions cannot be avoided. This wastes valuable computation time which could have been spent in a more meaningful way otherwise. The solution archive takes advantage of this observation and stores already considered solutions in an appropriate data structure, allowing a fast detection of duplicates and efficient conversion of them into similar yet unvisited solutions. This concept has been successfully applied on two problems where solutions are encoded as binary strings [7]. Similar methods exist where solutions are cached by hash tables [4] or stored in k-d trees [9]. However, these approaches either do not support efficient conversion of duplicates or they are applied to problems with rather simple solution representations. In this paper we describe an archive-enhanced EA for the Generalized Minimum Spanning Tree Problem (GMSTP) which is defined as follows: Given an undirected weighted complete graph G = (V, E, c) with node set V partitioned into r pairwise disjoint clusters V_1, V_2, …, V_r, edge set E, and edge cost function c : E → R+, a solution S = (P, T) is defined as P = {p_1, p_2, …, p_r} ⊆ V containing exactly one node from each cluster, i.e. p_i ∈ V_i, i = 1, …, r, and T ⊆ E being a tree spanning the nodes in P. The costs of S are the total edge costs, i.e. C(T) = Σ_{(u,v)∈T} c(u, v), and the objective is to identify a solution with minimum costs. The GMSTP was introduced in [5] and has been proven to be NP-hard. In recent years, many successful metaheuristic approaches [1-3] were developed for this problem.
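The decoding step implicit in this definition, evaluating a choice of one node per cluster by the cost of a minimum spanning tree over the chosen nodes, might be sketched as follows. The function and its data layout are illustrative assumptions, not the paper's code; it uses a naive Prim's algorithm, adequate since only r nodes are spanned:

```python
def gmst_cost(choice, cost):
    """`choice` holds exactly one selected node per cluster; `cost[(u, v)]`
    gives symmetric edge costs on the complete graph. Returns the cost of
    a minimum spanning tree over the chosen nodes (naive Prim)."""
    in_tree = {choice[0]}
    total = 0
    while len(in_tree) < len(choice):
        # cheapest edge leaving the partial tree
        u, v = min(((a, b) for a in in_tree for b in choice if b not in in_tree),
                   key=lambda e: cost[e])
        total += cost[(u, v)]
        in_tree.add(v)
    return total
```

An EA over the cluster-choice representation would use such a decoder as its fitness function, which is exactly the kind of repeated evaluation the solution archive is meant to avoid for duplicates.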

Link to Repositum

A Multilevel Heuristic for the Rooted Delay-Constrained Minimum Spanning Tree Problem
Berlakovich, Martin, Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of EUROCAST 2011 - 13th International Conference on Computer Aided Systems Theory; Pages: 247-249

Link to Repositum

Variable neighborhood and greedy randomized adaptive search for capacitated connected facility location
Leitner, Markus, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 13th International Conference on Computer Aided Systems Theory: Part I; Pages: 295-302
Show Abstract
The Connected Facility Location problem combining facility location and Steiner trees has recently gained stronger scientific interest as it can be used to model the extension of last mile communication networks in so-called fiber-to-the-curb scenarios. We consider a generalization of this problem which considers capacity constraints on potential facilities and aims at maximizing the resulting profit by potentially supplying only a subset of all customers. In this work, we discuss two metaheuristic approaches for this problem based on variable neighborhood search and greedy randomized adaptive search. Computational results show that both approaches allow for computing high quality solutions in relatively short time.

Link to Repositum

2010
Hybrid metaheuristics
Blum, Christian, Puchinger, Jakob, Raidl, Günther, Roli, Andrea
Type: Book Contribution; In: Hybrid Optimization -The Ten Years of CPAIOR; Pages: 305-336

Link to Repositum

The multidimensional knapsack problem: Structure and algorithms
Puchinger, Jakob, Raidl, Günther, Pferschy, Ulrich
Type: Article; In: INFORMS Journal on Computing; Vol: 22; Issue: 2; Pages: 250-265
Show Abstract
We study the multidimensional knapsack problem, present some theoretical and empirical results about its structure, and evaluate different integer linear programming (ILP)-based, metaheuristic, and collaborative approaches for it. We start by considering the distances between optimal solutions to the LP relaxation and the original problem and then introduce a new core concept for the multidimensional knapsack problem (MKP), which we study extensively. The empirical analysis is then used to develop new concepts for solving the MKP using ILP-based and memetic algorithms. Different collaborative combinations of the presented methods are discussed and evaluated. Further computational experiments with longer run times are also performed to compare the solutions of our approaches to the best-known solutions of another so-far leading approach for common MKP benchmark instances. The extensive computational experiments show the effectiveness of the proposed methods, which yield highly competitive results in significantly shorter run times than do previously described approaches.

Link to Repositum

Strong lower bounds for a survivable network design problem
Leitner, Markus, Raidl, Günther
Type: Inproceedings; In: ISCO 2010 - International Symposium on Combinatorial Optimization; Pages: 295-302

Link to Repositum

A brief survey on hybrid metaheuristics
Blum, Christian, Puchinger, Jakob, Raidl, Günther, Roli, Andrea
Type: Inproceedings; In: Proceedings of BIOMA 2010 - 4th International Conference on Bioinspired Optimization Methods and their Applications; Pages: 3-16
Show Abstract
The combination of components from different algorithms is currently one of the most successful trends in optimization. The hybridization of metaheuristics such as ant colony optimization, evolutionary algorithms, and variable neighborhood search with techniques from operations research and artificial intelligence hereby plays an important role. The resulting hybrid algorithms are generally labelled hybrid metaheuristics. The rise of this new research field was due to the fact that the focus of research in optimization has shifted from an algorithm-oriented point of view to a problem-oriented point of view. In this brief survey on hybrid metaheuristics we provide an overview of some of the most interesting and representative developments.

Link to Repositum

Solving the minimum label spanning tree problem by ant colony optimization
Chwatal, Andreas, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 2010 International Conference on Genetic and Evolutionary Methods; Pages: 91-97
Show Abstract
The Minimum Label Spanning Tree Problem is a well known combinatorial optimization problem, having applications in telecommunication network design and data compression. The problem is NP-hard and cannot be approximated within a constant factor. In this work we present the application of ant colony optimization to this problem. Different pheromone models and construction mechanisms are introduced and local improvement methods are considered. An experimental investigation of the outlined components and a comparison to existing work are presented.

Link to Repositum

Matheuristics for the periodic vehicle routing problem with time windows
Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Proceedings of Matheuristics 2010: Third International Workshop on Model-Based Metaheuristics,; Pages: 83-95
Show Abstract
We investigate two matheuristic strategies using the periodic vehicle routing problem with time windows as a testbed. Two different metaheuristics are suitably combined with parts of a developed column generation approach: On the one hand a variable neighborhood search (VNS) acts as the sole provider of columns for a set covering model, hence realizing a pure metaheuristic column generation. Hereby, the VNS and the resolving of the model are performed in an intertwined way. On the other hand the solution to the linear programming (LP) relaxation of the set covering model, i.e. the columns (routes) and their respective (accumulated) LP values, found by a classical column generation approach are successfully exploited in a subsequent evolutionary algorithm. Both matheuristics often yield significantly better results than their pure metaheuristic counterparts. These approaches are applicable to other classes of combinatorial optimization problems as well.

Link to Repositum

A memetic algorithm with population management for the generalized minimum vertex-biconnected network problem
Pagacz, Anna, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: 2nd International Conference on Intelligent Networking and Collaborative Systems, Workshop on Information Network Design; Pages: 356-361

Link to Repositum

Multilevel variable neighborhood search for periodic routing problems
Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimisation - EvoCOP 2010; Pages: 226-238
Show Abstract
In this work we present the extension of a variable neighborhood search (VNS) with the multilevel refinement strategy for periodic routing problems. The underlying VNS was recently proposed and already performs well on these problems. We apply a path based coarsening scheme by building fixed (route) segments of customers accounting for the periodicity. Starting at the coarsest level the problem is iteratively refined until the original problem is reached again. This refinement is smoothly integrated into the VNS. Further, a suitable solution-based recoarsening is proposed. Results on available benchmark test data as well as on newly generated larger instances show the advantage of the multilevel VNS compared to the standard VNS, yielding better results in usually less CPU time. This new approach is especially appealing for large instances.

Link to Repositum

Enhancing genetic algorithms by a trie-based complete solution archive
Raidl, Günther, Hu, Bin
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimisation - EvoCOP 2010; Pages: 239-251
Show Abstract
Genetic algorithms (GAs) share a common weakness with most other metaheuristics: Candidate solutions are in general revisited multiple times, lowering diversity and wasting precious CPU time. We propose a complete solution archive based on a special binary trie structure for GAs with binary representations that efficiently stores all evaluated solutions during the heuristic search. Solutions that would later be revisited are detected and effectively transformed into similar yet unconsidered candidate solutions. The archive's relevant insert, find, and transform operations all run in time O(l) where l is the length of the solution representation. From a theoretical point of view, the archive turns the GA into a complete algorithm with a clear termination condition and bounded run time. Computational results are presented for Royal Road functions and NK landscapes, indicating the practical advantages.
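The core of the trie idea can be sketched in a few lines: insertion and duplicate detection of a binary string share one O(l) walk. This is a minimal illustration only; the paper's transform of detected duplicates into similar unvisited solutions is omitted here for brevity:

```python
class TrieArchive:
    """Complete solution archive over fixed-length binary strings.
    `seen` inserts a solution and reports whether it was already
    stored; the walk visits one trie node per bit, i.e. O(l)."""

    def __init__(self):
        self.root = {}

    def seen(self, bits):
        node, new = self.root, False
        for b in bits:
            if b not in node:
                node[b] = {}   # extend the trie along this solution
                new = True
            node = node[b]
        return not new         # True iff the full string existed before
```

In a GA loop, offspring reported as already seen would be handed to the transform operation instead of being re-evaluated.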

Link to Repositum

Variable neighborhood search and ant colony optimization for the rooted delay-constrained minimum spanning tree problem
Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Parallel Problem Solving from Nature - PPSN XI; Pages: 391-400
Show Abstract
The rooted delay-constrained minimum spanning tree problem is an NP-hard combinatorial optimization problem arising for example in the design of centralized broadcasting networks where quality of service constraints are of concern. We present two new approaches to solve this problem heuristically following the concepts of ant colony optimization (ACO) and variable neighborhood search (VNS). The ACO uses a fast construction heuristic based on node delays and local improvement exploiting two different neighborhood structures. The VNS employs the same neighborhood structures but additionally applies various kinds of shaking moves. Experimental results indicate that both metaheuristics outperform existing approaches whereas the ACO produces mostly the best solutions.

Link to Repositum

Hybrid Metaheuristics, 7th Int. Workshop, HM 2010
Authors not available
Type: Proceedings

Link to Repositum

The generalized minimum edge biconnected network problem: Efficient neighborhood structures for variable neighborhood search
Hu, Bin, Leitner, Markus, Raidl, Günther
Type: Article; In: Networks; Vol: 55; Issue: 3; Pages: 257-275

Link to Repositum

Similarity searching in sequences of complex events
Obweger, Hannes, Suntinger, Martin, Schiefer, Josef, Raidl, Günther
Type: Inproceedings; In: Proceedings of the Fourth International Conference on Research Challenges in Information Science - RCIS 2010; Pages: 631-639
Show Abstract
In this paper we present a generic similarity model for time-stamped sequences of complex business events. It builds upon the idea of deriving similarity from deviations between the pattern sequence and its best-possible representation in the candidate sequence. Which representation is considered optimal solely depends on the analyst's current focus and interest; the model thus foresees highest configurability to adequately balance aspects such as single-event similarities, order, timing, and missing events. The model is furthermore applicable for both sub-sequence searching and full-sequence matching. As an extension to the base model, we discuss enhanced pattern-modeling facilities, e.g., to ensure a maximal time interval between two or more candidate events. The proposed tree-search algorithm allows for a seamless integration of such extensions.

Link to Repositum

Trend-based similarity search in time-series data
Suntinger, Martin, Obweger, Hannes, Schiefer, Josef, Limbeck, Philip, Raidl, Günther
Type: Inproceedings; In: Proceedings of the Second International Conference on Advances in Database, Knowledge, and Data Applications - DBKDA 2010; Pages: 97-106
Show Abstract
In this paper, we present a novel approach towards time-series similarity search. Our technique relies on trends in a curve's movement over time. A trend is characterized by a series' values channeling in a certain direction (up, down, sideways) over a given time period before changing direction. We extract trend-turning points and utilize them for computing the similarity of two series based on the slopes between their turning points. For the turning point extraction, well-known techniques from financial market analysis are applied. The method supports queries of variable lengths and is resistant to different scaling of query and candidate sequence. It supports both subsequence searching and full sequence matching. One particular focus of this work is to enable simple modeling of query patterns as well as efficient similarity score updates in case of appending new data points.
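A minimal sketch of the turning-point idea described in the abstract, under our own assumptions: the function names are hypothetical, flat segments are ignored, and the paper's handling of timing, scale invariance, and subsequence queries is omitted.

```python
from typing import List

def turning_points(series: List[float]) -> List[int]:
    """Indices where the series changes direction (local extrema),
    always including the first and last point."""
    pts = [0]
    for i in range(1, len(series) - 1):
        # A sign change between consecutive differences marks a turn.
        if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
            pts.append(i)
    pts.append(len(series) - 1)
    return pts

def slopes(series: List[float]) -> List[float]:
    """Slopes of the trend segments between consecutive turning points."""
    tp = turning_points(series)
    return [(series[b] - series[a]) / (b - a) for a, b in zip(tp, tp[1:])]

def trend_similarity(a: List[float], b: List[float]) -> float:
    """Mean absolute slope difference over aligned trend segments;
    lower means more similar (0.0 = identical trend shapes)."""
    sa, sb = slopes(a), slopes(b)
    n = min(len(sa), len(sb))
    return sum(abs(x - y) for x, y in zip(sa[:n], sb[:n])) / n
```

For example, `[0, 1, 2, 1, 0]` has turning points at indices 0, 2, and 4, giving slopes `[1.0, -1.0]`; comparing it against a steeper tent `[0, 2, 4, 2, 0]` yields a nonzero dissimilarity, while comparing it against itself yields 0.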

Link to Repositum

A memetic algorithm for reconstructing cross-cut shredded text documents
Schauer, Christian, Prandtstetter, Matthias, Raidl, Günther
Type: Inproceedings; In: Proceedings of Hybrid Metaheuristics - Seventh International Workshop, HM 2010; Pages: 103-117
Show Abstract
The reconstruction of destroyed paper documents has attracted increasing interest in recent years. On the one hand, documents are often destroyed by mistake; on the other hand, this type of application is relevant in the fields of forensics and archeology, e.g., for recovering evidence or restoring ancient documents. Within this paper, we present a new approach for restoring cross-cut shredded text documents, i.e., documents which were mechanically cut into rectangular shreds of (almost) identical shape. For this purpose we present a genetic algorithm that is extended to a memetic algorithm by embedding a (restricted) variable neighborhood search (VNS). Additionally, the memetic algorithm's final solution is further improved by an enhanced version of the VNS. Computational experiments suggest that the newly developed algorithms are not only competitive with the so far best known algorithms for the reconstruction of cross-cut shredded documents but clearly outperform them.

Link to Repositum

Variable neighborhood search coupled with ILP-based large neighborhood searches for the (periodic) location-routing problem
Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Proceedings of Hybrid Metaheuristics - Seventh International Workshop, HM 2010; Pages: 174-189
Show Abstract
This work deals with the application of a variable neighborhood search (VNS) to the capacitated location-routing problem (LRP) as well as to the more general periodic LRP (PLRP). For this, previous successful VNS algorithms for related problems are considered and accordingly adapted as well as extended. The VNS is subsequently combined with three very large neighborhood searches (VLNS) based on integer linear programming: Two operate on whole routes and do a rather coarse, yet powerful optimization, with the more sophisticated one also taking the single customers into account, and the third operates on customer sequences to do a more fine-grained optimization. Several VNS plus VLNS combinations are presented and very encouraging experimental results are given. Our method clearly outperforms previous PLRP approaches and is at least competitive to leading approaches for the LRP.

Link to Repositum

Fitting multi-planet transit models to photometric time-data series by evolution strategies
Chwatal, Andreas, Raidl, Günther, Zöch, Michael
Type: Inproceedings; In: Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation; Pages: 7
Show Abstract
In this paper we present the application of an evolution strategy to the problem of detecting multi-planet transit events in photometric time-data series. Planetary transits occur when a planet regularly eclipses its host star, reducing stellar luminosity. The transit method is amongst the most successful detection methods for exoplanets and is presently performed by space telescope missions. The goal of the presented algorithm is to find high quality fits of multi-planet transit models to observational data, which is a challenging computational task. In particular we present a method for an effective objective function evaluation and show how the algorithm can be implemented on graphics processing units. Results on artificial test data containing three planets are reported.
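As a toy illustration of the idea, the sketch below fits a box-shaped single-transit model with a minimal (1+1) evolution strategy. All names, the box model, and the parameter choices are our assumptions; the paper's GPU-based objective evaluation and multi-planet handling are omitted.

```python
import random

def transit_flux(t: float, period: float, t0: float,
                 duration: float, depth: float) -> float:
    """Box-shaped transit model: relative flux drops by `depth` while
    the planet crosses the stellar disk, once per orbital period."""
    phase = (t - t0) % period
    return 1.0 - depth if phase < duration else 1.0

def sum_sq_error(params, times, obs) -> float:
    """Sum of squared residuals between model and observed flux."""
    period, t0, duration, depth = params
    return sum((transit_flux(t, period, t0, duration, depth) - y) ** 2
               for t, y in zip(times, obs))

def one_plus_one_es(times, obs, init, sigma=0.05, iters=500, seed=1):
    """Minimal (1+1)-ES: mutate all parameters with Gaussian noise
    and keep the child only if it improves the fit."""
    rng = random.Random(seed)
    parent = list(init)
    best = sum_sq_error(parent, times, obs)
    for _ in range(iters):
        child = [p + rng.gauss(0.0, sigma) for p in parent]
        err = sum_sq_error(child, times, obs)
        if err < best:
            parent, best = child, err
    return parent, best
```

Because a child replaces its parent only on improvement, the returned error can never exceed the error of the starting guess; a real application would add step-size adaptation and restarts.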

Link to Repositum

2009
Solving a k-node minimum label spanning arborescence problem to compress fingerprint templates
Chwatal, Andreas M., Raidl, Günther R., Oberlechner, Karin
Type: Article; In: Journal of Mathematical Modelling and Algorithms; Vol: 8; Issue: 3; Pages: 293-334
Show Abstract
We present a novel approach for compressing relatively small unordered data sets by means of combinatorial optimization. The application background comes from the field of biometrics, where the embedding of fingerprint template data into images by means of watermarking techniques requires extraordinary compression techniques. The approach is based on the construction of a directed tree, covering a sufficient subset of the data points. The arcs are stored via referencing a dictionary, which contains "typical" arcs w.r.t. the particular tree solution. By solving a tree-based combinatorial optimization problem we are able to find a compact representation of the input data. As optimization method we use on the one hand an exact branch-and-cut approach, and on the other hand heuristics including a greedy randomized adaptive search procedure (GRASP) and a memetic algorithm. Experimental results show that our method is able to achieve higher compression rates for fingerprint (minutiae) data than several standard compression algorithms.

Link to Repositum

Fitting multi-planet transit models to corot time-data series by evolutionary algorithms
Chwatal, Andreas, Wuchterl, Günther, Raidl, Günther
Type: Presentation

Link to Repositum

Combining Metaheuristics with Mathematical Programming Techniques for Solving Difficult Network Design Problems
Raidl, Günther
Type: Presentation
Show Abstract
While mathematical programming techniques are powerful approaches for solving hard combinatorial optimization problems appearing in the area of network design, their application is usually limited to instances of small or moderate size due to their excessive running time and memory requirements. In practice, one usually has to turn to heuristic and metaheuristic approaches for approximately solving larger instances. Over the last years so-called hybrid optimization techniques have become more popular. They combine concepts of different optimization strategies and try to exploit their individual benefits. Various examples illustrate that often such hybrids are indeed able to outperform their "pure" counterparts. This talk gives an overview on successful hybridization techniques with a particular focus on combinations of metaheuristics and integer linear programming (ILP) methods. We will then consider a few case studies, including approaches where very large neighborhoods are searched by means of ILP methods, a Lagrangian decomposition approach collaborates with variable neighborhood search, and a column generation algorithm is sped up by means of metaheuristics. These examples document the usefulness and large potential such combinations have.

Link to Repositum

Kombinationen von Metaheuristiken und Methoden der mathematischen Programmierung zur Lösung schwieriger Netzwerkdesign-Probleme
Raidl, Günther
Type: Presentation

Link to Repositum

Innovative Lösungen für Routenplanung, Packungsprobleme und Lagerlogistik
Raidl, Günther
Type: Presentation
Show Abstract
Computing efficient transport routes while taking special customer requirements into account is, in general, a very complex task. The same applies to cutting-stock optimization in production and to efficient deployment planning in a large warehouse. These tasks can be solved by combinatorial optimization algorithms. At TU Wien, research is conducted on these methods and their possible combinations, enabling effective solutions to operational optimization problems.

Link to Repositum

Combining Metaheuristics with Mathematical Programming Techniques for Solving Difficult Network Design Problems
Raidl, Günther
Type: Presentation

Link to Repositum

Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference
Authors not available
Type: Proceedings

Link to Repositum

Solving the Euclidean Bounded Diameter Minimum Spanning Tree Problem by Clustering-Based (Meta-)Heuristics
Gruber, Martin, Raidl, Günther
Type: Inproceedings; In: Computer Aided Systems Theory - EUROCAST 2009, 12th International Conference on Computer Aided Systems Theory; Pages: 665-672
Show Abstract
The bounded diameter minimum spanning tree problem is an NP-hard combinatorial optimization problem arising in particular in network design. There exist various exact and metaheuristic approaches addressing this problem, whereas fast construction heuristics are primarily based on Prim's minimum spanning tree algorithm and fail to produce reasonable solutions in particular on large Euclidean instances. In this work we present a method based on hierarchical clustering to guide the construction process of a diameter constrained tree. Solutions obtained are further refined using a greedy randomized adaptive search procedure. Especially on large Euclidean instances with a tight diameter bound the results are excellent. In this case the solution quality can also compete with that of a leading metaheuristic.

Link to Repositum

Boosting a variable neighborhood search for the periodic vehicle routing problem with time windows by ILP techniques
Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 8th Metaheuristic International Conference (MIC 2009); Pages: 10

Link to Repositum

A Kruskal-Based Heuristic for the Rooted Delay-Constrained Minimum Spanning Tree Problem
Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Computer Aided Systems Theory - EUROCAST 2009, volume 5717 of LNCS; Pages: 713-720
Show Abstract
The rooted delay-constrained minimum spanning tree problem is an NP-hard combinatorial optimization problem arising for example in the design of centralized broadcasting networks where quality of service constraints are of concern. We present a construction heuristic based on Kruskal's algorithm for finding a minimum cost spanning tree which eliminates some drawbacks of existing heuristic methods. To improve the solution we introduce a greedy randomized adaptive search procedure (GRASP) and a variable neighborhood descent (VND) using two different neighborhood structures. Experimental results indicate that our approach produces solutions of better quality in shorter runtime under strict delay bounds, compared to an existing centralized construction method based on Prim's algorithm. Especially when testing on Euclidean instances our Kruskal-based heuristic outperforms the Prim-based approach in all scenarios. Moreover, our construction heuristic seems to be a better starting point for subsequent improvement methods.

Link to Repositum

Computing optimized stock (re-)placements in last-in, first-out warehouses
Ritzinger, Ulrike, Prandtstetter, Matthias, Raidl, Günther
Type: Inproceedings; In: Logistik Management; Pages: 279-298
Show Abstract
Within this work, we focus on the optimization of storage location assignments arising in warehouses with storage locations applying a last-in, first-out throughput policy. The sequence of goods to be stored is, however, not entirely known, such that for each item the currently best storage location has to be identified almost immediately. Caused by this imperfect data and by stock removals concurrently performed, it is necessary to apply relocation operations from time to time, which might range from a few operations to relocations lasting a working day, depending on the workload of the warehousemen. For this purpose we propose an ad hoc stocking strategy as well as a storage relocation strategy based on variable neighborhood descent. Supported by experimental tests, we compare variants of our approaches with each other and with formerly used stocking strategies, showing that the number of conflicts could be significantly reduced by the proposed approach. Furthermore, an application of the relocation strategy can significantly improve warehouse states obtained due to imperfect stocking strategies, concurrently performed stock removals, and insufficient information on production sequences.
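The conflict notion can be illustrated with a toy last-in, first-out model; `removal_conflicts` and its one-relocation-per-blocking-item cost are our own simplification, not the paper's actual objective or VND.

```python
from typing import Dict, List

def removal_conflicts(stacks: Dict[str, List[str]],
                      removals: List[str]) -> int:
    """Count relocations needed to serve a removal sequence from
    LIFO stacks (list end = top). Each requested item buried under
    k other items costs k relocations; the stacks are mutated as
    items are removed, blockers are assumed to be set aside."""
    conflicts = 0
    for item in removals:
        for stack in stacks.values():
            if item in stack:
                idx = stack.index(item)
                conflicts += len(stack) - 1 - idx  # items stored on top
                stack.pop(idx)  # remove the requested item
                break
    return conflicts
```

For instance, removing the bottom item of a three-item stack costs two relocations, while draining a stack strictly top-down costs none; an imperfect stocking strategy shows up directly as a higher conflict count.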

Link to Repositum

A column generation approach for the periodic vehicle routing problem with time windows
Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Proceedings of the International Network Optimization Conference 2009; Pages: 6
Show Abstract
We present a column generation approach for obtaining strong lower bounds to the periodic vehicle routing problem with time windows (PVRPTW) where customers must be served several times during a planning period. For this a set-covering model is introduced whose linear programming relaxation is solved. The pricing subproblem, responsible for generating new columns, is an elementary shortest path problem with resource constraints. The latter is solved by a label correcting dynamic programming algorithm for which we introduce appropriate label resources, extension functions, and dominance rules. Different settings for this algorithm are suggested and applied in combination to tune its behavior. We further propose a greedy randomized adaptive search procedure (GRASP) to solve the pricing subproblem. Experimental results on test instances differing in size, time windows and period length indicate strong lower bounds for many instances and the advantage of applying the metaheuristic in combination with the dynamic programming algorithm to save computation time on larger instances.

Link to Repositum

A lagrangian decomposition based heuristic for capacitated connected facility location
Leitner, Markus, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 8th Metaheuristic International Conference (MIC 2009)

Link to Repositum

Solving a multi-constrained network design problem by lagrangean decomposition and column generation
Chwatal, Andreas, Musil, Nina, Raidl, Günther
Type: Inproceedings; In: Proceedings of the International Network Optimization Conference 2009; Pages: 7
Show Abstract
In this paper we describe two approaches to solve a real-world multi-constrained network-design problem. The objective is to select the cheapest subset of links in a given network which allows messages to be feasibly routed from their respective source to target nodes while respecting various constraints. These constraints include particular capacity and delay constraints for each message, as well as a global delay constraint. Furthermore, some messages may only be routed on connections supporting a secure protocol. The problem is strongly NP-hard and larger instances cannot be solved to provable optimality in practice. Hence, we present two heuristic approaches based on Lagrangean Decomposition and Column Generation, which turned out to be well suited. From these methods we obtain lower bounds as well as feasible solutions.

Link to Repositum

Accelerating column generation for a survivable network design problem
Leitner, Markus, Raidl, Günther, Pferschy, Ulrich
Type: Inproceedings; In: Proceedings of the International Network Optimization Conference 2009; Pages: 8
Show Abstract
We consider a network design problem occurring in the extension of fiber optic networks on the last mile which generalizes the (Prize-Collecting) Steiner Tree Problem by introducing redundancy requirements on some customer nodes. In this work we present a formulation for this problem based on exponentially many variables and solve its linear relaxation by column generation. Using alternative dual-optimal solutions in the pricing problem, we are able to significantly reduce the effects of typical efficiency issues of simplex-based column generation. Computational results clearly show the advantages of our proposed strategy with respect to the number of pricing iterations needed as well as the required running times.

Link to Repositum

Cluster-based (meta-)heuristics for the Euclidean bounded diameter minimum spanning tree problem
Gruber, Martin, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the Twelfth International Conference on Computer Aided Systems Theory (EUROCAST 2009); Pages: 228-231

Link to Repositum

Fitting rectangular signals to time series data by metaheuristic algorithms
Chwatal, Andreas, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the Twelfth International Conference on Computer Aided Systems Theory (EUROCAST 2009); Pages: 222-225
Show Abstract
In this work we consider the application of two metaheuristics, namely evolution strategies and scatter search, to the problem of fitting rectangular signals to time-data series. The application background is the search for exoplanet-transit signals in stellar photometric observation data.

Link to Repositum

A Kruskal-based heuristic for the rooted delay-constrained minimum spanning tree problem
Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Extended Abstracts of the Twelfth International Conference on Computer Aided Systems Theory (EUROCAST 2009); Pages: 244-246

Link to Repositum

(Meta-)heuristic separation of jump cuts in a branch&cut approach for the bounded diameter minimum spanning tree problem
Gruber, Martin, Raidl, Günther
Type: Book Contribution; In: Matheuristics - Hybridizing Metaheuristics and Mathematical Programming,volume 10 of Annals of Information Systems; Pages: 209-230
Show Abstract
The bounded diameter minimum spanning tree problem is an NP-hard combinatorial optimization problem arising, for example, in network design when quality of service is of concern. We solve a strong integer linear programming formulation based on so-called jump inequalities by a Branch&Cut algorithm. As the separation subproblem of identifying currently violated jump inequalities is difficult, we approach it heuristically by two alternative construction heuristics, local search, and optionally tabu search. We also introduce a new type of cuts, the center connection cuts, to strengthen the formulation in the more difficult to solve odd diameter case. In addition, primal heuristics are used to compute initial solutions and to locally improve incumbent solutions identified during Branch&Cut. The overall algorithm performs excellently, and we were able to obtain proven optimal solutions for some test instances that were too large to be solved so far.

Link to Repositum

MetaBoosting: Enhancing Integer Programming Techniques by Metaheuristics
Puchinger, Jakob, Raidl, Günther, Pirkwieser, Sandro
Type: Book Contribution; In: Matheuristics - Hybridizing Metaheuristics and Mathematical Programming,Volume 10 of Annals of Information Systems; Pages: 71-102

Link to Repositum

Meta-heuristics for reconstructing cross cut shredded text documents
Prandtstetter, Matthias, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 11th annual conference on Genetic and evolutionary computation; Pages: 8
Show Abstract
In this work, we present two new approaches based on variable neighborhood search (VNS) and ant colony optimization (ACO) for the reconstruction of cross cut shredded text documents. For quickly obtaining initial solutions, we consider four different construction heuristics. While one of them is based on the well-known algorithm of Prim, another one tries to match shreds according to the similarity of their borders. Two further construction heuristics rely on the fact that in most cases the left and right edges of paper documents are blank, i.e., no text is written on them. Randomized variants of these construction heuristics are applied within the ACO. Experimental tests reveal that regarding the solution quality the proposed ACO variants perform better than the VNS approaches in most cases, while the running times needed are shorter for VNS. The high potential of these approaches for reconstructing cross cut shredded text documents is underlined by the obtained results.
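The border-similarity heuristic mentioned in the abstract can be sketched as follows, treating shreds as grayscale pixel grids; `border_cost` is a simplified stand-in we introduce for illustration, not the error measure actually used in the paper.

```python
from typing import List

Shred = List[List[int]]  # rows of grayscale pixel values (0-255)

def border_cost(left_shred: Shred, right_shred: Shred) -> int:
    """Cost of placing right_shred immediately to the right of
    left_shred: sum of absolute pixel differences between the two
    touching columns (rightmost column of the left shred vs.
    leftmost column of the right shred). Lower cost suggests the
    shreds were adjacent in the original page."""
    return sum(abs(row_l[-1] - row_r[0])
               for row_l, row_r in zip(left_shred, right_shred))
```

A greedy construction heuristic would repeatedly append the unplaced shred with the lowest `border_cost` to the current right edge; a metaheuristic such as VNS or ACO then revises these locally good but globally suboptimal matches.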

Link to Repositum

Multiple variable neighborhood search enriched with ILP techniques for the periodic vehicle routing problem with time windows
Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Proceedings of Hybrid Metaheuristics - Sixth International Workshop; Pages: 45-59
Show Abstract
In this work we extend a VNS for the periodic vehicle routing problem with time windows (PVRPTW) to a multiple VNS (mVNS) where several VNS instances are applied cooperatively in an intertwined way. The mVNS adaptively allocates VNS instances to promising areas of the search space. Further, an intertwined collaborative cooperation with a generic ILP solver applied on a suitable set covering ILP formulation with this mVNS is proposed, where the mVNS provides the exact method with feasible routes of the actual best solutions, and the ILP solver takes a global view and seeks to determine better feasible route combinations. Experiments were conducted on newly derived instances, and the results show the advantage of the mVNS as well as of the hybrid approach. The latter yields for almost all instances a statistically significant improvement over solely applying the VNS in a standard way, often requiring less runtime, too.

Link to Repositum

A memetic algorithm for the generalized minimum vertex-biconnected network problem
Hu, Bin, Raidl, Günther
Type: Inproceedings; In: 9th International Conference on Hybrid Intelligent Systems - HIS 2009; Pages: 6
Show Abstract
The generalized minimum vertex-biconnected network problem plays an important role in the design of survivable backbone networks that should be fault tolerant to single component outage. When given a graph where the nodes are partitioned into clusters, the goal is to find a subgraph of minimum costs that connects exactly one node from each cluster in a vertex-biconnected way. We present a memetic algorithm that uses fast local improvement methods to produce high quality solutions and an intelligent crossover operator which controls the balance between diversity and intensity in the population. Tests on Euclidean TSPlib instances with up to 442 nodes show that this approach is highly efficient.

Link to Repositum

Exploiting hierarchical clustering for finding bounded diameter minimum spanning trees on Euclidean instances
Gruber, Martin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation; Pages: 8
Show Abstract
The bounded diameter minimum spanning tree problem is an NP-hard combinatorial optimization problem arising, for example, in network design when quality of service is of concern. There exist various exact and metaheuristic approaches addressing this problem, whereas fast construction heuristics are primarily based on Prim's minimum spanning tree algorithm and fail to produce reasonable solutions in particular on large Euclidean instances. A method based on hierarchical clustering to guide the construction process of a diameter constrained tree is presented. Solutions obtained are further refined using a greedy randomized adaptive search procedure. Based on the idea of clustering we also designed a new neighborhood search for this problem. Especially on large Euclidean instances with a tight diameter bound the results are excellent. In this case the solution quality can also compete with that of a leading metaheuristic, whereas the computation only needs a fraction of the time.

Link to Repositum

A hybrid algorithm for computing tours in a spare parts warehouse
Prandtstetter, Matthias, Raidl, Günther, Misar, Thomas
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization - EvoCOP 2009; Pages: 25-36
Show Abstract
We consider a real-world problem arising in a warehouse for spare parts. Items ordered by customers shall be collected and for this purpose our task is to determine efficient pickup tours within the warehouse. The algorithm we propose embeds a dynamic programming algorithm for computing individual optimal walks through the warehouse in a general variable neighborhood search (VNS) scheme. To enhance the performance of our approach we introduce a new self-adaptive variable neighborhood descent used as local improvement procedure within VNS. Experimental results indicate that our method provides valuable pickup plans, whereas the computation times are kept low and several constraints typically stated by spare parts suppliers are fulfilled.

Link to Repositum

Solving a video-server load re-balancing problem by mixed integer programming and hybrid variable neighborhood search
Walla, Jakob, Ruthmair, Mario, Raidl, Günther
Type: Inproceedings; In: Hybrid Metaheuristics 2009, volume 5818 of LNCS; Pages: 84-99
Show Abstract
A Video-on-Demand system usually consists of a large number of independent video servers. In order to utilize network resources as efficiently as possible, the overall network load should be balanced among the available servers. We consider a problem formulation based on an estimation of the expected number of requests per movie during the period of highest user interest. Apart from load balancing our formulation also deals with the minimization of reorganization costs associated with a newly obtained solution. We present two approaches to solve this problem: an exact formulation as a mixed-integer linear program (MIP) and a metaheuristic hybrid based on variable neighborhood search (VNS). Among others, the VNS features two special large neighborhood structures searched using the MIP approach and by efficiently calculating cyclic exchanges, respectively. While the MIP approach alone is only able to obtain good solutions for instances involving few servers, the hybrid VNS performs well especially also on larger instances.

Link to Repositum

2008
A Lagrangian decomposition/evolutionary algorithm hybrid for the knapsack constrained maximum spanning tree problem
Pirkwieser, Sandro, Raidl, Günther, Puchinger, Jakob
Type: Book Contribution; In: Recent Advances in Evolutionary Computation for Combinatorial Optimization; Pages: 69-85

Link to Repositum

Exact methods and metaheuristic approaches for deriving high quality fully resolved consensus trees
Pirkwieser, Sandro, Ruiz-Torrubiano, Ruben, Raidl, Günther
Type: Presentation

Link to Repositum

Heuristic Jump Cut Separation in a Branch&Cut Approach for the Bounded Diameter Minimum Spanning Tree Problem
Gruber, Martin
Type: Presentation

Link to Repositum

Cooperative Hybrids for Combinatorial Optimization
Raidl, Günther
Type: Presentation

Link to Repositum

Combining Metaheuristics with Mathematical Programming Techniques for Solving Difficult Network Design Problems
Raidl, Günther
Type: Presentation

Link to Repositum

Combining variable neighborhood search with integer linear programming for the generalized minimum spanning tree problem
Hu, Bin, Leitner, Markus, Raidl, Günther
Type: Article; In: Journal of Heuristics; Vol: 14; Issue: 5; Pages: 473-499

Link to Repositum

Bringing order into the neighborhoods: Relaxation guided variable neighborhood search
Puchinger, Jakob, Raidl, Günther
Type: Article; In: Journal of Heuristics; Vol: 14; Issue: 5; Pages: 457-472

Link to Repositum

An integer linear programming approach and a hybrid variable neighborhood search for the car sequencing problem
Prandtstetter, Matthias, Raidl, Günther
Type: Article; In: European Journal of Operational Research; Vol: 191; Issue: 3; Pages: 1004-1022

Link to Repositum

Heuristic cut separation in a branch&cut approach for the bounded diameter minimum spanning tree problem
Gruber, Martin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 2008 International Symposium on Applications and the Internet; Pages: 261-264

Link to Repositum

Solving the post enrolment course timetabling problem by ant colony optimization
Mayer, Alfred, Nothegger, Clemens, Chwatal, Andreas, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 7th International Conference on the Practice and Theory of Automated Timetabling; Pages: 13

Link to Repositum

A variable neighborhood search for the periodic vehicle routing problem with time windows
Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 9th EU/MEeting on Metaheuristics for Logistics and Vehicle Routing

Link to Repositum

(Meta-)heuristic separation of jump cuts for the bounded diameter minimum spanning tree problem
Gruber, Martin, Raidl, Günther
Type: Inproceedings; In: Proceedings of Matheuristics 2008: Second International Workshop on Model Based Metaheuristics

Link to Repositum

Solving the railway traveling salesman problem via a transformation into the classical traveling salesman problem
Hu, Bin, Raidl, Günther
Type: Inproceedings; In: 8th International Conference on Hybrid Intelligent Systems; Pages: 73-77

Link to Repositum

Effective neighborhood structures for the generalized traveling salesman problem
Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimisation - EvoCOP 2008; Pages: 36-47

Link to Repositum

Variable neighborhood search for a prize collecting capacity constrained connected facility location problem
Leitner, Markus, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 2008 International Symposium on Applications and the Internet; Pages: 233-236

Link to Repositum

Combining (integer) linear programming techniques and metaheuristics for combinatorial optimization
Raidl, Günther, Puchinger, Jakob
Type: Book Contribution; In: Hybrid Metaheuristics. An Emergent Approach for Combinatorial Optimization; Vol: 114; Pages: 31-62

Link to Repositum

Combining forces to reconstruct strip shredded text documents
Prandtstetter, Matthias, Raidl, Günther
Type: Inproceedings; In: Hybrid Metaheuristics 2008; Pages: 175-189

Link to Repositum

Finding consensus trees by evolutionary, variable neighborhood search, and hybrid algorithms
Pirkwieser, Sandro, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 10th annual conference on Genetic and evolutionary computation; Pages: 323-330

Link to Repositum

Lagrangian decomposition, metaheuristics, and hybrid approaches for the design of the last mile in fiber optic networks
Leitner, Markus, Raidl, Günther
Type: Inproceedings; In: Hybrid Metaheuristics 2008; Pages: 158-174

Link to Repositum

A Lagrangian Relax-and-Cut Approach for the Bounded Diameter Minimum Spanning Tree Problem
Raidl, Günther, Gruber, Martin
Type: Inproceedings; In: Proceedings of the International Conference on Numerical Analysis and Applied Mathematics; Pages: 446-449

Link to Repositum

2007
Applying branch-and-cut for compressing fingerprint templates
Chwatal, Andreas, Raidl, Günther
Type: Presentation
Show Abstract
We consider the problem of encoding fingerprint minutiae data in a highly compact way. An application is the embedding as watermarks in images of ID-cards. Our new approach is based on connecting a subset of the given data points by an arborescence whose arcs can be efficiently encoded by indices into a small codebook. For achieving highest possible compression, the connected points, the arborescence, and the codebook are simultaneously optimized. We transform this problem into an extended variant of the minimum label spanning tree problem and solve an ILP formulation by branch-and-cut.

Link to Repositum

Reconstructing Sheets of Manually Torn Paper
Prandtstetter, Matthias, Raidl, Günther, Schüller, Peter
Type: Presentation

Link to Repositum

Determining orbital elements of extrasolar planets by evolution strategies.
Chwatal, Andreas, Raidl, Günther
Type: Inproceedings; In: EUROCAST 2007 Conference Proceedings

Link to Repositum

Determining orbital elements of extrasolar planets by evolution strategies.
Chwatal, Andreas, Raidl, Günther
Type: Inproceedings; In: Computer Aided Systems Theory - EUROCAST 2007; Pages: 870-877

Link to Repositum

Compressing fingerprint templates by solving an extended minimum label spanning tree problem.
Chwatal, Andreas, Raidl, Günther, Dietzel, Olivia
Type: Inproceedings; In: Proceedings of the Seventh Metaheuristics International Conference (MIC); Pages: 3

Link to Repositum

Combining Lagrangian decomposition with an evolutionary algorithm for the knapsack constrained maximum spanning tree problem
Pirkwieser, Sandro, Raidl, Günther, Puchinger, Jakob
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization: 7th European Conference, EvoCOP 2007, Valencia, Spain, April 11-13, 2007, Proceedings; Pages: 176-187

Link to Repositum

Variable Neighborhood Search for the Generalized Minimum Edge Biconnected Network Problem
Leitner, Markus, Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the International Network Optimization Conference 2007

Link to Repositum

A Directed Cut Model for the Design of the Last Mile in Real-World Fiber Optic Networks
Wagner, Daniel, Pferschy, Ulrich, Mutzel, Petra, Raidl, Günther, Bachhiesl, Peter
Type: Inproceedings; In: Proceedings of the International Network Optimization Conference 2007

Link to Repositum

Fingerprint template compression by solving a minimum label k-node subtree problem
Raidl, Günther, Chwatal, Andreas
Type: Inproceedings; In: International Conference on Numerical Analysis and Applied Mathematics; Pages: 444-447

Link to Repositum

Models and algorithms for three-stage two-dimensional bin packing
Puchinger, Jakob, Raidl, Günther
Type: Article; In: European Journal of Operational Research; Vol: 183; Issue: 3; Pages: 1304-1327
Show Abstract
We consider the three-stage two-dimensional bin packing problem (2BP) which occurs in real-world applications such as glass, paper, or steel cutting. We present new integer linear programming formulations: models for a restricted version and the original version of the problem are developed. Both only involve polynomial numbers of variables and constraints and effectively avoid symmetries. Those models are solved using CPLEX. Furthermore, a branch-and-price (B&P) algorithm is presented for a set covering formulation of the unrestricted problem, which corresponds to a Dantzig-Wolfe decomposition of the polynomially-sized model. We consider column generation stabilization in the B&P algorithm using dual-optimal inequalities. Fast column generation is performed by applying a hierarchy of four methods: (a) a fast greedy heuristic, (b) an evolutionary algorithm, (c) solving a restricted form of the pricing problem using CPLEX, and finally (d) solving the complete pricing problem using CPLEX. Computational experiments on standard benchmark instances document the benefits of the new approaches: The restricted version of the integer linear programming model can be used to quickly obtain near-optimal solutions. The unrestricted version is computationally more expensive. Column generation provides a strong lower bound for 3-stage 2BP. The combination of all four pricing algorithms and column generation stabilization in the proposed B&P framework yields the best results in terms of the average objective value, the average run-time, and the number of instances solved to proven optimality.

Link to Repositum

CyMATE: A new tool for methylation analysis of plant genomic DNA after bisulfite sequencing
Hetzl, Jennifer, Foerster, Andrea M., Raidl, Günther, Mittelsten Scheid, Ortrun
Type: Article; In: The Plant Journal; Vol: 51; Issue: 3; Pages: 526-536

Link to Repositum

A new approximation algorithm for bend minimization in the Kandinsky model.
Yildiz, Canan, Mutzel, Petra, Barth, Wilhelm
Type: Inproceedings; In: Graph Drawing - 14th International Symposium, GD 2006, Karlsruhe, Germany, September 18-20, 2006, Revised Papers

Link to Repositum

A Multi-Commodity Flow Approach for the Design of the Last Mile in Real-World Fiber Optic Networks.
Wagner, Daniel, Raidl, Günther, Pferschy, Ulrich, Mutzel, Petra, Bachhiesl, Peter
Type: Inproceedings; In: Operations Research Proceedings 2006, Karlsruhe

Link to Repositum

2006
Two Integer Linear Programming Approaches for Solving the Car Sequencing Problem
Prandtstetter, Matthias, Raidl, Günther
Type: Presentation

Link to Repositum

Combining Variable Neighborhood Search with Integer Linear Programming for the Generalized Minimum Spanning Tree Problem
Raidl, Günther, Hu, Bin, Leitner, Markus
Type: Presentation
Show Abstract
We consider the generalized version of the classical Minimum Spanning Tree problem where the nodes of a graph are partitioned into clusters and exactly one node from each cluster must be connected. We present a general Variable Neighborhood Search (VNS) approach which uses three different neighborhood types. Two of them work in complementary ways in order to maximize effectiveness. Both are large in the sense that they contain exponentially many candidate solutions, but efficient polynomial-time algorithms are used to identify best neighbors. The third neighborhood type uses Integer Linear Programming to solve parts of the problem to provable optimality. Tests on Euclidean and random instances with up to 1280 nodes indicate significant advantages over previously published metaheuristic approaches, especially on instances with many nodes per cluster.

Link to Repositum

Large Neighborhoods in Variable Neighborhood Search Approaches for Generalized Network Design Problems
Hu, Bin, Leitner, Markus, Raidl, Günther
Type: Presentation

Link to Repositum

Metaheuristics for Solving a Scheduling Problem in Car Manufacturing
Raidl, Günther
Type: Presentation

Link to Repositum

Cleaning of raw peptide MS/MS spectra: Improved protein identification following deconvolution of multiply charged peaks, isotope clusters, and removal of background noise.
Mujezinovic, Nedim, Raidl, Günther, Hutchins, J.R.A., Peters, J.M., Mechtler, K., Eisenhaber, F.
Type: Article; In: Proteomics; Vol: 6; Issue: 19; Pages: 5117-5131
Show Abstract
The dominant ions in MS/MS spectra of peptides, which have been fragmented by low-energy CID, are often b-, y-ions and their derivatives resulting from the cleavage of the peptide bonds. However, MS/MS spectra typically contain many more peaks. These can result not only from isotope variants and multiply charged replicates of the peptide fragmentation products but also from unknown fragmentation pathways, sample-specific or systematic chemical contaminations, or from noise generated by the electronic detection system. The presence of this background complicates spectrum interpretation. Besides dramatically prolonged computation time, it can lead to incorrect protein identification, especially in the case of de novo sequencing algorithms. Here, we present an algorithm for the detection and transformation of multiply charged peaks into singly charged monoisotopic peaks, and for the removal of heavy-isotope replicates and random noise. A quantitative criterion for the recognition of some noninterpretable spectra has been derived as a byproduct. The approach is based on numerical spectral analysis and signal detection methods. The algorithm has been implemented in a stand-alone computer program called MS Cleaner that can be obtained from the authors upon request.

Link to Repositum

Biased Mutation Operators for Subgraph-Selection Problems
Raidl, Günther, Koller, Gabriele, Julstrom, Bryant
Type: Article; In: IEEE Transactions on Evolutionary Computation; Vol: 10; Issue: 2; Pages: 145-156

Link to Repositum

Variable neighborhood descent with self-adaptive neighborhood-ordering.
Hu, Bin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 7th EU/MEeting on Adaptive, Self-Adaptive, and Multi-Level Metaheuristics

Link to Repositum

Evolutionary approach to constrained minimum spanning tree problem.
Pagacz, Anna, Raidl, Günther, Zawislak, Stan
Type: Inproceedings; In: Evolutionary Computation and Global Optimization 2006, Murzasichle, Poland; Pages: 331-341

Link to Repositum

Neighborhood searches for the bounded diameter minimum spanning tree problem embedded in a VNS, EA, and ACO.
Gruber, Martin, van Hemert, Jano, Raidl, Günther
Type: Inproceedings; In: Proceedings of the Genetic and Evolutionary Computation Conference - GECCO 2006; Pages: 1187-1194

Link to Repositum

A unified view on hybrid metaheuristics.
Raidl, Günther
Type: Inproceedings; In: Hybrid Metaheuristics - Third International Workshop, HM 2006, Gran Canaria, Spain, October 13-14, 2006, Proceedings; Pages: 1-12

Link to Repositum

The core concept for the multidimensional knapsack problem.
Puchinger, Jakob, Raidl, Günther, Pferschy, Ulrich
Type: Inproceedings; In: Evolutionary Computation in Combinatorial Optimization - EvoCOP 2006, volume 3906 of LNCS; Pages: 195-208

Link to Repositum

Evolutionary Computation in Combinatorial Optimization - 6th European Conference, EvoCOP 2006, Budapest, Hungary, April 10-12, 2006. Proceedings.
Authors not available
Type: Proceedings
Show Abstract
This book constitutes the refereed proceedings of the 6th European Conference on Evolutionary Computation in Combinatorial Optimization, EvoCOP 2006, held in Budapest, Hungary in April 2006. The 24 revised full papers presented were carefully reviewed and selected from 77 submissions. The papers cover evolutionary algorithms as well as various other metaheuristics, like scatter search, tabu search, memetic algorithms, variable neighborhood search, greedy randomized adaptive search procedures, ant colony optimization, and particle swarm optimization algorithms. The papers deal with representations, heuristics, analysis of problem structures, and comparisons of algorithms. The list of studied combinatorial optimization problems includes prominent examples like graph coloring, knapsack problems, the traveling salesperson problem, scheduling, graph matching, as well as specific real-world problems.

Link to Repositum

Models and algorithms for three-stage two-dimensional bin packing.
Puchinger, Jakob, Raidl, Günther
Type: Report

Link to Repositum

Combining variable neighborhood search with integer linear programming for the generalized minimum spanning tree problem.
Hu, Bin, Leitner, Markus, Raidl, Günther
Type: Report

Link to Repositum

Bringing order into the neighborhoods: Relaxation guided variable neighborhood search.
Puchinger, Jakob, Raidl, Günther
Type: Report

Link to Repositum

2005
Combining metaheuristics and exact algorithms in combinatorial optimization: A survey and classification.
Puchinger, Jakob, Raidl, Günther
Type: Inproceedings; In: Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach; Pages: 41-53
Show Abstract
In this survey we discuss different state-of-the-art approaches of combining exact algorithms and metaheuristics to solve combinatorial optimization problems. Some of these hybrids mainly aim at providing optimal solutions in shorter time, while others primarily focus on getting better heuristic solutions. The two main categories in which we divide the approaches are collaborative versus integrative combinations. We further classify the different techniques in a hierarchical way. Altogether, the surveyed work on combinations of exact algorithms and metaheuristics documents the usefulness and strong potential of this research direction.

Link to Repositum

Relaxation guided variable neighborhood search.
Puchinger, Jakob, Raidl, Günther
Type: Inproceedings; In: Proceedings of the XVIII Mini EURO Conference on VNS

Link to Repositum

Variable neighborhood search for the bounded diameter minimum spanning tree problem.
Gruber, Martin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 18th Mini Euro Conference on Variable Neighborhood Search

Link to Repositum

A new 0-1 ILP approach for the bounded diameter minimum spanning tree problem.
Gruber, Martin, Raidl, Günther
Type: Inproceedings; In: Proceedings of the 2nd International Network Optimization Conference; Pages: 178-185

Link to Repositum

A variable neighborhood search approach for solving the car sequencing problem.
Prandtstetter, Matthias, Raidl, Günther
Type: Inproceedings; In: Proceedings of the XVIII Mini EURO Conference on VNS

Link to Repositum

Computing generalized minimum spanning trees with variable neighborhood search.
Hu, Bin, Leitner, Markus, Raidl, Günther
Type: Inproceedings; In: Proceedings of the XVIII Mini EURO Conference on VNS

Link to Repositum

Cooperating memetic and branch-and-cut algorithms for solving the multidimensional knapsack problem.
Puchinger, Jakob, Raidl, Günther, Gruber, Martin
Type: Inproceedings; In: Proceedings of MIC2005, the 6th Metaheuristics International Conference
Show Abstract
Recent work in combinatorial optimization indicates the high potential of combining metaheuristics with integer linear programming (ILP) techniques. We study a hybrid system in which a memetic algorithm (MA) and a general purpose ILP solver based on branch-and-cut (B&C) are executed in parallel and continuously exchange information in a bidirectional, asynchronous way. As target problem, we consider the multidimensional knapsack problem (MKP). The memetic algorithm uses a direct binary encoding of candidate solutions and repair and local improvement strategies that are steered by pseudo-utility ratios. As B&C framework we use the commercial ILP solver CPLEX. The information exchanged between the two heterogeneous algorithms consists of the so-far best primal solutions and promising dual variable values of solutions to certain linear programming relaxations. These dual variable values are used in the MA to update the pseudo-utility ratios of local improvement and repair. We will see that this combination of a metaheuristic and an exact optimization method is able to benefit from synergy: Experimental results document that within the same limited total time, the cooperative system yields better heuristic solutions than each algorithm alone. In particular, the cooperative system also competes well with today's best algorithms for the MKP, needing substantially shorter total running times.

Link to Repositum

Evolutionary Computation in Combinatorial Optimization, 5th European Conference, EvoCOP 2005, Lausanne, Switzerland, March 30 - April 1, 2005, Proceedings (LNCS 3448)
Authors not available
Type: Proceedings
Show Abstract
This book constitutes the refereed proceedings of the 5th European Conference on Evolutionary Computation in Combinatorial Optimization, EvoCOP 2005, held in Lausanne, Switzerland in March/April 2005. The 24 revised full papers presented were carefully reviewed and selected from 66 submissions. The papers cover evolutionary algorithms as well as related approaches like scatter search, simulated annealing, ant colony optimization, immune algorithms, variable neighborhood search, hyperheuristics, and estimation of distribution algorithms. The papers deal with representations, analysis of operators and fitness landscapes, and comparisons of algorithms. Among the combinatorial optimization problems studied are graph coloring, quadratic assignment, knapsack, graph matching, packing, scheduling, timetabling, lot-sizing, and the traveling salesman problem.

Link to Repositum

GECCO: Genetic and Evolutionary Computation Conference Proceedings
Authors not available
Type: Proceedings

Link to Repositum

Empirical analysis of locality, heritability and heuristic bias in evolutionary algorithms: A case study for the multidimensional knapsack problem.
Raidl, Günther, Gottlieb, Jens
Type: Article; In: Evolutionary Computation; Vol: 13; Issue: 4; Pages: 441-475
Show Abstract
Our main aim is to provide guidelines and practical help for the design of appropriate representations and operators for evolutionary algorithms (EAs). For this purpose, we propose techniques to obtain a better understanding of various effects in the interplay of the representation and the operators. We study six different representations and associated variation operators in the context of a steady-state evolutionary algorithm for the multidimensional knapsack problem. Four of them are indirect decoder-based techniques, and two are direct encodings combined with different initialization, repair, and local improvement strategies. The complex decoders and the local improvement and repair strategies make it practically impossible to completely analyze such EAs in a fully theoretical way. After comparing the general performance of the chosen EA variants for the multidimensional knapsack problem on two benchmark suites, we present a hands-on approach for empirically analyzing important aspects of initialization, mutation, and crossover in an isolated fashion. Static, inexpensive measurements based on randomly created solutions are performed in order to quantify and visualize specific properties with respect to heuristic bias, locality, and heritability. These tests shed light onto the complex behavior of such EAs and point out reasons for good or bad performance. In addition, the proposed measures are also examined during actual EA runs, which gives further insight into dynamic aspects of evolutionary search and verifies the validity of the isolated static measurements. All measurements are described in a general way, allowing for an easy adaption to other representations and problems.

Link to Repositum

A variable neighborhood search approach for solving the car sequencing problem.
Prandtstetter, Matthias, Raidl, Günther
Type: Report

Link to Repositum

Solving the prize-collecting Steiner tree problem to optimality.
Ljubic, Ivana, Weiskircher, René, Pferschy, Ulrich, Klau, Gunnar Werner, Mutzel, Petra, Fischetti, Matteo
Type: Inproceedings; In: Proceedings of the Seventh Workshop on Algorithm Engineering and Experiments and the Second Workshop on Analytic Algorithmics and Combinatorics (ALENEX/ANALCO)

Link to Repositum

An algorithmic framework for the exact solution of the prize-collecting Steiner tree problem.
Ljubic, Ivana, Weiskircher, René, Pferschy, Ulrich, Klau, Gunnar Werner, Mutzel, Petra, Fischetti, Matteo
Type: Article; In: Mathematical Programming; Vol: 105; Issue: 2-3; Pages: 427-449

Link to Repositum

2004
Generation as Method for Explorative Learning in Computer Science Education
Kerren, Andreas
Type: Presentation

Link to Repositum

Maximum Planar Subgraphs and Crossing Minimization in Graph Drawing
Mutzel, Petra
Type: Presentation

Link to Repositum

Die SPQR-Baum Datenstruktur im Graph Drawing
Mutzel, Petra
Type: Presentation

Link to Repositum

Hybrid Estimation of Distribution Algorithm for Multiobjective Knapsack Problems
Raidl, Günther
Type: Presentation

Link to Repositum

Some Thoughts on How to Make Memetic Algorithms for Multiobjective Knapsack Problems
Raidl, Günther
Type: Presentation

Link to Repositum

Anwendung der Computational Intelligence in der Kombinatorischen Optimierung
Mutzel, Petra
Type: Presentation

Link to Repositum

An Evolutionary Algorithm for Column Generation in Integer Programming: An Effective Approach for 2D Bin Packing
Raidl, Günther
Type: Presentation

Link to Repositum

An Evolutionary Algorithm for the Maximum Weight Trace Formulation of the Multiple Sequence Alignment Problem
Raidl, Günther
Type: Presentation

Link to Repositum

New ILP Approaches for 3-Staged Two-Dimensional Bin Packing
Raidl, Günther
Type: Presentation

Link to Repositum

Solving a real-world glass cutting problem
Puchinger, Jakob, Raidl, Günther, Koller, Gabriele
Type: Presentation

Link to Repositum

An Evolutionary Algorithm for the Maximum Weight Trace Formulation of the Multiple Sequence Alignment Problem
Koller, Gabriele
Type: Presentation

Link to Repositum

Automatic Graph Drawing
Weiskircher, René
Type: Presentation

Link to Repositum

DGCVis: An Exploratory 3D Visualization of Graph Pyramids
Kerren, Andreas
Type: Presentation

Link to Repositum

Empirical analysis of locality, heritability and heuristic bias in evolutionary algorithms: A case study for the multidimensional knapsack problem.
Raidl, Günther, Gottlieb, Jens
Type: Report

Link to Repositum

Models and algorithms for three-stage two-dimensional bin packing.
Puchinger, Jakob, Raidl, Günther
Type: Report

Link to Repositum

Biased mutation operators for subgraph-selection problems.
Raidl, Günther, Koller, Gabriele, Julstrom, Bryant
Type: Report

Link to Repositum

Bend minimization in planar orthogonal drawings using integer programming.
Mutzel, Petra, Weiskircher, René
Type: Report

Link to Repositum

An evolutionary algorithm for the maximum weight trace formulation of the multiple sequence alignment problem.
Koller, Gabriele, Raidl, Günther
Type: Inproceedings; In: Parallel Problem Solving from Nature - PPSN VIII; Vol: 3242; Pages: 302-311

Link to Repositum

Optimal Robust Non-Unique Probe Selection Using Integer Linear Programming
Klau, Gunnar Werner
Type: Presentation

Link to Repositum

Probe Selection by Integer Linear Programming
Klau, Gunnar Werner
Type: Presentation

Link to Repositum

Solving the prize-collecting Steiner tree problem to optimality.
Ljubic, Ivana, Weiskircher, René, Pferschy, Ulrich, Klau, Gunnar Werner, Mutzel, Petra, Fischetti, Matteo
Type: Report

Link to Repositum

Optimale Probenauswahl für Microarrays bei mehrdeutigen Proben
Klau, Gunnar Werner
Type: Presentation

Link to Repositum

Structural Alignment of Two RNA Sequences with Lagrangian Relaxation
Klau, Gunnar Werner
Type: Presentation

Link to Repositum

Human-Guided Search: System, Current and Future Work
Klau, Gunnar Werner
Type: Presentation

Link to Repositum

Mathematik in der Bioinformatik
Klau, Gunnar Werner
Type: Presentation

Link to Repositum

Robustness and Resilience
Klau, Gunnar Werner
Type: Presentation

Link to Repositum

Optimal Robust Non-Unique Probe Selection Using Integer Linear Programming
Klau, Gunnar Werner
Type: Presentation

Link to Repositum

Non-planar orthogonal drawings with fixed topology
Chimani, Markus, Klau, Gunnar Werner, Weiskircher, René
Type: Report

Link to Repositum

An Improved Hybrid Genetic Algorithm for the Generalized Assignment Problem
Raidl, Günther
Type: Presentation

Link to Repositum

2003
SPQR-Trees in Automatic Graph Drawing
Mutzel, Petra
Type: Presentation

Link to Repositum

Neue heuristische Lösungsansätze für das Multiple Sequence Alignment Problem
Raidl, Günther
Type: Presentation

Link to Repositum

Evolutionary Computation for Combinatorial Optimization
Raidl, Günther
Type: Presentation

Link to Repositum

Effizientes Zählen von Schnittpunkten
Mutzel, Petra
Type: Presentation

Link to Repositum

An O(n log n) Algorithm for the Fractional Prize Collecting Steiner Tree Problem on Trees
Weiskircher, René
Type: Presentation

Link to Repositum

Neue Anwendungen von SPQR-Bäumen im Graphenzeichnen
Weiskircher, René
Type: Presentation

Link to Repositum

On the Hybridization of Evolutionary Algorithms
Raidl, Günther
Type: Presentation

Link to Repositum

Recent Advances on ArchEd
Mutzel, Petra
Type: Presentation

Link to Repositum

The fractional prize-collecting Steiner tree problem on trees
Weiskircher, René, Klau, Gunnar Werner, Ljubic, Ivana, Mutzel, Petra, Pferschy, Ulrich
Type: Report

Link to Repositum

Force-based label number maximization
Ebner, Dietmar, Klau, Gunnar Werner, Weiskircher, René
Type: Report

Link to Repositum