CSD8 @ Mons

Program

The booklet of CSD8 (containing the abstracts and other information) is still available for download here. Below, you can access the list of talks and abstracts. For most of the talks, you will also find the slides.

List of plenary talks

  • Hertz, Alain (GERAD and Ecole Polytechnique de Montréal, Québec, Canada)
    [Download slides]
    Computers versus human brains: a cooperative game for scientific discoveries

    Many of my scientific discoveries would have been impossible without the help of computers. In this talk, I will present three of them, showing for each one how computers have been a supportive partner in proving new theorems or characterizing graph families.

    I will first consider carbon chemistry. Alkanes are organic compounds composed exclusively of carbon and hydrogen atoms. They can be represented by a carbon graph where each carbon atom is a vertex and chemical bonds are represented by edges. Chemists are interested in determining which alkanes optimize Adriatic indices. Using my brain, I could prove necessary and sufficient conditions for the existence of a carbon graph with given numbers mij of edges with end-degrees i and j. These conditions can be formulated as a linear integer program. Computers are therefore helpful for optimizing Adriatic indices under the aforementioned conditions, and this allowed me to characterize, for example, which alkanes have minimum Randić index.
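
    To make this kind of degree-based invariant concrete, here is a minimal sketch computing one of them, assuming the standard definition of the Randić index as the sum of (d(u)d(v))^(-1/2) over all edges uv; the example molecule is chosen for illustration only and is not taken from the talk.

        import networkx as nx
        from math import sqrt

        def randic_index(G):
            # Sum of 1/sqrt(d(u)*d(v)) over all edges uv of G.
            return sum(1 / sqrt(G.degree(u) * G.degree(v)) for u, v in G.edges)

        # Carbon graph of 2-methylbutane: vertices are carbon atoms, edges are bonds.
        G = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4)])
        print(randic_index(G))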

    The second result is on forbidden subgraph characterizations (FSC). Given a class of graphs F, an FSC is a set of graphs H such that a graph G belongs to F if and only if no graph of H is isomorphic to an induced subgraph of G. I will show how it is possible to automate the generation of conjectures on FSCs. With this technique, using computers, I could generate conjectures on the domination, independence and irredundance of graphs. Using my human brain, I could then prove the correctness of these conjectures.

    With the help of computer systems having interactive functions that allow one to add or remove vertices or edges, while indicating the impact on the value of graph invariants, it is possible to come up with conjectures stating that, among a family H of graphs, a subfamily H* contains all those that minimize a given invariant. In order to prove the correctness of such conjectures, one can use a proof technique called proof by transformation. It consists of defining a set of transformations and showing that for each graph G in H which is not in H*, at least one of these transformations yields a new graph G' in H with a better value of the considered invariant. I will illustrate this proof technique with several examples.

  • Kardoš, František (LABRI, Université de Bordeaux, France)
    Hamilton cycles in cubic planar graphs

    Tait conjectured in 1884 that each cubic planar graph contains a Hamilton cycle. Had the conjecture been true, it would have implied the Four Color Theorem. However, it was disproved by Tutte in 1946. Later on, other counterexamples with different structural properties were found. On the other hand, hamiltonicity was proved for several subclasses of cubic planar graphs. In general, the problem of deciding whether a cubic planar graph contains a Hamilton cycle turned out to be NP-complete.

    All known counterexamples to Tait's conjecture contain (i) odd cycles and (ii) faces of large size. That is why Barnette formulated, in the 1960s, two conjectures in the form of sufficient conditions for the hamiltonicity of cubic planar graphs: he conjectured that bipartite cubic planar graphs, as well as cubic planar graphs with faces of size at most 6, are hamiltonian. We will present results, methods and ideas leading to a computer-assisted proof of the latter.

  • Nagar, Alessandro (Centro Fermi and INFN, Torino, Italy)
    Gravitational waves from binary black hole coalescence: interfacing supercomputer calculations with analytical methods

    The direct detection by the LIGO/Virgo collaboration of the gravitational waves emitted by three coalescing black hole binaries (GW150914, GW151226 and GW170104) has marked the beginning of gravitational wave (GW) astronomy. The correct identification of these gravitational signals and the deduction of the physical properties of the two objects, as well as of the final black hole, crucially rely on gravitational waveform models. In this talk I will outline the complementarity between numerical relativity (supercomputer) simulations and analytical waveform models (based on the effective-one-body description of the two-body problem in Einstein's general theory of relativity) used to compute the (many thousands of) analytical waveform templates needed for the actual LIGO/Virgo data analysis.

  • Radziszowski, Stanisław (Rochester Institute of Technology, New York, USA)
    [Download slides]
    Computers in Ramsey Theory

    Ramsey theory is often regarded as the study of how order emerges from randomness. While it originated in mathematical logic, it has applications in geometry, number theory, game theory, information theory, approximation algorithms, and other areas of mathematics and theoretical computer science.

    Ramsey theory studies the conditions under which a combinatorial object necessarily contains some smaller given objects. The central concept in Ramsey theory is that of arrowing, which in the case of graphs describes when colorings of larger graphs necessarily contain monochromatic copies of given smaller graphs. The role of Ramsey numbers is to quantify some of the general existential theorems in Ramsey theory, always involving arrowing. Determining whether this arrowing holds is notoriously difficult, and thus it leads to numerous computational challenges concerning various types of Ramsey numbers and closely related Folkman numbers.
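
    As a minimal illustration of arrowing (a brute-force sketch, not one of the techniques of the talk): K6 arrows (K3, K3), i.e. every 2-coloring of its edges contains a monochromatic triangle, while K5 does not; together these facts give the classical Ramsey number R(3,3) = 6.

        from itertools import combinations, product

        def has_mono_triangle(n, color):
            # color maps each edge (a, b) with a < b to 0 or 1.
            return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
                       for a, b, c in combinations(range(n), 3))

        def arrows_triangle(n):
            # Does K_n -> (K_3, K_3), i.e. does every 2-coloring of the edges
            # of K_n contain a monochromatic triangle?
            edges = list(combinations(range(n), 2))
            return all(has_mono_triangle(n, dict(zip(edges, bits)))
                       for bits in product((0, 1), repeat=len(edges)))

        print(arrows_triangle(5), arrows_triangle(6))  # False True, so R(3,3) = 6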

    This talk will give an overview of how computers are increasingly used to study the bounds on Ramsey and Folkman numbers, and properties of Ramsey arrowing in general. This is happening in an area where traditional approaches typically call for classical computer-free proofs. It is evident that we now understand Ramsey theory much better than a few decades ago, increasingly due to computations. Further progress and new insights based on computations should be anticipated.

  • Remacle, Françoise (Department of Chemistry, Université de Liège, Belgium)
    [Download slides]
    Optical and electrical parallel molecular and nanoscale computing

    A complete description of the state of a system requires, in quantum mechanics, measuring a complete set of observables. Typically this is prohibitive, but we will discuss how it can be implemented in two experimental setups where only a finite number of levels are accessible: (i) optically, in two-dimensional photon-echo spectroscopy, which allows resolving dynamical pathways on electronic and vibrational molecular quantum states; (ii) electrically, by addressing the electronic states of dopant structures in Si using voltage pulses. In both cases, the physical interaction between the perturbation and the state of the system is binary, which thus provides a natural setting for implementing bilinear logic. The logic operations that can be implemented are molecular decision trees and logic function decomposition, logic operations where all possible values of the inputs are processed in parallel and the outputs are read simultaneously.

    We will discuss the implementation of these logic operations both optically and electrically. Optically, we probe the laser-induced dynamics of populations and coherences in a Rhodamine dye mounted on a short DNA duplex: the inputs are provided by the bilinear interactions between the molecule and the laser pulses, and the output values are read from the two-dimensional molecular response at specific frequencies. We will then discuss the implementation of logic function decomposition on dopant molecules embedded in Si, using electric addressing by pulses of gate voltages and reading the output by scanning tunneling spectroscopy.

  • Szöllősi, Ferenc (School of Electrical Engineering, Aalto University, Finland)
    [Download slides]
    New Results on Equiangular Lines

    A set of n lines, spanned by the unit vectors v1, v2, ..., vn in the d-dimensional Euclidean space ℝ^d, is called equiangular if there is a common angle α such that |⟨vi, vj⟩| = α for all i ≠ j. The concept of equiangular lines was introduced by J. Haantjes, and its basic theory was developed by J.J. Seidel and his coauthors during the 1970s.
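
    As a small numerical illustration of the definition (a sketch only, using the classical fact that the six main diagonals of the icosahedron span 6 equiangular lines in ℝ^3, attaining N(3) = 6):

        import numpy as np
        from itertools import combinations

        phi = (1 + 5 ** 0.5) / 2
        # One vector per main diagonal of the icosahedron, then normalized.
        V = np.array([(0, 1, phi), (0, 1, -phi), (1, phi, 0),
                      (1, -phi, 0), (phi, 0, 1), (-phi, 0, 1)], dtype=float)
        V /= np.linalg.norm(V, axis=1, keepdims=True)

        cosines = {round(abs(u @ w), 12) for u, w in combinations(V, 2)}
        print(cosines)  # a single value, 1/sqrt(5) ~ 0.4472: the lines are equiangular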

    The problem of determining the maximum number of equiangular lines in dimension d, denoted by N(d), was a central problem in the past, and has also received considerable attention recently. In the 1970s it was already observed by Gerzon that N(d) grows at most quadratically in d, in particular N(d) ≤ d(d+1)/2; but it took nearly 30 years until D. de Caen came up with the first quadratic lower bound on N(d) in 2000. B. Bukh, in a breakthrough paper, proved a linear upper bound on the number of lines when the common angle α is fixed (i.e. when it does not depend on d).

    While the problem has now been settled in the asymptotic sense, the exact value of N(d) is known only for a few relatively small values of d. For years there was a consensus within the community that the cases d ≤ 18 are completely understood; however, certain recently discovered gaps in the literature revealed that several cases thought to be solved are in fact still open.

    The following table summarizes, to the best of my current knowledge, the situation for d ≤ 23:

    d    | 2 | 3-4 | 5  | 6  | 7-13 | 14    | 15 | 16    | 17    | 18    | 19    | 20    | 21  | 22  | 23
    N(d) | 3 | 6   | 10 | 16 | 28   | 28-29 | 36 | 40-41 | 48-50 | 54-60 | 72-75 | 90-95 | 126 | 176 | 276

    In this talk I will give an overview of some of these exciting new developments, and as a case study I will explain how computer-aided experiments led to the recent, rather surprising discovery of a system of 54 equiangular lines in ℝ^18.

List of contributed talks

  • Arndt, Joerg (Technische Hochschule Nuernberg, Germany)
    [Download slides] [Download paper]
    Plane-filling curves on all uniform grids

    We describe a search for plane-filling curves traversing all edges of a grid once. The curves are given by Lindenmayer systems with only one non-constant letter. All such curves for small orders on three grids have been found. For all uniform grids we show how curves traversing all points once can be obtained from the curves found. Curves traversing all edges once are described for the four uniform grids where they exist.
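
    The specific one-letter systems of the talk are not reproduced here; as generic background only, here is a minimal string-rewriting sketch of a Lindenmayer system, using the classical two-letter Hilbert-curve rules on the square grid (F = draw an edge, + and - = turns):

        def lsystem(axiom, rules, iterations):
            # Repeatedly replace every letter by its production
            # (letters without a rule are constants and map to themselves).
            s = axiom
            for _ in range(iterations):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        # Classical Hilbert-curve rules (illustration only, not from the talk).
        hilbert = {"A": "-BF+AFA+FB-", "B": "+AF-BFB-FA+"}
        print(lsystem("A", hilbert, 2))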

  • Brinkmann, Gunnar (Universiteit Gent, Belgium)
    [Download slides]
    Goldberg operations as a formal approach to operations on polyhedra

    Local operations on polyhedra that preserve the symmetries of the polyhedron have a long history. If you consider, e.g., the Archimedean solids, already their names -- e.g. truncated cube or snub dodecahedron -- say how they were obtained: by applying an operation to a Platonic solid. Nevertheless, it is not quite clear whether the names date back to Archimedes or were introduced by Kepler, who rediscovered the Archimedean solids. When asked what a "local symmetry-preserving operation" is, a typical answer would not be a definition, but "for example truncation". Without a proper definition, properties of the class of such operations of course cannot be studied. In this talk we will describe approaches of Goldberg, Caspar, Klug and Coxeter to decorate polyhedra, show that the famous Goldberg/Coxeter operation was in fact first proposed by Caspar and Klug, show that the often cited paper by Goldberg has an -- obvious -- error, and finally use the (corrected) approach of Goldberg for a formal approach to local operations on polyhedra that preserve symmetries.

  • Camby, Eglantine (Université Libre de Bruxelles, Belgium)
    [Download slides]
    The software GraphsInGraphs and applications

    Studying invariants depending on graphs and their induced subgraphs is currently a growing research topic. Perfect graphs are a well-known example, since the chromatic number and the size of the largest clique must be equal in every induced subgraph.

    Moreover, some authors have investigated particular classes of graphs determined by a characterization of their induced subgraphs. For instance, a cograph is a graph such that the path on 4 vertices is not one of its induced subgraphs.

    In this talk, we present a new computer software dedicated to the study of graphs and their induced subgraphs, called GraphsInGraphs. We explain the structure of our system and how it works. In the second part of the talk, some applications of the software are presented.

    (With Gilles Caporossi)

  • Caporossi, Gilles (HEC Montréal and GERAD, Canada)
    Exploration of distances in graphs

    When distances in graphs are involved, the geodesic distance is usually considered. While it is certainly the easiest to compute and to study, it turns out that other distance measures are worth considering.

    Klein and Randic (1993) proposed the resistance distance, which is based upon an analogy between the graph and a system of resistors. From a technical point of view, this distance involves the Laplacian of the graph (also named the Kirchhoff matrix) and its pseudoinverse. The properties of the pseudoinverse of the Laplacian make it a good candidate for the spatial representation of graphs.
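
    A minimal sketch of this computation, assuming the standard identity Ω(i,j) = L⁺(i,i) + L⁺(j,j) - 2 L⁺(i,j), where L⁺ is the pseudoinverse of the Laplacian:

        import numpy as np
        import networkx as nx

        def resistance_distance(G):
            # Pairwise resistance distances from the Laplacian pseudoinverse.
            L = nx.laplacian_matrix(G).toarray().astype(float)
            Lp = np.linalg.pinv(L)
            d = np.diag(Lp)
            return d[:, None] + d[None, :] - 2 * Lp

        G = nx.cycle_graph(4)  # opposite vertices: two 2-ohm paths in parallel
        print(resistance_distance(G).round(3))  # 2*2/(2+2) = 1 for opposite pairs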

    Random walks on a graph are a process associated with the same mathematical model (Stephenson and Zelen, 1989; Newman, 2005) and may be used for another distance definition, the expected random walk distance (Camby et al., 2016).

    In this talk, we will explore various distance measures in graphs, derived from different contexts, that are meaningful for the analysis of complex networks, and discuss some of their respective properties. Finally, some conjectures obtained by AutoGraphiX-III will be proposed concerning their bounds and the graphs for which they are attained.

    (With Eglantine Camby)

  • Chauvin, Remi (CNRS and Université de Toulouse, France)
    [Download slides]
    Total and partial matching polynomials of polycyclic graphs: canonical hyper-complex numbers of the Cayley-Dickson algebra in the quantification of chemical aromaticity

    The proposed talk is a follow-up to a former presentation at CSD7 in Richmond in 2015, where the chemical interpretation of the matching (acyclic) polynomial of a chemical graph, serving to define the topological resonance energy of its cycles, namely their quantitative aromaticity, was developed.

    Within this context, the nth algebra of the Cayley-Dickson process, 𝔸n, of vectorial dimension 2^n, is formally associated with the class of polycyclic graphs Gn with n fundamental circuits: 2^n is the number of hyper-generalized circuits in Gn (including the empty circuit). Indeed, 𝔸n is multiplicatively generated from the canonical basis set of 𝔸n-1 and a new hyper-complex element associated with the fundamental circuit added to Gn-1 to define Gn. The canonical basis set of 𝔸n contains 1 = e0 and 2^n - 1 hyper-complex numbers er, r = 1, ..., 2^n - 1, such that er² = -1 (more generally, the multiplication table of the canonical basis elements and their negatives shows that they constitute a loop Ln). The generating set of this basis contains only n of the hyper-complex numbers er, r = 0, ..., n-1. For example (the doubling itself is sketched in code after the list):

    • {e0 = 1} for 𝔸0 = ℝ, the ordered field of real numbers, associated with acyclic graphs;
    • {e0 = 1, e1 = i} for 𝔸1 = ℂ, the field of complex numbers, associated with unicyclic graphs;
    • {e0 = 1, e1 = i, e2 = j} for 𝔸2 = ℍ, the quaternion skew-field, associated with bicyclic graphs (e3 = k = ij);
    • {e0 = 1, e1 = i, e2 = j, e4 = l} for 𝔸3 = 𝕆, the octonion algebra, associated with tricyclic graphs (e3 = k = ij);
    • {e0 = 1, e1 = i, e2 = j, e4 = l, e8 = o} for 𝔸4 = 𝕊, the sedenion algebra, associated with tetracyclic graphs (e3 = k = e1e2, e5 = m = e1e4, e6 = n = e2e4, e7 = p = e5e2).
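
    A minimal computational sketch of the Cayley-Dickson doubling itself (using one of several common sign conventions, so indicative only): an element of 𝔸n is a nested pair over 𝔸n-1, and the quaternion relations ij = k = -ji fall out.

        def neg(x):
            return -x if isinstance(x, (int, float)) else (neg(x[0]), neg(x[1]))

        def conj(x):
            # Conjugation: identity on reals, (a, b)* = (a*, -b) one level up.
            return x if isinstance(x, (int, float)) else (conj(x[0]), neg(x[1]))

        def add(x, y):
            if isinstance(x, (int, float)):
                return x + y
            return (add(x[0], y[0]), add(x[1], y[1]))

        def mul(x, y):
            # Doubling rule (one standard convention): (a,b)(c,d) = (ac - d*b, da + bc*).
            if isinstance(x, (int, float)):
                return x * y
            (a, b), (c, d) = x, y
            return (add(mul(a, c), neg(mul(conj(d), b))),
                    add(mul(d, a), mul(b, conj(c))))

        # Quaternions as pairs of pairs of reals.
        i = ((0, 1), (0, 0))
        j = ((0, 0), (1, 0))
        k = ((0, 0), (0, 1))
        print(mul(i, j) == k, mul(j, i) == neg(k))  # True True: non-commutative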

    The hyper-complex number er is actually the normalized edge-weighting coefficient allowing the "acyclization" of circuits containing the edge of Gn weighted by er, with all the other edges weighted by +1 (or -1). Edge weighting by -1 corresponds to the Möbius-twisting of circuits containing this edge. The process is a generalization of a method proposed by A. Graovac et al. for bicyclic graphs with quaternionic weights.

    All the partial matching polynomials of Gn are found to be calculable from the corresponding formal secular determinants by assuming that the standard multiplication rules (commutativity, associativity, existence of an iso-normed inverse) apply to those particular canonical elements of the algebra (which, however, quickly becomes non-commutative for n ≥ 2, non-associative for n ≥ 3, ...). Though related to the loop structure of Ln, this numerical observation is not proven in the general case, and its generalization is proposed as a conjecture.

  • De Boeck, Jérôme (Université Libre de Bruxelles, Belgium)
    Dynamic programming approach for bidding problems on day-ahead markets

    In several markets, such as the electricity market, spot prices are determined via a bidding system involving an oligopoly of producers and a system operator. Once time-dependent price-quantity bids are placed by each producer for its production units, the system operator determines a production schedule that meets demand at minimal cost. The spot price charged to the customers is then set to the marginal production cost.

    Fampa et al. have considered the problem faced by a profit-maximizing producer, whose bids depend on the behaviour of the system operator as well as on the stochastic nature of final demand; this problem can be cast within the framework of stochastic bilevel programming.

    In this presentation, we consider an enhanced model that embeds two key features, namely the uncertainty related to competitors' bids, as well as the impact of spot prices on demand. Our aim is to develop efficient solution algorithms for addressing instances involving a large number of scenarios.

    Under the assumptions that production costs are linear and that demand is piecewise constant, the bilevel model can be reformulated as a large mixed integer program. Although this problem becomes numerically intractable as the number of scenarios increases, it becomes much simpler when producers are allowed to place different price-quantity bids for a given generator. This relaxation of the original problem can then be solved in polynomial time by a dynamic programming algorithm. This algorithm can then be adapted to heuristically solve the original problem, quickly yielding feasible solutions characterized by small optimality gaps. The performance of the method has been tested on instances inspired by the Brazilian Electric System National Operator.

    (With Martine Labbé, Etienne Marcotte, Patrice Marcotte and Gilles Savard)

  • Devillez, Gauvain (Université de Mons, Belgium)
    [Download slides]
    Transproof: Computer-assisted graph transformations

    Over the years, several software tools have appeared to help find conjectures in Extremal Graph Theory. One of these tools is PHOEG, which is currently being developed in our team as a successor to GraPHedron. The conjectures usually consist of the definition of a class of graphs, called extremal graphs, that realize a bound for some graph invariant under some constraints. These constraints can vary from fixing the value of another invariant to restricting the graphs to a specific class.

    An example of a graph invariant is the average eccentricity. Let G = (V,E) be a graph and v ∈ V a vertex of G; the eccentricity of v is the maximal distance between v and any other vertex of G. The average eccentricity, as well as the average distance, is used in a conjecture (Aouchiche, PhD Thesis, 2006) stating that among all connected graphs on n vertices, the graph that maximizes the difference between the average eccentricity and the average distance is the path on n vertices.
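
    These two invariants are easy to compare experimentally; a minimal sketch (comparing only a few standard families, which of course proves nothing):

        import networkx as nx

        def avg_ecc_minus_avg_dist(G):
            ecc = nx.eccentricity(G)  # max distance from each vertex
            avg_ecc = sum(ecc.values()) / G.number_of_nodes()
            return avg_ecc - nx.average_shortest_path_length(G)

        n = 8
        examples = {"path": nx.path_graph(n), "cycle": nx.cycle_graph(n),
                    "star": nx.star_graph(n - 1), "complete": nx.complete_graph(n)}
        for name, G in examples.items():
            # The path attains the largest difference among these examples.
            print(name, round(avg_ecc_minus_avg_dist(G), 3))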

    This conjecture is one among many others generated by the existing tools. Indeed, while some of them provide support to filter trivial or false conjectures, they still produce a large number of non-trivial ones.

    To help researchers handle this quantity of conjectures, we are developing a PHOEG module called Transproof. This module enables its users to study graph transformations. These transformations can then be used to prove the extremality of a class of graphs. This is done by proving that any non-extremal graph concerned by the conjecture can be transformed into a new graph whose value for the studied invariant is strictly closer to the conjectured bound. Such a proof technique is called a proof by transformation and is widely used in Extremal Graph Theory.

    Transproof uses a graph database to store graph transformations in the form of a metagraph, with graphs as its vertices and applications of transformations as its arcs. It uses an exact approach on small graphs and provides researchers with means to explore graph transformations as well as to test ideas for proving the conjectures.

    In this talk, we explain the difficulties encountered when building and exploiting these data and the ideas used to handle them. Using the conjecture explained above, we also illustrate how Transproof can help generate ideas in the design of a proof by transformation.

    (With Pierre Hauweele and Hadrien Mélot)

  • Fowler, Patrick (The University of Sheffield, United Kingdom)
    Models for molecular conduction

    In the simple Hückel version of Ernzerhof’s Source-and-Sink Potential (SSP) model of molecular conduction, the transmission of electrons through an unsaturated organic molecule is calculated by modifying the adjacency matrix of the molecular graph, G. Two extra vertices, the source and the sink, are connected to the graph, and carry complex weights that embody the scattering-type boundary conditions of the conduction problem. In the formulation that we use, a general solution for the transmission as a function of energy can be obtained in terms of four characteristic polynomials, of G itself and three subgraphs. Use of these polynomials and their spectral decompositions allows formulation of selection rules for conduction, and definition of generic classes of molecular conductors and insulators in terms of graph nullity. Progress will also be reported on adding the effects of electron interaction to the Hückel ‘empty molecule’ picture, in particular on the construction of a model that accounts for the statistical effect of the Pauli Exclusion Principle on electron flow through occupied orbitals.

    (With Barry Pickup, Irene Sciriha and Martha Borg)

  • Gismondi, Stephen (University of Guelph, Canada)
    Using Matching to Detect Infeasibility of Some Integer Programs

    A novel matching-based heuristic algorithm designed to detect specially formulated infeasible {0,1} IPs is presented. It either detects an infeasible IP or exits undecided. It does not solve an IP. We call it the triple overlay matching based closure algorithm. Input to the algorithm is a set of nested doubly stochastic subsystems and a set E of instance-defining variables set at zero level. We view the set of all solutions of the IP as an n² × n² block permutation matrix Q whose components are {0,1} variables. Each n × n block (u,i) is an n × n permutation matrix P, where block (u,i) contains the variable p_{u,i} in position (u,i). Individual solutions are n² × n² block permutation matrices, each with block structure P. The algorithm deduces additional variables at zero level until either a constraint is violated and the IP is infeasible, or no more variables can be deduced at zero level and the IP is undecided. All feasible IPs, and all infeasible IPs that fail to be detected infeasible, are undecided. The algorithm is successfully applied to a small test set of specially formulated infeasible {0,1} IP instances of the Hamilton cycle decision problem. Models of both the graph and subgraph isomorphism decision problems are also presented; they help illustrate ease of applicability. The algorithm is also easy to modify: increased levels of nested doubly stochastic subsystems can be implemented dynamically. The algorithm is designed for parallel processing and for the inclusion of techniques in addition to matching.

    (With Edward Swart)

  • Goedgebeur, Jan (Universiteit Gent, Belgium)
    [Download slides]
    Generation of hypohamiltonian graphs

    We will present a new algorithm to generate all non-isomorphic hypohamiltonian graphs of a given order. (A graph G is hypohamiltonian if G is non-hamiltonian and G-v is hamiltonian for every v ∈ V(G).)
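
    The definition translates directly into a brute-force test, workable only for very small orders and unrelated to the generation algorithm of the talk; the Petersen graph, the smallest hypohamiltonian graph, serves as a sanity check:

        import networkx as nx
        from itertools import permutations

        def is_hamiltonian(G):
            # Brute-force Hamilton cycle search (fine for ~10 vertices).
            nodes = list(G.nodes)
            first, rest = nodes[0], nodes[1:]
            for perm in permutations(rest):
                cycle = (first,) + perm
                if all(G.has_edge(cycle[i], cycle[(i + 1) % len(cycle)])
                       for i in range(len(cycle))):
                    return True
            return False

        def is_hypohamiltonian(G):
            return (not is_hamiltonian(G) and
                    all(is_hamiltonian(G.subgraph(set(G) - {v})) for v in G))

        print(is_hypohamiltonian(nx.petersen_graph()))  # True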

    Using this algorithm, we were able to generate complete lists of hypohamiltonian graphs of much larger orders than what was previously possible. This allowed us, among other things, to find the smallest hypohamiltonian graph of girth 6 and to show that the smallest planar hypohamiltonian graph has order at least 23.

    (With Carol Zamfirescu)

  • Goetschalckx, Pieter (Universiteit Gent, Belgium)
    [Download slides]
    Local Symmetry-preserving Operations

    Cubic polyhedra with icosahedral symmetry where all faces are pentagons or hexagons have been studied in chemistry and biology as well as in mathematics. Parameterized operations to construct all such polyhedra were first described by Goldberg in 1937. But the idea of applying operations to seed polyhedra goes back much further, to Euclid, Kepler, and others. Some of these operations are classified by Conway's polyhedron notation.

    We generalized Goldberg's approach to a systematic one encompassing all local symmetry-preserving operations on polyhedra. This can be further expanded to allow operations that only preserve orientation-preserving symmetries, including all of Goldberg's and Conway's operations. Using this new framework, we can generate all of these operations and answer several questions which previously could not even be posed precisely.

  • Hauweele, Pierre (Université de Mons, Belgium)
    [Download slides]
    PHOEG Helps Obtaining Extremal Graphs

    Extremal graph theory aims to determine bounds for graph invariants as well as the graphs that attain those bounds. For example, one can study the eccentric connectivity index (ECI), a discriminating topological descriptor in biochemistry. The eccentric connectivity index of a simple, undirected, connected graph G is defined as the sum, over all vertices, of the product of the degree and the eccentricity of each vertex.
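
    A direct transcription of that definition, as a small sketch (the example graph is chosen only so the numbers are easy to check by hand):

        import networkx as nx

        def eccentric_connectivity_index(G):
            # Sum over vertices of degree(v) * eccentricity(v).
            ecc = nx.eccentricity(G)
            return sum(G.degree(v) * ecc[v] for v in G)

        # Path on 5 vertices: degrees 1,2,2,2,1 and eccentricities 4,3,2,3,4 give 24.
        print(eccentric_connectivity_index(nx.path_graph(5)))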

    Some invariants (including the ECI) may be hard to compute by hand, and it might be difficult to develop intuitions about how they interact with graph structure.

    There is thus a need for tools to help researchers explore the intricacies of these invariants. This talk will present some aspects of the PHOEG system, an ecosystem of tools we are currently developing to support our research in Extremal Graph Theory. We will focus on the feature of querying a "big" database of graphs in order to check conjectures or infer new ones. It can be used to quickly confirm, refute or suggest a conjecture on the problem of maximizing the ECI under the constraint of fixed graph order and size, as well as to quickly test or discard ideas in the process of seeking a proof.

    (With Gauvain Devillez and Hadrien Mélot)

  • Itani, Sarah (Université de Mons, Belgium)
    Specifics of Medical Data Mining for Neurological Diagnosis Prediction

    Over the last ten years, the culture of medical data sharing has expanded widely, leading to sufficiently representative databases. This worthy initiative encouraged researchers from various fields of expertise to consider medical issues and, notably, to provide clinicians with support in making a diagnosis. Accessible online, the open medical datasets may include, for each subject, clinical data (e.g. age, gender), images (e.g. magnetic resonance images), signals (e.g. electrocardiography, electroencephalography) and/or other biological data (e.g. genotypes).

    In particular, for children's mental disorders, there is a lack of universally recognized pathophysiological bases. Hence, the diagnosis is based on subjective observations collected from the environment of the children (parents, teachers, etc.). Moreover, these disorders present common features, making them difficult to dissociate. Incidentally, the agreement among practitioners on a given diagnosis, as measured by the Kappa statistic, may still be improved. The development of predictive models based on physiological data could help clinical neuroscience make assessments more objective. To that end, several studies considered the application of Data Mining techniques to medical data. Yet beyond a simple data analysis, it is critically important to integrate the questioning and requirements specific to the medical domain into the process of Data Mining.

    The foremost requirement of practitioners is to be able to justify and validate the diagnosis based on the recommendation provided by Data Mining. Besides, the predictive model should be able, on a per-patient basis, to provide a readable relationship between the input data and the diagnosis through a decision chain. For such purposes, the recommendation must be based on an interpretable (and preferably, interpreted) model. It thus remains essential to define the properties of interpretability in the sense of medical Data Mining. Furthermore, in the case of a neurological diagnosis, should predictive models make errors, it must be ensured that these wrong predictions have the lowest possible impact on patients. Indeed, a model with a low ability to detect healthy patients is far from cautious, in the sense that those patients are exposed to the risk of being prescribed unnecessary medication. The opposite situation is less restrictive: if a model has a low ability to detect pathological patients, the detection of the disorder may just be delayed in time. Finally, medical data include several sources of heterogeneity: the harmonization of protocols is missing, and cultural and social factors influence the etiology and epidemiology of disorders. In view of these specifics of medical Data Mining, we propose a critical consideration of the learning schemes, algorithms and features that should be used to provide the most appropriate aid-in-diagnosis models for neurological disorders.

    (With Fabian Lecron and Philippe Fortemps)

  • Mourisse, Dieter (Universiteit Gent, Belgium)
    [Download slides]
    The generation of nanojoins

    The study of fullerenes is an important topic within chemistry. Fullerenes are carbon molecules that can be modelled by a 3-regular plane graph where all faces are pentagons or hexagons. From Euler's formula it can be proven that fullerenes have exactly 12 pentagons. Nanotubes are a type of fullerene where the 12 pentagons are distributed over two patches with 6 pentagons each. These patches are called nanocaps, and they are connected by a long tube of hexagons.
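
    For completeness, a sketch of that Euler-formula computation (with V, E, F the numbers of vertices, edges and faces, and p, h the numbers of pentagonal and hexagonal faces):

        % 3-regularity and the face condition give the standard count of pentagons.
        \[
        2E = 3V, \qquad F = p + h, \qquad 2E = 5p + 6h,
        \]
        \[
        V - E + F = 2
        \;\Longrightarrow\; \tfrac{2E}{3} - E + (p + h) = 2
        \;\Longrightarrow\; 2E = 6(p + h) - 12 = 5p + 6h
        \;\Longrightarrow\; p = 12.
        \]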

    In a fullerene where, outside of the caps, there are only hexagons, the boundaries of the caps have the same structure. Nanojoins are structures that connect two or even more caps with (possibly) different parameters with each other. These structures also contain heptagons and possibly extra pentagons.

    This talk will describe an algorithm that was developed to generate the possible structures of these nanojoins. It takes the cap parameters as input and generates all possible structures with a given number of heptagons up to a certain size.

  • Myrvold, Wendy (University of Victoria, Canada)
    Polyhedra with Positive Combinatorial Curvature

    The curvature at a vertex v of an embedded graph is defined to be 1 - deg(v)/2 + the sum, over the faces f incident with v, of 1/size(f). Prisms and anti-prisms are polyhedra that can be arbitrarily large while having strictly positive curvature at every vertex. A PCC-graph is a graph embedded in the plane that is not a prism or an anti-prism and that has strictly positive curvature at every vertex. One open question is to determine the maximum number of vertices in a PCC-graph. We present some new results for this problem.
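
    Restating the definition as a formula, with a worked example and the standard discrete Gauss-Bonnet identity that makes positive curvature restrictive:

        \[
        \kappa(v) = 1 - \frac{\deg(v)}{2} + \sum_{f \ni v} \frac{1}{\mathrm{size}(f)},
        \qquad
        \sum_{v} \kappa(v) = \chi = 2 .
        \]
        % Example (cube): deg(v) = 3 and three incident squares give
        % kappa(v) = 1 - 3/2 + 3/4 = 1/4 at each of the 8 vertices, summing to 2.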

    (With Patrick Fowler and Paul Oldridge)

  • Sciamannini, Fabio (Université Libre de Bruxelles, Belgium)
    A Column Generation Approach for the b-Coloring problem in graphs

    We present a branch-and-price framework for solving the b-coloring problem, i.e. a vertex coloring problem where each color class contains at least one vertex, called a b-vertex, adjacent to at least one vertex of every other color class. We suggest an approach based on an integer programming formulation, called the independent set formulation, that involves an exponential number of variables. In order to deal with them, we solve the problem via column generation, where variables are dynamically introduced into the model by iteratively solving a pricing problem.

    We took advantage of the structure of the problem to reformulate the pricing problem as a generic version of the weighted independent dominating set problem. We exploit the fact that we can decompose our pricing problem into many small subproblems, instead of solving a unique large problem, in order to accelerate the solution methods.

    Although it requires the solution of a difficult subproblem as well as sophisticated branching rules, this approach, based on a Dantzig-Wolfe decomposition scheme, provides a tighter LP relaxation and eliminates the symmetry that usually affects formulations of the b-coloring problem.

    We also developed a compact formulation for the b-coloring problem, based on the representatives formulation introduced by Campêlo et al., by adding constraints that ensure the b-coloring property, in order to compare the quality of our approach in terms of the elapsed time to solve the problem and the time to find a feasible solution. Despite the importance of the problem, to date there exist no contributions in the literature concerning exact methods that exploit a column generation paradigm embedded in a branch-and-bound scheme (i.e. a branch-and-price algorithm) to solve the b-coloring problem. We show that such an approach appears efficient for solving moderate- to large-scale instances, and it turns out to be useful for obtaining good lower bounds that can be used to speed up the procedure. Some implementation details and computational experience are presented.

    (With Marcio Costa Santos, Martine Labbé and Bernard Fortz)

  • Sedlar, Jelena (University of Split, Croatia)
    Wiener index and the inverse interval problem for trees

    The Wiener index W(G) of a simple connected graph G is defined as the sum of distances over all pairs of vertices in the graph. We denote by W[Tn] the set of all values of the Wiener index for graphs from the class Tn of trees on n vertices. The largest interval of contiguous integers contained in W[Tn] is denoted by Wint[Tn]. The following conjectures were posed in the literature regarding the cardinality of the sets W[Tn] and Wint[Tn].

    Conjecture 1. (Knor, Skrekovski and Tepeh, 2016) The cardinality of W[Tn] equals (1/6)n³ + O(n²).

    Conjecture 2. (Krnc and Skrekovski, 2016) The cardinality of Wint[Tn] equals O(n³).

    We first prove that for odd n the value of the Wiener index of a tree must be an even number. Given the upper and lower bounds for the Wiener index on Tn (the values for the path and the star, respectively), this result implies that in the case of odd n the set Wint[Tn] has to be redefined as the largest interval of contiguous even integers contained in W[Tn], and that Conjecture 1 has to be reformulated to state that in the case of odd n the cardinality of W[Tn] equals (1/12)n³ + O(n²). Given these reformulations, we prove that both conjectures are true. Moreover, we prove the strongest version of Conjecture 2 in terms of O(n³).
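
    Both the definition and the parity claim are easy to explore computationally for small n; a brute-force sketch over all non-isomorphic trees (nothing like the actual proofs):

        import networkx as nx

        def wiener(T):
            # Sum of distances over all pairs of vertices.
            return int(nx.wiener_index(T))

        n = 9  # odd
        values = sorted({wiener(T) for T in nx.nonisomorphic_trees(n)})
        print(len(values), values[0], values[-1])  # |W[T_9]|, star value, path value
        print(all(w % 2 == 0 for w in values))     # parity claim: all even for odd n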

  • Troestler, Christophe (Université de Mons, Belgium)
    [Download slides]
    A computer assisted proof of the symmetry of solutions to a PDE

    The Curie symmetry principle, which asserts that the symmetries of the cause must also be contained in the effects, no longer holds in general for non-linear partial differential equations (PDE) with possibly infinitely many solutions. In this context, one has to restrict to “low energy” solutions. Solutions with the lowest energy (ground states) usually — though not always — possess all the symmetries of the problem. For solutions with the lowest energy among sign-changing ones, only part of the symmetries is preserved. In this talk, I will consider the simple problem -Δu = |u|^(p-2)u with zero Dirichlet boundary conditions on a square domain and show how numerical computations help to discover the residual symmetry of the least-energy sign-changing solutions, as well as to prove that what the simulation shows is rigorously valid.

  • Van Cleemput, Nico (Universiteit Gent, Belgium)
    [Download slides]
    Non-Hamiltonian and Non-Traceable Regular 3-Connected Planar Graphs

    By Euler's formula, k-regular 3-connected planar graphs exist for only three values of k: 3, 4, and 5. Denote by ck, resp. pk, the order of the smallest non-hamiltonian, resp. non-traceable, k-regular 3-connected planar graph.
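
    A sketch of the counting argument behind this restriction (using the planar bound E ≤ 3V - 6):

        \[
        E = \frac{kV}{2} \le 3V - 6
        \;\Longrightarrow\;
        k \le 6 - \frac{12}{V} < 6,
        \]
        % while 3-connectivity forces minimum degree at least 3, so k is 3, 4 or 5.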

    Tait conjectured in 1884 that every 3-regular 3-connected planar graph is hamiltonian. This conjecture became famous because it implied the Four Colour Theorem (which at that time was still the Four Colour Problem). However, Tait's conjecture turned out to be false, and the first to construct a counterexample was Tutte in 1946. The smallest counterexample is due to Lederberg (and, independently, Bosák and Barnette) and has order 38. That this is indeed the smallest possible counterexample was shown by Holton and McKay after a long series of papers by various authors, e.g., Butler, Barnette and Wegner, and Okamura. This settles that c3 = 38. Combining work of Knorr and T. Zamfirescu, it has been shown that 54 ≤ p3 ≤ 88.

    Non-hamiltonian 4-regular 3-connected planar graphs have been known for a long time. Following work of Walther and Sachs, Zaks proved that there exists a non-hamiltonian 4-regular 3-connected planar graph of order 209. Applying a technique by Sachs to transform a non-hamiltonian 3-regular 3-connected planar graph into a non-hamiltonian 4-regular 3-connected planar graph, Bosák showed that c4 ≤ 171. Using the same technique by Sachs and the non-traceable 3-regular 3-connected planar graph on 88 vertices by T. Zamfirescu, it can be shown that p4 ≤ 396.

    Zaks showed for the 5-regular case that c5 ≤ 532 and that p5 ≤ 1232. Owens strongly improved these bounds for non-hamiltonian and non-traceable 5-regular 3-connected planar graphs. More specifically, he showed that c5 ≤ 76 and that p5 ≤ 128.

    The main focus of the talk will be the 4-regular case, for which we show that there exists a non-hamiltonian 4-regular 3-connected planar graph with 39 vertices. We also present the result of computations showing that every non-hamiltonian 4-regular 3-connected planar graph has at least 33 vertices. This implies that 33 ≤ c4 ≤ 39. Using similar building blocks, we also construct a non-traceable 4-regular 3-connected planar graph on 90 vertices, which improves the upper bound to p4 ≤ 90. Furthermore, we discuss a small improvement to the 5-regular non-traceable case by showing that p5 ≤ 108. For the 3-regular case, we turn our attention towards cyclically 4-edge-connected planar graphs.

    (With Carol Zamfirescu)