Everyone who would like to participate must register via the registration interface.
The registration deadline is May 23, 2019.
The meeting will take place at:
Institute of Computer Science
04109 Leipzig, Germany
Room: P702, Paulinum, 7th floor
In this talk, I will introduce a general framework for algebraic dynamic programming. In this framework, computations are described as monotone computations over certain semirings. Essentially any dynamic programming algorithm fits into this framework, including computational problems on transformational grammars, sequences, rooted binary trees, and walks in directed graphs. I will show that decision, optimization, and counting problems can be described and implemented in computer programs in a unified way: the dynamic programming schemes stay fixed, and only the semirings are exchanged. Semirings for special computations, such as computing the sum of an additive function over an ensemble of combinatorial objects or finding the maximum-capacity path, will also be introduced.
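To illustrate the "change only the semiring" idea, here is a minimal Python sketch (not the speaker's framework): the same dynamic program, combining all length-k walks in a directed graph via repeated matrix products, evaluated once over the tropical semiring (optimization) and once over the counting semiring. The graph and weights are made up for the example.

```python
from functools import reduce

def mat_mul(A, B, add, mul, zero):
    """Matrix product over an arbitrary semiring given by (add, mul, zero)."""
    n = len(A)
    return [[reduce(add, (mul(A[i][k], B[k][j]) for k in range(n)), zero)
             for j in range(n)] for i in range(n)]

def walks(M, k, add, mul, zero):
    """Combine all length-k walks: k-fold semiring matrix product."""
    R = M
    for _ in range(k - 1):
        R = mat_mul(R, M, add, mul, zero)
    return R

# Edge weights of a small digraph: 0 -> 1 (3), 1 -> 2 (2), 0 -> 2 (9).
INF = float("inf")
W = [[INF, 3, 9],
     [INF, INF, 2],
     [INF, INF, INF]]

# Tropical semiring (min, +): cheapest 2-step walk from 0 to 2.
best = walks(W, 2, min, lambda x, y: x + y, INF)
print(best[0][2])  # 5  (the walk 0 -> 1 -> 2)

# Counting semiring (+, *) on the 0/1 adjacency matrix: number of 2-step walks.
A = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
count = walks(A, 2, lambda x, y: x + y, lambda x, y: x * y, 0)
print(count[0][2])  # 1
```

The recursion scheme (`walks`) never changes; only the triple `(add, mul, zero)` does — exactly the separation of grammar and algebra that the framework makes systematic.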
Beyond its capacity to ease the implementation of naturally expressible DP schemes, algebraic dynamic programming represents the algorithmic counterpart of a research program, championed by the late Philippe Flajolet, connecting formal languages with enumerative combinatorics through formal calculus. In this talk, I will show how this deep connection can be exploited to provide generic tricks that reveal statistical properties of search spaces. These include formal grammar derivatives (a.k.a. pointing operators) to compute the moments, correlations, etc. of additive features, or to perform random generation in which the probability of an individual element depends on multiplicative and additive features simultaneously. We will also mention the use of the Discrete Fourier Transform to speed up the computation of classified partition functions. If time allows, we will turn to the random generation of elements of the search space, and will present efficient meta-algorithms for classic and non-redundant generation, as well as efficient rejection schemes (multidimensional Boltzmann sampling) that provide an alternative to classified DP.
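As a toy illustration of the pointing-operator idea (not taken from the talk), propagating pairs (count, feature-sum) through a DP plays the role of a formal derivative and yields the first moment of an additive feature. Here: the expected number of 1s in a uniformly random binary string of length n, built by the trivial grammar S → ε | 0S | 1S; the grammar and feature are chosen purely for illustration.

```python
def moment_pairs(n):
    """Return (count, feature_sum) over all binary strings of length n,
    where the feature is the number of 1s in the string."""
    count, total = 1, 0            # empty string: one object, feature sum 0
    for _ in range(n):
        # appending '0' keeps the feature; appending '1' adds 1 per object
        count, total = 2 * count, 2 * total + count
    return count, total

count, total = moment_pairs(8)
print(total / count)  # expected number of 1s in a length-8 string: 4.0
```

Higher moments follow the same pattern with longer tuples (count, sum, sum of squares, ...), which is exactly what repeated pointing/differentiation of the generating function computes.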
In this talk I will present a framework that allows for a high-level and high-performance encoding of dynamic programming algorithms in the functional programming language Haskell.
This will include how to handle different index structures, which in turn allows encoding algorithms over many different inputs. Currently available are string, set, tree, and position-specific scoring model input structures (and more structures may easily be added by a user).
In addition, I will discuss how to handle interleaved symbols. Interleaved symbols provide the machinery for "interlocking" symbols: simple applications include constraining interior-loop lengths, while more complex applications allow dealing with pseudoknotted structures.
Finally, I will discuss how semiring-generic algorithms can be written, which yields much better "re-use" of each dynamic programming grammar at no cost in running time.
Much of the efficiency of the presented work depends on features of the Haskell implementation in the Glasgow Haskell Compiler. Such features will be introduced succinctly, as required; however, knowledge of them is not a prerequisite for using the framework.
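The semiring-generic style of re-using one grammar with many algebras can be sketched outside Haskell as well; the following Python toy (not the speaker's framework) runs a Nussinov-style RNA folding recursion once with a maximization algebra and once with a counting algebra. The grammar, algebras, and sequence are all made up for the example.

```python
from functools import lru_cache

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def run_grammar(seq, algebra):
    """One grammar, many algebras: N -> unpaired N | (N) N | eps."""
    nil, unpaired, paired, h = algebra

    @lru_cache(maxsize=None)
    def N(i, j):
        if i > j:
            return nil()
        cands = [unpaired(N(i + 1, j))]            # position i unpaired
        for k in range(i + 1, j + 1):              # position i pairs with k
            if (seq[i], seq[k]) in PAIRS:
                cands.append(paired(N(i + 1, k - 1), N(k + 1, j)))
        return h(cands)

    return N(0, len(seq) - 1)

# Maximization algebra: maximum number of base pairs.
maxpair = (lambda: 0, lambda x: x, lambda x, y: 1 + x + y, max)

# Counting algebra: number of structures the grammar derives.
counting = (lambda: 1, lambda x: x, lambda x, y: x * y, sum)

print(run_grammar("GCAU", maxpair))   # 2  (pairs G-C and A-U)
print(run_grammar("GCAU", counting))  # 4  (structures: {}, {GC}, {AU}, {GC, AU})
```

The recursion (`N`) is written once; swapping the 4-tuple of functions changes what is computed, with no change to the grammar itself — the point made above about grammar re-use.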
Some data types admit a complementation operation that leaves index structures intact. Examples are complements of intervals (with respect to a given input interval) and complements of sets. In this setting, outside objects are complements of inside objects. As a consequence, it is possible to derive outside recursions in a systematic manner from the inside grammar. This greatly facilitates the design of partition function versions of, e.g., alignment algorithms.
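A toy illustration of the inside/outside complement relation (not from the abstract): for interval indices, the outside object of the inside interval [i, j) within [0, n) is the complement pair [0, i) ++ [j, n). With a trivial counting "grammar" over binary strings, the product of inside and outside values recovers the total partition function 2^n for every interval — the invariant that systematic outside derivations preserve.

```python
def inside(i, j):
    """Count objects (here: binary strings) on the inside interval [i, j)."""
    return 2 ** (j - i)

def outside(i, j, n):
    """Count objects on the complement interval pair [0, i) ++ [j, n)."""
    return 2 ** (i + (n - j))

n = 10
for i, j in [(0, 10), (2, 5), (7, 7)]:
    assert inside(i, j) * outside(i, j, n) == 2 ** n
print("inside * outside == total partition function for all intervals")
```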
In this presentation I will briefly review classified dynamic programming (cDP), whose development was strongly facilitated by ADP. I will present well-known use cases of cDP, such as shape abstraction for RNA folding. The main part of the presentation will deal with my thoughts on the future of cDP: how the classification "trick" eases algorithm development, and especially how this relies on and relates to functionality provided by ADP frameworks.
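The core mechanic of classified DP can be sketched in a few lines (a made-up mini-example, not tied to any framework): instead of one semiring value per subproblem, we keep a dictionary from class to value. Here binary strings of length n are classified by their number of 1s, a stand-in for shape abstraction classifying RNA structures.

```python
from collections import Counter

def classified_count(n):
    """Count binary strings of length n, classified by number of 1s."""
    dist = Counter({0: 1})              # empty string: one object in class 0
    for _ in range(n):
        nxt = Counter()
        for cls, cnt in dist.items():
            nxt[cls] += cnt             # append '0': class unchanged
            nxt[cls + 1] += cnt         # append '1': class shifts by one
        dist = nxt
    return dist

dist = classified_count(4)
print(sorted(dist.items()))  # [(0, 1), (1, 4), (2, 6), (3, 4), (4, 1)]
```

The recursion is the plain counting DP; only the value type changed from a number to a class-indexed dictionary, which is why cDP composes so naturally with an ADP framework's algebra mechanism.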
In this talk, I will review known limitations of the algebraic dynamic programming approach, focusing on three topics:

1. Asymptotic optimality. Even if an ADP algorithm runs in polynomial time, it is not necessarily asymptotically optimal. An example is the problem of counting the distinct subsequences of a sequence: it is easily done in quadratic time using ADP, while the problem can be solved in linear time if subtraction is allowed.

2. Known lower bounds on complexity. There is no polynomial-time algebraic dynamic programming algorithm for counting the spanning trees of a graph, even though this can be computed in polynomial time via Kirchhoff's celebrated matrix-tree theorem. Similarly, there is no polynomial-time ADP algorithm on permutations, even though the determinant of a matrix, which is formally a sum over all permutations, can be computed efficiently.

3. Complexity-theoretic barriers. There are counting problems which are surprisingly #P-complete, although very similar counterparts can be computed in polynomial time with ADP. An example is counting the sequences of a given length that a regular grammar can generate: this problem is #P-complete, despite the fact that counting the possible derivations of a regular grammar yielding sequences of a given length admits a polynomial-time ADP algorithm.
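The linear-time algorithm alluded to in the first point is a standard one; it relies on subtraction and therefore leaves the semiring setting. A sketch counting the distinct subsequences of a string (including the empty subsequence):

```python
def count_distinct_subsequences(s):
    """Linear-time count of distinct subsequences of s, including the
    empty one. Uses the subtraction dp[i] = 2*dp[i-1] - dp[last(c)-1],
    which is unavailable in a pure semiring formulation."""
    total = 1                 # the empty subsequence
    last = {}                 # total just before c's previous occurrence
    for c in s:
        prev = total
        total = 2 * total - last.get(c, 0)
        last[c] = prev
    return total

print(count_distinct_subsequences("aba"))  # 7: "", a, b, aa, ab, ba, aba
```

The subtraction cancels subsequences that would otherwise be counted twice when a character repeats; without an additive inverse, the ADP formulation must instead track subproblems explicitly, which costs a quadratic factor.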
Registration is closed!
| # | Last name | First name | Affiliation |
|---|-----------|------------|-------------|
| 3 | Hoener zu Siederdissen | Christian | Uni Leipzig |
| 9 | Nono Saha | Cyrille Merleau | Uni Leipzig |