COMBINATORIAL ALGORITHMS GROUP

If you would like to give a talk in our seminar series, please contact Wendy Myrvold (wendym@cs.uvic.ca).
Date  Place  Time  Speaker  Abbreviated Title 

Sept. 13  EOW 430  3:30  Hongbing Fan  On the Computation of Minimal Nondecomposable Regular Graphs 
Sept. 20  EOW 430  3:30  Amir ben-Amram  Backing Up in Singly-Linked Lists 
Sept. 27  Cle A 211  3:30  Moshe Rosenfeld  Variations on Hamiltonian Themes 
Oct. 11  EOW 430  3:30  Stirling Chow  Using the Transfer-Matrix Method to Count Polyominoes and Hexanimals 
Oct. 17  DSB C 103  7:30pm  Carla Savage  One, Two, Three: New Ways (and New Things) to Count 
Oct. 18  EOW 430  3:30  Carla Savage  Venn Diagrams and Symmetric Chain Decompositions in the Boolean Lattice 
Oct. 21  DSB C 116  1:30  Ran Canetti  Universally Composable Security: A New Paradigm for Cryptographic Protocols 
Oct. 22  CIT 110  1:30  Ronitt Rubinfeld  What can we do in sublinear time? 
Oct. 25  Elliott 160  3:30  No CAG due to visitors Mon./Tues.  
Nov. 8  EOW 430  3:30  No CAG: The potlatch is Nov. 9  
Nov. 15  CIT 110  1:30  Bernard Chazelle  Nonmonotonicity in Geometric Searching 
Nov. 22  EOW 430  3:30  Paul Vitanyi  The Similarity Metric 
Nov. 29  EOW 230  3:30  Elena Prieto  The Complexity Class MINI[1] 
Dec. 6  EOW 430  3:30  Amir ben-Amram  Read-Only Programs over Binary Trees: Expressivity Questions 
The complexity class MINI[1] is a parameterized class lying between FPT and W[1]. Traditionally, to prove that a problem is not in FPT, we reduce to it from a problem known to be W[1]-hard, such as Independent Set. In many cases (for example, Dominating Set) the reductions are quite crafty and complex. The complexity class MINI[1] seems to offer easier combinatorial proofs of parametric intractability. It also provides a new theoretical framework for proving FPT optimality results.
A read-only program processes an input structure, which it cannot modify, using pointers into the structure as its only type of variable. When the input structure is a linear list, it is known that having the list doubly linked, allowing 2-way scanning, gives no additional expressivity over the 1-way case: the same computations can be performed, although double linking can save processing time, as discussed in a previous talk. The same is true for the capability of testing two pointers for equality, i.e., checking whether they point at the same node: it can be simulated (for a price) by programs lacking it.
What if the input structure is a binary tree instead of a linear list (as is natural in several programming languages, from LISP to Prolog)? In this talk we'll see that in this context, double linking makes a difference in expressivity, not just in complexity. This appears to be the case also for the equality test on pointers, but not all the questions have been answered yet. The talk will give a map of what is known and unknown in this ongoing research, and include proofs of the simpler separation results.
A regular hypergraph (multiple edges are allowed) is said to be decomposable if its edge set can be partitioned into two subsets such that each of them induces a spanning regular hypergraph. For example, the hypergraph with edge set {{1,2,3}, {1,2,3}, {1,4}, {1,4}, {4,2}, {4,3}, {2,3}} is a 4-regular hypergraph, as each vertex i, i = 1,2,3,4, is covered by exactly four edges. It can be decomposed into a spanning 3-regular hypergraph {{1,2,3}, {1,2,3}, {1,4}, {4,2}, {4,3}} and a spanning 1-regular hypergraph {{1,4}, {2,3}}.
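The regularity and decomposition claims in this example can be checked mechanically. A minimal sketch (the list-of-edge-sets representation and the function names are my own, not from the talk):

```python
from collections import Counter

def degrees(edges, vertices):
    """Count how many edges cover each vertex; repeated edges count separately."""
    d = Counter()
    for e in edges:
        for v in e:
            d[v] += 1
    return {v: d[v] for v in vertices}

def is_regular(edges, vertices):
    """A hypergraph is regular when every vertex has the same degree."""
    return len(set(degrees(edges, vertices).values())) == 1

V = [1, 2, 3, 4]
H = [{1, 2, 3}, {1, 2, 3}, {1, 4}, {1, 4}, {4, 2}, {4, 3}, {2, 3}]  # 4-regular
A = [{1, 2, 3}, {1, 2, 3}, {1, 4}, {4, 2}, {4, 3}]                  # 3-regular part
B = [{1, 4}, {2, 3}]                                                # 1-regular part

# Both parts are spanning regular hypergraphs, and together they
# partition the edge multiset of H, so H is decomposable.
assert is_regular(H, V) and is_regular(A, V) and is_regular(B, V)
assert Counter(map(frozenset, A + B)) == Counter(map(frozenset, H))
```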
We consider the problem of computing all minimal nondecomposable regular hypergraphs with a fixed number of vertices. This problem is related to other topics such as graph factorization, combinatorial designs, well-quasi-ordering of sets, and Hilbert bases of linear Diophantine equations. We will talk about some recent progress on this problem, including a complete solution of computing all minimal nondecomposable regular graphs.
The main topic for this talk will be work by Holger Petersen (Stuttgart University) and myself, whose purpose was to verify the popular belief that doubly-linked lists are more efficient than singly-linked lists in a read-only situation.
The main result is a method to implement two-way access to a singly-linked list in O(n^{epsilon}) time per operation, for any epsilon > 0, without using any storage other than a finite number of counters and pointers into the list. We prove that this result is optimal. Moreover, we achieve a tight tradeoff between the number of points of access into the list (pointers) and the value of epsilon, first in an amortized setting and then in a worst-case operation-time setting.
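As a point of reference, the naive way to back up in a singly-linked list is to rescan from the head, which costs O(n) per step; the result above reduces this to O(n^{epsilon}). A sketch of the naive baseline only, not of the paper's method (the Node class and names are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def build(values):
    """Build a singly-linked list and return its head."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def prev(head, p):
    """Back up one step from p by rescanning from the head: O(n) per call,
    using only a constant number of pointer variables."""
    if p is head:
        return None
    q = head
    while q.next is not p:
        q = q.next
    return q

head = build([10, 20, 30])
third = head.next.next
assert prev(head, third).value == 20
assert prev(head, head) is None
```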
Time permitting, I will discuss some open questions and extensions  done or waiting to be done.
The hunt for Hamilton Cycles in graphs started in the 18th century. Most probably, Euler's search for a knight's tour of the chess board was the first Hamilton Cycle problem considered. Nowadays we are flooded by theorems, conjectures, refuted conjectures, and web sites whose common characteristic is a statement of the form: if a graph satisfies P then it is Hamiltonian. It is well known that the problem of finding a Hamilton Cycle in a graph is "certifiably" difficult. As is common among mathematicians, when the going gets tough, the tough "change" the problem. Current trends are to "refine" (mutilate) the definition of Hamilton Cycle. The first, very minimal, modification was to replace Hamilton Cycle by Hamilton Path. A more drastic modification is to allow vertices to be visited more than once. A 2-walk is a closed walk in a graph in which every vertex is visited, but not more than twice. A 3-tree is a spanning tree with maximum degree 3. Clearly, if G is Hamiltonian, it has a 2-walk, but not vice versa. Thus, instead of searching for Hamiltonian cycles, several researchers, like M. Ellingham, N. Wormald, F. Ruskey and others, proposed searching for 2-walks, or even k-walks. These modifications create a hierarchy among graph properties:
Hamiltonian ⊂ Traceable ⊂ 2-walk ⊂ 3-tree ⊂ 3-walk ⊂ ...
In our work, we chose to retain the Hamilton Cycle and modify the graph: instead of searching for Hamilton Cycles in G, we look for Hamilton Cycles in the prism over G. This property is "sandwiched" between "Hamiltonian" and "has a 2-walk". With this in mind, we revisit many theorems, conjectures, complexity problems, etc., and test them for Hamilton Cycles in the prism over G.
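Concretely, the prism over G is the Cartesian product of G with K2: two copies of G joined by a perfect matching between corresponding vertices. A small sketch of the construction (the adjacency-dictionary representation is my choice, not from the talk):

```python
def prism(adj):
    """Prism over G: vertices (v, 0) and (v, 1) for each v of G, with each
    copy inheriting G's edges plus a matching edge between the two copies."""
    p = {}
    for side in (0, 1):
        for v, nbrs in adj.items():
            p[(v, side)] = {(u, side) for u in nbrs} | {(v, 1 - side)}
    return p

# The prism over the triangle K3 is the 3-regular triangular prism graph.
K3 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
P = prism(K3)
assert len(P) == 6
assert all(len(nbrs) == 3 for nbrs in P.values())
```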
For instance, in the 19th century it was "believed" that all cubic, 3-connected planar graphs are Hamiltonian. In 1946, Tutte constructed the first counterexample (with 46 vertices...). Is it true that the prisms over these graphs are Hamiltonian? YES! On the other hand, Nash-Williams conjectured that all 4-connected, 4-regular graphs are Hamiltonian. Infinitely many counterexamples are known today. Such graphs clearly have a 2-walk. Do they have a Hamiltonian prism? Determining whether a 4-connected, 4-regular graph is Hamiltonian is NP-complete. On the other hand, a 2-walk can be constructed in polynomial time. What about a Hamilton Cycle in the prism?
In this talk I'll survey a sample of problems of this nature, and recent results.
This is joint work with:
Daniel Kral (Charles University, Prague)
Tomas Kaiser and Zdenek Ryjacek (University of West Bohemia, Pilsen)
Heinz-Juergen Voss (Technical University, Dresden).
The transfer-matrix method can be used to derive generating functions for various properties of digraphs (e.g., the number of paths of length n) and is well-suited for implementation in computer algebra systems such as Maple. By expressing combinatorial enumeration problems as digraphs and applying the transfer-matrix method, we can use computers to derive the respective generating functions automatically.
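One fact the method builds on: entry (i, j) of the n-th power of a digraph's adjacency matrix counts the directed walks of length n from i to j, and the transfer-matrix method packages such counts into a generating function. A minimal pure-Python sketch of the counting step (no computer algebra system needed; names are mine):

```python
def mat_mul(A, B):
    """Multiply two square integer matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_walks(adj, n):
    """n-th power of the adjacency matrix: entry [i][j] counts
    directed walks of length n from vertex i to vertex j."""
    size = len(adj)
    result = [[int(i == j) for j in range(size)] for i in range(size)]  # identity
    for _ in range(n):
        result = mat_mul(result, adj)
    return result

# Directed 3-cycle 0 -> 1 -> 2 -> 0: exactly one walk of length 3 from 0 to 0.
C3 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
assert count_walks(C3, 3)[0][0] == 1
assert count_walks(C3, 1)[0][0] == 0
```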
In this talk, I will describe a technique used by Zeilberger to count column-convex polyominoes (edge-connected squares whose columns are single strips). I will then present the problem of counting column-convex hexanimals (edge-connected hexagons), and show how a simple mapping from hexanimals to polyominoes allows Zeilberger's technique to be adapted to this alternative problem.
This is joint work with M. Mohammed, Rutgers University.
It's easy to count things like the number of ways to order the tasks on a "to-do" list, or the number of clinks you would hear if five people celebrate with a champagne toast. But how about the total number of combinations of integers that sum to 1 million? Or 2 million? Or, how can you tell that two sets have the same number of elements if you don't know how many elements are in either set? In this talk we will illustrate two powerful counting tools, recursive definitions and bijections. Our examples range from a classical theorem of Euler to recent results.
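The "integers that sum to 1 million" question asks for integer partitions, and the first tool above, recursive definitions, handles it directly: writing p(n, k) for the number of partitions of n into parts of size at most k, either some part equals k or every part is smaller, so p(n, k) = p(n - k, k) + p(n, k - 1). A sketch (the function names are mine, not from the talk):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, k):
    """Number of ways to write n as an unordered sum of positive integers <= k:
    either use a part of size k, or use no part of size k."""
    if n == 0:
        return 1
    if n < 0 or k == 0:
        return 0
    return partitions(n - k, k) + partitions(n, k - 1)

def p(n):
    """Total number of partitions of n."""
    return partitions(n, n)

# 5 = 5 = 4+1 = 3+2 = 3+1+1 = 2+2+1 = 2+1+1+1 = 1+1+1+1+1
assert [p(n) for n in range(1, 8)] == [1, 2, 3, 5, 7, 11, 15]
```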
About the speaker:
It's easy to count up the items on a shopping list, but what about calculating the total number of combinations of numbers that add up to one million? Or determining whether two committees have the same number of members when you don't know the number of members on each committee? Dr. Carla Savage is a world-famous authority on Gray codes, Venn diagrams, and discrete mathematics and computation, the mathematics behind computer operation. She has published more than 50 papers in distinguished journals, is on the editorial board of the SIAM Journal on Discrete Mathematics, and is a member of the steering committee of the Symposium on Discrete Algorithms. Her presentation is suitable for secondary students and will be of particular interest to grade-school and secondary math teachers.
Rigorously capturing the security requirements of cryptographic tasks is a notoriously subtle and tricky business. One major stumbling block is coming up with a notion of security robust enough to guarantee security even when the protocol is run as a component of an unknown larger system. So far, no such general notion has been known.
We propose a new paradigm for defining security of cryptographic protocols, called Universally Composable Security. The salient property of universally composable notions of security is that they guarantee security even when a protocol is used as a component within an arbitrary system. In particular, universally composable notions guarantee security when an unbounded number of protocol instances run concurrently in an adversarially controlled manner; they guarantee non-malleability with respect to arbitrary protocols; and more. Such properties are crucial for arguing the security of cryptographic protocols in complex and unpredictable environments such as the Internet.
The talk will provide a high-level motivation and introduction to universally composable security, and will survey recent results. No prior knowledge in cryptography will be assumed.
About the Speaker:
Ran Canetti graduated from the Weizmann Institute of Science, Israel, in 1995. He has been a Research Fellow at MIT and is currently at the Cryptography group, IBM T.J. Watson Research Center. His research interests lie in cryptography and network security, with emphasis on cryptographic protocols and their security analysis. He is an editor of the Journal of Cryptology, and is chairing two standardization groups within the Internet Engineering Task Force.
We have long considered showing the existence of a linear-time algorithm for a problem to be the gold standard of achievement. Indeed, it is hard to imagine doing any better than that, since for any nontrivial problem, any algorithm must consider all of the input in order to make a decision. However, as extremely large data sets grow more prevalent in a wide variety of settings, it is natural to wonder what one can do in *sublinear* time. In fact, there has been a lot of recent interest in this direction.
This talk will contain a brief, non-exhaustive survey of the types of problems that can be solved in sublinear time. Since any sublinear-time algorithm can only view a small portion of the input, for most natural problems the answer must necessarily be approximate in some sense. We concentrate on such algorithms. We will see that there are classical optimization problems whose values can be approximated in sublinear time. In addition, property testing, an alternative notion of approximation for decision problems, has been applied to give sublinear algorithms for a wide variety of problems.
I will spend the rest of the talk describing joint work with Bernard Chazelle and Luca Trevisan, in which we consider sublinear-time algorithms for approximating the weight of a minimum spanning tree. We give a probabilistic algorithm for estimating the weight of a minimum spanning tree in time O(dw log(w)), where d is the average degree of the graph and w is a bound on the ratio of the maximum edge weight to the minimum edge weight. In particular, for constant-degree graphs in which the ratio w is constant, the running time is also constant. We also show that our running time is nearly tight.
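The approach builds on a classical identity: for a connected graph on n vertices with integer edge weights in {1, ..., w}, the MST weight equals n - w plus the sum, over i = 1, ..., w-1, of c_i, the number of connected components of the subgraph keeping only edges of weight at most i. The sublinear algorithm estimates each c_i by sampling; the sketch below merely checks the identity exactly (and not sublinearly), with names of my own choosing:

```python
def components(n, edges):
    """Count connected components of a graph on vertices 0..n-1 via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(x) for x in range(n)})

def mst_weight_by_components(n, weighted_edges, w):
    """MST weight = n - w + sum over thresholds i < w of the component count c_i."""
    total = n - w
    for i in range(1, w):
        total += components(n, [(u, v) for u, v, wt in weighted_edges if wt <= i])
    return total

# Triangle with weights 1, 1, 2: the MST uses the two weight-1 edges, weight 2.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 2)]
assert mst_weight_by_components(3, edges, 2) == 2
```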
About the Speaker:
Ronitt Rubinfeld is a senior research scientist at NECI in Princeton. She received her Ph.D. from U.C. Berkeley in 1990. After graduation, Ronitt spent a year as a DIMACS postdoctoral researcher at Princeton University, followed by a year at the Hebrew University in Jerusalem. In 1992, Ronitt joined the faculty of the Cornell Computer Science department. At Cornell, she held a tenured associate professorship and was an ONR Young Investigator and a Sloan Research Fellow. She received teaching awards from the Cornell School of Engineering and the undergraduate computer science society. During this time, she also held visiting positions at MIT and IBM Almaden. Ronitt came to NECI in 1999 and has been focusing her efforts on sublinear-time algorithms, property testing, and randomized algorithms.
If there is one lesson to be learned from a quarter-century of research in multidimensional searching, it is that subtraction does not help: whatever we can do with subtractions, we can do nearly as well with additions alone. It is tempting, maybe self-serving, to believe that this is not simply an accumulation of evidence but the actual truth, because this would essentially close the book on the subject. Indeed, nearly all non-monotone bounds are tight: a triumph of complexity theory!
In this talk, I will explain why the celebration might be a little premature. I will give new evidence of some natural instances of multidimensional searching where the monotone and nonmonotone complexities provably differ. How much of our current picture of multidimensional searching will eventually unravel is the main open question.
I will briefly review some of the standard lower bound techniques in the field, some of which are mathematically very beautiful. This talk will require no knowledge of computational geometry.
A new class of metrics appropriate for measuring effective similarity relations between sequences, say one type of similarity per metric, is studied. We propose a new ``normalized information distance'', based on the noncomputable notion of Kolmogorov complexity, and show that it minorizes every metric in the class (that is, it is universal in that it discovers all effective similarities). We demonstrate that it too is a metric and takes values in $[0,1]$; hence it may be called the {\em similarity metric}. This provides a theoretical foundation for a new general practical tool. We give two distinctive applications in widely divergent areas (the experiments by necessity use computable approximations to the target notions).
First, we computationally compare whole mitochondrial genomes and infer their evolutionary history. This yields the first completely automatically computed whole-mitochondrial-genome phylogeny tree. Second, we give a fully automatically computed language tree of 52 different languages, based on translated versions of the ``Universal Declaration of Human Rights''.
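In such experiments the noncomputable Kolmogorov complexities are replaced by the output length of a real compressor, giving a normalized compression distance NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). A sketch using zlib as the stand-in compressor (zlib is my illustrative choice; the paper's experiments used other compressors):

```python
import zlib

def C(data: bytes) -> int:
    """Approximate Kolmogorov complexity by compressed length (zlib, level 9)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable approximation
    of the normalized information distance."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b_ = b"colorless green ideas sleep furiously" * 20
# A string is far closer to itself than to an unrelated string.
assert ncd(a, a) < ncd(a, b_)
```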
This is joint work with Ming Li, Xin Chen, Xin Li, and Bin Ma, and will be presented at the 14th ACM-SIAM Symposium on Discrete Algorithms, Baltimore, 2003. It can be accessed at http://www.cwi.nl/~paulv/selection.html