COMBINATORIAL ALGORITHMS GROUP

If you would like to give a talk in our seminar series, please contact Wendy Myrvold (wendym@cs.uvic.ca).
Date  Place  Time  Speaker  Abbreviated Title 

May 16  EOW 430  3:30  Hugues Mercier  Randomized Communication Complexity Of Correlated Data 
May 30  EOW 430  3:00  Stephanie van Willigenburg  Enumerative properties of Ferrers graphs 
June 6  EOW 430  3:30  Gord Brown  Estimating transcriptome size by sampling expressed sequence tags 
June 13  EOW 430  3:30  Hamish Carr  Contour Trees and Flexible Isosurfaces 
June 20  EOW 430  1:30  Dr. Krys Cios  Bioinformatics and Proteomics 
July 11  EOW 430  3:30  Ulrike Stege  Algorithms for Distance Domination Problems in Graphs 
July 18  DSB C 122  1:30  Richard Cleve  Quantum Algorithms and Communication Complexity 
July 24  EOW 430  1:30  Sudhakar Ganti  Virtual Private Networks: Progress and Issues 
A Virtual Private Network (VPN) is an emulation of a private network facility using a public network infrastructure. VPNs have come a long way: from using expensive circuit-switched networks for connectivity, to virtual-circuit ATM networks, and more recently towards packet-based virtual routers for IP networks. Multiprotocol Label Switching (MPLS) based networks, like ATM networks, bring true multiservice capability to packet-based networks and provide virtually any transport capability.
This talk's focus is on the latest developments in MPLS-based VPNs, and more specifically on Virtual Private LAN Service (VPLS). VPLS connects geographically dispersed locations into a single bridged domain using MPLS Label Switched Paths (LSPs). For these networks, we will look at the service models, some of the mechanisms developed to provide end-to-end Quality of Service (QoS), path setup and resiliency, and constraint-based routing, as well as highlight some of the network and equipment design issues.
The subject of VPNs spans Computer Science, Communications, and Systems Engineering. Finding an optimal solution for constraint-based routing is a computationally complex problem of interest to Computer Science. New trends in equipment design use multicore engines called Network Processors (NPs) for packet processing. Packet- and processor-scheduling algorithms that achieve wire-rate performance with multi-field classification are another area of interest to Computer Science. Resource management for QoS, creation and management of VPNs, membership discovery, and protocol development are challenges for the communication and systems engineering areas.
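Constraint-based routing of the kind mentioned above is often simplified in practice by pruning links that cannot satisfy a bandwidth demand and then running a shortest-path computation on what remains (the idea behind CSPF). The sketch below illustrates that simplification; the four-node topology, link costs, and bandwidths are invented for illustration.

```python
import heapq

def constrained_shortest_path(links, src, dst, demand):
    """Least-cost path using only links whose available bandwidth
    meets the demand (a common CSPF-style simplification)."""
    # Keep only links that satisfy the bandwidth constraint.
    adj = {}
    for (u, v, cost, bw) in links:
        if bw >= demand:
            adj.setdefault(u, []).append((v, cost))
            adj.setdefault(v, []).append((u, cost))
    # Standard Dijkstra on the pruned topology.
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None               # no feasible path for this demand
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical topology: (node, node, cost, available bandwidth, Mb/s).
links = [("A", "B", 1, 100), ("B", "D", 1, 40),
         ("A", "C", 2, 100), ("C", "D", 2, 100)]
print(constrained_shortest_path(links, "A", "D", demand=50))
```

With a 50 Mb/s demand the cheaper path through B is pruned (its second hop offers only 40 Mb/s), so the computation returns the feasible path A-C-D at cost 4.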
We consider the problem where two distant parties each possess a string of bits, the goal being for one party to learn the other's input while minimizing the communication. Unlike the original communication complexity model, the function to compute is trivial [f(x,y) = x]; the difficulty arises because the inputs are correlated. This model is inherent to several practical applications, including synchronisation of mobile data, reconciliation of sequences of symbols such as nucleotide sequences in DNA molecules, distributed computations, remote file storage, and quantum key distribution.
In this talk, we will focus on the randomized version of the model, which has never been systematically studied before. We show that the deterministic and randomized models are equivalent for the applications previously mentioned. Furthermore, we state the conjecture that the models are equivalent for all problems and show that it allows us to solve the direct sum problem. If there is enough time, we will present a few open problems related to error-correcting codes that correct insertions and deletions of bits.
This is joint work with Pierre McKenzie and Stefan Wolf, DIRO, Université de Montréal.
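A classic way to reconcile correlated strings with little communication (offered as an illustration of the model above, not necessarily the speakers' construction) is syndrome coding: if Alice's x and Bob's y differ in at most one position, Alice can transmit just the 3-bit syndrome of her 7-bit string under the Hamming(7,4) code, and Bob recovers x exactly.

```python
# Parity-check matrix of the Hamming(7,4) code: column i holds the
# binary expansion of i+1, so a single-bit error's syndrome equals
# its (1-based) position.
H = [[(i + 1) >> b & 1 for i in range(7)] for b in range(3)]

def syndrome(x):
    """3-bit syndrome H.x over GF(2) for a 7-bit word x."""
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

def reconcile(y, syn_x):
    """Bob recovers Alice's x from his own y (differing from x in at
    most one position) plus the 3-bit syndrome Alice transmitted."""
    diff = [a ^ b for a, b in zip(syndrome(y), syn_x)]  # H.(x xor y)
    pos = sum(bit << i for i, bit in enumerate(diff))
    x = list(y)
    if pos:                       # nonzero syndrome: flip that bit
        x[pos - 1] ^= 1
    return x

x = [1, 0, 1, 1, 0, 0, 1]         # Alice's string
y = [1, 0, 1, 0, 0, 0, 1]         # Bob's string, one bit off
assert reconcile(y, syndrome(x)) == x   # 3 bits sent instead of 7
```

The same idea scales: a code correcting k substitutions reconciles strings within Hamming distance k at the cost of one syndrome, which is the link to the error-correcting codes mentioned in the abstract.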
The cardinality of a set of certain directed bipartite graphs is equal to that of a set of permutations that satisfy certain criteria. In this talk we describe both the set of graphs and the set of permutations, what their cardinality has to do with the chromatic polynomial, and whether we can find a natural bijection between the sets. This is joint work with Richard Ehrenborg.
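For readers unfamiliar with the chromatic polynomial mentioned above: P(G, k) counts the proper k-colorings of a graph G and satisfies the deletion-contraction recurrence P(G, k) = P(G - e, k) - P(G / e, k). A minimal sketch (the example graph is ours, not from the talk):

```python
def chromatic(vertices, edges, k):
    """Count proper k-colorings of a simple graph by
    deletion-contraction: P(G, k) = P(G - e, k) - P(G / e, k)."""
    edges = {frozenset(e) for e in edges}
    if not edges:
        return k ** len(vertices)      # no edges: color freely
    e = next(iter(edges))
    u, v = tuple(e)
    # Deletion: same graph without edge e.
    deleted = chromatic(vertices, edges - {e}, k)
    # Contraction: merge v into u, dropping any loops created.
    merged = {frozenset(u if w == v else w for w in f)
              for f in edges - {e}}
    merged = {f for f in merged if len(f) == 2}
    contracted = chromatic(vertices - {v}, merged, k)
    return deleted - contracted

# Triangle K3: P(K3, k) = k(k-1)(k-2), so 6 proper 3-colorings.
print(chromatic({0, 1, 2}, [(0, 1), (1, 2), (0, 2)], 3))
```

The recursion is exponential in general (computing the chromatic polynomial is #P-hard), which is part of what makes combinatorial identities about it interesting.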
The transcriptome of a tissue is the set of transcripts (messenger RNA sequences) that are produced in that tissue. The size of the transcriptome (the number of distinct transcripts) is a good indicator of the complexity of the tissue. We can estimate the size of the transcriptome by sampling expressed sequence tags (ESTs, which are fragments of mRNA sequences) and observing the frequency with which multiple copies of the same transcript are found. This talk will describe this estimation approach.
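One standard nonparametric way to estimate the number of unseen classes from exactly this kind of frequency data (offered here as background, not necessarily the method of the talk) is the Chao1 estimator, S_obs + f1^2 / (2 f2), where f_k is the number of transcripts observed exactly k times in the sample:

```python
from collections import Counter

def chao1(sample):
    """Chao1 lower-bound estimate of the number of distinct classes
    (here, transcripts) underlying a sample of tags."""
    counts = Counter(sample)            # copies seen per transcript
    s_obs = len(counts)                 # distinct transcripts observed
    freq = Counter(counts.values())     # f_k = #transcripts seen k times
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        # Bias-corrected form, used when no doubletons are observed.
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 * f1 / (2 * f2)

# Hypothetical EST sample: 4 distinct transcripts, two singletons,
# one doubleton -> estimate 4 + 2*2/(2*1) = 6 transcripts in total.
ests = ["t1", "t1", "t1", "t2", "t2", "t3", "t4"]
print(chao1(ests))
```

Intuitively, many singletons relative to doubletons means the sample has barely scratched the surface, so the estimate climbs well above the observed count.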
Many scientific fields generate large sampled or computational data sets representing the value of some function f over a region of space. One way of studying these functions is to compute a structure called the contour tree, which represents topological structure in the function f. We will describe how to compute the contour tree and several uses for it, including finding seeds for isosurface generation, abstract representation of the data set, and as a proxy for interactive manipulation of isosurfaces and individual contours.
This is joint work with Jack Snoeyink, UNCCH.
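The contour tree is typically assembled from a join tree and a split tree, each computable with a union-find sweep over the vertices in sorted order. The sketch below shows the join-tree sweep for a 1D sampled function (the data are invented): each local maximum starts a superlevel-set component, and components merge at saddles.

```python
def join_tree_events(values):
    """Sweep a 1D sampled function from high to low values, tracking
    superlevel-set components with union-find; report a tuple
    (saddle, peak_a, peak_b) each time two components merge --
    these merge events are the interior nodes of the join tree."""
    n = len(values)
    parent = list(range(n))
    peak = list(range(n))          # highest vertex in each component

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    active = [False] * n
    events = []
    for i in sorted(range(n), key=lambda v: -values[v]):
        active[i] = True
        for j in (i - 1, i + 1):   # 1D neighborhood
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if peak[ri] != i:  # i already belongs to a component:
                    events.append((i, peak[ri], peak[rj]))  # saddle
                parent[ri] = rj
                peak[rj] = max(peak[rj], peak[ri],
                               key=lambda v: values[v])
    return events

# Two local maxima (indices 1 and 4) merge at the saddle, index 2.
f = [0, 5, 1, 2, 4, 0]
print(join_tree_events(f))        # -> [(2, 1, 4)]
```

Running the same sweep on the negated function gives the split tree; the contour tree combines the two, which is the construction the talk builds on for isosurface seeding.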
In this overview talk, we first define the field of bioinformatics, describe some of our algorithms, and present data mining results on cystic fibrosis data. Then we will talk about networks of spiking neurons, plasticity rules, and modeling of the somatosensory cortex of primates, as an introduction to showing how such networks can be used for solving problems other than modeling. Next, we will define computational proteomics, show our results for identification of contaminants, and present initial work on finding quantification peptides. The talk will end with lessons learned from working with biological and medical data.
We consider the problem of distance domination in graphs. This problem arises in a variety of applied settings. In such settings, three general goals can often be formulated: (1) dominate a maximum number of vertices, (2) dominate vertices over a minimum distance, and (3) use a minimum number of vertices to dominate. Since it is not possible to satisfy all three requirements simultaneously, the applied researcher has to decide how to trade off one goal against another. One possibility is to compute the distance-d domination number of a graph. The decision problem associated with this graph property is called Distance-d Dominating Set, which is known to be NP-complete. The special case for d = 1, also known as Dominating Set, is complete for W[2] when parameterized by the size of the dominating set to be determined. Further, it is known that the distance-d domination number is at least as large as the distance-(d+1) domination number of a given graph. We introduce and investigate different parameterizations of Dominating Set and present the first nontrivial fixed-parameter-tractable algorithm solving Dominating Set. We further present a refinement of this hierarchy by introducing two distance domination problems. We provide evidence that the natural parameterizations of Distance-d Dominating Set and its variants described here are most likely not fixed-parameter tractable. In contrast, we introduce profit parameterizations and discuss how fpt-algorithms for these versions can facilitate distance domination in practice.
This is joint work with Iris van Rooij.
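To make the problem above concrete: a distance-d dominating set is a set S of vertices such that every vertex of the graph lies within distance d of some member of S. The exhaustive sketch below (exponential time, so only for tiny graphs; the example path graph is ours, not from the talk) finds a smallest such set using BFS balls:

```python
from collections import deque
from itertools import combinations

def ball(adj, v, d):
    """All vertices within distance d of v, by breadth-first search."""
    seen = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        if seen[u] == d:
            continue
        for w in adj[u]:
            if w not in seen:
                seen[w] = seen[u] + 1
                q.append(w)
    return set(seen)

def min_distance_d_dominating_set(adj, d):
    """Smallest S such that every vertex lies within distance d of
    some vertex of S (brute force over all candidate sets)."""
    vertices = list(adj)
    balls = {v: ball(adj, v, d) for v in vertices}
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if set().union(*(balls[v] for v in S)) == set(vertices):
                return set(S)

# Path on 5 vertices: with d = 2 the center vertex alone suffices.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
print(min_distance_d_dominating_set(path, 2))   # -> {2}
```

The brute force makes the trade-offs in the abstract visible: raising d shrinks the optimal set (here d = 1 needs two vertices, d = 2 only one), which is exactly the monotonicity relating the distance-d and distance-(d+1) domination numbers.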
In this talk, I will first give an overview of quantum information theory and quantum algorithms, including some of my recent work that expands our repertoire of quantum algorithms. I will then explain the communication complexity scenario and show how it is affected when quantum information is available. Although it is known that n quantum bits (qubits) cannot convey more classical information than n bits, there are nevertheless information processing tasks for which using qubits instead of bits can yield substantial communication reductions. My work (and that of others) in this area highlights an interesting interplay between algorithms, communication complexity, and what physicists call “nonlocality” in the setting of quantum information.
Richard Cleve obtained his Ph.D. from the University of Toronto in 1989, working in cryptography and computational complexity. He was a Postdoctoral Fellow at Berkeley’s International Computer Science Institute during 1988–90 and has been a faculty member at the University of Calgary since 1990, where he was promoted to Professor in 2000 and awarded a University Professorship in 2002. In recent years, his work has been focused on quantum algorithms and quantum information theory. He played a major role in the development of quantum communication complexity, and, most recently, he helped develop a new class of algorithms based on the behavior of quantum walks (which are quantum analogs of classical random walks).
In 2001 he held a Killam Resident Fellowship, and in 2002 he was appointed as a Fellow of the CIAR Program on Quantum Information Processing. He is also a Founding Managing Editor of the journal Quantum Information & Computation.