COMBINATORIAL ALGORITHMS GROUP
If you would like to give a talk in our seminar series, please contact Wendy Myrvold (wendym@cs.uvic.ca).
Date | Place | Time | Speaker | Abbreviated Title |
---|---|---|---|---|
Jan. 17 | EOW 430 | 3:30 | Stefan Pantazi | The Simple Perceptrons: have we seen the last of it? |
Jan. 24 | EOW 430 | 3:30 | Thomas Britz | Extensions of the Critical Theorem |
Jan. 31 | EOW 430 | 3:30 | Romeo Rizzi | Permutation routing in POPS networks |
Feb. 21 | EOW 430 | 3:30 | Wendy Myrvold | Hunting for Torus Obstructions |
Mar. 13 | EOW 430 | 1:30 | Mohammad Salavatipour | Maximum Capacity Broadcast |
Mar. 14 | EOW 430 | 3:30 | Michael Molloy | Random constraint satisfaction problems |
Mar. 17 | Cornett A 121 | 1:30 | Peter Winkler | Optimality and Greed in Dynamic Allocation |
Apr. 4 | EOW 430 | 3:30 | Valerie King | Phylogenetic Tree Reconstruction and a Colouring Problem |
Apr. 11 | EOW 230 | 3:30 | Timothy Walsh | Generating Gray codes in O(1) worst-case time per word |
The simple perceptron was one of the earliest models of feed-forward
artificial neural networks. Together with its elegant learning algorithm,
the simple perceptron allows one to classify patterns into linearly
separable classes. More complex models that evolved from it (e.g. the
multi-layer perceptron) are considered able to approximate virtually any
input-output relation. However, the learning and generalization
capacities of artificial neural networks are still affected by local minima
and by the quality of the training examples, which, in real applications, are
noisy. Thus, improved learning capacity, robustness to noise and
increased generalization performance are features which should characterize
a good classifier. The newly introduced support vector machines, by
combining kernel methods with the simple perceptron model, seem to attain
many of the goals of a good classifier. With the help of simulation examples
and considering some relatively recent results in the field of machine
learning, I will try to argue that the days of the simple perceptron are not
over yet.
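The learning rule mentioned above is simple enough to sketch in a few lines. The following Python sketch of the classic perceptron update is my illustration (all names are mine, not material from the talk):

```python
# Classic perceptron learning rule: on a misclassified sample (x, y) with
# y in {-1, +1}, nudge the weights by lr * y * x and the bias by lr * y.
def train_perceptron(samples, epochs=100, lr=1.0):
    """samples: list of (x, y) pairs, x a tuple of features, y in {-1, +1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:        # misclassified (or on the boundary)
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:                    # converged: the data is separated
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

On linearly separable data the perceptron convergence theorem guarantees the loop terminates; on non-separable (e.g. noisy) data it does not, which is one of the limitations the abstract alludes to.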
Historical reviews of matroid theory often begin with Hassler
Whitney's 1935 article. In it, Whitney defined and derived most of
the theory contained in the introductory chapters of any
exposition on matroids, in an attempt to obtain an axiomatic
description of linear dependence between vectors. As Whitney
pointed out himself, this attempt was not fully successful, for
matroids exist that do not correspond to any set of linear
dependencies between vectors. However, this suggested several
natural programmes of research, including that of determining how
much a matroid can say about the vectors that it represents.
A breakthrough in this programme was the Critical Theorem,
discovered in 1970 by Henry Crapo and Gian-Carlo Rota, which has
been celebrated for quite a different reason. One of the important
applications of this theorem was conducted by Curtis Greene in
1976. He demonstrated how the weight enumerator of a linear code
was obtained by evaluating in a certain way the Tutte polynomial
of the associated matroid. The MacWilliams identities for linear
codes are among the immediate consequences.
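For reference, Greene's 1976 result can be stated as follows for an [n, k] linear code C over GF(q) with associated matroid M_C (this standard formulation is my addition, not part of the abstract):

```latex
W_C(x, y) \;=\; \sum_{c \in C} x^{\,n-\mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)}
         \;=\; y^{\,n-k}\,(x-y)^{k}\;
         T\!\left(M_C;\; \frac{x+(q-1)\,y}{x-y},\; \frac{x}{y}\right)
```

Here wt(c) is the Hamming weight of the codeword c and T is the Tutte polynomial; applying the identity to a code and its dual yields the MacWilliams identities.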
In this talk, I will describe how the Critical Theorem may be
extended and generalised in a large number of ways. Each of these
ways also leads to generalisations of Greene's result, and these
may be combined to give an even larger number of generalisations
of the MacWilliams identities.
No prior matroid knowledge is necessary, since all objects will be
introduced, partly through a historical introduction.
University of Trento, Italy
Dipartimento di Informatica e Telecomunicazioni
We show that a Partitioned Optical Passive Stars (POPS) network
with g groups and d processors per group can route
any permutation among the n = d * g processors in one slot when d = 1,
and in 2 * ceiling(d/g) slots when d > 1.
This is joint work with Alessandro Mei.
Keywords: Optical interconnections, passive stars, permutation routing.
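As a quick arithmetic companion to the bound above, the slot count can be computed directly (the function name and packaging are mine, for illustration only):

```python
from math import ceil

def pops_routing_slots(d, g):
    """Slots needed to route any permutation on a POPS network with
    g groups and d processors per group, per the bound in the abstract."""
    if d < 1 or g < 1:
        raise ValueError("d and g must be positive")
    return 1 if d == 1 else 2 * ceil(d / g)
```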
A graph is embeddable on a surface if it can be drawn
on the surface with no crossing edges. This talk will
begin by introducing the basic theory and algorithmic
issues for embedding graphs on surfaces.
A torus is a surface shaped like a doughnut.
A topological obstruction for the torus is a graph
G with minimum degree three that is not embeddable on
the torus but for all edges e, G-e embeds on the torus.
A minor order obstruction has the additional property
that for all edges e, G/e (G with edge e contracted) embeds on the torus.
The aim of our research is to find all the obstructions to the torus
(both minor order and topological). To date, we have found
239,322 topological obstructions and 16,629 minor order obstructions.
Previously, only a few thousand had been determined. This
talk discusses the techniques used to find these obstructions.
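The obstruction definitions above can be expressed as a simple check against an embeddability tester. The sketch below assumes a hypothetical oracle `embeds_on_torus`; a real torus-embedding test is a substantial algorithm in its own right and is not implemented here:

```python
def min_degree(vertices, edges):
    """Minimum degree of a simple undirected graph given as an edge list."""
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return min(deg.values())

def is_topological_obstruction(vertices, edges, embeds_on_torus):
    """embeds_on_torus(edge_list) -> bool is an assumed (hypothetical) oracle."""
    if min_degree(vertices, edges) < 3:
        return False                     # must have minimum degree three
    if embeds_on_torus(edges):
        return False                     # must itself be non-embeddable
    # every single-edge deletion G - e must embed on the torus
    return all(embeds_on_torus([f for f in edges if f != e]) for e in edges)
```

The minor order check is the same with an extra loop testing that each contraction G/e also embeds.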
The contents of the talk will include joint work with
many of my students including Eugene Neufeld, John Chambers,
Matthew Skala, Hongmei Wang, Jianping Roth, John Boyer,
Angus Argyle, and Stephen Barker.
Keywords: topological graph theory, embedding graphs on the torus,
algorithms for graph embedding
Consider the following scenario: in a network N having n nodes
there is a broadcaster B which wants to send some big data streams to k
other nodes, the receivers (k <= n-1). Each of the other nodes in the
network is a router and passes the streams it receives on to adjacent nodes.
Therefore, each data stream will traverse a tree which contains all the
receivers and starts at B. Due to the huge sizes of the streams, no two of
them can traverse a network link at the same time. Therefore, for each data
stream we have to find a different tree, edge-disjoint from the others.
The goal is to send the maximum number of different data streams through
the network. These trees are known as Steiner trees, and the problem of
packing Steiner trees generalizes two fundamental theorems in graph theory:
Menger's theorem (when there is a single receiver) and the
Nash-Williams/Tutte theorem on packing spanning trees (when every node is a
receiver). This talk presents the first linear approximation algorithm for
this problem.
A constraint satisfaction problem is a generalization of a satisfiability
problem. We are given a set of variables, and a domain of values that
the variables can take. Some subsets of the variables have ``constraints''
which restrict the value-assignments that can be made to those variables.
A problem is satisfiable if there is an assignment of values to the
variables which does not violate any of the constraints.
For example, one special case is k-SAT. Here the domain has only two
values, {True, False}, and the constraints come in the form of
clauses, for example (x1 or not(x4) or x7) which says that
(x1, x4, x7) cannot be assigned the triple (F, T, F). Another
special case is k-colourability of a graph. The domain is the set of
k colours, the variables are the vertices of the graph, and each
edge forms a constraint which says that the two vertices it is joining
cannot be assigned a pair of identical values.
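Both special cases fit one small data model: each constraint is a scope (a tuple of variables) together with the set of value-tuples that scope may not take. A minimal Python illustration (the names and encoding are mine):

```python
def satisfies(assignment, constraints):
    """True iff no constraint's forbidden tuple is realized by the assignment.
    constraints: list of (scope, forbidden); scope is a tuple of variable
    names, forbidden a set of value-tuples the scope may not take."""
    return all(tuple(assignment[v] for v in scope) not in forbidden
               for scope, forbidden in constraints)

# The clause (x1 or not(x4) or x7) forbids exactly (F, T, F) on (x1, x4, x7):
sat_constraints = [(("x1", "x4", "x7"), {(False, True, False)})]

# An edge {u, v} in k-colouring forbids every identical pair of colours:
def edge_constraint(u, v, colours):
    return ((u, v), {(c, c) for c in colours})
```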
In this talk, we discuss models of random constraint satisfaction problems.
The main issue is the following: if we form a random problem by adding
random constraints one-at-a-time, how does the probability of the
problem being satisfiable change? It is clear that this probability
is monotonically decreasing as the number of constraints increases,
but how suddenly
does it decrease? How many constraints must be added before the
decrease is significant? (In formal terms, these questions can
be rephrased as "does the problem have a sharp threshold of
satisfiability?" and "how many constraints must be added before it is
no longer almost surely satisfiable?") These questions generalize
the well-studied notions of the threshold of satisfiability for
a random instance of k-SAT and the threshold of k-colourability
for a random graph.
In dynamic storage allocation objects arrive according to some sort of
random process, are assigned space in a facility, and terminate randomly,
freeing space for further use. Constraints on space may sometimes force an
arrival to be rejected, costing money.
Often in such a problem intuition tells us where in the facility to place an
arrival, but it is notoriously difficult to prove that an on-line algorithm
is best possible unless the process is completely specified.
We will describe a new technique which can sometimes prove optimality
without a specified arrival distribution, provided one assumes that it's
wrong to reject an arrival when there is space for it. Problems to which
this technique can be applied have already arisen twice at Lucent
Technologies; in each case we can show that if it's right to be greedy, then
it's right to be space-efficient.
Dr. Peter Winkler is the Director of Fundamental Mathematics Research,
Mathematical Sciences Research Centre, Bell Labs (Lucent Technologies),
Murray Hill, New Jersey. His research interests are in discrete mathematics
and the theory of computing; especially combinatorics, probability theory
and applications. Dr. Winkler received his Ph.D. in Mathematics from Yale
University in 1975. He is a member of the American Math Society, the
Association for Computing Machinery, the Computing Research Association and
the IEEE Computer Society.
A phylogenetic tree is an unrooted tree with weighted edges
whose leaves are labeled by distinct species. The distance between a pair
of species is the sum of the weights on the path between the species.
The phylogenetic tree reconstruction problem is to reconstruct this tree
given only an oracle which outputs the distance between a given pair of
species. We start the talk by proving a tight lower bound on the number of
distance queries needed to reconstruct evolutionary trees of degree k.
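The distance notion used here is simply path weight in an edge-weighted tree. A small Python illustration (the representation and names are mine, not from the talk):

```python
from collections import deque

def tree_distance(adj, a, b):
    """Path weight between nodes a and b in an edge-weighted tree.
    adj: {node: [(neighbour, weight), ...]} with symmetric entries."""
    dist = {a: 0}
    queue = deque([a])
    while queue:                       # BFS; in a tree the path is unique
        u = queue.popleft()
        if u == b:
            return dist[u]
        for v, w in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + w
                queue.append(v)
    raise ValueError("nodes are not connected")
```

A distance oracle in the reconstruction problem answers exactly these queries for pairs of leaves, without revealing the tree itself.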
Next, we suppose an oracle with some bounded error: for a parameter p, any
distance the oracle reports that exceeds p is considered unreliable and
useless. This model
more closely captures assumptions about distances between species as
determined by DNA comparisons, based on common models of evolution.
In particular, the longer the DNA sequence, the larger the parameter p.
We prove lower bounds, and
give the first o(n^2) algorithm to reconstruct trees.
(This is joint work with Li Zhang and Yunhong Zhou, and appeared in SODA 2003.)
We give a definition of Gray code that, unlike the standard
"minimal-change" definition, is satisfied by the word-lists in the
literature referred to by their creators as Gray codes, and we give
examples to illustrate the various concepts of minimality. We show that
a non-recursive generating algorithm can be obtained for any word-list
in which all the words with the same prefix (or, equivalently, suffix)
are consecutive, and that the Bitner-Ehrlich-Reingold method of
generating each word in time bounded by a constant works under the
additional condition that, in each interval of words with the same prefix
or suffix, the next letter assumes at least two distinct values. Finally,
we generalize this method so that it works under a weaker condition
satisfied by almost all the Gray codes in the literature: if the next
letter assumes only one value, then the interval contains only one word.
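As a concrete point of reference, the binary reflected Gray code is the best-known example of such a word-list. This classic construction is my illustration, not the generalized loopless method of the talk:

```python
def reflected_gray(n):
    """All 2**n binary words of the reflected Gray code, in order:
    successive words differ in exactly one bit."""
    words = [[]]
    for _ in range(n):
        # prefix the list with 0, its reversal with 1, and concatenate
        words = [[0] + w for w in words] + [[1] + w for w in reversed(words)]
    return words
```

Note that all words sharing a common prefix are consecutive in this list, which is exactly the structural condition the abstract exploits.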
Stefan Pantazi
Jan. 17, EOW 430, 3:30
Thomas Britz
Jan. 24, EOW 430, 3:30
Romeo Rizzi
Jan. 31, EOW 430, 3:30
Wendy Myrvold
Feb. 21, EOW 430, 3:30
Mohammad Salavatipour
Department of Computer Science
University of Toronto
Thursday March 13, EOW 430, 1:30
Mike Molloy
Microsoft Research
March 14, EOW 430, 3:30
Peter Winkler
Bell Labs
Monday March 17, Cornett A 121, 1:30
Valerie King
Friday April 4, EOW 430, 3:30
Timothy Walsh
Department of Computer Science, UQAM
Friday April 11, EOW 230, 3:30
CAG Schedule of Talks: Spring 2003
Maintained by Wendy Myrvold (wendym@cs.UVic.ca); revised April 30, 2003.