Past Honours students, project students and coursework Masters students

  • Michela Castagnone (2017) wrote an Honours thesis on Algorithms for generating uniformly random graphs.
  • Dean Garden (2016 - 2017) wrote a coursework Masters thesis on Asymptotic enumeration of sparse graphs.
  • Barton Lee (2015) wrote an Honours thesis on The theory and application of graph limits.
  • Daniel Altman (2015) wrote an undergraduate project entitled The distribution of loose Hamilton cycles in random uniform regular hypergraphs.
  • Matthew Kwan (2014) wrote an Honours thesis on Stein's method. Matthew is now a PhD student at ETH Zürich, working with Benny Sudakov.
  • Arwa Al Munajam (2013) wrote a coursework Masters project report on Trees and their applications.
  • Matthew Kwan (2013) wrote an undergraduate project entitled On the number of spanning trees in random regular graphs.
  • David Wind (2012), an exchange student from Denmark, wrote an undergraduate project entitled On the distribution of spanning trees in random cubic graphs.
  • Corey Lewis (2011) wrote an Honours thesis on Analysis of the web graph.
  • Cris Koch (2006) wrote an Honours thesis on Term rewriting approach to polycyclic group collection.
  • Byron Ng (2005/2006) did his 4th year CSE project on Computer simulation for bounding the mixing time of Markov chains.
  • Le Anh Vinh (2005) did an Honours-level project on Ramsey theory.
  • Ka-shu Wong (2004) wrote an Honours thesis on The complexity of counting.
  • Michael Leeming (2004) did his 4th year CSE project on Analyzing a Markov chain for 7-colourings of 4-regular graphs.

Topics for Honours projects

Here is an outline of a couple of possible topics for Honours projects which I could supervise. In each case a fairly broad, survey-style project has been described. But this is just a jumping-off point - a student may find, over the course of the Honours year, that they have a particular interest in one part of the project and choose to focus on that.

(If you're worried that you don't have enough background in graph theory or in probability, then stop worrying. You can pick it up as you go along. That's the beauty of combinatorics, and for the most part the probability used is fairly basic.)

Of course if you have your own idea for a topic, feel free to run it by me.

You should also have a look at Honours topics in a different area of combinatorics, offered by Diana Combe. Diana is a member of the Department of Statistics whose research is in combinatorics and algebra.

Combinatorial Markov chains

A Markov chain is a simple stochastic process which is "memoryless", in that the next transition depends only on the current state, and not on the history of the chain. We consider discrete-time Markov chains on finite state spaces whose size is exponentially large (with respect to some parameter). If the Markov chain is ergodic then it converges to a unique stationary distribution. But how quickly does this convergence occur?

There are a few methods for bounding this convergence rate, either to show that the chain "mixes rapidly" and hence can be used for efficient sampling, or that the chain "mixes torpidly" and converges exponentially slowly to the stationary distribution. The aim of the project is to survey the topic, learning about some of the results which have been proved and the methods used to prove them.
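
For a small state space the convergence can be watched directly. Here is a minimal sketch (the chain, the parameter n = 4 and the use of NumPy are my own illustrative choices, not taken from the references): a lazy random walk on the hypercube {0,1}^n, tracking the total variation distance to the uniform stationary distribution at each step.

```python
import itertools
import numpy as np

# Lazy random walk on the hypercube {0,1}^n: with probability 1/2 stay put,
# otherwise flip a uniformly chosen coordinate.  The chain is ergodic and
# its stationary distribution is uniform over the 2^n states.
n = 4
states = list(itertools.product([0, 1], repeat=n))
N = len(states)
index = {s: i for i, s in enumerate(states)}

P = np.zeros((N, N))
for s in states:
    i = index[s]
    P[i, i] += 0.5                       # laziness: hold with probability 1/2
    for k in range(n):                   # flip coordinate k with prob 1/(2n)
        t = list(s)
        t[k] ^= 1
        P[i, index[tuple(t)]] += 0.5 / n

pi = np.full(N, 1.0 / N)                 # uniform stationary distribution
mu = np.zeros(N)
mu[0] = 1.0                              # start at the all-zeros vertex

for step in range(1, 21):
    mu = mu @ P                          # one step of the chain
    tv = 0.5 * np.abs(mu - pi).sum()     # total variation distance to pi
    print(f"step {step:2d}: TV distance = {tv:.6f}")
```

The printed distances decay geometrically, which is the "rapid mixing" behaviour the project would study; for exponentially large state spaces this direct computation is impossible, and the bounding techniques in the references take its place.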

A couple of references:

[1] Mark Jerrum, Counting, sampling and integrating: algorithms and complexity, Lectures in Mathematics - ETH Zürich, Birkhäuser, Basel, 2003.

[2] Dana Randall, Rapidly mixing Markov chains with applications in computer science and physics, Computing in Science & Engineering 8 (2006), 30 - 41.

Contiguity and the small subgraph conditioning method

Suppose that Y is a non-negative integer valued random variable (such as a random variable that counts some interesting substructures in a random graph). The first moment method hinges on Markov's inequality: since Y takes non-negative integer values, the probability that Y is positive is bounded above by the expected value of Y. If this expected value tends to zero (as some parameter in the problem tends to infinity) then Y is zero with high probability, that is, with probability tending to 1. This method can be used to prove that the interesting substructures do not occur, with high probability.

The second moment method uses the Paley-Zygmund inequality: if the second moment of Y is bounded above by some constant times the square of the first moment, then the probability that Y is positive is bounded away from zero. This method can be used to prove that with some positive probability, the interesting substructures do occur.
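
As a concrete illustration of a first moment computation (the model and parameters here are my own illustrative choices, not taken from the references): by linearity of expectation, the expected number of triangles in the binomial random graph G(n,p) is C(n,3) p^3, and a quick Monte Carlo sketch in Python agrees.

```python
import itertools
import math
import random

random.seed(0)

def triangle_count(n, p):
    """Sample G(n, p) by flipping a p-coin for each potential edge,
    then count the triangles in the resulting graph."""
    adj = [[False] * n for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        if random.random() < p:
            adj[u][v] = adj[v][u] = True
    return sum(adj[a][b] and adj[a][c] and adj[b][c]
               for a, b, c in itertools.combinations(range(n), 3))

n, p, trials = 25, 0.1, 1000
avg = sum(triangle_count(n, p) for _ in range(trials)) / trials
first_moment = math.comb(n, 3) * p**3    # E[Y] by linearity of expectation
print(f"simulated mean: {avg:.3f};  E[Y] = C({n},3) p^3 = {first_moment:.3f}")
```

In a first or second moment argument one would study how this expectation behaves as n tends to infinity for a given p = p(n), rather than simulate; the simulation just confirms the formula.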

In some cases, a more detailed analysis of variance can give the full asymptotic distribution of the variable Y. The method for doing this is known as the small subgraph conditioning method, and was introduced by Robinson and Wormald (1992, 1994) in order to prove that random d-regular graphs contain Hamilton cycles with high probability, for any constant d at least 3. The name refers to the important role that cycles play in the proof. Janson (1995) observed that the method also establishes a property from probability theory, called contiguity, which is a kind of "asymptotic qualitative equivalence" between two sequences of probability spaces on the same underlying sets.

The project would involve learning about the small subgraph conditioning method and contiguity, and the results which have been proved using this method.

A couple of references:

[1] Nicholas C. Wormald, Models of random regular graphs, in Surveys in Combinatorics 1999 (J. D. Lamb and D. A. Preece, eds.), London Mathematical Society Lecture Note Series vol. 267, Cambridge University Press, Cambridge, 1999, pp. 239 - 298. (See Section 4.)

[2] Svante Janson, Random regular graphs: asymptotic distributions and contiguity, Combinatorics, Probability and Computing 4 (1995), 369--405.

Random graphs and random graph processes

The first random graph model was introduced by Erdős in 1947. The idea is as follows: to produce a random graph on n vertices, flip a fair coin (one that comes up heads with probability exactly 1/2) independently for each of the n(n-1)/2 potential edges. If the coin comes up heads, then the edge is chosen for the graph, and otherwise the (potential) edge is rejected. A generalisation of this is the binomial model G(n,p). This time we use a coin which comes up heads with probability exactly p, for a given value of p between 0 and 1 (which may depend on n). This model has been extremely well studied.

Other models are for random graphs with a fixed number of edges, or random regular graphs where every vertex has the same number of neighbours. The most recent models to emerge are the so-called Web graphs, which aim to model the growth and structure of the World Wide Web. A related area is that of random graph processes, where a discrete-time stochastic process is used to "grow" a random graph, typically by adding an edge (or set of edges) at each step, according to some rule.
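
As a toy illustration of a random graph process (a hypothetical sketch, not drawn from the references; allowing repeated edges for simplicity): grow a graph on n vertices by repeatedly adding a uniformly random edge, and record how many draws are needed before the graph first becomes connected. Classical results of Erdős and Rényi say this hitting time is close to (n/2) log n.

```python
import math
import random

random.seed(1)

class DSU:
    """Union-find structure tracking the number of connected components."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.comps = n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            self.comps -= 1

def draws_until_connected(n):
    """Draw uniformly random vertex pairs (repeats allowed) until the
    graph they span is connected; return the number of draws."""
    dsu, m = DSU(n), 0
    while dsu.comps > 1:
        u, v = random.sample(range(n), 2)
        dsu.union(u, v)
        m += 1
    return m

n, runs = 200, 100
avg = sum(draws_until_connected(n) for _ in range(runs)) / runs
threshold = n / 2 * math.log(n)   # classical connectivity threshold (n/2) log n
print(f"average draws to connect: {avg:.1f};  (n/2) log n = {threshold:.1f}")
```

The union-find structure is only bookkeeping; the mathematical content is the sharp threshold, and proving such statements rigorously is what the surveyed methods are for.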

In all cases, we are interested in asymptotic behaviour, which gives an idea of how these graphs behave as the number of vertices n tends to infinity. The properties which are studied include the connectivity, independence number, chromatic number and diameter, to name a few. The methods used are a mixture of combinatorial and probabilistic. The aim of the project is to survey the topic, learning about some of the results which have been proved and the methods used to prove them.

A couple of references:

[1] Svante Janson, Tomasz Łuczak, Andrzej Ruciński, Random Graphs, Wiley, New York, 2000.

[2] Béla Bollobás, Random Graphs (2nd edn.), Cambridge University Press, Cambridge, 2001.

[3] Nicholas C. Wormald, Models of random regular graphs, in Surveys in Combinatorics 1999 (J. D. Lamb and D. A. Preece, eds.), London Mathematical Society Lecture Note Series vol. 267, Cambridge University Press, Cambridge, 1999, pp. 239 - 298.

The differential equations method for analysing combinatorial algorithms

The differential equations method is a way of analysing a discrete-time combinatorial process. For instance, consider the following graph process which is used to form graphs with maximum degree d. The process starts at time 0 with n vertices and no edges. At each time step a pair of non-neighbouring vertices which both have current degree less than d is chosen uniformly at random, and the edge joining them is added to the graph. When analysing this process it is very useful to know the number Y of vertices of degree d at each step. (So Y is a non-negative integer valued random variable.) By finding the equation for the expected value of the change of Y over one time step, and treating this as the derivative of a related continuous function, we can approximate Y by the solution of a differential equation. (Similarly, a system of random variables can be approximated by the solution to a system of differential equations.)

Wormald (1995, 1999) has presented theorems which give conditions under which this approximation is guaranteed to be close. (Often there is no need to actually solve the differential equations, or else it is sufficient to solve them numerically.) The method has been applied to algorithms for random graphs and random SAT instances.
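
The bounded-degree process described above can be sketched numerically (a rough illustration with my own choices of n, d and stopping time; the differential equations below are the heuristic ones obtained from the expected one-step changes, not a rigorous statement of Wormald's theorem). Writing x for (number of steps)/n and y_j for the fraction of vertices of degree j, each added edge moves two vertices up one degree, each endpoint having degree j with probability roughly y_j/s, where s = y_0 + ... + y_{d-1} is the fraction of eligible vertices.

```python
import random

random.seed(42)

def simulate(n, d, steps):
    """Run the max-degree-d process: repeatedly join a uniformly random
    pair of distinct, non-adjacent vertices that both have degree < d.
    Return the fraction of vertices of degree d after `steps` edges."""
    deg = [0] * n
    adj = [set() for _ in range(n)]
    added = 0
    while added < steps:
        u, v = random.randrange(n), random.randrange(n)
        if u == v or deg[u] >= d or deg[v] >= d or v in adj[u]:
            continue                      # rejection sampling over eligible pairs
        adj[u].add(v); adj[v].add(u)
        deg[u] += 1; deg[v] += 1
        added += 1
    return sum(1 for k in deg if k == d) / n

def ode_y(d, x_end, h=1e-4):
    """Euler-solve the heuristic ODE system for the degree fractions y_j,
    where x = steps/n and s = y_0 + ... + y_{d-1}."""
    y = [1.0] + [0.0] * d
    for _ in range(int(x_end / h)):
        s = sum(y[:d])
        dy = [0.0] * (d + 1)
        dy[0] = -2 * y[0] / s                  # degree-0 vertices only leave
        for j in range(1, d):
            dy[j] = 2 * (y[j - 1] - y[j]) / s  # flow in from below, out above
        dy[d] = 2 * y[d - 1] / s               # degree-d vertices accumulate
        y = [yj + h * dyj for yj, dyj in zip(y, dy)]
    return y

n, d, x0 = 2000, 3, 0.75
sim = sum(simulate(n, d, int(x0 * n)) for _ in range(3)) / 3
ode = ode_y(d, x0)[d]
print(f"simulated fraction of degree-{d} vertices: {sim:.4f};  ODE: {ode:.4f}")
```

The two numbers agree up to fluctuations of order n^(-1/2), which is exactly the kind of concentration that Wormald's theorems make precise; note that the equations here are solved numerically, with no closed form needed.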

The aim of the project is to survey this area, learning about the method and how it has been applied. The history of the differential equations method (which goes back to Kurtz in 1970) could also be looked into. The focus is not on calculus, but on the interplay between discrete and continuous mathematics.

A couple of references:

[1] Nicholas C. Wormald, The differential equation method for random graph processes and greedy algorithms, in Lectures on Approximation and Randomization Algorithms (M. Karonski and H.-J. Prömel, eds.), PWN, Warsaw, 1999.

[2] Dimitris Achlioptas, Lower bounds for random 3-SAT via differential equations, Theoretical Computer Science 265 (2001), 159 - 185.