Louis group research interests: Soft and Biological Systems
Research in our group is interdisciplinary, on the border between
theoretical physics and chemistry, applied mathematics and biology. We
study how complex behaviour emerges from the interactions
between many individual objects.
We primarily use the tools of
statistical mechanics -- especially analytic theories and computer
simulations -- to better understand the behaviour of these fascinating
systems. "Coarse-graining", where a subset of the (microscopic)
degrees of freedom are integrated out to yield a simpler and more
tractable problem, is a common theme in these descriptions.
Here at Oxford, we
work closely with members of the
Theory of Soft and Biological Matter group,
with whom we share many research interests.
Research Projects in the Louis group
Here are some research projects that give a flavour of the work in our group. In practice, these projects always
evolve, in part because research advances rapidly, and in part because
we adapt the projects to the particular strengths and interests of the
DPhil candidates or postdocs.
Fundamental properties of deep neural networks for deep learning
The current revolution in artificial intelligence (AI) is driven by the amazing computational capacities of
neural networks, which are simplified models of how our brains work. When these are linked together into multi-layer architectures,
the resulting machine learning approach is called deep learning.
In spite of all this practical success, there is little fundamental understanding of why deep learning works so well.
For example, deep neural networks (DNNs) perform best in the overparameterised regime, with many more parameters than data points.
Physicists are taught from an early age never to use too many parameters.
In Freeman Dyson's charming account of his meeting with Enrico Fermi,
Fermi
quotes John von Neumann's aphorism: "with four parameters I can fit an elephant, and with five I can make him wiggle his trunk".
So given that DNNs have so many millions (or billions) of parameters, why do they work so well?
We have recently used concepts derived from algorithmic information theory (AIT) to argue that
Deep learning generalizes because
the parameter-function map is biased towards simple functions.
In other words, because the functions that deep learning needs to express in practice are typically relatively simple, this intrinsic exponential bias towards simplicity means that it naturally avoids overfitting and can make good predictions (generalise well) on new data that it hasn't seen yet. Without this strong intrinsic bias towards simplicity, deep neural networks would not really work. We are quite excited about this result. There are many further implications of this work that we are currently exploring.
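To give a concrete (if crude) picture of what sampling the parameter-function map means, here is a minimal sketch; the network size, sample count and zlib-based complexity proxy are illustrative choices, not those used in our papers:

```python
import zlib
from collections import Counter
from itertools import product

import numpy as np

# All 2^7 = 128 possible 7-bit Boolean inputs.
n_inputs = 7
X = np.array(list(product([0.0, 1.0], repeat=n_inputs)))
n_samples = 50_000

def random_function(rng, width=30):
    """Sample random weights for a small two-hidden-layer network and
    return the Boolean function it computes as a 128-character bit string."""
    w1 = rng.normal(size=(n_inputs, width))
    w2 = rng.normal(size=(width, width))
    w3 = rng.normal(size=(width, 1))
    h = np.tanh(X @ w1)
    h = np.tanh(h @ w2)
    out = (h @ w3 > 0).astype(int).ravel()
    return "".join(map(str, out))

def complexity(bits):
    """Crude complexity proxy: length of the zlib-compressed bit string."""
    return len(zlib.compress(bits.encode()))

rng = np.random.default_rng(0)
counts = Counter(random_function(rng) for _ in range(n_samples))

# Functions that recur many times under random sampling tend to have
# low complexity -- the signature of a simplicity-biased map.
for f, c in counts.most_common(10):
    print(f"P(f) ~ {c / n_samples:.5f}   complexity ~ {complexity(f)}")
```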
Of course DNNs are normally trained by stochastic gradient descent (SGD). However, we have shown that
SGD acts almost as a Bayesian sampler, which means that it outputs functions with probabilities close to those predicted from random sampling
of the parameter-function map. Thus bias in the map explains bias in SGD-trained DNNs, which in turn explains why DNNs generalise well, even though they
are overparameterised and highly expressive.
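Schematically (in the zero-training-error setting considered in that work), the comparison is with the Bayesian posterior obtained by conditioning random parameter sampling on fitting the training data: for a function $f$ consistent with the training set $S$,

P_B(f \mid S) \;=\; \frac{P(S \mid f)\, P(f)}{P(S)} \;=\; \frac{P(f)}{\sum_{f'\ \mathrm{consistent\ with}\ S} P(f')},

where $P(f)$ is the prior probability of obtaining $f$ upon randomly sampling parameters. The claim that SGD acts almost as a Bayesian sampler is the statement that the probability $P_{\mathrm{SGD}}(f \mid S)$ of SGD converging to $f$ tracks $P_B(f \mid S)$ reasonably closely.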
We formalise some of these ideas in our overview paper on Generalization bounds for deep learning, where we also present a new marginal-likelihood bound that, we think, is the best-performing bound currently on the market.
For physicists who are not familiar with them, such bounds are a staple of statistical learning theory. One of our longer term interests is to connect this field more closely to concepts from statistical mechanics.
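To give a flavour of what such a bound looks like, here is a schematic version of the classic realisable PAC-Bayes bound that marginal-likelihood bounds of this type build on (a sketch, not the exact statement in the paper): with probability at least $1-\delta$ over training sets $S$ of size $m$, the expected generalisation error $\epsilon$ of the Bayesian posterior satisfies

-\ln(1-\epsilon) \;\le\; \frac{\ln\frac{1}{P(S)} + \ln\frac{2m}{\delta}}{m-1},

where $P(S)$ is the marginal likelihood, i.e. the probability that randomly sampled parameters produce a function consistent with $S$. The more strongly the parameter-function map is biased towards the kinds of simple functions that fit real-world data, the larger $P(S)$, and hence the tighter the bound.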
We have many other interests connecting physics with AI in general, and deep learning in particular.
I can currently take on several DPhil students on these topics, and may entertain related questions in computational neuroscience.
A talk: "Deep neural networks have an inbuilt Occam's razor"
Self-Assembling DNA
The ability to design nanostructures which accurately self-assemble
from simple units is central to the goal of engineering objects and
machines on the nanoscale. Without self-assembly, structures must be
laboriously constructed in a step by step fashion. Double-stranded DNA
(dsDNA) has the ideal properties for a nanoscale building block, and
new DNA nanostructures are being published at an ever increasing rate.
Here in the Clarendon the world-leading experimental group of
Andrew Turberfield has created a number of intriguing
nanostructures using physical self-assembly mechanisms. We have
recently developed a new simplified theoretical model of DNA that
appears to capture the dominant physics involved. In this project you
would apply the model to some simple nanostructures, mainly
using Monte Carlo simulations and statistical mechanical
calculations to study the assembly processes.
A potential new direction for this project could also be to extend our
new methods to study RNA nanostructures.
All DNA work is done in close collaboration with the group of Prof. Jonathan Doye.
The Physics Department Newsletter had a little popular piece on our work here.
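To give a feel for the kind of statistical-mechanical calculation involved, here is a minimal two-state ("all-or-nothing") model of duplex hybridisation, of the sort often used to estimate melting curves; the enthalpy, entropy and concentration values below are purely illustrative numbers, not parameters of our DNA model:

```python
import numpy as np

# Two-state model: single strands A + B <-> duplex AB, with equal total
# strand concentrations C (mol/L).  Illustrative thermodynamic parameters
# for a short (~10 bp) duplex; real values come from nearest-neighbour models.
dH = -65.0     # enthalpy of hybridisation, kcal/mol
dS = -0.185    # entropy of hybridisation, kcal/(mol K)
C = 1e-6       # total concentration of each strand, mol/L
R = 1.987e-3   # gas constant, kcal/(mol K)

def duplex_fraction(T):
    """Fraction of strands bound into duplexes at temperature T (kelvin)."""
    K = np.exp(-(dH - T * dS) / (R * T))   # equilibrium constant
    x = K * C
    # Mass-action balance x*(1-f)^2 = f, solved for the physical root 0 <= f <= 1.
    return (2 * x + 1 - np.sqrt(4 * x + 1)) / (2 * x)

for T_celsius in range(20, 81, 10):
    T = T_celsius + 273.15
    print(f"{T_celsius:3d} C   duplex fraction = {duplex_fraction(T):.3f}")
```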
"Nothing in Biology Makes Sense Except in the Light of Evolution."
wrote the great naturalist Theodosius
Dobzhansky , but to really understand evolution, a stochastic
optimization process in a very high dimensional space, will require
techniques from statistical physics. In this project you will use
theoretical tools and computer simulations to study a number of
simplified models we have been developing in our group to understand
the physics of evolution. We are studying the evolution of protein quaternary structure, RNA secondary structures, gene regulatory networks and more,
with a focus on how concepts from algorithmic information theory help explain how evolutionary search is so efficient at finding solutions.
Below is a picture of the interconnected space that evolution searches when evolving RNA structures, made by Steffen Schaper:
Applying algorithmic information theory (AIT)
While many results from AIT are beautiful and profound, they are often thought to be hard to apply in practice because 1) AIT's central concept, Kolmogorov-Chaitin complexity, is formally incomputable, 2) AIT depends crucially on universal Turing machines (UTMs), while many practical systems are not universal, and 3) many of its results are only valid in the asymptotic limit of long strings, while applications are often not in this limit. We are currently trying to circumvent these problems; see e.g. our recent paper in Nature Communications, Input–output maps are strongly biased towards simple outputs, where we show applications ranging from RNA folding to systems of differential equations to models of plant growth to stochastic models of financial trading. We are currently working on further applications ranging from machine learning to evolution to string theory.
Examples of simplicity bias in RNA sequences, circadian rhythms, and financial models. The higher the complexity of an output, the lower the probability that the output will be generated.
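As a toy illustration of how such bias can be measured, here is a minimal sketch; the map below (a few rounds of local majority smoothing on bit strings) is a hypothetical stand-in chosen for brevity, not one of the maps studied in the paper:

```python
import zlib
from collections import Counter

import numpy as np

L = 30          # length of input bit strings
SMOOTH = 4      # rounds of local majority smoothing
N = 100_000     # number of randomly sampled inputs

def toy_map(bits):
    """A simple many-to-one input-output map: repeatedly replace each bit
    by the majority of itself and its two neighbours (periodic boundaries)."""
    b = np.array(bits)
    for _ in range(SMOOTH):
        b = ((np.roll(b, 1) + b + np.roll(b, -1)) >= 2).astype(int)
    return "".join(map(str, b))

def complexity(s):
    """Crude complexity proxy: zlib-compressed length of the output string."""
    return len(zlib.compress(s.encode()))

rng = np.random.default_rng(1)
counts = Counter(toy_map(rng.integers(0, 2, size=L)) for _ in range(N))

# Typically, outputs generated with high probability compress well (low
# complexity), while complex outputs are exponentially unlikely.
for out, c in counts.most_common(5):
    print(f"P = {c / N:.4f}   complexity = {complexity(out)}   {out}")
```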
The remarkable ability of biological matter to robustly self-assemble into
well defined composite objects excites the imagination. Viruses, for
example, can be reversibly dissolved and re-assembled from their component
parts simply by changing the solution pH. How does nature achieve this
feat? Can we uncover the "positive design rules" for inter-particle
interactions that allow the self-assembly of a particular desired
3-dimensional structure [1]? Can we use evolutionary algorithms to design
these particles? Such understanding could have important implications for
nanoscience, where objects are too small to directly manipulate, and
instead must come together by self-assembly mechanisms.
This project is in close collaboration with the group of Prof. Jonathan Doye.
Here is a movie that Iain made about virus self-assembly:
Iain G. Johnston, Ard A. Louis and Jonathan P. K. Doye,
J. Phys.: Condensed Matter 22, 104101 (2010)
| arxiv abstract | pdf reprint | (this paper was selected for the journal cover and as one of the journal's Highlights of 2010)
The interplay of Brownian and hydrodynamic interactions in suspensions
When suspended particles move in a background fluid, they experience
random kicks (Brownian motion), but also generate long-ranged
hydrodynamic flows. This project would use a new computer simulation
method, stochastic rotation dynamics (SRD), which includes both these
effects, to study the flow properties of complex fluids. There is also
scope for more mathematical approaches combining non-equilibrium
statistical mechanics and hydrodynamics to explain some of the
dramatic effects we are seeing. Examples of applications include
dynamic lane formation, fluctuations during sedimentation, colloidal nano-pumps, and colloidal "explosions".
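A minimal sketch of the basic SRD (multi-particle collision dynamics) update is shown below; it keeps only the two essential ingredients, ballistic streaming plus cell-wise stochastic rotation of relative velocities, and omits the random grid shift, thermostat, embedded colloids and walls that a production code would need:

```python
import numpy as np

# --- a small 2D SRD solvent (illustrative parameters) ----------------------
L = 20.0        # box size (periodic)
a = 1.0         # collision cell size
N = 4000        # number of solvent particles
dt = 0.1        # streaming time step
alpha = 2.27    # rotation angle (~130 degrees, a common choice)
kT = 1.0        # temperature (particle mass = 1)

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, L, size=(N, 2))
vel = rng.normal(0.0, np.sqrt(kT), size=(N, 2))

def srd_step(pos, vel):
    # 1) streaming: free flight with periodic boundaries
    pos = (pos + vel * dt) % L

    # 2) collision: sort particles into square cells of side a
    ncell = int(L / a)
    cell = np.clip((pos // a).astype(int), 0, ncell - 1)
    idx = cell[:, 0] * ncell + cell[:, 1]

    # per-cell centre-of-mass velocity
    counts = np.bincount(idx, minlength=ncell * ncell)
    vcm = np.zeros((ncell * ncell, 2))
    for d in range(2):
        vcm[:, d] = np.bincount(idx, weights=vel[:, d], minlength=ncell * ncell)
    occupied = counts > 0
    vcm[occupied] /= counts[occupied][:, None]

    # rotate velocities relative to the cell mean by +/- alpha (sign per cell);
    # this conserves momentum and kinetic energy in each cell
    sign = rng.choice([-1.0, 1.0], size=ncell * ncell)
    theta = sign[idx] * alpha
    c, s = np.cos(theta), np.sin(theta)
    dv = vel - vcm[idx]
    vel = vcm[idx] + np.stack([c * dv[:, 0] - s * dv[:, 1],
                               s * dv[:, 0] + c * dv[:, 1]], axis=1)
    return pos, vel

for step in range(100):
    pos, vel = srd_step(pos, vel)

print("mean kinetic energy per particle:", 0.5 * (vel ** 2).sum() / N)
```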
To see the effect of hydrodynamic interactions, compare these two movies:
Or check out what happens when you have
aggregation and sedimentation:
An important source of progress in statistical mechanics comes from better coarse-grained models, that is, descriptions that are simpler and more tractable but nevertheless retain the fundamental underlying physics that one is interested in investigating. It is intuitively obvious that these simplified descriptions throw some information away and that compromises are made. After all, there is no such thing as a free lunch. But how much are we paying, and what can we get away with?
We have argued that good coarse-graining procedures are best interpreted in terms of the emergent properties that one is trying to model.
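Formally, the kind of coarse-graining we have in mind replaces the full Hamiltonian $H(\mathbf{R},\mathbf{r})$, with retained coordinates $\mathbf{R}$ and microscopic coordinates $\mathbf{r}$, by an effective interaction obtained by tracing out $\mathbf{r}$:

e^{-\beta V^{\mathrm{eff}}(\mathbf{R})} \;\propto\; \int \! d\mathbf{r}\; e^{-\beta H(\mathbf{R},\mathbf{r})}.

Because $V^{\mathrm{eff}}$ is a free energy rather than a bare potential energy, it is generally state-dependent and many-body; which of its properties must be reproduced faithfully depends on the emergent quantities one wants the coarse-grained model to capture.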