complexity Archive


Some stuff on the London riots

 

In the past few weeks our mathmo team (Toby Davies, Peter Baudains and myself) has been looking into some of the reasons why the London riots happened.

If we can suppose that there was some rationality behind the rioters’ actions, then we may construct a mathematical model from a series of simple rules about their behaviour.

If this model is capable of replicating some of the emerging features of the riots themselves, then, working backwards, we can start to say something about the causes of the riots, and perhaps even something about improvements which could be made to police strategy.

There is a fairly big if on the rationality front, so I refer you to a nice article called Riots and Rationality, which discusses a similar argument and puts it much more elegantly than I can.

However, if you’d like to hear more of my ramblings, I did an interview for the global lab podcast on this very topic, among other things, which you can listen to here.

We will also be publishing our findings relatively soon, and I’ll post a link to the paper here when it’s ready.


The importance of being discrete.

If we’re being accurate, the title should really be “The importance of using appropriate temporal spacing when applying a discretisation to a continuous time scale”. But I felt the above was a touch more catchy.

There’s been a fair amount of noise in the media recently about 3D printers and the exciting possibilities which they present: here’s a video of our resident compugeek Stevie G building a 3D printer in 24 hours, and a lovely video of some chaps at EADS innovation printing a 3D bike.

These printers are based on the very simple principle that a 3D object can be built from a series of 2D slices. Each new slice sits on top of the previous one and, as the slices all stack together in succession, the object forms.

It is exactly this principle that forms the basis of the time-marching algorithm, which is often used in modelling dynamic or evolving systems on a computer.

By chopping up the time period under consideration into lots of tiny slices (or steps), you can build up a solution by calculating what happens at the next slice based on the state of the system at the current one. “Stacking” these solutions together leads to a dynamic and evolving model.
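For the curious, here is a minimal sketch of that slice-by-slice idea: a forward Euler time-marching loop. The function name march and the rate function f(t, y) are illustrative placeholders, not code from the model in the video.

import numpy as np  # not strictly needed for this sketch, but handy for real models

# A minimal sketch of explicit (forward Euler) time marching.
def march(f, y0, t_end, dt):
    """Build the solution slice by slice, much like a 3D print."""
    t, y = 0.0, y0
    history = [(t, y)]
    while t < t_end:
        y = y + dt * f(t, y)   # next slice computed from the current one
        t = t + dt
        history.append((t, y))
    return history

# Example: exponential decay dy/dt = -y, which this loop approximates well
# for any reasonably small dt.
trajectory = march(lambda t, y: -y, y0=1.0, t_end=5.0, dt=0.01)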

As every mathmo or physicist who’s ever done one of these time-marching computer simulations will tell you, choosing a time step (which we call \delta t) that is small enough to capture the solution, but large enough to be computationally viable, is lesson 1 in the numerical modelling of dynamic systems. The idea is exactly the same for 3D printing: choose a slice thickness that’s too big and you’ll miss all the detail in your object; choose one too small, and your printer will be needlessly working away for hours.

Choosing the right time step size becomes especially important when one deals with messy non-linear and chaotic systems, such as the one I’ve been looking at recently. In that case, choose too large a \delta t and you don’t just miss some detail: you can easily get knocked onto a completely different solution path altogether.
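To see how a too-coarse \delta t doesn’t just blur the answer but changes it qualitatively, here’s a toy example (not the retail model itself) using the same forward Euler idea on dy/dt = -10y, whose true solution simply decays to zero:

# Toy illustration only: forward Euler on dy/dt = -10*y up to t = 5.
# Each step multiplies y by (1 - 10*dt), so with dt = 0.25 that factor is -1.5
# and the numerical solution oscillates and blows up; with dt = 0.01 the factor
# is 0.9 and the solution decays towards zero as it should.
for dt in (0.25, 0.01):
    y = 1.0
    for _ in range(int(5.0 / dt)):
        y += dt * (-10.0 * y)
    print(f"dt = {dt}: y(5) is roughly {y:.3g}")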

Demonstrating this concept is the motivation behind a quick visualisation I did on Monday, which shows just how far from the answer you can end up if you pick too large a time step.

The model in the video is of a system of retail centres in a city. Customers choose to shop at a given centre based on how big it is, and how far away it is from their home. As shoppers choose shops and spend money, the profits (and losses) of each retail centre are calculated, and the retail centres change their size accordingly.

Incidentally, the derivation and equations – for those who are interested – can be found in an overview of the model I wrote a few months ago: An overview of the Boltzmann-Lotka-Volterra retail model.
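For a flavour of what the simulation is doing, here is a hedged sketch of one way to code up a BLV-style retail model with forward Euler time stepping. It assumes the standard Harris-Wilson form of the equations; the parameter values, array sizes and variable names are illustrative rather than those used in the video.

import numpy as np

# Illustrative set-up only; the real runs use the model described in the BLV overview post.
rng = np.random.default_rng(0)
n_homes, n_centres = 50, 10
spending = np.ones(n_homes)                          # money leaving each home
cost = rng.uniform(0.1, 1.0, (n_homes, n_centres))   # travel cost from each home to each centre
Z = np.ones(n_centres)                               # centre sizes Z_j

alpha, beta = 1.2, 2.0      # attractiveness of size, sensitivity to travel cost
epsilon, kappa = 1.0, 1.0   # response rate, running cost per unit size
dt, t_end = 0.0025, 10.0    # the "gold" time step from the video

for _ in range(int(t_end / dt)):
    # Share of each home's spending flowing to each centre.
    attract = Z**alpha * np.exp(-beta * cost)
    flows = spending[:, None] * attract / attract.sum(axis=1, keepdims=True)
    revenue = flows.sum(axis=0)                      # takings at each centre
    # Centres grow when revenue exceeds running costs, and shrink otherwise.
    Z = Z + dt * epsilon * (revenue - kappa * Z)

print(np.log(Z))   # ln Z_j, as plotted on the y axis of the video

With alpha above 1, bigger centres become disproportionately attractive, which is the positive feedback that tends to produce the winner-takes-all behaviour described below.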

Each centre is arbitrarily labelled (x axis), and the log of the corresponding centre size, \ln Z_j, is shown along the y axis.

The four types of circles in the video relate to four different choices of \delta t: royal blue is \delta t = 0.25, light blue has \delta t = 0.125, red \delta t = 0.025, and gold takes the far smaller (and hence more accurate) \delta t = 0.0025, so that by t = 10 the four separate simulations have been through 41, 81, 401 and 4001 time steps respectively. The dots are different sizes only to help with the visualisation.
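(Those step counts just come from dividing the simulated time by \delta t and counting the initial state as the first slice; a quick check:)

# Step counts for t = 10 with each dt, counting the initial state as slice 1.
for dt in (0.25, 0.125, 0.025, 0.0025):
    print(dt, int(round(10 / dt)) + 1)   # -> 41, 81, 401, 4001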

Because it demonstrates the point a bit better, I’ve deliberately chosen a simulation which finds one winning centre: the “Westfield dominance” case, as one of my colleagues calls it. This can be clearly seen in the gold and red simulations, where the winning centre increases in size fairly rapidly at the beginning of the simulation, and all the other chaps slowly die away, with \ln Z_j heading to -\infty.

These accurate red and gold runs behave nicely and smoothly, deviate little from one another, and show very little jumping around, even at large times. Bear in mind, though, that gold involves 10 times as many calculations as red (because its time step is 10 times smaller) but doesn’t offer us any more information.

Compare this, however, to the blue guys, who behave well in the early stages: all the circles begin concentric, suggesting that every choice of \delta t gives the same results. As time increases, though, the light and dark blue circles with larger values of Z_j begin to deviate from the more accurate red and gold simulations, accelerating upward. As time increases further, the blue circles leave the gold and red altogether and do not return; they continue their jerky behaviour and end up becoming infinite.

The red case also involves 10 times as many calculations as the royal blue (and 5 times as many as the light blue), but in this case it certainly does give us a lot more information. Picking too large a time step here doesn’t just give us a less accurate version of the solution: it doesn’t give us the solution at all.

Anyway. Enjoy the movie:
