Our society is becoming increasingly networked. Networks are everywhere: our technology is networked, our communication is networked, we work in networks, our money is managed in networks (I hope), our industries are networked, and so on. Never before have individuals and organisations had access to so much knowledge and technology, and that is thanks to networks.
But there is a drawback to networks. Networks are characterised by interacting nodes, which allows a lot of knowledge to be stored in the system. As nicely explained in RSA Animate’s video “The Power of Networks”, networks are a next step in handling complexity. Triggered by the 1979 Three Mile Island accident, where a nuclear accident resulted from an unanticipated interaction of multiple failures in a complex system, Charles Perrow was one of the first researchers to rethink our safety and reliability approaches for complex systems. In his “Normal Accident Theory”, Perrow argues that failures of complex systems are inevitable, and he therefore calls them “Normal Accidents”.
Complexity and Coupling Drive System Failure
Perrow identifies two major variables, complexity and coupling, that play a major role in his Normal Accident Theory.
Interactions (complex versus linear):
- Complex systems: multiple and interacting controls
- Linear systems: few feedback loops; single-purpose, segregated controls

Coupling (tight versus loose):
- Tight coupling: delays in processing not possible; only one method to achieve goal; little slack possible in supplies, equipment and personnel; buffers and redundancies are designed-in; substitutions of supplies, equipment and personnel limited and designed-in
- Loose coupling: processing delays possible; order of sequences can be changed; alternative methods available; slack in resources possible; buffers and redundancies fortuitously available; substitutions fortuitously available
Although Perrow was very much focused on technical systems in 1984, his theory has a much broader application scope. Nowadays his insights also apply to organizational, sociological and economic systems. Enabled by digitization and information technology, the systems in our daily lives have grown in complexity. The enormous pressure in business to realize the value of products and services in the shortest possible time and at the lowest cost has driven our systems into formalization, reducing the involvement of human beings and their adaptivity relative to the complexity of the systems. Going through the list Perrow provided, one can understand that this “fast-and-cheap” drive generates complex and tightly coupled systems, maximizing resource and asset utilization and thereby closing in on the maximum stress a system can cope with. From the Normal Accident Theory, we learn that complexity and tight coupling (in terms of time, money, etc.) increase the system’s vulnerability to system failure, crises and collapses.
Building a Simple Tool to Simulate Networks Under Stress
First, I built a little simulator to play around with his models. It is based on MS Excel with some VBA code added, simulating a network of at most 20×20 nodes. In each iteration, nodes (one coloured cell is a node) are added, and links between cells are added as well. Each node randomly receives a little bit of stress, which is added to the stress already built up in the previous iterations. Every iteration, some stress is also drained out of the system. When there are only a few nodes in the system, most stress is drained away, so no system stress builds up. But after a while more and more nodes are alive in the system, and at a certain moment more stress is generated than the system can absorb. When the stress in a cell exceeds a certain threshold, the cell explodes and its stress is sent to all the connected nodes. The distribution mechanism can be scaled between distributing the stress evenly over the connected nodes (real-life example: a flood) and transferring the full amount of stress to each of the connected nodes (real-life example: information). With this little simulator, I had a nice opportunity to see whether a crisis is predictable or not.
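Since the original Excel/VBA sheet is not reproduced here, a minimal Python re-implementation sketch may help to make the mechanics concrete. Every name and parameter value below (grid size, stress range, threshold, drain rate) is my own assumption, and only the even-split (“flood”) distribution variant is shown:

```python
import random

def simulate(iterations=200, grid=20, nodes_per_iter=4, links_per_iter=4,
             threshold=50.0, drain=2.0, seed=1):
    """Toy stress-propagation model on a grid x grid board (a sketch of
    the spreadsheet simulator described in the text, not the original)."""
    rng = random.Random(seed)
    max_nodes = grid * grid
    stress = {}                                   # node id -> accumulated stress
    links = {n: set() for n in range(max_nodes)}  # node id -> linked node ids
    history = []                                  # total system stress per iteration

    for _ in range(iterations):
        # Add new nodes (a dead cell may come back to life later).
        for _ in range(nodes_per_iter):
            if len(stress) < max_nodes:
                stress.setdefault(rng.randrange(max_nodes), 0.0)
        # Add random links between living nodes.
        alive = list(stress)
        if len(alive) >= 2:
            for _ in range(links_per_iter):
                a, b = rng.sample(alive, 2)
                links[a].add(b)
                links[b].add(a)
        # Inject a little random stress into each node, then drain a fixed amount.
        for n in stress:
            stress[n] = max(0.0, stress[n] + rng.uniform(0.0, 5.0) - drain)
        # Cascade: overstressed nodes explode, die, and pass their stress
        # to living linked neighbours (even split - the "flood" variant).
        exploded = True
        while exploded:
            exploded = False
            for n in [m for m, s in stress.items() if s > threshold]:
                neighbours = [m for m in links[n] if m in stress]
                if neighbours:
                    share = stress[n] / len(neighbours)
                    for m in neighbours:
                        stress[m] += share
                del stress[n]
                exploded = True
        history.append(sum(stress.values()))
    return history
```

In this even-split variant the total stress is conserved during an explosion; the “information” variant would instead hand the full amount to every neighbour, multiplying the stress by the number of links.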
Start the video and click on the small square in the bottom right corner to enlarge.
In advance, I can set a number of parameters: how many nodes are added per iteration, how many links are created per iteration, the maximum stress allowed, the stress distribution mechanism, and how much stress is absorbed by the network’s eco-system. Of course, not everything is fixed: the links are placed randomly and the stress addition per iteration is randomized. What is intriguing, however, is that with the same settings I get very different results:
Example 1: Beyond 60 iterations, the system is no longer able to absorb all stress and keep the system stress at 0. Between iterations 60 and 100, the stress in the system grows. Around iteration 100, we get the first crisis: a lot of nodes are overstressed and die, so the system stress shows a drawdown. The drawdown is not complete, though; the system quickly recovers and moves into growth mode again, until it reaches another serious drawdown around iteration 170. Bumpy, but OK, I guess.
Example 2: The first part of the graph is identical to Example 1. But in the first 100 iterations another network of nodes, with other connections than in Example 1, has been created, and that network, as you can see, behaves dramatically differently. Around the 100th iteration we again get our first drawdown, but this one is serious: it almost completely wipes out the network. A few nodes survive and the system rebuilds itself.
The graph above shows how often drawdowns have occurred in Example 1. The closer the process is to a purely stochastic one, the more you may expect the identified drawdowns (peak-to-valleys) to stay close to the line. Only in the tail is there sometimes a tendency to stay above the line.
In the system of Example 2, however, an almost complete collapse takes place, and you can see what Sornette calls a “Dragon-King”. Of course, at this value we are in an avalanche process that sweeps the stress overload through the network. According to Sornette, this is no longer a stochastic process, since it does not follow the trend line; this is his underpinning for the claim that there is something causal and thus predictable. But he is wrong. Yes, there are a lot of domino stones that, in a cascade, destabilize each other and lead to a bigger drawdown than the vertical axis indicates. But the cascade is triggered by a stochastic process like all the others, and as you can reason, this does not imply predictability. Only after the drawdown can one identify the crash, not before. Sornette seems to use an old trick to hit the target: first shoot the arrow, then draw the bulls-eye around it, and finally claim you never miss and your predictions are spot on.
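For readers who want to reproduce such a rank-ordering plot, the peak-to-valley drawdowns can be extracted from a stress curve as follows. This is a generic sketch of my own, not Sornette’s exact procedure:

```python
def drawdowns(series):
    """Extract relative peak-to-valley drawdowns from a non-negative
    series: each entry is the fractional drop from a running peak to the
    lowest point before the next new peak, sorted largest first."""
    draws = []
    peak = valley = None
    for x in series:
        if peak is None or x > peak:
            # A new peak closes the previous drawdown, if any.
            if peak is not None and valley < peak:
                draws.append((peak - valley) / peak)
            peak = valley = x
        else:
            valley = min(valley, x)
    if peak is not None and valley < peak:
        draws.append((peak - valley) / peak)  # close the final drawdown
    return sorted(draws, reverse=True)
```

Plotting the sorted drawdowns against their rank on a logarithmic scale gives the kind of graph discussed above; a “Dragon-King” would show up as an extreme outlier far above the trend of the other drawdowns.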
In this graph, 10 simulation runs are depicted on top of each other. It shows that up to 100 iterations, the system behaves more or less identically in all the runs. In one of the runs, however, during these 100 iterations a system has been built up that is capable, within boundaries, of keeping itself alive, while in all the other runs the network that was built is almost completely destroyed and sometimes rebuilt; on one occasion all nodes were dead. There is no way that the graph of the system stress contains enough information to predict any system behavior after iteration 100.
If we run the system without allowing it to create any links, the system stabilizes at around 3000-3500. Any nodes added above the 3000-3500 level create a bubble and are compensated by nodes getting overstressed and killed. Since there are no links, no stress is transferred to other nodes; in other words, the domino effect does not occur. The system is in a dynamic equilibrium. When the build-up of links is allowed and domino effects are possible, then as soon as the stability line of 3000-3500 is exceeded, the underlying network structure and the stochastic distribution of overstressed nodes determine the behavior, making the system chaotic and no longer manageable.
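The link-free case is easy to reproduce: with no links, each node is an independent random walk with a death threshold, and an overstressed node takes its stress out of the system instead of passing it on. A minimal sketch (the parameter values are my own assumptions, so the equilibrium level will differ from the 3000-3500 quoted in the text):

```python
import random

def linkfree_run(iterations=400, max_nodes=400, nodes_per_iter=4,
                 threshold=50.0, drain=2.0, seed=7):
    """Link-free variant: no stress transfer, so no domino effect -
    the total system stress settles into a dynamic equilibrium."""
    rng = random.Random(seed)
    stress = {}   # node id -> accumulated stress
    history = []  # total system stress per iteration
    for _ in range(iterations):
        # New nodes keep arriving until the board is full.
        for _ in range(nodes_per_iter):
            if len(stress) < max_nodes:
                stress.setdefault(rng.randrange(max_nodes), 0.0)
        # Each node accumulates random stress minus a fixed drain;
        # crossing the threshold kills the node and removes its stress.
        for n in list(stress):
            stress[n] = max(0.0, stress[n] + rng.uniform(0.0, 5.0) - drain)
            if stress[n] > threshold:
                del stress[n]
        history.append(sum(stress.values()))
    return history
```

After an initial growth phase, the total stress fluctuates in a band: dying nodes remove exactly as much stress, on average, as the surviving and newly added nodes accumulate.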
Can We Predict Crises in Networks?
Recently, Didier Sornette gave a TED talk claiming he had found a way to predict crises in complex systems; see http://on.ted.com/Sornette. He shows examples from the financial world, health care and aerospace. This has the odour of a grand achievement, and it might bring us a step closer to managing or maybe even preventing crises. After my initial enthusiasm (since if this works, we can stress our systems even further at a lower risk), I went deeper into Sornette’s work, and I became less and less convinced that this is sound science. But if he is right, there is a lot of potential in his theory. So, let’s dig deeper.
I also found a master’s thesis by Chen Li of the University of Amsterdam that tried in depth to validate Sornette’s approach. Here is the conclusion:
“In sum, the bubble detection method is capable of identifying bubble-like (superexponential growth) price patterns and the model fits the historical price trajectory quite well in hindsight. However, fitting the model into historical data and using it to generate prediction in real-time are two different things and the latter is much more difficult. From test results, we conclude that bubble warnings generated by the model have no significance in predicting a crash/deflation.” Sornette’s claim of “a method that predicts crises” is falsified. Calculating whether a system is overstressed and predicting a crisis are two different things.
The Answer Is No, But…
So, if we can’t predict the drawdowns, we have to avoid creating bubbles. But in techno-human systems we run into the phenomenon of the tragedy of the commons (an economics theory by Garrett Hardin, according to which individuals, acting independently and rationally according to each one’s self-interest, behave contrary to the whole group’s long-term best interests by depleting some common resource). Who is prepared to respect boundaries and limit system stress, while running the risk that others do not behave accordingly and let the system become overstressed anyway? Nobody wants to be a sucker, so this bubble game will probably continue, except for a few well-organized exceptions (see my blog in Dutch (sorry) “Het managen van de groei”). Next to complicated social solutions, we can make the system more robust against crises by
- Avoiding tight coupling and complexity in the design of a system
- Collecting and analyzing data on “close-call” incidents and improving the system accordingly
- Improving the interaction between operators (people working with or in the system) and the system, and applying human factors engineering

as described, among others, by Charles Perrow in 1984.
Ruud Gal, July 25 2014
Charles Perrow, “Normal Accidents: Living with High-Risk Technologies”, 1984
Gurkaynak, R.S., Econometric tests of asset price bubbles: taking stock. Journal of Economic Surveys, 22(1): 166-186, 2008
Jiang Z-Q., Zhou,W-X., Sornette, D., Woodard, R., Bastiaensen, K., Cauwels, P., Bubble Diagnosis and Prediction of the 2005-2007 and 2008-2009 Chinese stock market bubbles, http://arxiv.org/abs/0909.1007v2
Johansen, A., Ledoit, O. and Sornette, D., 2000. Crashes as critical points. International Journal of Theoretical and Applied Finance, Vol 3 No 1.
Johansen, A. and Sornette, D., Log-periodic power law bubbles in Latin-American and Asian markets and correlated anti-bubbles in Western stock markets: An empirical study, International Journal of Theoretical and Applied Finance 1(6), 853-920
Johansen, A. and Sornette, D., Fearless versus Fearful Speculative Financial Bubbles, Physica A 337 (3-4), 565-585 (2004)
Kaizoji, T., Sornette, D., Market bubbles and crashes. http://arxiv.org/ftp/arxiv/papers/0812/0812.2449.pdf
Lin, L., Ren, R.-E., Sornette, D., A Consistent Model of ‘Explosive’ Financial Bubbles With Mean-Reversing Residuals, arXiv:0905.0128, http://papers.ssrn.com/abstract=1407574
Lin, L., Sornette, D. Diagnostics of Rational Expectation Financial Bubbles with Stochastic Mean-Reverting Termination Times. http://arxiv.org/abs/0911.1921
Sornette, D., Why Stock Markets Crash (Critical Events in Complex Financial Systems), Princeton University Press, Princeton NJ, January 2003
Sornette, D., Bubble trouble, Applying log-periodic power law analysis to the South African market, Andisa Securities Reports, Mar 2006
Sornette, D., Woodard, R., Zhou, W.-X., The 2006-2008 Oil Bubble and Beyond, Physica A 388, 1571-1576 (2009). http://arxiv.org/abs/0806.1170v3
Sornette, D., Woodard, R., Financial Bubbles, Real Estate bubbles, Derivative Bubbles, and the Financial and Economic Crisis. http://arxiv.org/abs/0905.0220v1
Dan Braha, Blake Stacey, Yaneer Bar-Yam, “Corporate competition: A self-organized network”, Social Networks 33 (2011) 219-230
Chen Li, “Bubble Detection and Crash Prediction”, Master Thesis, University of Amsterdam, 2010
M.A. Greenfield, “Normal Accident Theory: The Changing Face of NASA and Aerospace”, Hagerstown, Maryland, 1998