The Crucial Role of Coffee Machines in Innovation

The Real Purpose of Brainstorms

Often people are convinced that innovation comes from brainstorms and from brilliant insights by individuals. However, research over the last decades indicates otherwise.
Brainstorming has gradually developed towards Creative Problem Solving. In my view, it is a professional way of problem solving. The most crucial and therefore vital phase in brainstorming is the first step, focusing on “What is the problem and what are the facts that underpin the problem definition?” The next step is to generate ideas on the defined problem. Nowadays, high levels of sophistication can be achieved in the idea-generation process (for example: TRIZ). But the essence of TRIZ proves that it is not a creative process, but a methodology consisting of logical deductive and inductive process steps. In other words, it mostly remains in the domain of the known. It fits perfectly with the ideas of the ancient Greeks: there is no term in the ancient Greek language corresponding to “to create” or “creator”. The association the ancient Greeks had was with a process of discovery. Discovery implies it is already there; you just have to find it. Brainstorming is an excellent tool for sophisticated problem solving (and there are problems enough in innovation), but not the most logical process step for creating a breakthrough.

An Idea is a Network

Brilliant insights were always considered to come as a stroke of lightning: a light bulb that turns on and suddenly sheds light on a solution or insight, the so-called “Eureka” moment. There is a nice TED talk, “Where good ideas come from”, by Steven Johnson. But nowadays psychologists challenge this view on creativity. As an example, Johnson in his talk refers to Howard Gruber’s work on Darwin. Based on the analysis of Charles Darwin’s journals, Johnson states that all the basic concepts of Darwin’s “Theory of Natural Selection” were already described in his journals long before Darwin had his “Eureka” moment. His key statement: “An idea is not a single thing popping up in an illuminating moment, it is a network”.

Johnson concluded that new ideas and insights come from interaction with people who have different points of view. This is one way of providing an experience that surprises and intrigues people. In his talk, he discusses the important role of the first coffee house, “The Grand Cafe” in Oxford. In such places, scientists from different disciplines met and started discussions, which in the end led to the birth of the Enlightenment. I have often used this basic insight to advise innovation managers on making their organisations more creative.

The Importance of Coffee Machines

Figure: the Allen curve (probability of unplanned, informal interaction as a function of distance)

In essence you want to increase the chance that people with different backgrounds meet and discuss in an unplanned, informal way. Location is an issue. In the late seventies Thomas Allen already revealed the relation between the probability of unplanned, informal interaction and distance; see the figure.

Suppose you, as a manager, have a gut feeling that there is a high probability of finding novel ideas if two groups of people would interact more. The first step would be to share this “strategic intent” with the people in the groups and see whether some individuals from both groups are intrigued and show self-propelling behavior to explore this. Then apply Allen’s findings and co-locate the groups without trying to integrate them organizationally. If you integrate the groups, before you know it, time and energy is spent on a classical group development process – the forming, storming, norming and performing thing – instead of on exploration. An important underlying enabler is to do this via informal interaction. In this way a coffee machine and a nearby table where people can gather and interact are very instrumental for fostering innovation in organizations.

Evaluating Deverticalisation via M&A in the Semiconductor Industry

Mergers & Acquisitions Amongst The Top 20 In The Semiconductor Industry (1987-2013)

In this post, the revenue development of companies with and without Mergers & Acquisitions amongst the top 20 in the semiconductor industry is analysed for the period 1987-2013. Often Mergers & Acquisitions are evaluated based on “short-term” financial impact. Here we try to get an impression of the longer-term effects, focused on growth in revenues. We are interested in the long-term revenue development of companies with an M&A somewhere in the period 1987-2013, so we measure an integral effect of pre-M&A performance, the M&A itself and post-M&A performance.

Mergers & Acquisitions (M&A) is the area of corporate activity that deals with the buying, selling, dividing and combining of different companies and similar entities without creating a subsidiary or other child entity or using a joint venture. The dominant rationale used to explain M&A is that acquiring firms seek improved financial performance. Given the scope of this analysis, financial performance amongst the top 20 is most likely improved by (1) increasing economies of scale, (2) increasing economies of scope, (3) increasing revenue or market share, and (4) increasing synergy.

Restricting the analysis to M&A amongst the top 20 hides a lot of acquisitions. Acquisitions are a normal element in this industry: “eat or be eaten”. For example, from 2009 until 2013 Intel acquired Wind River Systems, McAfee, Infineon (Wireless), Silicon Hive, Telmap, Mashery, Aepona, SDN, Stonesoft Corporation, Omek Interactive, Indisys, Basis, Avago Technologies. Only Infineon will be mentioned in this analysis since it is a top 20 player.

The revenue dynamics in the top 20 in the period 1987-2013

Chart 1. The top 20 revenues in the semiconductor industry per company in the period 1987-2013, stacked from small to large for each year. Source of the data: IHS iSuppli, Dataquest

The semiconductor industry has seen quite some changes in the last decades. Chart 1 shows the changes in revenues, but also in ranking, and is used to visualize the stability of the industry.

The chart was constructed in the following way. For every year, the revenues of the top 20 are stacked from nr. 20 at the bottom to nr. 1 at the top. The green top line, for example, is in 2013 the total revenue of the top 20 earned that year. The difference between the green and the orange line in 2013 (second from the top) is Intel’s revenue. The difference between the orange and the black line represents the revenue of Samsung.
Sometimes cross-overs are present between the years. These cross-overs occur when companies change position in the revenue ranking. For example, the third line from the top (Qualcomm) in 2013 took over that position from Texas Instruments between 2011 and 2012.
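For readers who want to reproduce this kind of chart, here is a minimal sketch in Python. It assumes, purely for illustration, that the data is available as a dictionary mapping each year to the revenues of that year’s top 20 companies; this is not the original IHS/Dataquest file.

    # Hypothetical reconstruction of Chart 1: for each year, sort the top-20
    # revenues ascending, take the cumulative sum, and plot one line per rank.
    import numpy as np
    import matplotlib.pyplot as plt

    def stacked_ranked_revenues(revenues):
        years = sorted(revenues)
        # Each row: cumulative revenue from rank 20 (bottom) up to rank 1 (top).
        stacks = np.array([np.cumsum(sorted(revenues[y].values())) for y in years])
        for rank in range(stacks.shape[1]):
            plt.plot(years, stacks[:, rank])
        plt.xlabel("Year")
        plt.ylabel("Cumulative top-20 revenue (B$)")
        plt.title("Top-20 revenues, stacked from rank 20 to rank 1")
        plt.show()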

In the period 1987-1995 the lines are quite parallel, the ranking is more or less stable and growth is exponential. The only disruption of the ranking order is Intel (green line), which in 1992 took over the nr. 1 position from NEC Semiconductors. In 1996, the growth curve ends in a downturn. From then on the industry has had to get used to a cyclic revenue pattern, a pattern that is typical for exponential growth systems hitting a constraint (see my blog on failing network systems) or, as business analysts call it, an industry in a consolidation phase.

Chart 2. Similar to Chart 1, only now the ranking is visualized and not the size of the revenues

The number of changes in ranking (the number of cross-overs) also increases as we enter the 21st century. An alternative to the graph above is to consider only the changes in ranks and leave the revenue data out. The picture then looks like Chart 2. The chart looks even a bit messier, but this is caused by the many cross-overs in the lower ranks. Although each change in rank counts as 1, the change in revenue underlying these lower ranks may be quite small and therefore less dramatic than may appear from Chart 2. For readability reasons, we will use this chart to plot the M&A.

A Measure of Industry Stability

Chart 3. The stability index for the ranking in revenues in the semiconductor industry in 1987-2013, including the normalized revenue development for the top 20

One can express the level of stability of a key parameter (i.e. revenue) distributed over a set of actors over a period (e.g. one year) by a single parameter; let’s call it the stability index.

The revenue stability index is in this analysis defined as the correlation between the ranked and sorted revenues of the top 20 in two consecutive years. The stability index is 1 in case the revenues of all players are equal to those of the previous year. It equals 0 in case the ranking is fully random, which hopefully will never happen.
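The post gives no explicit formula, but one way to read this definition is as a rank correlation between consecutive years: 1 when every company keeps its position, close to 0 when the ranking is reshuffled at random. A minimal sketch under that assumption, using the same hypothetical data structure as above:

    # Stability index as a Spearman rank correlation between the revenues of the
    # companies that appear in the top 20 in both year-1 and year. This is my
    # reading of the definition, not necessarily the exact formula behind Chart 3.
    from scipy.stats import spearmanr

    def stability_index(revenues, year):
        prev, curr = revenues[year - 1], revenues[year]
        common = [c for c in prev if c in curr]   # companies present in both years
        rho, _ = spearmanr([prev[c] for c in common],
                           [curr[c] for c in common])
        return rho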

In Chart 3, the revenue stability index is shown over the period 1987-2013. One can observe a cyclic pattern that suggests that the companies are influencing each other dynamically. Furthermore, a trend towards increasing dynamics is visible. This should be no surprise for the incumbents, of course. Additionally, the total revenue of the top 20 is plotted (red line). The correlation between the two curves is -0.45: the more revenue is generated from a consolidating market, the higher the instability. This phenomenon is called “The Tragedy of the Commons” (see my blog, unfortunately in Dutch, “Het Managen van Grenzen aan de Groei” (“Managing Growth Boundaries”)). In this case, during the stage of exponential growth (in general a sellers’ market), the competitive interaction at the supply side is limited. As soon as the market becomes a buyers’ market, competitive interaction at the supply side increases, and this interaction together with the constrained market leads to instability.

Underlying Drivers of M&A in the Semiconductor Industry

Chart 4. All red lines indicate transitions, like mergers, spin-offs and LBOs, that DIRECTLY affect the revenues of the companies involved. Small, intellectual property / innovation related transitions are not presented here.

The semiconductor industry shows a lot of mergers and acquisitions. Partial acquisitions, for which we lack data on the revenue of the acquired part, are kept out of the quantitative analysis. In Chart 4, the red lines indicate the M&A that will be discussed. We have identified the following underlying drivers in our analyses.

  • The semiconductor industry is The Enabler of the electronification and digitization of the world. Nowadays there are not many data-processing products or systems that do not contain semiconductors. This market has grown from US$29.4B in 1987 to US$213B in 2013. In the growth phase the market size increased by 15-16% per year, in the consolidation phase by 7-8% per year.
  • The semiconductor industry is a capital-intensive industry. The high capital intensity gives the business a cyclic character. This also implies that quite some cash has to be available from the good times in order to be able to survive the next downturn. The cash-to-sales ratio of some companies was at times higher than 30%. As we will see, this is attractive for private equity firms.
  • Scale is important. Only the top 3 have increased their market share compared to the rest of the top 20 (see Chart 4a).

    Chart 4a. Market share of the top 3 in 1987, 2000 and 2013

  • Reduction of vertical integration. Initially, many corporations had their own semiconductor activities. Since the industry during the consolidation phase had a cyclic character with strong (financial) ups and downs, the parent companies were less and less prepared to absorb these cycles in their corporate results.
    • Semiconductor companies became independent from their parents and were no longer restricted by the parents’ corporate strategies.
    • An additional step in deverticalisation was the introduction of the fabless business model: companies that design and sell semiconductors without owning a “fab”. In 2010, 7 out of the top 12 were fabless. In this way, fixed assets were substantially reduced, making the companies less vulnerable during downturns.
    • The reduction of vertical integration also makes it easier to re-allocate (parts of) the activities to other companies / network structures and in this way achieve economies of scale or acquire new competences.

The M&A categories amongst the top 20 in the semiconductor industry

Reshuffling in the semiconductor industry took place through different M&A categories: (1) spin-off, (2) horizontal merger, (3) LBO (leveraged buyout) and (4) acquisition. For comparison, we also analyse (5) new entrants.

  1. Spin-off

    Chart 5. The spin-offs in the semiconductor industry (Siemens, Infineon, Qimonda and AT&T, Lucent Technologies, Agere)

    Spin-off is defined here as a type of corporate transaction forming a new company or entity.

    The following spin-offs are considered. Infineon Technologies AG, the former Siemens Semiconductor, was founded on April 1, 1999, as a spin-off of Siemens AG. In 1988, as a first step, Lucent Technologies was composed out of AT&T Technologies; it was spun off in 1996. In 2000, the semiconductor activity of Lucent Technologies was spun off and became a new legal entity called Agere; Agere was acquired by LSI in 2007. ON Semiconductor was founded in 1999 as a spin-off of Motorola’s Semiconductor Products Sector. It continued to manufacture Motorola’s discrete, standard analog and standard logic devices. (This relatively small spin-off is not represented in the chart.)

  2. Mergers
    Chart 6. The main mergers in the chip industry (Hitachi-Mitsubishi-Renesas-NEC, SGS-Thomson-STMicroelectronics, Fujitsu-AMD-Spansion, Hyundai-LG-Hynix-SK Hynix)

    The mergers in this analysis are horizontal mergers. These mergers are difficult to implement. Integration often means higher efficiencies and laying off personnel, but also reallocation of management responsibilities, which all together make it difficult to achieve the synergetic advantages.

    In 2003, AMD and Fujitsu created a joint-venture company named Spansion, focused on flash memories; in 2005 AMD and Fujitsu sold their stake. In 1996 Hyundai Semiconductors became independent via an Initial Public Offering and merged in 1999 with LG Semiconductors; the name was changed to Hynix Semiconductor in 2001. In 1987, SGS-Thomson was formed as a merger of SGS Microelettronica and Thomson; by 1998 Thomson had withdrawn from SGS-Thomson and the name was changed to STMicroelectronics. In 2002, the semiconductor operations of Mitsubishi and Hitachi were spun off and merged to form a new separate legal entity named Renesas; in 2010 Renesas merged with NEC Electronics Corp., a spin-off of NEC in 2002.

  3. Leveraged Buy Out
    Chart 7. The leveraged buyouts in the chip industry: Philips and Freescale

    In an LBO, private equity firms typically buy a stable, mature company that’s generating cash. It can be a company that is underperforming or a corporate orphan—a division that is no longer part of the parent’s core business. The point is that the company has lots of cash and little or no debt; that way the private equity investors can easily borrow money to finance the acquisition and can even use borrowed funds for more acquisitions once the company is private. In other words, the private equity firms leverage the strong cash position of the target company to borrow money. Then they restructure and improve the company’s bottom line and later either sell it or take it public. Such deals can produce returns of 30 percent to 40 percent.

    In 2006, Philips Semiconductors was sold by Philips to a consortium of private equity firms (Apax, AlpInvest Partners, Bain Capital, Kohlberg Kravis Roberts & Co. and Silver Lake Partners) through an LBO, to form a new separate legal entity named NXP Semiconductors. Philips was very late compared to other parent companies in disentangling from its semiconductor activities and, as NXP CEO Frans van Houten stated, the LBO was the only option left. In 2006, Freescale (formerly known as Motorola Semiconductors) agreed to be acquired by a consortium of private equity firms through an LBO. 2006 was a dramatic year for Freescale, since Apple decided to have its microprocessors supplied by Intel instead of Freescale.

  4. Acquisitions
    Chart 8. Acquisitions in the semiconductor industry

    Acquisitions are part of life in the semiconductor world. As already stated, we only consider acquisitions amongst the top 20.

    In 2010 Micron Technology acquired Numonyx. In 2011 Intel took over Infineon Technologies’ wireless division, Texas Instruments acquired National Semiconductor, Qualcomm acquired Atheros Communications and ON Semiconductor acquired the Sanyo semiconductor division. In 2012 Samsung Electronics acquired Samsung Electro-Mechanics’ share of Samsung LED, and Hynix Semiconductor was acquired by the SK Group. In 2013 Intel acquired Fujitsu Semiconductor Wireless Products, Samsung Electronics sold its 4- and 8-bit microcontroller business to IXYS, Micron Technology acquired Elpida and, finally, Broadcom acquired Renesas Electronics’ LTE unit.

  5. New Entrants 
    Chart 9. The new entrants in the top 20

    Even in this competitive, capital- and know-how-intensive industry there have been new entrants. The new-entrants group is dominated by US companies.
    Marvell Technology Group, Ltd., a fabless semiconductor company, was founded in 1995. MediaTek Inc. is a fabless semiconductor company founded in 1997. Nvidia was founded in 1993.

    New entrants in the top 20 also include Qualcomm, founded in 1985, which has become nr. 3 in revenue in 2013, and Micron Technology, Inc., founded in 1978, which is nr. 4 in revenue in 2013. Samsung Electric Industries was established as part of the Samsung Group in 1969 in Suwon, South Korea; in 1974 the group expanded into the semiconductor business by acquiring Korea Semiconductor, and it occupies the nr. 2 position.

 Evaluation

Chart 10. Revenue development in the top 20 in the semiconductor industry from 1987 to 2013 in the different M&A categories as described above.

In Chart 10 the development of revenue is presented per M&A category. For the spin-offs (blue colours), FROM is the total revenue of the parent organisations and TO is the total revenue of the newly founded companies. The same format is applied to the mergers (green colours, FROM and TO) and the LBOs (orange colours, FROM and TO). The “others” category is a mix of companies for which we lack quantitative data on the M&A activity.

There were some new entrants in the industry between 1987 and 2013. Some of them succeeded in making it into the top 20. There were also new entrants in the top 20 that started a decade earlier; they occupy the nr. 2, 3 and 4 positions in the top 20 in 2013. The nrs. 1, 2, 3 and 4 all started as semiconductor companies or were embedded in a component conglomerate, like Samsung. These companies have a culture and a managerial mindset for the component business. The new entrants and Intel have grown substantially, to 50% of the total top-20 revenue in 2013. Most players in 1987 were conglomerates, where semiconductor activities were managed for their impact at corporate level and not for winning the semiconductor game. The companies that went into a spin-off, horizontal merger or LBO together deliver 25% of the revenue in 2013. Compared to the new entrants, we have to conclude that the companies that went through a deverticalisation process via spin-offs, mergers or LBOs have been part of a consolidation trajectory with, in absolute terms, no growth between 1995 and 2013.

It is tempting to speculate about the underlying causes, so here are some hypotheses:

  • With respect to the spin-offs: these were rather small companies, maybe too small to grow the business, although Infineon initially showed quite some growth. But analysts reported a lot of cash problems in this growth period. In the end the spin-offs did not lead to substantial growth.
  • The mergers involved companies, especially the Japanese ones, that had declined in ranking during the period they were still part of a conglomerate and then tried to regain scale by merging. Again the result in terms of growth is very disappointing.
  • There is evidence of poor post-acquisition performance of large acquirers (Harford, 2005).
  • There is a study on merger bidding contests showing that, in contests where each bidder had a fair chance of winning beforehand, the stock returns of the winning bidders are outperformed by those of the losing bidders. The opposite is true in cases with a predictable winner (Malmendier, Moretti, Peters, 2012).
  • The LBOs occurred very late in the deverticalisation stage. The performance three years after the LBO is not inspiring: both showed a severe loss of revenues. My impression, also based on an interview with NXP CEO van Houten, is that they were just too late in considering M&A. For Philips all options were gone; even a last possibility, a merger with Freescale, was considered, but rejected by Freescale. Fortunately, in the last few years NXP seems strong enough to recover from this transition.
  • Humans always seek a “why”; sometimes it is not there and it is just bad luck.
  • Spin-offs, mergers and LBOs are strategic interventions changing the footprint of companies, changing the structure. Maybe these companies did not have a structure problem, but a portfolio problem (markets, applications, products).

I guess the 1996 statements of Derek Lidow, president and CEO of market research firm iSuppli, are spot on in explaining the strategic M&A decisions that led to no growth.

“….. the chip industry itself has been unable or unwilling to take the hard steps necessary for consolidation. Many chip companies are run by engineers, who tend to think in terms of technology rather than of how best to manage a product portfolio….”

“The real leverage in this kind of a deal is to do portfolio management,” he notes, which means managing groups of products by market segment or geographic region, for example, rather than by technology category. “That’s usually not done in the semiconductor industry.”


How Do Networks Handle Stress?

Our society is getting increasingly networked. Networks are everywhere: our technology is networked, our communication is networked, we work in networks, our money is taken care of in networks (I hope), our industries are networked, and so on and so on. Never has there been a time when individuals and organisations had access to so much knowledge and technology. And that is thanks to networks.

“Normal Accidents”

But there is a drawback to networks. Networks are characterised by interacting nodes, allowing a lot of knowledge to be stored in the system. As nicely explained in RSA Animate’s video “The Power of Networks”, networks are a next step in handling complexity. Triggered by the 1979 Three Mile Island accident, where a nuclear accident resulted from an unanticipated interaction of multiple failures in a complex system, Charles Perrow was one of the first researchers who started rethinking our safety and reliability approaches for complex systems. In his “Normal Accident Theory”, Perrow refers to failures of complex systems as failures that are inevitable, and therefore he uses the term “Normal Accidents”.

Complexity and Coupling Drive System Failure

Perrow identifies two major variables, complexity and coupling, that play a major role in his Normal Accident Theory.

Complex Systems

Proximity
Common-mode connections
Interconnected subsystems
Limited substitutions
Feedback loops
Multiple and interacting controls
Indirect information
Limited understanding

Linear Systems

Spatial segregation
Dedicated connections
Segregated subsystems
Easy substitutions
Few feedback loops
Single purpose, segregated controls
Direct information
Extensive understanding

Tight Coupling

Delays in processing not possible
Invariant sequences
Only one method to achieve goal
Little slack possible in supplies, equipment, personnel
Buffers and redundancies are designed-in, deliberate
Substitutions of supplies, equipment, personnel limited and designed-in

Loose Coupling

Processing delays possible
Order of sequences can be changed
Alternative methods available
Slack in resources possible
Buffers and redundancies fortuitously available
Substitutions fortuitously available

Although Perrow was very much focused on technical systems in 1984, his theory has a much broader application scope. Nowadays, his insights also apply to organizational, sociological and economic systems. Enabled by digitization and information technology, the systems in our daily lives have grown in complexity. The enormous pressure in business to realize the value of products and services in the shortest possible time and at the lowest cost has driven our systems into formalization, reducing the involvement of human beings and their adaptivity relative to the complexity of the systems. Going through the lists Perrow provided, one can understand that this “fast-and-cheap” drive generates complex and tightly coupled systems, maximizing resource and asset utilization and thereby closing in on the maximum stress a system can cope with. From Normal Accident Theory, we learn that complexity and tight coupling (in terms of time, money, etc.) increase the system’s vulnerability to system failure or to system crises and collapses.

Building a Simple Simulation Tool to Simulate Networks Handling Stress

First I built a little simulator to play around with these models. It is based on MS EXCEL with some VBA code added, simulating a network of at most 20×20 nodes. Each iteration, nodes are added (one coloured cell is a node), but in each iteration links between cells are also added. Each node randomly receives a little bit of stress, which is added to the stress already built up in the previous iterations. Every iteration, some stress is drained out of the system. When there are only a few nodes in the system, most stress is drained away, so no system stress builds up. But after a while more and more nodes are alive in the system, and at a certain moment more stress is generated than the system can absorb. When the stress in a cell exceeds a certain threshold, the cell explodes and its stress is sent to all the connected nodes. The distribution mechanism can be scaled from distributing the stress evenly over the connected nodes (real-life example: a flood) to transferring the full amount of stress to each of the connected nodes (real-life example: information). With this little simulator, I had a nice opportunity to see whether a crisis is predictable or not.
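The original tool lives in an EXCEL/VBA workbook; below is a rough Python re-sketch of the mechanics as I read them from the description above. All parameter names and default values are my own choices for illustration, not the original settings.

    # A toy stress-propagation network: nodes are added each iteration, random
    # links connect them, every node picks up a little random stress, the
    # eco-system drains a fixed total amount, and a node that exceeds the
    # threshold "explodes", passing its stress to its neighbours in a cascade.
    import random

    def simulate(iterations=250, threshold=10.0, absorb=30.0,
                 new_nodes=1, new_links=2, split_evenly=True, seed=None):
        rng = random.Random(seed)
        stress, links = {}, {}                    # node id -> stress level, node id -> neighbours
        history = []                              # total system stress per iteration
        for _ in range(iterations):
            for _ in range(new_nodes):            # grow the network
                n = len(stress)
                stress[n], links[n] = 0.0, set()
            for _ in range(new_links):            # add random links between existing nodes
                if len(stress) >= 2:
                    a, b = rng.sample(list(stress), 2)
                    links[a].add(b)
                    links[b].add(a)
            relief = absorb / len(stress)         # fixed total absorption, shared over all nodes
            for n in stress:                      # each node picks up a little random stress
                stress[n] = max(0.0, stress[n] + rng.random() - relief)
            exploded = [n for n, s in stress.items() if s > threshold]
            while exploded:                       # cascade: overstressed nodes dump their stress on neighbours
                n = exploded.pop()
                load, neighbours = stress[n], links[n]
                stress[n] = 0.0                   # the node is reset here; in the original the cell "dies"
                share = load / len(neighbours) if (split_evenly and neighbours) else load
                for m in neighbours:
                    stress[m] += share
                    if stress[m] > threshold and m not in exploded:
                        exploded.append(m)
            history.append(sum(stress.values()))  # the curve plotted in the examples below
        return history

    # Same settings, different random draws (the seeds are arbitrary): as the
    # examples below illustrate, runs can diverge dramatically after the first crisis.
    run_a = simulate(seed=1)
    run_b = simulate(seed=2)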

[Video: a simulation run of the stress network model]

Let’s check!

In advance, I can set a number of parameters: how many nodes are added per iteration, how many links are created per iteration, the maximum stress allowed, the stress distribution mechanism, and how much stress is absorbed by the network’s eco-system. Of course, not everything is fixed: the links are placed randomly and the stress addition per iteration is randomized. What is intriguing, however, is that with the same settings I get very different results:

Example 1

[Figure: total system stress per iteration, Example 1]

In this example, beyond 60 iterations the system is no longer able to absorb all stress and keep the system stress at 0. Between iterations 60 and 100 the stress in the system grows. Around iteration 100 we get the first crisis: a lot of nodes are overstressed and die. The system stress shows a drawdown, though not a complete one; it quickly recovers and moves into growth mode again until it reaches another serious drawdown around iteration 170. Bumpy, but OK I guess.

Example 2

[Figure: total system stress per iteration, Example 2]

In this example, the first part of the graph is identical to Example 1. But in the first 100 iterations another network of nodes, with other connections than in Example 1, has been created. And that network, as you can see, behaves dramatically differently. Again around the 100th iteration we get our first drawdown, but this is a serious one: it almost completely wipes out the network. A few nodes survive and the system rebuilds itself.

[Figure: drawdown distributions (CCDF) for Example 1 and Example 2]
The graph above shows how often drawdowns of a given size have occurred in Example 1. The more the process is purely stochastic, the more you may expect the identified drawdowns (peak-to-valleys) to stay close to the line. Only in the tail is there sometimes a tendency to stay above the line.
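For those who want to redo this kind of plot from the simulated stress curve, here is a sketch: it collects peak-to-valley declines and plots how often drawdowns of at least a given size occur. The exact drawdown definition behind the original graphs may differ; this is just one reasonable choice.

    # Collect peak-to-valley drawdowns from the system-stress history and plot
    # their complementary cumulative distribution; a "Dragon-King" shows up as
    # an outlier far above the tail of this curve.
    import numpy as np
    import matplotlib.pyplot as plt

    def drawdowns(history):
        draws = []
        peak = trough = history[0]
        declining = False
        for value in history[1:]:
            if value < trough:                    # still going down
                trough = value
                declining = True
            else:                                 # recovery: close the current peak-to-valley decline
                if declining:
                    draws.append(peak - trough)
                    declining = False
                peak = trough = value
        if declining:
            draws.append(peak - trough)
        return sorted(draws, reverse=True)

    def plot_ccdf(draws):
        ranks = np.arange(1, len(draws) + 1) / len(draws)
        plt.loglog(draws, ranks, "o")
        plt.xlabel("Drawdown size (peak-to-valley)")
        plt.ylabel("Fraction of drawdowns at least this large")
        plt.show()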

However, in the system of Example 2 an almost complete collapse takes place, and you can see what Sornette calls a “Dragon-King”. And of course with this value we are in an avalanche process that sweeps the stress overload through the network. According to Sornette, this is no longer a stochastic process, since it does not follow the trend line. This is his underpinning that there is something causal and thus predictable. But he is wrong. Yes, it is a lot of dominoes that in a cascade destabilize each other and lead to a bigger drawdown than the vertical axis indicates. But it is triggered by a stochastic process like all the others. As you can reason, this does not imply predictability. After the drawdown one can identify the crash, not before. Sornette seems to use an old trick to hit the target: first shoot the arrow, then draw the bulls-eye around it and finally claim you never miss and your predictions are spot on.

[Figure: total system stress for 10 simulation runs with identical settings]

In this graph, 10 simulation runs are depicted on top of each other. It shows that up to 100 iterations the system behaves more or less identically in all the runs. However, in one of the runs a system has built up during these 100 iterations that is capable, within boundaries, of keeping itself alive, while in all other runs the built-up network is almost completely destroyed, sometimes rebuilt, and on one occasion all nodes were dead. There is no way that the graph of the system stress contains enough information to predict any system behavior after iteration 100.

If we run the system without allowing it to create any links, the system stabilizes at around 3000-3500. Any nodes added above the 3000-3500 level create a bubble and are compensated by nodes getting overstressed and killed. Since there are no links, no stress is transferred to other nodes; in other words, the domino effect does not occur. The system is in a dynamic equilibrium. When the build-up of links is allowed, and domino effects are possible, then as soon as the stability level of 3000-3500 is exceeded, the underlying network structure and the stochastic distribution of overstressed nodes determine the behavior and make the system chaotic and no longer manageable.
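In the Python re-sketch above, this baseline case corresponds simply to switching off link creation; the absolute equilibrium level will of course differ from the 3000-3500 of the original EXCEL model, since the parameters are not the original ones.

    # No links, hence no cascades: overstressed nodes just lose their stress and
    # the total system stress settles into a dynamic equilibrium.
    baseline = simulate(new_links=0, seed=3)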

Can We Predict Crises in Networks?

Recently Didier Sornette gave a TED talk claiming he had found a way to predict crises in complex systems (see http://on.ted.com/Sornette). He shows examples from the financial world, health care and aerospace. This has the odour of a grand achievement, and it may bring us a step closer to managing or maybe even preventing crises. After my initial enthusiasm (since if this works, we can stress out stuff even further at a lower risk), I went deeper into Sornette’s work. However, I became less and less convinced that this is sound scientific work. But if he is right, there is a lot of potential in his theory. So, let’s dig deeper.

I also found a master’s thesis by Chen Li, University of Amsterdam, which tried to validate Sornette’s approach in depth. Here is the conclusion:
“In sum, the bubble detection method is capable of identifying bubble-like (superexponential growth) price patterns and the model fits the historical price trajectory quite well in hindsight. However, fitting the model into historical data and using it to generate prediction in real-time are two different things and the latter is much more difficult. From test results, we conclude that bubble warnings generated by the model have no significance in predicting a crash/deflation.” Sornette’s claim of “a method that predicts crises” is falsified. Calculating whether a system is overstressed and predicting a crisis are two different things.

The Answer Is No, But …..

So, if we can’t predict the drawdowns, we have to avoid creating bubbles. But in techno-human systems we run into the phenomenon of the tragedy of the commons (an economics theory by Garrett Hardin, according to which individuals, acting independently and rationally according to each one’s self-interest, behave contrary to the whole group’s long-term best interests by depleting some common resource). Who is prepared to respect boundaries, limit system stress and run the risk that others do not behave accordingly, so as to keep the system from becoming overstressed? Nobody wants to be a sucker, so this bubble game will probably continue, except for a few well-organized exceptions (see my blog in Dutch (sorry) “Het managen van de groei”). Next to complicated social solutions, we can make the system more robust against crises by

  • Avoiding tight coupling and complexity in the design of a system
  • Collecting and analyzing data on “close-call” incidents and improving the system
  • Improving the interaction between operators (people working with or in the system) and the system, and applying human factors engineering

as described by Charles Perrow in 1984, among others.

Ruud Gal, July 25 2014

References

Charles Perrow, “Normal Accidents: Living with High-Risk Technologies”, 1984

Gurkaynak, R.S., Econometric tests of asset price bubbles: taking stock. Journal of Economic Surveys, 22(1): 166-186, 2008

Jiang, Z.-Q., Zhou, W.-X., Sornette, D., Woodard, R., Bastiaensen, K., Cauwels, P., Bubble Diagnosis and Prediction of the 2005-2007 and 2008-2009 Chinese stock market bubbles, http://arxiv.org/abs/0909.1007v2

Johansen, A., Ledoit, O. and Sornette, D., 2000. Crashes as critical points. International Journal of Theoretical and Applied Finance, Vol 3 No 1.

Johansen, A. and Sornette, D., Log-periodic power law bubbles in Latin-American and Asian markets and correlated anti-bubbles in Western stock markets: An empirical study, International Journal of Theoretical and Applied Finance 1(6), 853-920

Johansen, A. and Sornette, D., Fearless versus Fearful Speculative Financial Bubbles, Physica A 337 (3-4), 565-585 (2004)

Kaizoji, T., Sornette, D., Market bubbles and crashes. http://arxiv.org/ftp/arxiv/papers/0812/0812.2449.pdf

Lin, L., Ren, R.-E., Sornette, D., A Consistent Model of ‘Explosive’ Financial Bubbles With Mean-Reversing Residuals, arXiv:0905.0128, http://papers.ssrn.com/abstract=1407574

Lin, L., Sornette, D. Diagnostics of Rational Expectation Financial Bubbles with Stochastic Mean-Reverting Termination Times. http://arxiv.org/abs/0911.1921

Sornette, D., Why Stock Markets Crash (Critical Events in Complex Financial Systems), Princeton University Press, Princeton NJ, January 2003

Sornette, D., Bubble trouble, Applying log-periodic power law analysis to the South African market, Andisa Securities Reports, Mar 2006

Sornette, D., Woodard, R., Zhou, W.-X., The 2006-2008 Oil Bubble and Beyond, Physica A 388, 1571-1576 (2009). http://arxiv.org/abs/0806.1170v3

Sornette, D., Woodard, R., Financial Bubbles, Real Estate bubbles, Derivative Bubbles, and the Financial and Economic Crisis. http://arxiv.org/abs/0905.0220v1

Dan Braha, Blake Stacey, Yaneer Bar-Yam, “Corporate competition: A self-organized network”, Social Networks 33 (2011) 219-230

Chen Li, “Bubble Detection and Crash Prediction”, Master Thesis, University of Amsterdam, 2010

M.A. Greenfield, “Normal Accident Theory: The Changing Face of NASA and Aerospace”, Hagerstown, Maryland, 1998

http://en.wikipedia.org/wiki/Tragedy_of_the_commons


Put A Strategy To The Test By Gaming

Recently I was asked to design and facilitate a game to test a strategy that the company was considering to implement. In recent years the demand for support in dealing with complex multi-player situations has been steadily increasing, due to the increase in strategic uncertainty and the underlying drivers for more open innovation and system innovation. Martin Reeves and Mike Deimler from the Boston Consulting Group confirm in their study a steady decrease of strategic predictability over the last decades. Gaming is one of the few instruments to address these strategic challenges.

Gaming has a long tradition. For ages, military leaders have played wargames to simulate the effectiveness of their strategies. In the business environment, strategists like Arie de Geus (Shell) and Jac Geurts (Philips) were already using games in their strategy development in the seventies and eighties. It is well known that Shell was one of the companies that reacted fast and effectively to the oil crisis of 1973. This success has always been largely explained by the fact that a few years earlier management teams had had the opportunity to play a game, “How to respond to an oil crisis”, and in that way – as de Geus puts it – “remember the future”.

During the design of the game, the players that may affect the outcome of the strategy are identified. These players have to be analysed in terms of strengths and weaknesses, sensitivity to certain events, successful patterns of response in the past, and the personal background of the key decision makers. As a first step, an inventory is created of all relevant moves of all players. Subsequently, for every player, all moves of all players are ranked in order of importance / preference. The result of the preference ranking is used to calculate the level of alignment between players (which players are competitors, which are complementors), the relative power positions of the players and the level of aggressiveness to be expected. Finally, this analysis can support defining the steps to achieve the best possible result.
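The post gives no formulas for these calculations. Purely as an illustration, one simple way to turn the preference rankings into an alignment score between two players is a rank correlation of their orderings of the same list of moves; the move names and rankings below are made up.

    # Alignment between two players as the Spearman rank correlation of their
    # preference rankings over the same set of moves (illustrative data only).
    from scipy.stats import spearmanr

    moves = ["cut prices", "form an alliance", "enter a new segment", "litigate"]

    preferences = {                               # rank of each move per player; 1 = most preferred
        "Player A": [1, 2, 3, 4],
        "Player B": [4, 3, 2, 1],
        "Player C": [2, 1, 3, 4],
    }

    def alignment(p, q):
        rho, _ = spearmanr(preferences[p], preferences[q])
        return rho

    print(alignment("Player A", "Player C"))      # close to +1: largely aligned, likely complementors
    print(alignment("Player A", "Player B"))      # -1: fully opposed preferences, head-on competitors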

After this first paper analysis, the real fun starts: actually playing the game. Often each of the players is represented by a few persons who know that player very well. In this way, the probability of creating realistic counterstrategies is highest. Playing games is fun; human beings like playing. Additionally, it is not just an intellectual, analytical exercise: emotions and feelings also come into play. Apart from real life, I do not know a more complete way to learn. Afterwards, the game is evaluated and the participants open up about their strategies and about how far they were able to achieve their objectives. The sharing of the game experience is often very enlightening and helps greatly in understanding the considerations of the other players and, likely, the organizations they are representing.

