Evaluating Deverticalisation via M&A in the Semiconductor Industry

Mergers & Acquisitions Amongst The Top 20 In The Semiconductor Industry (1987-2013)

In this post, the revenue development of companies with and without Mergers & Acquisitions is analysed amongst the top 20 in the semiconductor industry in the period 1987-2013. Mergers & Acquisitions are often evaluated on their short-term financial impact. Here we try to get an impression of the longer-term effects, focused on growth in revenues: the long-term revenue development of companies that went through an M&A somewhere in the period 1987-2013. So, we measure an integral effect of pre-M&A performance, the M&A itself and post-M&A performance.

Mergers & Acquisitions (M&A) is a firm’s activity that deals with the buying, selling, dividing and combining of different companies and similar entities without creating a subsidiary, other child entity or using a joint venture. The dominant rationale used to explain M&A is that acquiring firms seek improved financial performance. Given the scope of this analysis, financial performance amongst the top 20 is most likely improved by (1) increasing economy of scale, (2) increasing economy of scope, (3) increasing revenue or market share, and (4) increasing synergy.

The restriction to M&A amongst the top 20 hides a lot of acquisitions. Acquisitions are a normal element in this industry: “eat or be eaten”. For example, from 2009 until 2013 Intel acquired Wind River Systems, McAfee, Infineon (Wireless), Silicon Hive, Telmap, Mashery, Aepona, SDN, Stonesoft Corporation, Omek Interactive, Indisys, Basis, Avago Technologies. Only Infineon will be mentioned in this analysis, since it is a top 20 player.

The revenue dynamics in the top 20 in the period 1987-2013

Chart 1. The top 20 revenues in the semiconductor industry per company in the period 1987-2013, stacked from small to large for each year. Source of the data: IHS iSuppli, Dataquest

The semiconductor industry has seen quite some changes over the last decades. Chart 1 shows the changes in revenues, but also in ranks, and is used to visualize the stability of the industry.

The chart was constructed in the following way. For every year, the revenues of the top 20 are stacked from nr. 20 at the bottom to nr. 1 at the top. The green top line, for example, is in 2013 the total revenue of the top 20 earned that year. The difference between the green and the orange line in 2013 (second from the top) is Intel’s revenue. The difference between the orange and the black line represents the revenue of Samsung.
Sometimes cross-overs are present between the years. These cross-overs occur when companies change position in the revenue ranking. For example, the third line from the top (Qualcomm in 2013) took over that position from Texas Instruments between 2011 and 2012.
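
For readers who want to reproduce this kind of chart themselves, here is a minimal sketch in Python. It assumes the underlying data sits in a simple table with columns year, company and revenue; the file name and column names are placeholders, and this is not the code used for the original chart.

```python
# Minimal sketch (not the original chart code) for rebuilding Chart 1 from a
# hypothetical table with columns: year, company, revenue (US$ billion).
import pandas as pd
import matplotlib.pyplot as plt

revenues = pd.read_csv("top20_semiconductor_revenues.csv")  # placeholder file name

# Rank the companies per year (1 = largest revenue) and keep the top 20.
revenues["rank"] = revenues.groupby("year")["revenue"].rank(ascending=False, method="first")
top20 = revenues[revenues["rank"] <= 20]

# Pivot to year x rank and stack from nr. 20 (bottom) to nr. 1 (top): the
# cumulative sum over ranks gives the stacked lines of Chart 1, so the top
# line is the total top-20 revenue of that year.
by_rank = top20.pivot(index="year", columns="rank", values="revenue")
stacked = by_rank[sorted(by_rank.columns, reverse=True)].cumsum(axis=1)

stacked.plot(legend=False)
plt.xlabel("Year")
plt.ylabel("Cumulative top-20 revenue (US$ B)")
plt.show()
```

The difference between two neighbouring lines in such a plot is exactly the revenue of one company, which is how Intel’s and Samsung’s revenues are read off in the description above.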

In the period 1987-1995 the lines are quite parallel, the ranking is more or less stable and growth is exponential. The only disruption of the ranking order is Intel (green line) taking over the nr. 1 position from NEC Semiconductors in 1992. In 1996, the growth curve ends in a downturn. From then on the industry has to get used to a cyclic revenue pattern, a pattern that is typical for exponential growth systems hitting a constraint (see my blog on failing network systems) or, as business analysts call it, an industry in a consolidation phase.

Chart 2. Similar to Chart 1, only now the ranking is visualized and not the size of the revenues

The number of changes in ranking (the number of cross-overs) also increases as we enter the 21st century. An alternative to the graph above is to consider only the changes in ranks and leave the revenue data out. The picture then looks like Chart 2. The chart looks even a bit messier, but this is caused by the many cross-overs in the lower ranks. Although each change in rank counts as 1, the change in revenue behind these lower ranks may be quite small and therefore less dramatic than it may appear from Chart 2. For readability reasons, we will use this chart to plot the M&A.

A Measure of Industry Stability

Chart 3. The stability index for the ranking in revenues in the semiconductor industry in 1987-2013, including the normalized revenue development for the top 20

One can express the level of stability of a key parameter (here: revenue) distributed over a set of actors in a period (here: one year) by a single parameter; let’s call it the stability index.

The revenue stability index is in this analysis defined as the correlation between the ranked revenues of the top 20 in two consecutive years. The stability index is 1 in case the revenues of all players are equal to those of the previous year. It equals 0 in case the ranking is fully random, which hopefully will never happen.
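
As a concrete illustration, here is a minimal sketch of how such an index can be computed. I read the definition as a Spearman rank correlation over the companies present in both years; how entries to and exits from the top 20 are handled is not specified in the text, so the sketch simply restricts itself to the common companies, and the example figures are made up.

```python
# Minimal sketch (not the author's actual computation) of a revenue stability
# index, read here as the Spearman rank correlation between two consecutive years.
from scipy.stats import spearmanr

def stability_index(revenue_prev: dict, revenue_curr: dict) -> float:
    """Rank correlation of the revenues of companies present in both years.

    Returns 1.0 when the ranking is unchanged and values near 0.0 when the
    ranking of one year says nothing about the ranking of the other.
    """
    common = sorted(set(revenue_prev) & set(revenue_curr))
    prev = [revenue_prev[c] for c in common]
    curr = [revenue_curr[c] for c in common]
    return spearmanr(prev, curr).correlation

# Toy example with made-up revenue figures (US$ B):
y1992 = {"Intel": 5.1, "NEC": 4.9, "Toshiba": 4.8, "Motorola": 4.6}
y1993 = {"Intel": 7.6, "NEC": 6.0, "Toshiba": 6.3, "Motorola": 5.8}
print(stability_index(y1992, y1993))
```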

In Chart 3, the revenue stability index is shown over the period 1987-2013. One can observe a cyclic pattern that suggests that the companies are influencing each other dynamically. Furthermore, a trend towards increasing dynamics is visible. This should be no surprise for the incumbents, of course. Additionally, the total revenue of the top 20 is plotted (red line). The correlation between the two curves is -0.45: the more revenue is generated from a consolidating market, the higher the instability. This phenomenon is called “The Tragedy Of The Commons” (see my blog, unfortunately in Dutch, “Het Managen van Grenzen aan de Groei” (“Managing Growth Boundaries”)). In this case, during the stage of exponential growth (in general a sellers’ market), the competitive interaction at the supply side is limited. As soon as the market becomes a buyers’ market, competitive interaction at the supply side increases, and this interaction together with the constrained market leads to instability.

Underlying Drivers of M&A in the Semiconductor Industry

Transitions
Chart 4. All red lines indicate transitions, like mergers, spin-offs and LBOs, that DIRECTLY affect the revenues of the companies involved. Small, intellectual property / innovation related transitions are not presented here.

The semiconductor industry shows a lot of mergers and acquisitions. Partial acquisitions, for which we lack data on the revenue of the acquired activity, are kept out of the quantitative analysis. In Chart 4, the red lines indicate the M&A that will be discussed. We have identified the following underlying drivers in our analysis.

  • The semiconductor industry is The Enabler of the electronification and digitization of the world. There are nowadays not many data processing products or systems that do not contain semiconductors. This market has grown from US$ 29.4 billion in 1987 to US$ 213 billion in 2013. In the growth phase the market size increased by 15-16% per year, in the consolidation phase by 7-8% per year.
  • The semiconductor industry is a capital intensive industry. The high capital intensity gives the business a cyclic character. This also implies that quite some cash has to be built up during the good times in order to be able to survive the next downturn. The cash-to-sales ratio of some companies was sometimes higher than 30%. As we will see, this is attractive for private equity firms.
  • Scale is important. Only the top 3 have increased market share compared to the rest of the top 20 (see Chart 4a).
    Chart 4a. Market share of the top 3 in 1987, 2000 and 2013

  • Reduction of vertical integration. Initially, many corporations had their own semiconductor activities. Since the industry during the consolidation phase had a cyclic character with strong (financial) ups and downs, the parent companies were less and less prepared to absorb these cycles in their corporate results.
    • Semiconductor companies became independent from their parents and were no longer restricted by the parents’ corporate strategies.
    • An additional step in deverticalisation was the introduction of the fabless business model: companies that design and sell semiconductors without having a “fab”. In 2010, 7 out of the top 12 were fabless. In this way, fixed assets were substantially reduced, making the companies less vulnerable during the downturns.
    • The reduction of vertical integration also makes it easier to re-allocate (parts of) the activities to other companies / network structures and in this way achieve economy of scale or acquire new competences.

The M&A categories amongst the top 20 in the semiconductor industry

Reshuffling in the semiconductor industry took place through different M&A categories: (1) spin-off, (2) horizontal merger, (3) LBO (Leveraged Buyout), (4) acquisition. For comparison, we also analyse (5) new entrants.

  1. Spin-off

    Chart 5. The spin-offs in the semiconductor industry (Siemens, Infineon, Qimonda and AT&T, Lucent Technologies, Agere)

    Spin-off is defined here as a corporate transaction in which part of a company is split off to form a new company or entity.

    The following spin-offs are considered. Infineon Technologies AG, the former Siemens Semiconductor, was founded on April 1, 1999, as a spin-off of Siemens AG. As a first step, Lucent Technologies was composed out of AT&T Technologies and spun off in 1996. In 2000, the semiconductor activity of Lucent Technologies was spun off and became a new legal entity called Agere; Agere was acquired by LSI in 2007. ON Semiconductor was founded in 1999 as a spin-off of Motorola’s Semiconductor Products Sector. It continued to manufacture Motorola’s discrete, standard analog and standard logic devices. (Relatively small spin-off, not represented in the chart.)

  2. Mergers
    Chart 6. The main mergers in the chip industry (Hitachi, Mitsubishi, Renesas, NEC; SGS, Thomson, SGS-Thomson, STMicroelectronics; Fujitsu, AMD, Spansion; Hyundai, LG, Hynix, SK Hynix)

    The mergers in this analysis are horizontal mergers. These mergers are difficult to implement. Integration often means higher efficiencies and laying off personnel, but also reallocation of management responsibilities, which all together make it difficult to achieve the synergetic advantages.

    In 2003, AMD and Fujitsu created a joint-venture company named Spansion focused on flash memories; in 2005 AMD and Fujitsu sold their stake. In 1996, Hyundai Semiconductors became independent via an Initial Public Offering and merged in 1999 with LG Semiconductors; the name was changed into Hynix Semiconductor in 2001. In 1987, SGS-Thomson was formed as a merger of SGS Microelettronica and Thomson, and with the withdrawal of Thomson in 1998 the name was changed into STMicroelectronics. In 2002, the semiconductor operations of Mitsubishi and Hitachi were spun off and merged to form a new separate legal entity named Renesas; in 2010 Renesas merged with NEC Electronics Corp., a spin-off of NEC in 2002, to form Renesas Electronics.

  3. Leveraged Buy Out
    Chart 7. The Leveraged Buy-outs in the chip industry, Philips and Freescale

    In an LBO, private equity firms typically buy a stable, mature company that’s generating cash. It can be a company that is underperforming or a corporate orphan—a division that is no longer part of the parent’s core business. The point is that the company has lots of cash and little or no debt; that way the private equity investors can easily borrow money to finance the acquisition and can even use borrowed funds for more acquisitions once the company is private. In other words, the private equity firms leverage the strong cash position of the target company to borrow money. Then they restructure and improve the company’s bottom line and later either sell it or take it public. Such deals can produce returns of 30 percent to 40 percent.

    In 2006, Philips Semiconductors was sold by Philips to a consortium of private equity firms (Apax, AlpInvest Partners, Bain Capital, Kohlberg Kravis Roberts & Co. and Silver Lake Partners) through an LBO to form a new separate legal entity named NXP Semiconductors. Philips was very late, compared to other parent companies, to disentangle from its semiconductor activities and, as NXP’s CEO Frans van Houten stated, the LBO was the only option left. In 2006, Freescale (formerly known as Motorola Semiconductors) agreed to be acquired by a consortium of private equity firms through an LBO. 2006 was a dramatic year for Freescale, since Apple decided to have its microprocessors supplied by Intel instead of Freescale.

  4. Acquisitions
    Chart 8. Acquisitions in the semiconductor industry

    Acquisitions are part of life in the semiconductor world. As already stated, we only consider acquisitions amongst the top 20.

    In 2010, Micron Technology acquired Numonyx. In 2011, Intel took over Infineon Technologies’ wireless division, Texas Instruments acquired National Semiconductor, Qualcomm acquired Atheros Communications and ON Semiconductor acquired the Sanyo semiconductor division. In 2012, Samsung Electronics acquired Samsung Electro-Mechanics’ share of Samsung LED and Hynix Semiconductor was acquired by the SK Group. In 2013, Intel acquired Fujitsu Semiconductor Wireless Products, Samsung Electronics sold its 4- and 8-bit microcontroller business to IXYS, Micron Technology acquired Elpida and finally Broadcom acquired Renesas Electronics’ LTE unit.

  5. New Entrants 
    Chart 9. The new entrants in the top 20

    Even in this competitive, capital and know-how intensive industry there have been new entrants. The new entrants group is dominated by US companies.
    Marvell Technology Group, Limited, founded in 1995, is a fabless semiconductor company. MediaTek Inc. is a fabless semiconductor company founded in 1997. Nvidia was founded in 1993.

    New entrants in the top 20: Qualcomm was founded in 1985 and has become nr. 3 in revenue in 2013. Micron Technology, Inc. was founded in 1978 and is nr. 4 in revenue in 2013. Samsung Electric Industries was established as part of the Samsung Group in 1969 in Suwon, South Korea; in 1974 the group expanded into the semiconductor business by acquiring Korea Semiconductor, and it now occupies the nr. 2 position.

 Evaluation

Chart 10. Revenue development in the top 20 in the semiconductor industry from 1987 to 2013 in the different M&A categories as described above.

In Chart 10, the development of the revenue is presented per M&A category. For the spin-offs (blue colors), FROM is the total revenue of the parent organisations and TO is the total revenue of the newly founded companies. The same format is applied to the mergers (green colors, FROM and TO) and the LBOs (orange colors, FROM and TO). The “others” category is a mix of companies for which we lack data on the M&A activity.

There were quite some new entrants in the industry between 1987 and 2013, and some of them succeeded in making it into the top 20; most of those, however, started a decade earlier. They occupy nr. 2, 3 and 4 in the top 20 in 2013. The nr. 1, 2, 3 and 4 all started as semiconductor companies or were embedded in a component conglomerate, like Samsung. These companies have a culture and a managerial mindset for the component business. The new entrants and Intel have grown substantially, to 50% of the total top 20 revenue in 2013. Most players in 1987 were conglomerates, where semiconductor activities were managed on their impact at corporate level and not on winning the semiconductor game. The companies that went into a spin-off, horizontal merger or LBO together deliver 25% of the revenue in 2013. Compared to the new entrants, we have to conclude that the companies that went through a deverticalisation process via spin-offs, mergers or LBOs have been part of a consolidation trajectory with, in absolute terms, no growth between 1995 and 2013.

It is tempting to speculate about the underlying causes, so here are some hypotheses:

  • With respect to the spin-offs: these were rather small companies, maybe too small to grow the business, although Infineon initially showed quite some growth. But analysts reported a lot of cash problems in this growth period. In the end the spin-offs did not lead to substantial growth.
  • The merger candidates, especially the Japanese companies, had declined in ranking during the period that they were still part of a conglomerate and then tried to regain scale by merging. Again the result in terms of growth is very disappointing.
  • There is evidence of poor post-acquisition performance of large acquirers (Harford, 2005).
  • There is a study on merger bidding contests showing that, in contests where beforehand each bidder had a fair chance of winning, the stock returns of the winning bidders are outperformed by those of the losing bidders. However, the opposite is true in those cases with a predictable winner (Malmendier, Moretti, Peters, 2012).
  • The LBOs occur very late in the deverticalisation stage. The performance three years after the LBO is not inspiring; both showed a severe loss of revenues. My impression, also based on an interview with NXP’s CEO van Houten, is that they were just too late in considering M&A. For Philips all options were gone; even a last possibility, a merger with Freescale, was considered, but rejected by Freescale. Fortunately, in the last few years NXP seems strong enough to recover from this transition.
  • Humans always seek for a “Why”, sometimes it is not there and it is just bad luck.
  • Spin-offs, mergers and LBOs are strategic interventions changing the footprint of companies, changing the structure. Maybe these companies did not have a structure problem, but a portfolio problem (markets, applications, products).

I guess the statements of Derek Lidow, president and CEO of market research firm iSuppli, are spot on in explaining the strategic M&A decisions leading to no growth.

“….. the chip industry itself has been unable or unwilling to take the hard steps necessary for consolidation. Many chip companies are run by engineers, who tend to think in terms of technology rather than of how best to manage a product portfolio….”

“The real leverage in this kind of a deal is to do portfolio management,” he notes, which means managing groups of products by market segment or geographic region, for example, rather than by technology category. “That’s usually not done in the semiconductor industry,”

How Do Networks Handle Stress?

Our society gets increasingly networked. Networks are everywhere: our technology is networked, our communication is networked, we work in networks, our money is taken care of in networks (I hope), our industries are networked, and so on. Never has there been a time when individuals and organisations had access to so much knowledge and technology. And that thanks to networks.

“Normal Accidents”

But there is a drawback to networks. Networks are characterised by interacting nodes, allowing a lot of knowledge to be stored in the system. As nicely explained in RSA Animate’s video “The Power of Networks”, networks are a next step in handling complexity. Triggered by the 1979 Three Mile Island accident, where a nuclear accident resulted from an unanticipated interaction of multiple failures in a complex system, Charles Perrow was one of the first researchers to start rethinking our safety and reliability approaches for complex systems. In his Normal Accident Theory, Perrow refers to failures of complex systems as failures that are inevitable and therefore he uses the term “Normal Accidents”.

Complexity and Coupling Drive System Failure

Perrow identifies two major variables, complexity and coupling, that play a major role in his Normal Accident Theory.

Complex Systems

  • Proximity
  • Common-mode connections
  • Interconnected subsystems
  • Limited substitutions
  • Feedback loops
  • Multiple and interacting controls
  • Indirect information
  • Limited understanding

Linear Systems

  • Spatial segregation
  • Dedicated connections
  • Segregated subsystems
  • Easy substitutions
  • Few feedback loops
  • Single purpose, segregated controls
  • Direct information
  • Extensive understanding

Tight Coupling

  • Delays in processing not possible
  • Invariant sequences
  • Only one method to achieve goal
  • Little slack possible in supplies, equipment, personnel
  • Buffers and redundancies are designed-in, deliberate
  • Substitutions of supplies, equipment, personnel limited and designed-in

Loose Coupling

  • Processing delays possible
  • Order of sequences can be changed
  • Alternative methods available
  • Slack in resources possible
  • Buffers and redundancies fortuitously available
  • Substitutions fortuitously available

Although Perrow in 1984 was very much focused on technical systems, his theory has a much broader application scope. Nowadays, his insights also apply to organizational, sociological and economical systems. Enabled by digitization and information technology, systems in our daily lives have grown in complexity. The enormous pressure in business to realize the value in products and services in the shortest possible time and at the lowest cost has driven our systems into formalization, reducing the involvement of human beings and their adaptivity relative to the complexity of the systems. Going through the list Perrow provided, one can understand that this “fast-and-cheap” drive generates complex and tightly coupled systems, maximizing resource and asset utilization and thereby closing in on the maximum stress a system can cope with. From the Normal Accident Theory, we learn that complexity and tight coupling (in terms of time, money, etc.) increase the system’s vulnerability to system failure or system crises and collapses.

Building a Simple Tool to Simulate Networks Handling Stress

First I built a little simulator to play around with his models. It is based on MS Excel with some VBA code added, simulating a network of max 20×20 nodes. Each iteration, nodes (one cell with a color is a node) are added, but in each iteration links between cells are added as well. Each node randomly receives a little bit of stress, which is added to the stress already built up in the previous iterations. Every iteration, some stress is drained out of the system. When there are only a few nodes in the system, most stress is drained away, so no system stress builds up. But after a while more and more nodes are alive in the system, and at a certain moment more stress is generated than the system can absorb. When the stress in a cell exceeds a certain threshold, the cell explodes and its stress is sent to all the connected nodes. The distribution mechanism can be scaled from distributing the stress evenly over the connected nodes (real-life example: a flood) to transferring the full amount of stress to each of the connected nodes (real-life example: information). With this little simulator, I had a nice opportunity to see whether a crisis is predictable or not.
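
The original simulator is an Excel/VBA workbook. Below is a minimal Python re-sketch of the mechanism as described above; all parameter names and values are illustrative assumptions, not the original settings.

```python
# Minimal re-sketch of the stress simulator (illustrative parameters only).
import random

random.seed(1)

SIZE = 20                 # grid of at most 20x20 nodes
NODES_PER_ITER = 2        # nodes added per iteration
LINKS_PER_ITER = 3        # random links added per iteration (0 = no-links case)
STRESS_PER_ITER = 1.0     # maximum random stress added to each live node
DRAIN_PER_ITER = 0.3      # stress drained from each live node per iteration
THRESHOLD = 20.0          # a node "explodes" above this stress level
SPLIT_EVENLY = True       # True: divide the stress over the neighbours (flood-like)
                          # False: send the full amount to every neighbour (information-like)

all_ids = [(r, c) for r in range(SIZE) for c in range(SIZE)]
nodes = {}                # node id -> current stress
links = set()             # undirected links, stored as frozensets of two node ids

def step():
    # Grow the network: add new nodes and new random links.
    free = [n for n in all_ids if n not in nodes]
    for nid in random.sample(free, k=min(NODES_PER_ITER, len(free))):
        nodes[nid] = 0.0
    if len(nodes) >= 2:
        for _ in range(LINKS_PER_ITER):
            links.add(frozenset(random.sample(list(nodes), 2)))
    # Add random stress to every node and drain a fixed amount.
    for nid in nodes:
        nodes[nid] = max(0.0, nodes[nid] + random.uniform(0, STRESS_PER_ITER) - DRAIN_PER_ITER)
    # Cascade: overstressed nodes die and pass their stress to connected nodes.
    exploding = [n for n, s in nodes.items() if s > THRESHOLD]
    while exploding:
        nid = exploding.pop()
        stress = nodes.pop(nid)
        neighbours = [m for l in links if nid in l for m in l if m != nid and m in nodes]
        links.difference_update({l for l in links if nid in l})  # dead node loses its links
        share = stress / len(neighbours) if (SPLIT_EVENLY and neighbours) else stress
        for m in neighbours:
            nodes[m] += share
            if nodes[m] > THRESHOLD and m not in exploding:
                exploding.append(m)
    return sum(nodes.values())   # total system stress after this iteration

system_stress = [step() for _ in range(200)]
```

Running this with LINKS_PER_ITER = 0 gives the no-links equilibrium discussed further below, and re-running it with different random seeds mimics the overlaid simulation runs shown later.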


Let’s check!

In advance, I can set a number of parameters: how many nodes are added per iteration, how many links are created per iteration, what the maximum stress allowed is, what the stress distribution mechanism is, and how much stress is absorbed by the network’s eco-system. Of course, not everything is fixed: the links are placed randomly and the stress addition per iteration is randomized. What is intriguing, however, is that with the same settings I get very different results:

Example 1

In this example, beyond 60 iterations the system is no longer able to absorb all stress and keep the system stress at 0. Between iterations 60 and 100, the stress in the system grows. Around iteration 100, we get the first crisis: a lot of nodes are overstressed and die. The system stress shows a drawdown, but not a complete one; it quickly recovers and moves into growth mode again until it reaches another serious drawdown around iteration 170. Bumpy, but OK I guess.

Example 2

In this example, the first part of the graph is identical to Example 1. But in the first 100 iterations another network of nodes, with other connections than in Example 1, has been created. Another network that, as you see, behaves dramatically differently. So, also around the 100th iteration, we get our first drawdown, but this is a serious one. It almost completely wipes out the network. A few nodes survived and the system rebuilds itself.
Charts: complementary cumulative distribution (CCDF) of the drawdowns in Example 1 and Example 2.
The graph above shows how often drawdowns of a given size have occurred in Example 1. The more the behaviour is a purely stochastic process, the more you may expect the identified drawdowns (peak-to-valleys) to stay close to the line. Only in the tail is there sometimes a tendency to stay above the line.
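
Here is a minimal sketch of how such peak-to-valley drawdowns and their complementary cumulative distribution (CCDF) can be extracted from the system-stress series of the simulator sketch above. This is my own reading of the procedure, not the original spreadsheet logic.

```python
# Minimal sketch: peak-to-valley drawdowns and their CCDF (illustrative only).
import matplotlib.pyplot as plt
import numpy as np

def drawdowns(series):
    """Size of every drop from a running peak to the lowest point reached
    before a new peak is set."""
    dds, peak, valley = [], series[0], series[0]
    for x in series[1:]:
        if x > peak:                    # new peak: close the previous drawdown
            if peak - valley > 0:
                dds.append(peak - valley)
            peak = valley = x
        else:
            valley = min(valley, x)
    if peak - valley > 0:               # drawdown still open at the end
        dds.append(peak - valley)
    return np.array(dds)

def plot_ccdf(dds):
    """Fraction of drawdowns at least as large as each observed size."""
    sizes = np.sort(dds)
    ccdf = 1.0 - np.arange(len(sizes)) / len(sizes)
    plt.loglog(sizes, ccdf, marker="o", linestyle="none")
    plt.xlabel("Drawdown size")
    plt.ylabel("P(drawdown >= size)")
    plt.show()

# Usage with the `system_stress` list from the simulator sketch above:
# plot_ccdf(drawdowns(system_stress))
```

In such a plot, one isolated point far away from all the others is the kind of outlier that is discussed below as a “Dragon-King”.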

 

 

 

 

 

However, in the system of Example 2, an almost complete collapse takes place and you can see what Sornette calls a “Dragon-King”. And of course, at this value we are in an avalanche process that sweeps the stress overload through the network. According to Sornette, this is no longer a stochastic process, since it does not follow the trend line. This is his underpinning that there is something causal and thus predictable. But he is wrong. Yes, it is a lot of domino stones that in a cascade destabilize each other and lead to a bigger drawdown than the vertical axis indicates. But it is triggered by a stochastic process like all the others. As you can reason, this does not imply predictability. After the drawdown, one can identify the crash, not before. Sornette seems to use an old trick to hit the target: first shoot the arrow, then draw the bulls-eye around it, and finally claim you never miss and your predictions are spot on.

Chart: 10 simulation runs with identical settings, system stress per iteration, plotted on top of each other.

In this graph, 10 simulation runs are depicted on top of each other. It shows that up to 100 iterations, the system behaves more or less identically in all the runs. However, in one of the runs, during these 100 iterations a system has been built up that is capable, within boundaries, of keeping itself alive, while in all other runs the built-up network is almost completely destroyed, sometimes rebuilt, and on one occasion all nodes were dead. There is no way that the graph with the system stress contains enough information to predict any system behavior after iteration 100.

If we run the system without allowing it to create any links, the system stress stabilizes at around 3000-3500. Anything added above the 3000-3500 level creates a bubble and is compensated by nodes getting overstressed and killed. Since there are no links, no stress is transferred to other nodes; in other words, the domino effect does not occur. The system is in a dynamic equilibrium. When the build-up of links is allowed and domino effects are possible, then as soon as the stability line of 3000-3500 is exceeded, the behavior is determined by the underlying network structure and the stochastic distribution of overstressed nodes, which makes the system chaotic and no longer manageable.

Can We Predict Crises in Networks?

Recently Didier Sornette gave a TED talk claiming he had found a way to predict crises in complex systems (see http://on.ted.com/Sornette). He shows examples from the financial world, health care and aerospace. This has the air of a grand achievement, and it may bring us a step closer to managing or maybe even preventing crises. After my initial enthusiasm (since if this works, we can stress out stuff even further at a lower risk), I went deeper into Sornette’s work. I became less and less convinced that this is sound scientific stuff. But if he is right, there is a lot of potential in his theory. So, let’s dig deeper.
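
For reference, the core of the approach (see the Johansen, Ledoit and Sornette entries in the references below) is fitting a log-periodic power law to the price series before a suspected critical time. The standard textbook form, quoted here from that literature rather than derived from this blog’s own simulations, is:

```latex
% JLS / log-periodic power law: expected log-price before the critical time t_c,
% with B < 0 and 0 < m < 1 producing the superexponential (bubble-like) growth.
\ln \mathbb{E}[p(t)] = A + B\,(t_c - t)^{m}
                     + C\,(t_c - t)^{m}\cos\bigl(\omega \ln(t_c - t) - \phi\bigr)
```

A fitted t_c is then read as the most probable time of the regime change, which is exactly the claim that is put to the test below.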

I also found a master’s thesis by Chen Li, University of Amsterdam, that tried in depth to validate Sornette’s approach. Here is the conclusion:
“In sum, the bubble detection method is capable of identifying bubble-like (superexponential growth) price patterns and the model fits the historical price trajectory quite well in hindsight. However, fitting the model into historical data and using it to generate prediction in real-time are two different things and the latter is much more difficult. From test results, we conclude that bubble warnings generated by the model have no significance in predicting a crash/deflation.” Sornette’s claim of “a method that predicts crises” is falsified. Calculating whether a system is overstressed and predicting a crisis are two different things.

The Answer Is No, But …..

So, if we can’t predict the drawdowns, we have to avoid creating bubbles. But in techno-human systems, we run into the phenomenon of the tragedy of the commons (the tragedy of the commons is an economics theory by Garrett Hardin, according to which individuals, acting independently and rationally according to each one’s self-interest, behave contrary to the whole group’s long-term best interests by depleting some common resource). Who is prepared to respect boundaries and limit system stress, running the risk that others do not behave accordingly and the system becomes overstressed anyway? Nobody wants to be a sucker, so this bubble game will probably continue, except for a few well-organized exceptions (see my blog in Dutch (sorry) “Het managen van de groei“). Next to complicated social solutions, we can make the system more robust against crises by

  • Avoiding tight coupling and complexity in the design of a system
  • Collecting and analyzing data on “close-call” incidents and improving the system
  • Improving the interaction between operators (people working with or in the system) and the system, and applying human factors engineering

as described by Charles Perrow in 1984 among others.

Ruud Gal, July 25 2014

References

Charles Perrow, “Normal Accidents: Living with High-Risk Technologies”, 1984

Gurkaynak, R.S., Econometric tests of asset price bubbles: taking stock. Journal of Economic Surveys, 22(1): 166-186, 2008

Jiang, Z.-Q., Zhou, W.-X., Sornette, D., Woodard, R., Bastiaensen, K., Cauwels, P., Bubble Diagnosis and Prediction of the 2005-2007 and 2008-2009 Chinese stock market bubbles, http://arxiv.org/abs/0909.1007v2

Johansen, A., Ledoit, O. and Sornette, D., 2000. Crashes as critical points. International Journal of Theoretical and Applied Finance, Vol 3 No 1.

Johansen, A. and Sornette, D., Log-periodic power law bubbles in Latin-American and Asian markets and correlated anti-bubbles in Western stock markets: An empirical study, International Journal of Theoretical and Applied Finance 1(6), 853-920

Johansen, A. and Sornette, D., Fearless versus Fearful Speculative Financial Bubbles, Physica A 337 (3-4), 565-585 (2004)

Kaizoji, T., Sornette, D., Market bubbles and crashes, http://arxiv.org/ftp/arxiv/papers/0812/0812.2449.pdf

Lin, L., Ren, R.-E., Sornette, D., A Consistent Model of ‘Explosive’ Financial Bubbles With Mean-Reversing Residuals, arXiv:0905.0128, http://papers.ssrn.com/abstract=1407574

Lin, L., Sornette, D. Diagnostics of Rational Expectation Financial Bubbles with Stochastic Mean-Reverting Termination Times. http://arxiv.org/abs/0911.1921

Sornette, D., Why Stock Markets Crash (Critical Events in Complex Financial Systems), Princeton University Press, Princeton NJ, January 2003

Sornette, D., Bubble trouble, Applying log-periodic power law analysis to the South African market, Andisa Securities Reports, Mar 2006

Sornette, D., Woodard, R., Zhou, W.-X., The 2006-2008 Oil Bubble and Beyond, Physica A 388, 1571-1576 (2009), http://arxiv.org/abs/0806.1170v3

Sornette, D., Woodard, R., Financial Bubbles, Real Estate bubbles, Derivative Bubbles, and the Financial and Economic Crisis. http://arxiv.org/abs/0905.0220v1

Dan Braha, Blake Stacey, Yaneer Bar-Yam, “Corporate competition: A self-organized network”, Social Networks 33 (2011) 219-230

Chen Li, “Bubble Detection and Crash Prediction”, Master Thesis, University of Amsterdam, 2010

M.A. Greenfield, “Normal Accident Theory: The Changing Face of NASA and Aerospace”, Hagerstown, Maryland, 1998

http://en.wikipedia.org/wiki/Tragedy_of_the_commons

 

 

Fragile And Anti-fragile: Two Sides Of The Innovation Coin

Currently I am reading “Antifragile: Things That Gain from Disorder” by Nassim Nicholas Taleb. It is a provoking book. Having been active in innovation for the last 30 years, I recognize a lot of it in daily innovation practice.

Anti-fragile is described by Nassim Nicholas Taleb:

“Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better”

Taleb very much emphasizes the contrast between anti-fragile and fragile, and he personally is very much in favor of anti-fragile behavior.

Anti-fragile sounds very new, but it isn’t. Prahalad & Hamel already refer to anti-fragile behavior in their book “Competing for the Future”. In my view, companies are successful in innovation when they have demanding customers and creative, dedicated, excellent employees, embedded in a culture and an organisation that enables the fastest and cheapest learning loops. Like Taleb, Prahalad c.s. emphasize that making mistakes is OK, as long as the cost in time and money is low.

In this blog, I would like to point out that fragility and anti-fragility are interacting phenomena, and together they have pushed our society into an innovation tornado the world has never experienced before. My hypothesis is that both behaviors are valuable, that they interact with each other, and that with respect to innovation they have a non-linear, amplifying effect on each other.

The following steps are discussed below: the anti-fragility loop, which gradually transforms into the fragility loop. The fragility loop makes anti-fragility loops faster, which leads again to more fragility loops, and so on. With this increasing speed we run the risk of destabilizing our social systems. Fragile systems are big and complex and lack observability and transparency. Therefore observability and transparency must be enhanced by legal and media bodies in order to keep the system at a sufficient ethical level.

The Anti-fragile Loop

Our anti-fragility loop starts with an imbalance between a challenge and someone’s, an organisation’s or even society’s competences. Step one is to recognize the imbalance before others do. The mismatch between challenge and competences requires exploring unknown territory. The best way is to decompose the mismatch into a portfolio of small activities: if one activity fails, another one may be successful. Independence and redundancy are built into the approach. Although finding solutions may be difficult, it is not complex. Each activity is a learning vehicle, keeping the cost of mistakes low and re-iterating the loop until each problem is solved.

In the end solutions are generated. In this world one can plan activities, but not the outcomes. Whether solutions are found or not, competences develop. This complete process is anti-fragile behavior. We are only interested in options with a low loss (called learning) and a high potential value or profit.

The world gross product is growing 5% annually. That means that everything growing faster than 5% per year is in fact destroying value somewhere, especially if we talk about first-of-a-kind products, and that is what anti-fragile systems are supposed to produce. An easy answer is that all inefficient and ineffective value creation is substituted by the above-average growth of these first-of-a-kind products. But, especially if power systems are in place, and they are, this is not necessarily true. The indirect impact of breakthrough solutions is almost by definition unknown and not transparent. Cause-and-effect relations become clearer once the volume of the business is high enough and the business is more in an improvement than in a breakthrough mode.

In general, the first solution is not the best, so several iterations may be needed to find a real solution that brings satisfactory performance. Now, gradually the second loop, the so-called fragility loop comes in.

The Fragility Loop

Although the imbalance between challenge and competences is reduced, innovation does not end here. There is a lot to win by eliminating the introduced redundancy. Often in this stage of the industry life cycle several solutions compete to become the dominant design, the design accepted by the industry (Utterback’s “Mastering The Dynamics Of Innovation“).

Gradually, cost price becomes a key value driver. Ongoing cost reduction leads to increasing dependencies and trade-off decisions between the different business functions, optimizing product functionality, production cost, investments and order lead time. Once a dominant design is established, parts of the product may be outsourced in order to leverage learning curve / volume effects for these parts by sharing suppliers across the industry. This will lead to specialization and to decomposing the total system into a network of interdependent and inter-transactional organizations, all adding value towards the value proposition to the end customer. In cost-dominant industries, the control of the interfaces is more and more based on the power balance between the organizations, shifting costs and profits in the interest of the most powerful organizations. The old Michael Porter book “Competitive Strategy” provides an excellent framework to understand the underlying drivers of power in industries.

Squeezing all resources out of the system, the system becomes very sensitive to all kinds of undesired variances, which quickly lead to extra cost and impact margins. The system tries to maintain its balance by discipline, planning and power play. This is why I call this the fragility loop. Complexity and the structural elimination of resource claims make the system fragile. The total system / network becomes so large that it is not allowed to fail, since that may have a devastating impact (examples: the recent banking crises, failure of electricity supply, the crises in the construction industry).

The Social Loop

The overall network gradually transforms towards a negative-sum game. Prices have to go down, one has to grow in volume to make money, and if growth has disappeared, it gets tough. The complexity, the power games at the interfaces, the disappearing control and accountability for the overall system behavior transform it towards an increasingly rational, amoral system. And this is made even worse by the fact that the complexity causes transparency to decrease. The customer or citizen is not able to notice that a product uses components from a supplier that uses child labor to save costs. As a customer you cannot really see how healthy products are. As a customer you can no longer grasp the risk profile of different financial products offered by your bank. As a customer, you have no idea what happens with all your personal data and what your mobile phone, via apps like Talking Tom, is sending to all kinds of companies. Therefore external bodies with legal or media power are needed to compensate for the lack of observability and transparency and keep the system within legal and ethical boundaries.

Big complicated systems need people that behave predictably and with discipline (too much variance in behavior will only lead to extra cost). Most people are risk-averse and will feel pretty comfortable in these systems (Daniel Kahneman, “Thinking, Fast and Slow“). After enough stability is reached in terms of way of working and using standardized technology, expensive human interaction is gradually removed from the system by mechanization and automation, transforming the system to higher efficiency at the cost of learning ability and responsiveness to external changes.

Taleb in his book is rather negative about the fragility world, but I tend to disagree. The fragility loop can be perceived as a negative pattern, and for sure it has its crosses to bear, but our complicated systems, our bureaucracies, have also brought a lot of wealth and comfort (for example: a television set for 300 euro), and outsourcing and off-shoring have created opportunities for regions to develop. But this all comes at a price.

More Fragility Leads To More Anti-Fragility

Isn’t it strange: we have gained more and more knowledge, and never in the history of humankind have we achieved such a deep understanding of the world and ourselves. But in our perception, and in real life, everybody experiences an uncomfortable, increasing speed of change in our lives and our environment. The fragile system has made the anti-fragile system learn faster and cheaper, but the fragile system itself also learns faster and cheaper. This explains the shortening of industry, organization and product life cycles.

This is leading to an increasingly non-linear, chaotic (in the definition of complexity theory) behavior of our total society and our environment, especially now, at the brink of a new economy that is based on information.

We live in exciting times.

Conclusion

At a society level, fragile and anti-fragile strengthen each other towards more transformational speed. In my view, this system runs the following risks:

– in the end the whole game is fueled by the sun. We are getting more and more aware of the physical and biological limits of our Earth. Our largest systems have become very fragile and difficult to transform without destabilizing society itself. The transformation to a sustainable society is an awesome task, and if it fails to be on time, it will destabilize the earth’s ecosystems and it will destabilize our societies, which are part of the earth’s ecosystem.

– these sustainability issues are addressed too slowly, because the economic system described above is fueled by growth. Growth as we know it will be limited in the future. The question is whether we can establish value-added growth without growth of the demands on the eco-system.

– the complexity of our society leads to less transparency, which requires a system that maintains ethical boundaries. The model discussed shows that purely planned models lead to fragile systems and purely anti-fragile systems lead to unacceptable inefficiencies.


Managing Complexity

In our more and more complicated world, we are looking for tools to help us identify the right interventions.

A few decades ago Peter Senge introduced systems thinking. Moving away from simple cause-effect analysis, he inspired us with his thinking in reinforcing and balancing loops in a complex system. Eric Berlow shows perfectly how to use systems diagrams for identifying the key interventions in order to make a change.

Ecologist Eric Berlow doesn’t feel overwhelmed when faced with complex systems. He knows that more information can lead to a better, simpler solution. Illustrating the tips and tricks for breaking down big issues, he distills an overwhelming infographic on U.S. strategy in Afghanistan to a few elementary points.

TED Senior Fellow Eric Berlow studies ecology and networks, exposing the interconnectedness of our ecosystems with climate change, government, corporations and more.

Reshoring: A Key Strategy Facing Present And Future Manufacturers

Tue, 02/18/2014 – Guy Morgan, Managing Director and Global Operations Advisory Group Lead, BBK

In the past several decades, global competitiveness has driven industries of all types to pursue a labor cost reduction model. The trend in North America alone has resulted in the following path:

Reduce cost in a high-cost environment
Move product to a lower-cost nation (Mexico or Eastern Europe)
Move product to the next lower-cost nation (China or Southeast Asia)
We are sure the reader may have participated in such outsourcing events that advertised huge savings to the manufacturer and in some cases, other parts of the supply chain. The company reaped benefits in labor and redundancy costs that hopefully offset inventory and transportation costs. But as we all know, the manufacturing environment is in a constant state of change. The failure to recognize the static nature of the business coupled with the failure of manufacturers to comply with disciplined global production systems has caused the participants to revisit the value of outsourcing and the opportunities associated with reshoring. To be clear, reshoring is the practice of returning outsourced products and assets back to the location (geographic selection) from which they were originally outsourced or removed.

Continue Reading ->

Supply Chain Strategies And The Future Of American Manufacturing

Rachel Leisemann Immel, Associate Editor, IMPO

China’s overwhelming manufacturing cost advantage over the U.S. is shrinking fast. Within three years, a Boston Consulting Group analysis concludes, rising Chinese wages, higher U.S. productivity, a weaker dollar, and other factors will virtually close the cost gap between the U.S. and China for many goods consumed in North America. Bill Michels, president of ADR North America LLC, a specialty consulting firm that focuses on purchasing and supply chain management, and senior vice president of the Institute for Supply Management, recently sat down with IMPO to discuss supply chain strategies, government intervention, and the future of American manufacturing.

Continue Reading->

About Reshoring and Off-shoring Trends in US and Europe

The Truth About Reshoring, Productivity, And Today’s Manufacturer

Fri, 10/04/2013 – 11:50am by Jim Shepherd, Plex Systems

There’s been a lot of buzz about the reshoring of American manufacturing business that had previously been lost to other regions. The talk seems to center on three areas:

Is reshoring actually happening?
Are we really going to make up for all the jobs lost to countries with lower-cost labor, primarily India and China?
What can we learn from success stories in order to achieve more reshoring?

Continue Reading->

Offshoring and Reshoring trends: European data

September 27, 2013 by Jan Van Mieghem

Continuing our blogs on offshoring, here is some interesting data from the European Manufacturing Survey conducted in 2009, as studied by Bernhard Dachs, Marcin Borowiecki, Steffen Kinkel and Thomas Christian Schmall (December 2012). Their survey quantifies the extent, trends, and reasons why European manufacturing firms offshore or re-shore production. Here are some of their key findings.

Continue Reading->

3,000 Years of Human History, Described in One Set of Mathematical Equations

Posted By: Joseph Stromberg

Most people think of history as a series of stories—tales of one army unexpectedly defeating another, or a politician making a memorable speech, or an upstart overthrowing a sitting monarch.

Peter Turchin of the University of Connecticut sees things rather differently. Formally trained as an ecologist, he sees history as a series of equations. Specifically, he wants to bring the types of mathematical models used in fields such as wildlife ecology to explain population trends in a different species: humans.

In a paper published with colleagues today in the Proceedings of the National Academy of Sciences, he presents a mathematical model (shown on the left of the video above) that correlates well with historical data (shown on the right) on the development and spread of large-scale, complex societies (represented as red territories on the green area studied). The simulation runs from 1500 B.C.E. to 1500 C.E.—so it encompasses the growth of societies like Mesopotamia, ancient Egypt and the like—and replicates historical trends with 65 percent accuracy.
Continue reading →

How Big The Internet Of Things Could Become

The Internet of Things could be a multi-trillion dollar market by the end of the decade.

by Brian Proffitt September 30, 2013

75 billion. That’s the holy-@$#! number of devices that Morgan Stanley has extrapolated from a Cisco report that details how many devices will be connected to the Internet of Things by 2020. That’s 9.4 devices for every one of the 8 billion people expected to be around in seven years.

Continue Reading

The Inflection Point of Economic Measures

by JAY DERAGON on 10/01/2013

What would happen if companies, industries, sectors and economies changed which results matter and how those results are measured? What would happen is that the behavior of companies, industries, sectors and economies would change. Sound familiar?

Companies, industries, sectors and economies are dynamic and constantly evolving. Inflection points are more significant than the small day-to-day progress made, and the effects of the change are often well-known and widespread. An inflection point can be considered a turning point after which a dramatic change, with either positive or negative results, is expected.

Continue Reading