How To Measure Innovation Performance?

Measuring Innovation Performance

Companies often express their innovation effort (in general, R&D spend) as a percentage of sales. As shown in this blog, that is a limited way of measuring innovation performance. In my courses and trainings I always use an alternative indicator for innovation performance, one that adds a historical perspective. I worked out this indicator over 15 years ago, when I came to the insight that if we cannot measure past innovation performance in hindsight, we cannot make any statements about future innovation performance either, since those are only predictions based on extrapolations and deductions from the past.

The blog starts with a simple definition of innovation and then links innovation to added value. Next, some issues with measuring newly added value are discussed and alternatives are proposed.

What is Innovation?

The word innovation comes from the Latin root "innovare", which means to renew, to start again, to initiate change [1]. This is a very broad definition; in this blog we would like to be more focused and keep it simple by discussing the example of one organisation delivering a product to a customer. The approach extends to product categories, networks and services. An important decision is the level of granularity that offers the most valuable insights.

In its simplest form, we define innovation as follows:

Innovation is that part of the added value of a product sold to a customer that is new to him or her.

Creating Value

Figure 1. The transactional adding-value process

Suppose we are a supplier that offers a product. The customer buys and uses our product, and this hopefully 🙂 adds value for him or her (see figure 1, added value 1). This added value is an experience in the brain of the customer. Unfortunately, at this moment in time we do not have accurate means to measure the sensations in the brain of a customer associated with creating value. With current developments in brain research, this may change in the future.
Even if we could measure the sensations, there is a second hurdle to take. As early as 1992, C. de Bont indicated that this experience of added value also depends strongly on the previous experiences and expectations of the customer [3]. The sensations may not be caused by the product alone; they always occur in a psychological and application context.

Of course, since we are in business, we expect something back in return (added value 2). This can be money, information (about the use, preferences and behaviour of the customer) and goodwill: sharing positive experiences with others (measured, for example, by the Net Promoter Score [4]), a positive attitude, loyalty to the product, and making it easier for us to sell more of the product.

Creating New Value

Innovation, in our definition, is identified by that part of the added value (1) perceived by the customer that is new to him or her. Due to the lack of observability, we can only guess what the true value of the 'new' part is for the customer. The most reliable feedback at this moment is added value (2): how much do we get back? This is also influenced by the difference in bargaining power between us and the customer, and it is part of the competitive context in which the transaction takes place.

I prefer to pragmatically define our measurable innovation performance as follows:

  1. measure the added value (2) we get back in return from the customer,
  2. figure out what the dominant reasons are to buy the product, and
  3. find the correlation with brand, product features or (delivery- and communication-) process features that are new.

To take this one step further, consider tracking whether customers are positively or negatively surprised by the product when using it, since this may lead to increased or decreased goodwill, impacting future sales.

Let’s give a basic example to get the idea.

Figure 2. Basic definition of the innovation performance of a single product

Suppose a customer has just bought your product, and as he or she walks out of the store you ask the following question:

Why did you buy this product?

We assume you are an excellent market researcher, that you know how to get a valid and reliable answer (which is not trivial), and that the answer given can in the end be allocated to one of two categories:

The customer buys the product,

  1. because of product aspects that were already present in earlier generations of this product, or because of experiences with the company, OR
  2. because of product aspects that are new and introduced for the first time in this newest product generation.

I often challenge people to make an estimate for one of the product categories of their companies. Examples came up where participants estimated that 80% of the sales was based on past performance: previous experiences, relationships, product characteristics, or beliefs and perceptions of the brand built up by previous generations of products, or brand experiences with the company (maybe even totally outside the scope of this product).

After asking enough customers to make your conclusions robust, you can make a chart that looks like figure 2.
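
As a minimal sketch of this calculation (in Python; the survey data and the "existing"/"new" answer coding are hypothetical):

```python
from collections import Counter

# Hypothetical exit-survey answers, each coded into one of the two categories:
# "existing" = bought for aspects already present in earlier generations,
# "new"      = bought for aspects first introduced in this generation.
answers = ["existing", "new", "existing", "existing", "new",
           "existing", "new", "existing", "existing", "existing"]

counts = Counter(answers)
share_existing = counts["existing"] / len(answers)
share_new = counts["new"] / len(answers)  # this share is the IPP of figure 2

print(f"Bought for existing aspects: {share_existing:.0%}")  # 70%
print(f"Bought for new aspects:      {share_new:.0%}")       # 30%
```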


Innovation Leverage and Innovation Performance of the Product (IPP)

Figure 3. Innovation Performance of the Product (IPP) amongst Product Generations

By tracking innovation performance for each product generation, interesting observations can be made.

High innovation performance sounds good. However, the leverage of innovation in a previous product generation can be much higher and stretch over many future product generations, and this in turn limits the current innovation performance.

In figure 3 you see an example where innovation in past product generations, via creating loyalty or increasing switching costs, is still adding value (2) in the generations to come. This is a highly appreciated situation: it implies a high innovation leverage and the creation of a sustainable advantage.

Every advantage has its disadvantage

If an innovative organisation is in this (comfortable) situation for a long time, management must be aware that the innovation competence of the organization is less challenged, and the innovation engine may become lazy and slow.

On the other hand, low leverage of past innovation performance is not without issues either. Although low leverage of past innovation performance often leads to world-class enabling(!) innovation competences, one has to keep throwing resources into the innovation engine while the innovation leverage remains low. Moreover, goal-finding innovation capabilities do not develop, because the set of value drivers on which the customer assesses your product is in most cases well known. In extreme cases, every product generation is a new ball game. Many "mature" component companies are in this situation, especially when competition is heavy and companies are played off against each other by their customers. So: an excellent innovation engine, but no profit.

What about competition?

Figure 4. Innovation Performance In The Market (IPM)

The Innovation Performance of the Product may already give quite some insight. However, it is rather internally focused, and that is not very wise nowadays.

Therefore, in figure 4 we have added the market share development, rescaling the data of figure 3. Now, although the innovation leverage (from the past) is higher, the company is losing market share and is probably not competitive enough. Personally, I consider this a next-level indicator: Innovation Performance In the Market (IPM). The formula is

IPM = IPP x Market Share (or Relative Market Share = Market Share against your largest competitor)

IPM in this example says that 5.5% of the market bought your product because of the innovation in product generation N. In the next generation, that was reduced to 3.1%. Message: you are in trouble in a fast-growing market; your innovative sword has gone dull.
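
A small sketch of the formula at work; the IPP and market-share figures below are hypothetical decompositions, chosen only to reproduce the 5.5% and 3.1% above:

```python
def ipm(ipp: float, market_share: float) -> float:
    """Innovation Performance In the Market: the fraction of the whole
    market that bought your product because of this generation's innovation."""
    return ipp * market_share

# Illustrative numbers only: e.g. an IPP of 22% at a 25% market share
print(f"Generation N:   {ipm(0.22, 0.25):.1%}")   # 5.5% of the market
print(f"Generation N+1: {ipm(0.155, 0.20):.1%}")  # 3.1% of the market
```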

The short cut

Personally, I like shortcuts. I remember joining an innovation department and wanting to get an idea of its innovation performance. I asked the people what had really come out of the lab that was new (over the last 2-3 product generations). I verified this with the commercial guys and got from them a best guess of the revenue we would have lost if this new stuff had not been there. Then it was easy to calculate a gut-feel innovation performance of the department.
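
In code, the shortcut is a one-liner; both figures below are illustrative best guesses, not data from that department:

```python
# Revenue of the last 2-3 product generations (illustrative, in M$)
total_revenue = 120.0
# Commercial colleagues' best guess of revenue lost without the new stuff (M$)
revenue_lost_without_new = 18.0

gut_feel_innovation_performance = revenue_lost_without_new / total_revenue
print(f"Gut-feel innovation performance: {gut_feel_innovation_performance:.0%}")  # 15%
```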

Innovation Performance measures the Innovation Engine of the Complete Organisation, not just the R&D Organisation

I must warn against using this as an indicator to judge an innovation department. There are many other departments, as well as higher management, that influence innovation performance.

Innovation Performance measures the output of the innovation process, and it helps to look for bottlenecks in the innovation flow that limit the bang for the buck spent on innovation.

Ruud Gal

September 18th 2014

If you have any questions, or need support to get an idea of the innovation performance of your company and to define actions to improve the innovative behaviour of your organisation, do not hesitate to contact me.

Innovation Performance is one of the topics in our Innovation Management Training & Coaching. See www.im-tc.nl

References

  1. http://activatechurch.wordpress.com/2010/06/03/innovare/
  2. http://en.wikipedia.org/wiki/Innovation
  3. Prof. dr. C.J.P.M. de Bont, "Consumer Evaluations of Early Product-Concepts", PhD thesis, Delft University of Technology, 1992
  4. http://en.wikipedia.org/wiki/Net_Promoter

Evaluating Deverticalisation via M&A in the Semiconductor Industry

Mergers & Acquisitions Amongst The Top 20 In The Semiconductor Industry (1987-2013)

In this post, the revenue development of companies with and without Mergers & Acquisitions is analysed amongst the top 20 in the semiconductor industry in the period 1987-2013. Mergers & Acquisitions are often evaluated on their "short-term" financial impact. Here we try to get an impression of the longer-term effects, focused on growth in revenues. We are more interested in the long-term revenue development of companies with an M&A somewhere in the period 1987-2013; so we measure the integral effect of pre-M&A performance, the M&A itself, and post-M&A performance.

Mergers & Acquisitions (M&A) is a firm's activity that deals with the buying, selling, dividing and combining of different companies and similar entities without creating a subsidiary, another child entity or a joint venture. The dominant rationale used to explain M&A is that acquiring firms seek improved financial performance. Given the scope of this analysis, financial performance amongst the top 20 is most likely improved by (1) increasing economies of scale, (2) increasing economies of scope, (3) increasing revenue or market share, and (4) increasing synergy.

The restriction to M&A amongst the top 20 hides a lot of acquisitions. Acquisitions are a normal element in this industry: "eat or be eaten". For example, from 2009 until 2013 Intel acquired Wind River Systems, McAfee, Infineon (Wireless), Silicon Hive, Telmap, Mashery, Aepona, SDN, Stonesoft Corporation, Omek Interactive, Indisys, Basis and Avago Technologies. Only Infineon will be mentioned in this analysis, since it is a top 20 player.

The revenue dynamics in the top 20 in the period 1987-2013

Chart 1. The top 20 revenues in the semiconductor industry per company in the period 1987-2013, stacked from small to large for each year. Source of the data: IHS iSuppli, Dataquest

The semiconductor industry has seen quite some changes in the last decades. Chart 1 shows the changes in revenues, but also in ranks, and is used to visualize the stability of the industry.

The chart was constructed in the following way. For every year, the revenues of the top 20 are stacked from nr. 20 at the bottom to nr. 1 at the top. The green top line, for example, is in 2013 the total revenue earned by the top 20 that year. The difference between the green and the orange line in 2013 (second from the top) is Intel's revenue. The difference between the orange and the black line represents the revenue of Samsung.
Sometimes there are cross-overs between the years. These cross-overs occur when companies change position in the revenue ranking. For example, the third line from the top (Qualcomm) in 2013 took over that position from Texas Instruments between 2011 and 2012.
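
For readers who want to reproduce this kind of chart, here is a minimal sketch; it uses random stand-in revenues instead of the IHS iSuppli / Dataquest data:

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1987, 2014)
rng = np.random.default_rng(1)
# One row of 20 stand-in revenues per year, sorted so that column 0 is
# nr. 20 (smallest) and column 19 is nr. 1 (largest)
revenues = np.sort(rng.lognormal(mean=1.0, sigma=0.8,
                                 size=(len(years), 20)), axis=1)

# Stacking from nr. 20 at the bottom to nr. 1 at the top: one cumulative
# line per rank; the top line is the total top-20 revenue of that year
stacked = np.cumsum(revenues, axis=1)
for rank in range(20):
    plt.plot(years, stacked[:, rank])
plt.xlabel("Year")
plt.ylabel("Stacked top-20 revenue (B$)")
plt.show()
```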

In the period 1987-1995 the lines are quite parallel, the ranking is more or less stable, and growth is exponential. The only disruption of the ranking order is Intel (green line), taking over the nr. 1 position from NEC Semiconductors in 1992. In 1996, the growth curve ends in a downturn. From then on, the industry has to get used to a cyclic revenue pattern: a pattern that is typical for exponential growth systems hitting a constraint (see my blog on failing network systems), or, as business analysts call it, the industry is in a consolidation phase.

Chart 2. Similar to Chart 1, only now the ranking is visualized and not the size of the revenues

The number of changes in ranking (the number of cross-overs) also increases as we enter the 21st century. An alternative to the graph above is to consider only the changes in ranks and keep the revenue data out. The picture then looks like Chart 2. The chart looks even a bit messier, but this is caused by the many cross-overs in the lower ranks. Although each change in rank counts as 1, the change in revenue behind these lower ranks may be quite small, and therefore less dramatic than may appear from Chart 2. For readability reasons, we will use this chart to plot the M&A.

A Measure of Industry Stability

Chart 3. The stability index for the ranking in revenues in the semiconductor industry in 1987-2013, including the normalized revenue development for the top 20

One can express the level of stability of a distributed key parameter (i.e. revenue) over a set of actors over a period (i.e. one year) in a single parameter; let's call it the stability index.

The revenue stability index is in this analysis defined as the correlation between the ranked and sorted revenues of the top 20 in two consecutive years. The stability index is 1 in case the revenues of all players are equal to those of the previous year. It equals 0 in case the ranking is fully random, which hopefully will never happen.
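
A minimal sketch of one plausible implementation of this definition (the three-company data at the bottom are hypothetical):

```python
import numpy as np

def stability_index(rev_prev: dict, rev_curr: dict) -> float:
    """Correlation between the revenue vectors of the same companies in two
    consecutive years, ordered by the previous year's ranking. It is 1.0 when
    every company's revenue equals last year's, and tends toward 0 when the
    ranking is reshuffled at random."""
    companies = sorted(rev_prev, key=rev_prev.get, reverse=True)
    x = np.array([rev_prev[c] for c in companies])
    y = np.array([rev_curr.get(c, 0.0) for c in companies])
    return float(np.corrcoef(x, y)[0, 1])

print(stability_index({"A": 10.0, "B": 8.0, "C": 5.0},
                      {"A": 11.0, "B": 7.5, "C": 5.5}))  # close to 1: a stable year
```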

In Chart 3, the revenue stability index is shown over the period 1987-2013. One can observe a cyclic pattern, which suggests that the companies are influencing each other dynamically. Furthermore, a trend towards increasing dynamics is visible; this should be no surprise for the incumbents, of course. Additionally, the total revenue of the top 20 is plotted (red line). The correlation between the two curves is -0.45: the more revenue is generated from a consolidating market, the higher the instability. This phenomenon is called "The Tragedy Of The Commons" (see my blog "Managing Growth Boundaries" ("Het Managen van Grenzen aan de Groei")). During the stage of exponential growth (in general a sellers' market), the competitive interaction at the supply side is limited. As soon as the market becomes a buyers' market, competitive interaction at the supply side increases, and this interaction, together with the constrained market, leads to instability.

Underlying Drivers of M&A in the Semiconductor Industry

Chart 4. All red lines indicate transitions, like mergers, spin-offs and LBOs, that DIRECTLY affect the revenues of the companies involved. Small, intellectual property / innovation related transitions are not presented here.

The semiconductor industry shows a lot of mergers and acquisitions. Partial acquisitions, where we lack data on the revenue of the acquired part, are kept out of the quantitative analysis. In Chart 4, the red lines indicate the M&A that will be discussed. We have identified the following underlying drivers in our analyses.

  • The semiconductor industry is The Enabler of the electronification and digitization of the world. There are nowadays not many data-processing products or systems that do not contain semiconductors. This market has grown from US$ 29.4 B in 1987 to US$ 213 B in 2013. In the growth phase the market size increased by 15-16% per year, in the consolidation phase by 7-8% per year.
  • The semiconductor industry is a capital-intensive industry. The high capital intensity gives the business a cyclic character. This also implies that quite some cash must be set aside during the good times in order to be able to survive the next downturn. The cash-to-sales ratio of some companies was at times higher than 30%. As we will see, this is attractive for private equity firms.
  • Scale is important. Only the top 3 have increased their market share compared to the rest of the top 20 (see Chart 4a).

    Chart 4a. Market Share of the top 3 in 1987, 2000 and 2013

  • Reduction of vertical integration. Initially, many corporations had their own semiconductor activities. Since the industry during the consolidation phase had a cyclic character with strong (financial) ups and downs, the parent companies were less and less prepared to absorb these cycles in their corporate results.
    • Semiconductor companies became independent from their parents and were no longer restricted by the parents' corporate strategies.
    • An additional step in deverticalization was the introduction of the fabless business model: companies that design and sell semiconductors without owning a "fab". In 2010, 7 out of the top 12 were fabless. In this way, fixed assets were substantially reduced, making the companies less vulnerable during the downturns.
    • The reduction of vertical integration also makes it easier to re-allocate (parts of) the activities to other companies or network structures, and in this way achieve economies of scale or acquire new competences.

The M&A categories amongst the top 20 in the semiconductor industry

Reshuffling in the semiconductor industry took place through different M&A categories: (1) spin-off, (2) horizontal merger, (3) LBO (Leveraged Buyout) and (4) acquisition. For comparison, we also analyse (5) new entrants.

  1. Spin-off

    Chart 5. The spin-offs in the semiconductor industry (Siemens, Infineon, Qimonda and AT&T, Lucent Technologies, Agere)

    Spin-off is defined here as a type of corporate transaction forming a new company or entity.

    The following spin-offs are considered. Infineon Technologies AG, the former Siemens Semiconductor, was founded on April 1, 1999, as a spin-off of Siemens AG. As a first step, Lucent Technologies was composed out of AT&T Technologies and spun off in 1996; in 2000, the semiconductor activity of Lucent Technologies was spun off in turn and became a new legal entity called Agere. Agere was acquired by LSI in 2007. ON Semiconductor was founded in 1999 as a spin-off of Motorola's Semiconductor Products Sector; it continued to manufacture Motorola's discrete, standard analog and standard logic devices (a relatively small spin-off, not represented in the chart).

  2. Mergers
    Chart 6. The main mergers in the chip industry (Hitachi-Mitsubishi-Renesas-NEC, SGS-Thomson-STMicroelectronics, Fujitsu-AMD-Spansion, Hyundai-LG-Hynix-SK Hynix)

    The mergers in this analysis are horizontal mergers. These mergers are difficult to implement. Integration often means higher efficiencies and laying off personnel, but also the reallocation of management responsibilities, which altogether makes it difficult to achieve the synergetic advantages.

    In 2003, AMD and Fujitsu created a joint-venture company named Spansion, focused on flash memories; in 2005, AMD and Fujitsu sold their stakes. In 1996, Hyundai Semiconductors became independent via an Initial Public Offering and merged in 1999 with LG Semiconductors; the name was changed to Hynix Semiconductor in 2001. In 1987, SGS-Thomson was formed as a merger of SGS Microelettronica and Thomson; with the withdrawal of Thomson in 1998, the name changed to STMicroelectronics. In 2002, the semiconductor operations of Mitsubishi and Hitachi were spun off and merged to form a new separate legal entity named Renesas; in 2010, Renesas became Renesas Electronics by merging with NEC Electronics Corp., itself a spin-off of NEC in 2002.

  3. Leveraged Buy Out
    Chart 7. The Leveraged Buy-outs in the chip industry: Philips and Freescale

    In an LBO, private equity firms typically buy a stable, mature company that’s generating cash. It can be a company that is underperforming or a corporate orphan—a division that is no longer part of the parent’s core business. The point is that the company has lots of cash and little or no debt; that way the private equity investors can easily borrow money to finance the acquisition and can even use borrowed funds for more acquisitions once the company is private. In other words, the private equity firms leverage the strong cash position of the target company to borrow money. Then they restructure and improve the company’s bottom line and later either sell it or take it public. Such deals can produce returns of 30 percent to 40 percent.

    In 2006, Philips Semiconductors was sold by Philips to a consortium of private equity firms (Apax, AlpInvest Partners, Bain Capital, Kohlberg Kravis Roberts & Co. and Silver Lake Partners) through an LBO, forming a new separate legal entity named NXP Semiconductors. Philips was very late, compared to other parent companies, in disentangling from its semiconductor activities, and as NXP's CEO Frans van Houten stated, the LBO was the only option left. In 2006, Freescale (formerly known as Motorola Semiconductors) agreed to be acquired by a consortium of private equity firms through an LBO. 2006 was a dramatic year for Freescale, since Apple decided to have its microprocessors supplied by Intel instead of Freescale.

  4. Acquisitions
    Chart 8. Acquisitions in the semiconductor industry

    Acquisitions are part of life in the semiconductor world. As already stated, we only consider acquisitions amongst the top 20.

    In 2010, Micron Technology acquired Numonyx. In 2011, Intel took over Infineon Technologies' wireless division, Texas Instruments acquired National Semiconductor, Qualcomm acquired Atheros Communications, and ON Semiconductor acquired the Sanyo semiconductor division. In 2012, Samsung Electronics acquired Samsung Electro-Mechanics' share of Samsung LED, and Hynix Semiconductor was acquired by the SK Group. In 2013, Intel acquired Fujitsu Semiconductor Wireless Products, Samsung Electronics sold its 4- and 8-bit microcontroller business to IXYS, Micron Technology acquired Elpida, and finally Broadcom acquired Renesas Electronics' LTE unit.

  5. New Entrants 
    Chart 9. The new entrants in the top 20

    Even in this competitive, capital- and know-how-intensive industry there have been new entrants. The new-entrants group is dominated by US companies.
    Marvell Technology Group, Ltd., founded in 1995, is a fabless semiconductor company. MediaTek Inc. is a fabless semiconductor company founded in 1997. Nvidia was founded in 1993.

    New entrants in the top 20: Qualcomm, founded in 1985, has become nr. 3 in revenue in 2013. Micron Technology, Inc., founded in 1978, is nr. 4 in revenue in 2013. Samsung Electric Industries was established as an industry of the Samsung Group in 1969 in Suwon, South Korea; in 1974 the group expanded into the semiconductor business by acquiring Korea Semiconductor, and it now occupies the nr. 2 position.

Evaluation

Chart 10. Revenue development in the top 20 in the semiconductor industry from 1987 to 2013 in the different M&A categories as described above.

In Chart 10, the development of revenue is presented per M&A category. Spin-offs (blue colors): FROM is the total revenue of the parent organisations and TO is the total revenue of the newly founded companies. The same format is applied to Mergers (green colors, FROM and TO) and LBOs (orange colors, FROM and TO). The "others" category is a mix of companies for which we lack quantitative data on the M&A activity.

There were some new entrants in the industry between 1987 and 2013, and some of them succeeded in making it into the top 20. There were also new entrants in the top 20 that had started a decade earlier; they occupy nrs. 2, 3 and 4 in the top 20 in 2013. The nrs. 1, 2, 3 and 4 all started as semiconductor companies or were embedded in a component conglomerate, like Samsung. These companies have a culture and a managerial mindset for the component business. The new entrants and Intel have grown substantially, to 50% of the total top-20 revenue in 2013. Most players in 1987 were conglomerates, where semiconductor activities were managed on their impact at the corporate level and not on winning the semiconductor game. The companies that went into a spin-off, horizontal merger or LBO together deliver 25% of the revenue in 2013. Compared to the new entrants, we have to conclude that the companies that went through a deverticalisation process via spin-offs, mergers or LBOs have been part of a consolidation trajectory with, in absolute terms, no growth between 1995 and 2013.

It is tempting to speculate about the underlying causes, so here are some hypotheses:

  • With respect to the spin-offs: these were rather small companies, maybe too small to grow the business, although Infineon initially showed quite some growth. But analysts report a lot of cash problems in this growth period. In the end, the spin-offs did not lead to substantial growth.
  • With respect to the mergers: especially the Japanese companies had declined in ranking during the period that they were still part of a conglomerate, and then tried to regain scale by merging. Again, the result in terms of growth is very disappointing.
  • There is evidence of poor post-acquisition performance of large acquirers (Harford, 2005).
  • There is a study on merger bidding contests showing that, in contests where beforehand each bidder had a fair chance of winning, the stock returns of the winning bidders are outperformed by those of the bidders that lost. The opposite, however, is true in cases with a predictable winner (Malmendier, Moretti, Peters, 2012).
  • The LBOs occurred very late in the deverticalisation stage. The performance three years after the LBO is not inspiring: both showed a severe loss of revenues. My impression, also based on an interview with NXP's CEO van Houten, is that they were simply too late in considering M&A. For Philips all options were gone; even a last possibility, a merger with Freescale, was considered but rejected by Freescale. Fortunately, in the last few years NXP seems strong enough to recover from this transition.
  • Humans always seek a "why"; sometimes it is not there and it is just bad luck.
  • Spin-offs, mergers and LBOs are strategic interventions that change the footprint and the structure of companies. Maybe these companies did not have a structure problem, but a portfolio problem (markets, applications, products).

I guess the statements Derek Lidow, president and CEO of market research firm iSuppli, made in 1996 are spot on in explaining the strategic M&A decisions that led to no growth.

“….. the chip industry itself has been unable or unwilling to take the hard steps necessary for consolidation. Many chip companies are run by engineers, who tend to think in terms of technology rather than of how best to manage a product portfolio….”

"The real leverage in this kind of a deal is to do portfolio management," he notes, which means managing groups of products by market segment or geographic region, for example, rather than by technology category. "That's usually not done in the semiconductor industry."

Sources

Great Innovators Steal

Posted on September 8, 2014 by Pete Foley

In 1996, Steve Jobs expressed strong agreement with a quote he ascribed to Pablo Picasso: "Good artists copy; great artists steal." This is a meme with a long pedigree, and it has also been expressed in very similar terms by other creative giants, from T. S. Eliot to Stravinsky.

With quotations, there is often debate around who said exactly what and when. However, what I love about them is that, like proverbs, they often capture a powerful insight very succinctly. I believe this is one such insight, and one worth stealing. While the idea itself is hardly new, I believe that looking at it through the lens of behavioral science, and hence using analogy, enabled by deep causal understanding and problem mapping, can teach us how to steal more effectively.

Continue reading ->

How Do Networks Handle Stress?

Our society is getting increasingly networked. Networks are everywhere: our technology is networked, our communication is networked, we work in networks, our money is taken care of in networks (I hope), our industries are networked, and so on. Never has there been a time when individuals and organisations had access to so much knowledge and technology. And that is thanks to networks.

“Normal Accidents”

But there is a drawback to networks. Networks are characterised by interacting nodes, allowing a lot of knowledge to be stored in the system. As nicely explained in RSA Animate's video "The Power of Networks", networks are a next step in handling complexity. Triggered by the 1979 Three Mile Island accident, where a nuclear accident resulted from an unanticipated interaction of multiple failures in a complex system, Charles Perrow was one of the first researchers to start rethinking our safety and reliability approaches for complex systems. In his Normal Accident Theory, Perrow refers to failures of complex systems as failures that are inevitable, and therefore he uses the term "Normal Accidents".

Complexity and Coupling Drive System Failure

Perrow identifies two major variables, complexity and coupling, that play a major role in his Normal Accident Theory.

Complex Systems

Proximity
Common-mode connections
Interconnected subsystems
Limited substitutions
Feedback loops
Multiple and interacting controls
Indirect information
Limited understanding

Linear Systems

Spatial segregation
Dedicated connections
Segregated subsystems
Easy substitutions
Few feedback loops
Single purpose, segregated controls
Direct information
Extensive understanding

Tight Coupling

Delays in processing not possible
Invariant sequences
Only one method to achieve goal
Little slack possible in supplies, equipment, personnel
Buffers and redundancies are designed-in, deliberate
Substitutions of supplies, equipment, personnel limited and designed-in

Loose Coupling

Processing delays possible
Order of sequences can be changed
Alternative methods available
Slack in resources possible
Buffers and redundancies fortuitously available
Substitutions fortuitously available

 

 

Although Perrow in 1984 was very much focused on technical systems, his theory has a much broader application scope. Nowadays, his insights also apply to organizational, sociological and economic systems. Enabled by digitization and information technology, the systems in our daily lives have grown in complexity. The enormous pressure in business to realize the value in products and services in the shortest possible time and at the lowest cost has driven our systems into formalization, reducing the involvement of human beings, and their adaptivity, relative to the complexity of the systems. Going through the list Perrow provided, one can understand that this "fast-and-cheap" drive generates complex and tightly coupled systems, maximizing resource and asset utilization and, in doing so, closing in on the maximum stress a system can cope with. From the Normal Accident Theory we learn that complexity and tight coupling (in terms of time, money, etc.) increase a system's vulnerability to system failure, crises and collapses.

Building a Simple Simulation Tool to Simulate Networks Handling Stress

First I built a little simulator to play around with these models. It is based on MS EXCEL with some VBA code added, simulating a network of at most 20×20 nodes. In each iteration, nodes are added (one coloured cell is a node), but also links between cells. Each node randomly receives a little bit of stress, which is added to the stress already built up in the previous iterations. Every iteration, some stress is drained out of the system. When there are only a few nodes in the system, most stress is drained away, so no system stress builds up. But after a while more and more nodes are alive in the system, and at a certain moment more stress is generated than the system can absorb. When the stress in a cell exceeds a certain threshold, the cell explodes and its stress is sent to all the connected nodes. The distribution mechanism can be scaled from distributing the stress evenly over the connected nodes (real-life example: a flood) to transferring the full amount of stress to each of the connected nodes (real-life example: information). With this little simulator, I had a nice opportunity to see whether a crisis is predictable or not.
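
The original simulator is an Excel workbook with VBA; below is a minimal Python sketch of the same mechanics. The parameter values and the even, flood-like distribution rule are illustrative choices, not the exact settings used in the videos:

```python
import random

ITERATIONS, THRESHOLD, DRAIN = 200, 30.0, 0.5

stress = {}    # node id -> accumulated stress
links = {}     # node id -> set of connected node ids
history = []   # total system stress per iteration

for step in range(ITERATIONS):
    # grow the network: one new node and one random link per iteration
    nid = len(stress)
    stress[nid], links[nid] = 0.0, set()
    if nid > 0:
        a, b = random.sample(list(stress), 2)
        links[a].add(b)
        links[b].add(a)

    # every node receives a little random stress; some stress drains away
    for n in stress:
        stress[n] = max(0.0, stress[n] + random.uniform(0.0, 1.0) - DRAIN)

    # overstressed nodes "explode": their stress is spread evenly over the
    # connected nodes (a node's death is simplified here to a reset to zero)
    for n in [k for k, s in stress.items() if s > THRESHOLD]:
        if links[n]:
            share = stress[n] / len(links[n])
            for m in links[n]:
                stress[m] += share
        stress[n] = 0.0

    history.append(sum(stress.values()))  # one point of the system-stress curve
```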

Start the video and click on the small square in the bottom right corner to enlarge.

Let’s check!

In advance, I can set a number of parameters: how many nodes are added per iteration, how many links are created per iteration, what the maximum allowed stress is, what the stress distribution mechanism is, and how much stress is absorbed by the network's eco-system. Of course, not everything is fixed: the links are placed randomly and the stress addition per iteration is randomized. What is intriguing, however, is that with the same settings I get very different results:

Example 1

In this example, beyond 60 iterations the system is no longer able to absorb all stress and keep the system stress at 0. Between iterations 60 and 100, the stress in the system grows. Around iteration 100 we get the first crisis: a lot of nodes are overstressed and die. The system stress shows a drawdown, but not a complete one; the system quickly recovers and moves into growth mode again, until it reaches another serious drawdown around iteration 170. Bumpy, but OK I guess.

Example 2

In this example, the first part of the graph is identical to Example 1. But in the first 100 iterations another network of nodes, with other connections than in Example 1, has been created. Another network that, as you can see, behaves dramatically differently. So, also around the 100th iteration we get our first drawdown, but this is a serious one: it almost completely wipes out the network. A few nodes survive and the system rebuilds itself.

(Charts: the complementary cumulative distribution of drawdown sizes for Examples 1 and 2.)
The graph above shows how often drawdowns of a given size have occurred in Example 1. The more the process is purely stochastic, the more you may expect the identified drawdowns (peak-to-valleys) to stay close to the line. Only in the tail is there sometimes a tendency to stay above the line.
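
Extracting the drawdowns (peak-to-valley declines) from the system-stress series is straightforward; a minimal sketch (plotting how often drawdowns of each size occur, on log axes, gives a chart like the one above):

```python
def drawdowns(series):
    """Return all peak-to-valley declines in a series, largest first."""
    result, peak, valley = [], series[0], series[0]
    for v in series[1:]:
        if v > peak:                # a new peak closes the previous drawdown
            if valley < peak:
                result.append(peak - valley)
            peak = valley = v
        else:
            valley = min(valley, v)
    if valley < peak:               # close the drawdown still open at the end
        result.append(peak - valley)
    return sorted(result, reverse=True)

# Toy series: drawdowns are 5->3 = 2, 8->2 = 6 and 10->1 = 9
print(drawdowns([0, 5, 3, 4, 8, 2, 6, 10, 1]))  # [9, 6, 2]
```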

However, in the system of Example 2, an almost complete collapse takes place, and you can see what Sornette calls a "Dragon-King". And of course, with this value we are in an avalanche process that sweeps the stress overload through the network. According to Sornette, this is no longer a stochastic process, since it does not follow the trend line. This is his underpinning that there is something causal and thus predictable. But he is wrong. Yes, there are a lot of domino stones that in a cascade destabilize each other and lead to a bigger drawdown than the vertical axis indicates. But it is triggered by a stochastic process like all the others. As you can reason, this does not imply predictability. After the drawdown one can identify the crash, not before. Sornette seems to use an old trick to hit the target: first shoot the arrow, then draw the bulls-eye around it, and finally claim you never miss and your predictions are spot on.

(Chart: the total system stress of 10 simulation runs with identical settings, plotted on top of each other.)

In this graph, 10 simulation runs are depicted on top of each other. It shows that up to 100 iterations, the system behaves more or less identically in all runs. In one of the runs, however, during these 100 iterations a network has built up that is capable, within boundaries, of keeping itself alive, while in all the other runs the built network is almost completely destroyed and sometimes rebuilt; on one occasion all nodes were dead. There is no way that the graph of the system stress contains enough information to predict any system behavior after iteration 100.

If we run the system without allowing it to create any links, the system stabilizes at around 3000-3500. Any nodes added above the 3000-3500 level create a bubble and are compensated by nodes getting overstressed and killed. Since there are no links, no stress is transferred to other nodes; in other words, the domino effect does not occur. The system is in a dynamic equilibrium. When the build-up of links is allowed and domino effects are possible, then as soon as the stability line of 3000-3500 is exceeded, the underlying network structure and the stochastic distribution of overstressed nodes determine the behavior, making the system chaotic and no longer manageable.

Can We Predict Crises in Networks?

Recently, Didier Sornette gave a TED talk claiming he had found a way to predict crises in complex systems (see http://on.ted.com/Sornette). He shows examples from the financial world, health care and aerospace. This has the odour of a grand achievement, and it might bring us a step closer to managing or maybe even preventing crises. After my initial enthusiasm (since if this works, we can stress out stuff even further at a lower risk), I went deeper into Sornette's work. However, I became less and less convinced that this is sound scientific stuff. But if he is right, there is a lot of potential in his theory. So, let's dig deeper.

I also found a master's thesis by Chen Li, University of Amsterdam, that tried in depth to validate Sornette's approach. Here is the conclusion:
"In sum, the bubble detection method is capable of identifying bubble-like (superexponential growth) price patterns and the model fits the historical price trajectory quite well in hindsight. However, fitting the model into historical data and using it to generate prediction in real-time are two different things and the latter is much more difficult. From test results, we conclude that bubble warnings generated by the model have no significance in predicting a crash/deflation." Sornette's claim of "a method that predicts crises" is falsified. Calculating whether a system is overstressed and predicting a crisis are two different things.

The Answer Is No, But …..

So, if we can't predict the drawdowns, we have to avoid creating bubbles. But in techno-human systems we run into the phenomenon of the tragedy of the commons (an economics theory by Garrett Hardin, according to which individuals, acting independently and rationally according to each one's self-interest, behave contrary to the whole group's long-term best interests by depleting some common resource). Who is prepared to respect boundaries and limit system stress, while running the risk that others do not behave accordingly and let the system become overstressed anyway? Nobody wants to be a sucker, so this bubble game will probably continue, except for a few well-organized exceptions (see my blog "Managing Growth Boundaries" ("Het managen van grenzen aan de groei")). Next to complicated social solutions, we can make the system more robust against crises by

  • Avoiding tight coupling and complexity in the design of a system
  • Collecting and analyzing data on "close-call" incidents and improving the system
  • Improving the interaction between operators (the people working with or in the system) and the system, and applying human factors engineering

as described by Charles Perrow in 1984 among others.

Ruud Gal, July 25 2014

References

Charles Perrow, “Normal Accidents: Living with High-Risk Technologies”, 1984

Gurkaynak, R.S., Econometric tests of asset price bubbles: taking stock. Journal of Economic Surveys, 22(1): 166-186, 2008

Jiang Z-Q., Zhou,W-X., Sornette, D., Woodard, R., Bastiaensen, K., Cauwels, P., Bubble Diagnosis and Prediction of the 2005-2007 and 2008-2009 Chinese stock market bubbles, http://arxiv.org/abs/0909.1007v2

Johansen, A., Ledoit, O. and Sornette, D., 2000. Crashes as critical points. International Journal of Theoretical and Applied Finance, Vol 3 No 1.

Johansen,A and Sornette, D., Log-periodic power law bubbles in Latin-American and Asian markets and correlated anti-bubbles in Western stock markets: An empirical study, International Journal of Theoretical and Applied Finance 1(6), 853-920

Johansen,A and Sornette, D., Fearless versus Fearful Speculative Financial Bubbles. Physica A 337 (3-4), 565-585 (2004)

Kaizoji, T., Sornette, D., Market bubbles and crashes. http://arxiv.org/ftp/arxiv/papers/0812/0812.2449.pdf

Lin, L., Ren, R.-E., Sornette, D., A Consistent Model of ‘Explosive’ Financial Bubbles With Mean-Reversing Residuals, arXiv:0905.0128, http://papers.ssrn.com/abstract=1407574

Lin, L., Sornette, D. Diagnostics of Rational Expectation Financial Bubbles with Stochastic Mean-Reverting Termination Times. http://arxiv.org/abs/0911.1921

Sornette, D., Why Stock Markets Crash (Critical Events in Complex Financial Systems), Princeton University Press, Princeton NJ, January 2003

Sornette, D., Bubble trouble, Applying log-periodic power law analysis to the South African market, Andisa Securities Reports, Mar 2006

Sornette, D., Woodard, R., Zhou, W.-X., The 2006-2008 Oil Bubble and Beyond, Physica A 388, 1571-1576 (2009). http://arxiv.org/abs/0806.1170v3

Sornette, D., Woodard, R., Financial Bubbles, Real Estate bubbles, Derivative Bubbles, and the Financial and Economic Crisis. http://arxiv.org/abs/0905.0220v1

Dan Braha, Blake Stacey, Yaneer Bar-Yam, "Corporate competition: A self-organized network", Social Networks 33 (2011) 219-230

Chen Li, “Bubble Detection and Crash Prediction”, Master Thesis, University of Amsterdam, 2010

M.A. Greenfield, "Normal Accident Theory: The Changing Face of NASA and Aerospace", Hagerstown, Maryland, 1998

http://en.wikipedia.org/wiki/Tragedy_of_the_commons

Roadmap Eindhoven Energieneutraal 2045 (Roadmap for an Energy-Neutral Eindhoven in 2045)

In October 2013, Elke den Ouden (TU/e) and I started, commissioned by Mary-Ann Schreurs, alderman of the city of Eindhoven, and Marc Eggermont, managing director of Woonbedrijf, to create a roadmap that must bring the consumption of fossil fuels in Eindhoven down to zero by 2045. Of course this is just a dot on a very distant horizon, but it does help to steer in the right direction in the coming years.

In fact, the roadmap covers only a third of the challenge. A substantial part of fossil fuel consumption is caused by mobility (cars and airplanes, for example), and roughly another third is caused by the fossil fuel consumed in the production, transport and recycling of all the goods and food we consume in Eindhoven. Few people know, for instance, that the piece of meat on their plate requires a great deal of fossil fuel. The roadmap concentrates on the built environment: houses, offices, businesses.

The roadmap was launched last week, on Wednesday July 2nd, at the end of a workshop in which we translated it, together with all kinds of stakeholders, into actions for the coming years.

The areas of innovation


The first discussions with stakeholders in the early phase of the roadmap process already revealed three areas of substantive innovation. On the horizon it had been set that Eindhoven must be energy-neutral in 2045. But in the short term (bottom left in the sketch) it was clear that while a number of people are already busy devising all kinds of solutions (preferably smart ones...), there are still others who wonder what is going on and are in any case not yet actively on their way. On the left side of the path, all kinds of sustainable-energy (partial) solutions are being devised, and there is a strong belief in a neighbourhood-by-neighbourhood approach. Below the path (bottom right in the sketch) there was also a group beginning to realise that the energy flows, in the form of gas and electricity, to and from Eindhoven, but also within Eindhoven, might require large and therefore expensive adaptations of the energy infrastructure in the coming decades. (The drawing, by the way, was made by Jan-Jaap Rietjens.)

Meaning, awareness and the development of demand

When we started, Elke and I were not really experts in the field of sustainability. Far from it; what we do have is a lot of knowledge about how to arrive at a roadmap with a group of people and how to translate it into action. For both of us, certainly for me, it was a penetrating introduction. Before that time I had of course heard of sustainability, but I was not consciously adapting my behaviour. Once I began to realise that things will go terribly wrong for our planet, and thus for us, if we carry on like this, our family started to cycle more, eat less meat, avoid plastic packaging and plastic bags as much as possible, collect plastic separately, and so on. But there are many people who, like me at the start of this project, are not aware of what we will leave behind for our children and grandchildren if we continue like this. The biggest challenge is to give sustainability meaning: passing on a planet that is good to live on for all living beings on this earth. In addition, many people do not know what actually burdens the environment heavily, and as long as you are not aware of that burden, you cannot steer towards more sustainability; awareness is the second key concept. And finally, the demand for solutions has to arise. If that demand does not arise, businesses will not invest, because the purpose of companies is to create value and earn money so that they can survive and develop further.

Value ladder

To make an energy system attractive to users and owners, a number of so-called value drivers (the bottom row in the figure) are important. A few important value drivers as examples: first of all, of course, the contribution to the reduction of fossil fuels matters, but so does the economic value (does the investment pay itself back?). Many energy systems score badly on architectural value ("I don't find it beautiful; I don't want such a thing anywhere near my backyard"). When choices have to be made between systems, those systems will differ in their scores on the value drivers, and some users will make different choices than others. If you then ask why someone prefers a particular system, you hear a higher-lying layer of value drivers. By repeating this for every layer, you arrive at a set of end values (related to the final goal) as well as instrumental values (which say something about the way in which you want to reach that goal). It is important to realise that people differ and will therefore, from different end and instrumental values, arrive at different choices. This is an area that requires much more research and that helps match the current supply of energy systems much better to the needs of (different) people.

Energy systems

Energy is a very complex theme. A great many different technologies are needed to bring fossil fuel consumption down to zero. The challenge is to build up knowledge about which system configurations are suitable for which kind of living or working environment.
Moreover, the field is still very much in motion technologically. We do not yet have a really good solution for storing electrical energy: it can be done, but it is very expensive or very heavy. Things can still go many ways. So if we invest, how do we make sure that the investments will not have to be written off too quickly in the future?

Energy infrastructure

In and on the soil of Eindhoven lie a great many infrastructures: gas, water, electricity, data, heat networks, fibre optics, sewerage, roads, railways. Infrastructures require large investments, and they also determine who does and who does not get access. The expectation is that a great deal of energy will be generated and consumed locally. However, solar panels only deliver energy on a clear day when the sun is in the sky. How do we absorb those variations between day and night, and between summer and winter? The same applies to heat and cold. And if everyone has an electric car at the door with a charging point at home, that has gigantic consequences for the infrastructure, because suddenly a great deal of power has to be delivered.

An energy-neutral solution for Eindhoven requires insight and knowledge in a great many areas. For example: when it comes to generating energy with solar panels, not every roof turns out to be suitable (check the suitability of your own roof on the zonneatlas). There are also limitations to generation by means of wind. For rooftop wind there is doubt whether good solutions exist (but that race has not yet been run; look at IRWES, for example). Then there is the option of large wind turbines, but because of the nearby airport the area where this is possible is also limited. In this way, for every layer, the current situation and the possibilities can be laid over the map of Eindhoven, and different energy systems will probably turn out to be possible or impossible.

Societal innovation


In innovation, the good ideas flow into application as part of a larger whole. The bad ideas should preferably be removed from the innovation stream at the lowest possible cost. Roughly there are three coarse steps, the first being: is it, in itself, a good idea? One of the dilemmas is that a good idea means something different to everyone. If many parties all have to judge an idea positively and also have to put work and money into it, it will be clear that the chance that the idea survives quickly becomes much smaller. That is why innovation in networks needs a lot of connection not only on the operational side, but also on the judging side. Often that turns out not to be the case, and a potentially good idea wrongly perishes.

Finally, there is the challenge of organising the renewal or innovation itself. Four clusters of parties are involved:
– the governments
– the business community
– the knowledge institutes
– the citizens, users and owners.

What makes it complicated is that every party is partly an owner of the energy challenge, but must also become part of the solution itself; stronger still, part of devising the solution. It has become clear to me that the world of 2045 will look quite different, technologically but also socially and societally. Profound changes are needed, and we are only at the beginning.

If you want to read more, here follows some literature that helped me greatly in creating insight.

It remains for me to invite everyone to dive into this subject and to join in! It is something that belongs to all of us!

Ruud Gal

References:

Visie en roadmap Eindhoven Energieneutraal in de gebouwde omgeving
Persbericht Lancering Roadmap
Rapport Eindhoven Energieneutraal

4 Myths About Apple Design, From An Ex-Apple Designer

WHAT’S LIFE REALLY LIKE DESIGNING FOR APPLE? AN ALUM SHARES WHAT HE LEARNED DURING HIS SEVEN YEARS IN CUPERTINO.

Mark Wilson, May 22, 2014

Apple is synonymous with upper echelon design, but very little is known about the company’s design process. Most of Apple’s own employees aren’t allowed inside Apple’s fabled design studios. So we’re left piecing together interviews, or outright speculating about how Apple does it and what it’s really like to be a designer at the company.

Continue Reading ->

Comment Ruud Gal

What I really appreciated in this post by Mark Wilson is the insight that we over-glorify the importance of individuals. In essence, it is much more the culture that has been created (driven, of course, by Steve Jobs), a culture that aligns the purpose and passion of the people towards the maximum customer experience.

Secondly, I see a lot of companies that are either design-driven or technology-driven and have never been able to merge these two very different innovation strategies and behaviors in one organization. It seems that Apple has bridged the two naturally conflicting clans into one.

Also very recognizable is the remark revealing that, in essence, an innovation cannot be scheduled. It is so difficult for organizations to keep the balance between exploration and exploitation.

Making better decisions about the risks of capital projects

A handful of pragmatic tools can help managers decide which projects best fit their portfolio and risk tolerance.

May 2014 | by Martin Pergler and Anders Rasmussen

Never is the fear factor higher for managers than when they are making strategic investment decisions on multibillion-dollar capital projects. With such high stakes, we’ve seen many managers prepare elaborate financial models to justify potential projects. But when it comes down to the final decision, especially when hard choices need to be made among multiple opportunities, they resort to less rigorous means—arbitrarily discounting estimates of expected returns, for example, or applying overly broad risk premiums.

Continue Reading ->

How materialism makes us sad

Tanya Gold
The Guardian, Wednesday 7 May 2014

The more we spend, the less happy we are. Can this explain why affluent politicians insist on taking from the poor?

Money is a brutalising agent and a paranoiac drug.

Graham Music, a psychotherapist, has written a book called The Good Life: Wellbeing and the New Science of Altruism, Selfishness and Immorality. It confirms, through use of data collected by scientists over the last 40 years, what we have all long suspected from anecdote and our own eyes: the materialistic tend to be unhappy, those with material goods will remain unhappy, and the market feeds on unhappiness. It is an outreach programme for personal and political desolation; and it is, so far, an outstanding success. Peel away the images of the gaudy objects and find instead a condition. Reading Vanity Fair, I deduce, is now mere collusion with the broken.

Continue Reading->

Managing Growth Boundaries ("Het managen van grenzen aan de groei")

In the Netherlands I notice little of the work of Elinor Ostrom. The American professor Elinor Ostrom (1933-2012) was, in 2009, the first woman to win a Nobel Prize in economics. Her research focuses on how societies deal with ecosystems. She came to the conclusion that many societies manage to create ways of dealing with them that prevent stocks from being depleted. She emphasised that political government is not always equally effective in this; the organising capacity of the community itself is sometimes much more effective.

Our government is busy shedding all kinds of tasks and rousing citizens to participate. I have the feeling that, for a great many of those tasks, Elinor Ostrom offers practical handles for organising them well at the level of the local community. It is at least worth trying.

There is more besides the Public and the Private Cause ....

According to Elinor Ostrom, goods and services can be classified into four categories along two dimensions:

  • Exclusion of potential beneficiaries
    Exclusion refers to how difficult it is to exclude individuals from using a good or service.
  • Subtractability of availability
    Subtractability refers to the degree to which consumption by one person reduces the availability of the good or service for consumption by others.

Figure: Ostrom’s four categories of goods, plotted along the dimensions of exclusion and subtractability.

Examples (a minimal code sketch of this classification follows below):

Club goods:              theaters, sports or hobby clubs

Public goods:            peace and security, defense, knowledge, the fire brigade, weather forecasts

Private goods:           food, clothing, cars

Collective goods:        drinking water supplies, lakes, air, forests, fishing grounds, energy, non-recyclable waste, parking spaces
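
To make the classification concrete, here is a minimal sketch in Python; the logic follows the two dimensions above, while the function name and the closing example are my own illustrative choices, not Ostrom’s:

```python
# Minimal sketch of Ostrom's 2x2 classification of goods along the two
# dimensions above. Function name and examples are illustrative choices.

def classify_good(excludable: bool, subtractable: bool) -> str:
    """Map excludability and subtractability onto the four categories."""
    if excludable and subtractable:
        return "private good"      # e.g. food, clothing, cars
    if excludable and not subtractable:
        return "club good"         # e.g. theaters, sports or hobby clubs
    if not excludable and subtractable:
        return "collective good"   # e.g. lakes, fishing grounds
    return "public good"           # e.g. defense, weather forecasts

# A lake is hard to fence off (not excludable), and every fish caught
# is gone for the next user (subtractable):
print(classify_good(excludable=False, subtractable=True))  # collective good
```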

Here we already see one characteristic of her work. In this simple picture, two schools of thought (communist/socialist and liberal) that have long been at each other’s throats sit neatly side by side. For the first time the question is asked which works better and why, and where neither offers a good solution, there is an alternative.

One last remark: this scheme suggests that the nature of the goods determines which quadrant they end up in, but there is always a choice, or there are system conditions, that make deviations (with a number of disadvantages, of course) possible or necessary.

The system as a whole

Elinor Ostrom describes collective goods systems (“common-pool resources”) as an open ecosystem, pictured below:

Figure: the common-pool resource system, embedded in its social, economic, and political setting.

The system is embedded in a higher-level system that sets the social, economic, and political boundary conditions (S). There is also a resource system, for example a lake, with fish (resource units) swimming in it. Users (fishermen) catch the fish from the lake (interaction 1), and very often the lake threatens to become overfished (interaction 2), so that in the long run the users can no longer draw sufficient income and food from the lake (the outcome of the old situation). What rules (the governance system) do we have to agree on to prevent this situation? Elinor Ostrom worked this system out in much more detail and identified all kinds of aspects that should be considered in an analysis and redesign. They are not listed here, but when applying the framework it is a valuable checklist for deciding which aspects to include and which to leave out.
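
The dynamics of interactions 1 and 2 are easy to make concrete. Below is a minimal simulation sketch of a lake with a regrowing fish stock, harvesting users, and an optional quota standing in for the governance system; all numbers are illustrative assumptions of mine, not Ostrom’s.

```python
from typing import Optional

# Minimal sketch of the common-pool resource dynamics described above:
# a lake (resource system), a fish stock (resource units), users who
# harvest (interaction 1), and an optional quota standing in for the
# governance system. All numbers are illustrative assumptions.

def simulate(years: int, users: int, demand: float,
             quota: Optional[float] = None) -> float:
    """Return the remaining fish stock after `years` of harvesting."""
    stock = 1000.0     # initial resource units (fish)
    capacity = 1000.0  # carrying capacity of the lake
    regrowth = 0.3     # logistic regrowth rate per year
    for _ in range(years):
        per_user = demand if quota is None else min(demand, quota)
        stock = max(stock - users * per_user, 0.0)          # harvesting
        stock += regrowth * stock * (1 - stock / capacity)  # natural regrowth
    return stock

# Without rules the lake collapses (interaction 2); with a quota it
# settles near a sustainable equilibrium.
print(round(simulate(30, users=10, demand=10.0)))             # ~0: collapse
print(round(simulate(30, users=10, demand=10.0, quota=5.0)))  # ~800: sustainable
```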

Not by the government, but by the community

Strikingly, this governance system is not imposed by the government but arises spontaneously through self-organization within the community. Also striking is that it is a learning system: if the governance system is set up well, those involved keep adjusting the rules until the system proves to work.

Elinor Ostrom examined hundreds of cases and arrived at a number of interesting critical success factors.

  • Clearly defined boundaries
    The individuals or households with rights to withdraw resource units from the common-pool resource, and the boundaries of the common-pool resource itself, are clearly defined.
  • Congruence
    The distribution of benefits through allocation rules is roughly proportional to the costs of withdrawing resource units from the resource system, as determined by cost rules (what counts as a cost and what does not).
    There are appropriation rules that impose restrictions on time (how long one may fish), place (where one may and may not fish), technology (trawl nets not allowed) and the quantity of resource units (fishing quotas). These rules are strongly locally determined.
  • Adaptation of the operational rules
    Those who are affected by the operational rules may participate in the process of changing those rules.
  • Monitoring
    Those who actively monitor the condition of the common-pool resource (the lake) and the behavior of its users are accountable to the users, or are users themselves.
  • Graduated sanctions
    Users who violate the rules are sanctioned, depending on the context and the severity of the violation, by other users, by appointed officials who are accountable to the users, or by both.
  • Conflict resolution
    Users and appointed officials have rapid access to low-cost, local arenas for resolving their conflicts.
  • Right to self-organization
    The right of users to design their own institutions is recognized by the government.

Finally, for collective goods systems that are part of a larger system, one additional factor applies:

  • Nestedness
    Collective goods systems are part of, and embedded in, a larger system with multiple layers.

Applications

As said, we face a great many challenges as a modern society.

There are limits to our growth. Where those limits start to pinch, we will have to look at how to distribute the available resources in a fair way.

On average, world GDP grows by 2-3% per year. There are ever more people on this planet, and the economic growth of recent years was fake. It has to be paid back, unfortunately not by those who made a fortune on it, but by those who are now handed the bill and are deep in debt. Everyone is watching the AEX index and hoping for the times of old; I think it is more realistic to adjust our consumption to what we actually produce and earn.

The government is shedding many tasks, since there are limits to governability and affordability. How do we organize care locally? The transition to privatization has unfortunately again provoked a lot of perverse behavior, and since privatization costs have risen dramatically and uncontrollably. Elinor Ostrom’s list of critical success factors helps in understanding this system behavior.

Then there is the whole transition to renewable energy. Renewable energy consumes a great deal of land area; how are we going to distribute that?

Conclusion

Elinor Ostrom’s work invites us to experiment and to learn how to make our society better able to deal with limits, without immediately lapsing into mutual reproaches between the liberal and socialist camps. Her approach has traits of both. That also carries a risk: both camps may see in it reasons to shoot it down.

Of course her approach is not a solution for everything, but it is a promising start. Moreover, I think a great deal of this kind of self-organization already exists in our society without us calling it a collective goods system. A beautiful example is our water boards, which, remarkably, the Dutch government almost abolished: http://www.rtlnieuws.nl/nieuws/binnenland/kamer-wil-waterschappen-opheffen.

Apparently nobody in The Hague knows the work of Elinor Ostrom. Fortunately an international report prevented this.

So merely recognizing these kinds of systems already has value: amid today’s rapid-fire decision-making, it helps us avoid throwing out the baby with the bathwater.

Sources

http://managementscope.nl/manager/elinor-ostrom

http://www.bol.com/nl/p/governing-the-commons/1001004000830387/

Hurray, from now on we can predict crises better? .... Not quite.

For some time now I have been interested in complex and complicated systems, in sustainability, and in the interaction between technology and people and society. What intrigues me most is the governance of these systems, which also strongly touches my own field, innovation management. In that respect the current era is a feast, because these domains contain plenty of unsolved problems of considerable societal relevance.

Recently I came across the work of Didier Sornette. Sornette holds the chair of Entrepreneurial Risks at ETH Zürich. In June 2013 he shared some of his thinking via TED.

Unlike Taleb (author of the books “The Black Swan” and “Antifragile”), who advises us to invest above all in things with only upward potential (I am referring to the book “Antifragile”), Sornette argues that precisely the belief in the myth of upward potential is one of the drivers of crises.

Sornette states that our society holds a sacred belief in the myth that wealth can be created by generating debt. Until 1980 the amount of money in circulation was kept in balance by tying it to productivity. The last time we failed to do so (1929) ended badly for us. But 50 years later the lesson was unlearned: the amount of money in circulation became tied to consumption and debt, and growth no longer to productivity improvement (roughly 2-3% per year) but to growing debt, that is, to productivity improvements still to be realized in the future. And there it was: “the myth of the perpetual money generator” (see the small numeric sketch below the figure).

Figure: the share of wages and private consumption as a percentage of GDP for the United States, the European Union, and Japan. Source: Michel Husson, http://hussonet.free.fr/toxicap.xls
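
To see why this construction cannot run forever, a back-of-the-envelope sketch helps. The growth rates below are my own illustrative assumptions (productivity-backed output growing at the 2-3% mentioned above, credit growing faster), not figures from Sornette or Husson; the point is only that the ratio diverges.

```python
# Illustrative sketch: when credit grows faster than productivity-backed
# output, the debt-to-output ratio diverges. The growth rates are
# assumptions for illustration only.

output = 100.0    # productivity-backed output (index)
debt = 100.0      # outstanding credit (index)
g_output = 0.025  # ~2-3% per year, as mentioned in the text
g_debt = 0.07     # assumed faster credit growth

for year in range(31):
    if year % 10 == 0:
        print(f"year {year:2d}: debt/output = {debt / output:.2f}")
    output *= 1 + g_output
    debt *= 1 + g_debt
# Prints roughly: year 0: 1.00, year 10: 1.54, year 20: 2.36, year 30: 3.63
```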

The point is not to rake over old ground; the core of Sornette’s work is that he can detect these bubbles, and that without intervention a bubble is inevitably followed by a severe crisis, which he can predict within a certain probability. These crises are euphemistically called phase transitions, and indeed they show a strong resemblance to phase transitions in physics. The mechanisms are:

  • The presence of one or more stress factors (e.g. GDP based on productivity (wages) versus GDP based on consumption and debt)
  • The presence of positive feedback (self-reinforcing feedback loops)

By tracking the ever-increasing stress, a crisis, a “dragon-king” as Sornette calls them, becomes predictable with a certain probability. His theory has proven capable of making good predictions not only in the financial world but also in other fields, such as earthquakes, landslides, epilepsy, births, metal fatigue in rockets, and the current overloading of our earth by humanity.
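
The fingerprint Sornette looks for can be stated simply: during a bubble, growth is faster than exponential, so the logarithm of the price does not follow a straight line but curves upward toward a finite critical time. The sketch below is my own crude simplification of that idea, not his actual log-periodic power-law calibration:

```python
import numpy as np

def looks_super_exponential(prices: np.ndarray, tol: float = 1e-6) -> bool:
    """Flag faster-than-exponential growth: fit a quadratic to log-price
    and test whether it curves upward (positive leading coefficient)."""
    t = np.arange(len(prices))
    curvature = np.polyfit(t, np.log(prices), deg=2)[0]
    return curvature > tol

# A plain exponential trend is not flagged; a self-reinforcing
# (super-exponential) trend is.
t = np.arange(100.0)
print(looks_super_exponential(np.exp(0.01 * t)))                    # False
print(looks_super_exponential(np.exp(0.01 * t + 0.0005 * t ** 2)))  # True
```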


Source: W. Steffen, A. Sanderson, P. D. Tyson, J. Jäger, P. A. Matson et al., “Global Change and the Earth System”, Executive Summary, IGBP Secretariat, Royal Swedish Academy of Sciences, 2004

The figure shows a number of stress factors, all exhibiting ever-steeper growth: CO2, N2O and CH4 concentrations in the atmosphere, the ozone layer, surface temperature, floods, ocean ecosystems, the changing structure and biogeochemistry of coastal zones, the loss of tropical rainforest, the amount of domesticated land, and declining biodiversity.

Sornette estimates that in the coming decades we will reach the critical limits and will then be forced to fall back deeply in a very short time, hopefully with the chance of returning to a level of environmental load that the earth can absorb (a sustainability equilibrium).

YouTube link: Didier Sornette, “How we can predict the next financial crisis”

His work has again given me a number of “aha” moments with regard to sustainability; I have recorded them below.

My conclusions with regard to a transition to a sustainable society:

  • It is an illusion to think that we can sustain our current way of living and organizing ourselves.
  • It is also an illusion to think that technology will solve this for us. Technology may push the moment of crisis back somewhat, but the fundamental problem of a society with self-reinforcing feedback loops calls for a fundamental political, social and economic breakthrough. Yet as a society we invest more in technology development than in social, and perhaps political, innovation (for example, Elinor Ostrom’s work on common-pool resources).
  • Technology does determine the level of prosperity that is attainable given a sustainable load on our environment, but it will not save us if we do not redesign our behavior and our society on fundamentally different principles.

Sources: