Market-Driven Innovation

The title of this chapter is directly copied from a book on marketing: “The 22 Immutable Laws of Marketing” by the two marketing gurus Ries & Trout. It is an easy and entertaining read, and although one can doubt the scientific quality of their “laws”, Ries & Trout arrive at a couple of excellent statements. But first, we have to position their work in the right business model.

Marketing is “the process by which companies create value for customers and build strong customer relationships, in order to capture value from customers in return”. Ries & Trout are experts in A-brand consumer marketing and their ideas fit very well with an “operational excellence” business model. In a “product excellence” world, the loyalty of customers lasts one product generation, and then the customer will explore the market again for the best product or proposition. In customer intimacy, too, their ideas are less relevant, since the crucial element in the customer interface is that the customer wants a trusted party that helps him redefine the problem he wants to solve, explore the solutions that are available, and then make a choice. Customer intimacy is appropriate when the customer does not know how to translate the “symptoms” that bother him into a problem definition and a description of the requirements for the solution he is looking for.

But this is not Ries & Trout’s territory. They are in the “operational excellence” world, with a customer who understands what he is looking for but, after a while, does not want to invest too much time in selecting exactly the right offer for his “problem”.

The key messages of Ries & Trout:

It is better to be first in the customers’ mind than to be better

Ries & Trout observed that companies that introduced a new product category were, in general, still leading the pack once the industry had matured. They demonstrate how this works with a question-and-answer intermezzo:

Who was the first guy to fly across the Atlantic Ocean?

Answer from the audience: Lindbergh

OK, who was the second to fly across the ocean? He was faster and consumed less fuel.

Silence from the Audience…

You get the point, I guess. Nobody knew the second pilot, whose name was Bert Hinkler, because nobody was interested. Ries & Trout give many examples of products that are well known and that turned out to be the first in their category. For example: Luxaflex; Heinz Sandwich Spread; Hertz in rent-a-cars; Coca-Cola; Heineken, the no. 1 imported beer in the US; Miller Lite, the first domestic light beer. The first college founded in America: Harvard.

Jeep was first in four-wheel-drive off-the-road vehicles. Acura was first in luxury Japanese cars. IBM was first in mainframe computers. Sun Microsystems was first in workstations. Jeep, Acura, IBM, and Sun are all leading brands.

The first minivan was introduced by Chrysler. Today Chrysler has 10 percent of the car market and 50 percent of the minivan market.

The first desktop laser printer was introduced by a computer company, Hewlett-Packard. Today the company has 5 percent of the personal computer market and 45 percent of the laser printer market.
Gillette was the first safety razor. Tide was the first laundry detergent. Hayes was the first computer modem. Leaders all.

Marketing is not a battle of products, it’s a battle of perception.

Many people think marketing is a battle of products. In the long run, they figure, the best product will win. Marketing people are preoccupied with doing research and “getting the facts.” They analyze the situation to make sure that truth is on their side. Then they sail confidently into the marketing arena, secure in the knowledge that they have the best product and that ultimately the best product will win.

It’s an illusion. There is no objective reality. There are no facts. There are no best products. All that exists in the world of marketing are perceptions in the minds of the customer or prospect. The perception is the reality. Everything else is an illusion.

Key messages:

  • In the world of operational excellence, it is better to be first in the customer’s mind than to be better
  • Marketing is not a battle of products, it is a battle of perceptions
  • There is no best product.

References:


Al Ries and Jack Trout, “The 22 Immutable Laws of Marketing”, HarperBusiness, 1993

Understanding Social Behavior via Simulations

This is fun, and also very insightful for understanding the behavior of people in groups.

Human behavior with respect to social norms and values is very complex. In recent decades a new field, complex systems, has developed, which focuses on complex behavior that emerges from a set of simple rules. A year ago I bumped into the work of prof. Dirk Helbing, ETH Zurich. He and his team had built a simulator in which the behavior of people with respect to social rules emerges from a set of simple rules per agent.

Helbing distinguishes 4 types of agents:

  • The Moralist: An agent that adheres to the social rules and confronts and “punishes” agents who don’t.
  • The Free-rider: An agent that does not want to follow the rules.
  • The Conformist: An agent that adheres to the social rules and values, but does not confront others who don’t.
  • The Immoralist: An agent that does not adhere to the rules but at the same time “punishes” agents who don’t.

Furthermore, there are three parameters that govern the complex behavior.

  • The Synergy parameter: A parameter that determines the reward to all agents generated by those behaving according to the social rules. The synergy value is distributed over all agents.
  • The Cost of Punishment parameter: A parameter that determines the cost of detection and punishment borne by those that punish (the Moralists and Immoralists).
  • The Fine parameter: A parameter that determines the fine for those that have been caught being non-compliant with the rules.

Finally, there is an algorithm that makes agents switch, with a certain probability, to the behavior of the most successful neighbouring agents.

People stick to social rules when it is in their self-interest.

The simulation starts on a 100×100 grid with a randomly generated role in each grid element. The roles have colours: Green are Moralists, Red are Free-riders, Yellow are Immoralists, Blue are Conformists.

The synergy, cost and fine parameters are set in such a way that synergy prevails. Initially the free-rider strategy seems quite successful, but in the longer term the moralists create more and more spaces where agents adhere to social rules and standards.
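To make the mechanics concrete, here is a minimal sketch of such a spatial game in Python. The role encoding, the 4-neighbour torus neighbourhood and the imitation probability are my assumptions; Helbing’s actual model differs in detail.

```python
import numpy as np

# Role encoding (hypothetical; colours follow the post)
MORALIST, FREERIDER, CONFORMIST, IMMORALIST = 0, 1, 2, 3
COOPERATES = {MORALIST: True, CONFORMIST: True, FREERIDER: False, IMMORALIST: False}
PUNISHES = {MORALIST: True, IMMORALIST: True, CONFORMIST: False, FREERIDER: False}

SYNERGY, FINE, COST = 150.0, 100.0, 30.0  # the values quoted for the video below
N = 100                                   # 100x100 grid as in the post
rng = np.random.default_rng(0)
grid = rng.integers(0, 4, size=(N, N))    # random initial roles

def neighbours(i, j):
    # 4 neighbours on a torus (an assumption; the post only mentions 4 neighbours)
    return [((i - 1) % N, j), ((i + 1) % N, j), (i, (j - 1) % N), (i, (j + 1) % N)]

def payoff(i, j):
    """Payoff of one agent: a share of the synergy generated by rule-followers
    in its group, minus fines received and punishment costs paid."""
    me = grid[i, j]
    group = [(i, j)] + neighbours(i, j)
    followers = sum(COOPERATES[grid[a, b]] for a, b in group)
    p = SYNERGY * followers / len(group)
    for a, b in neighbours(i, j):
        if PUNISHES[grid[a, b]] and not COOPERATES[me]:
            p -= FINE   # caught being non-compliant by a punishing neighbour
        if PUNISHES[me] and not COOPERATES[grid[a, b]]:
            p -= COST   # detection and punishment cost the punisher
    return p

def step(imitate_prob=0.8):
    """Agents copy the role of their most successful neighbour with some probability."""
    global grid
    pay = np.array([[payoff(i, j) for j in range(N)] for i in range(N)])
    new = grid.copy()
    for i in range(N):
        for j in range(N):
            best = max(neighbours(i, j), key=lambda ab: pay[ab])
            if pay[best] > pay[i, j] and rng.random() < imitate_prob:
                new[i, j] = grid[best]
    grid = new

for t in range(50):
    step()
print(np.bincount(grid.ravel(), minlength=4))  # agents per role after 50 steps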

Click on the encircled triangle to play the video. (Synergy: 150, Fine: 100, Cost: 30.)

In this situation the synergy, the reward for adherence to social rules, is rather low, but the cost of detecting non-conformance is so low and the fine for the free-rider (red) is so high that, after an initial stage, conformance is established and moralists (green) and conformists (blue) prevail.

In the first phase the free-riders dominate and theirs seems to be the most rewarding role. After a while the moralists reverse this process, and one can see that the moralists sit in spatial clusters. These clusters protect the moralists inside the cluster and reduce the exposure to free-riders at the boundaries.


Sources:

Arend Hintze and Christoph Adami, “Punishment in public goods games leads to meta-stable phase transitions and hysteresis”, 2015, Phys. Biol. 12

Dirk Helbing et al., “On Norms, Punishment and Society”, PowerPoint presentation, ETH Zurich, 2010

Craft-Based Innovation by the Architect Pierre Cuypers

Pierre Cuypers (1827-1921) was a Dutch architect. He is best known as the architect of the Rijksmuseum and the Central Station in Amsterdam. Cuypers, however, designed many more buildings in his lifetime. Especially if you live in the south of the Netherlands, it is not unlikely that a building designed by Cuypers stands in your immediate surroundings. He was one of the great examples for the famous architect H.P. Berlage.

In 1848 Catholics were again allowed to own their own churches, something that had been forbidden since the Reformation in the 16th century. For an enterprising architect this was a gigantic gap in the market, and Cuypers went to work on it with great success with his neo-Gothic style. He built roughly 70 churches in his lifetime, and many existing monasteries and churches were restored by him and underwent a metamorphosis.

Cuypers was born and raised in Roermond and lived and worked there all his life. He had his studio there, and in its heyday many people were in his employ. The Cuypershuis is well worth a visit. I hardly knew Cuypers and actually came for the temporary exhibition of First World War cartoons by Louis Raemaekers, but I quickly became fascinated by Pierre Cuypers and his world.

Since innovation has always interested me, it was a feast of recognition to observe in Cuypers that essential combination of skills: a new view on a system, the ability to build a new commercial network, thorough craftsmanship at many levels, from the details to the fit of a building into its surroundings, and finally entrepreneurship. In this respect there is little difference between his time and ours.

Cuypers made all his furniture himself and, inspired by an example (from France, I believe), he made the following cabinet (see image).

The cabinet built by Pierre Cuypers

For me that cabinet symbolizes the combination mentioned earlier. On the cabinet there are two rows of images. The top row represents various “mental, strategic and spiritual” skills and the bottom row various “artisanal and technical” skills. To me the cabinet testified to respect for, interest in, and integration of both levels of innovation. By engaging with both the strategic and the technical, he developed an integrated intuition in both areas. In my opinion this is the only way to renew and innovate at a deeper level.

Many managers are quick to use the word innovation, but their distance from the content prevents the much-needed intuition from developing. That intuition is needed to make the right choices and follow the right paths with sufficient confidence in unclear and uncertain situations. Its absence often shows itself in hesitating over decisions or even postponing them. In practice the hesitation translates into asking for a solid, fact-based substantiation. Unfortunately, by the time things have become facts, someone has already beaten you to it… I believe it was Dreesmann who stated that “a bad manager always has too little information to be able to decide”.

Of course Cuypers was also a smart businessman. This can be seen in details of the Cuypershuis, which was actually built for two families: his own family and that of his business partner, who incidentally never lived there. The house is built symmetrically, yet there are differences; one of the staff members pointed them out to me. On the side where Cuypers lived, the house is also a kind of showroom. The beautiful wooden spiral staircase, for example, has on his side of the house all kinds of variations in, for instance, the balusters. This allowed him to show potential clients alternatives. For the same reason there are all kinds of variations in the windows and ornaments on the outside of the house. This is one of many examples of the kind of pragmatism you often find in truly creative entrepreneurs. Cuypers is an inspiring example of innovative entrepreneurship of Dutch origin, and the Cuypershuis is well worth a visit. His importance for the Netherlands, and for how the Netherlands looks today, is in my opinion underestimated, and I hope this blog helps persuade you to take a look in Roermond and let yourself be inspired by the world of Pierre Cuypers.

Innovative Organisational Dynamics in a Network Representation

An Organisation from a Dynamic Network Perspective

In this post a different perspective on an innovative organisation is presented. We are used to looking at organisations from a process point of view, but observing organisational behaviour as a network can additionally show how co-operation or conflict between innovation clans (which often correlate with departments) develops.

This information can be presented dynamically, not just on isolated static moments.

The video below shows the behavior of a real organisation that was strongly focused on innovation and on delivering product concepts to the development groups in the business.

Organisation Network

Dots and Colours

Every dot is a person. The color indicates the department to which the person belongs. The most important departments are the ‘Red’ cluster, a ‘Green’ cluster, a ‘Yellow’ cluster and, somewhat later, a ‘Blue’ cluster.

The closer the dots are positioned to each other, the more interaction is reported. Interaction is measured as a 1 (interaction happened) or a 0 (it did not happen). Frequency and duration are not measured, only yes or no.
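As an illustration, here is a minimal sketch of how one snapshot of such a network can be built and laid out with a force-directed algorithm, which is what pulls interacting clusters together on screen. The real analysis was done in Gephi; the edge list and department colours below are hypothetical stand-ins for the actual interaction data.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Reported interactions: a pair is listed if the 0/1 interaction measure is 1
edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p3", "p4"), ("p4", "p5")]
department = {"p1": "red", "p2": "red", "p3": "green", "p4": "yellow", "p5": "yellow"}

G = nx.Graph(edges)

# A spring (force-directed) layout positions strongly connected people close
# together, making the clan structure visible
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, node_color=[department[n] for n in G.nodes], with_labels=True)
plt.show()
```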

Start the video

Please click on the picture. It shows a video captured from the screen of the Gephi environment, in which the data is loaded and turned into a video.

A four-year time period is compressed into a video of 1:44 minutes.

What is happening?

From 0:08 you see Yellow moving, and at 0:34 it moves out of scope. This was a group that was developing here while a similar group developed in another part of the organisation; the two were put together and left the organisation. We see them coming back at 0:44, when management decided to move the combined group back into this organisation.

The fact that Yellow keeps its distance from the rest is due to mistrust between the Green and Red clusters on the one hand and the Yellow cluster on the other. The Yellow cluster had a totally different way of innovating and was afraid this would not be recognised by the management of the organisation, which originated from the Red and Green clusters.

In the meantime (more and more visible from 1:05 onwards), tension also increases between Red and Green (the distances get bigger), because Red and Green were actually competing technologies. In the beginning Green was not seen by Red as a serious replacement, but that changed around 1:05. It became clear to the Red cluster that the Green cluster would gradually take over the innovation lead towards the business.

In the meantime a fourth cluster, a new system technology cluster, started to take shape (Blue).

Management decided that Red innovation was no longer desired from a business perspective and reorganised by bringing the Red cluster closer to the business. In this way Red breakthrough innovation was effectively hampered, because the business had no financial and managerial structure in place to make breakthrough innovation possible.

In the meantime the Blue cluster stayed dispersed, and stronger group management was necessary to let the Blue cluster develop its own innovation focus instead of being used merely as a support function for the Green and Yellow clusters. Tension between Green and Yellow remained until the very end, when it started to improve because innovation was needed that would integrate Red, Yellow and Blue. For that reason the mother organisation was strengthened into a stronger cultural entity.

It is generally accepted that ‘knowledge exploration’ creates sparse networks and ‘knowledge exploitation’ creates dense networks. In this case these networks are entangled and not really identifiable. However, in-depth analysis shows that interesting observations can be made at the outside of such an entangled network.

Often innovative people, people collaborating with external organisations, and managers, but also social drop-outs, are present in the outer parts of the network clusters.


The Crucial Role of Coffee Machines in Innovation

The Real Purpose of Brainstorms

Often people are convinced that innovation comes from brainstorms and from brilliant insights by individuals. However, research in recent decades indicates otherwise.
Brainstorming has gradually developed into Creative Problem Solving. In my view, it is a professional way of problem solving. The most crucial and therefore vital phase in brainstorming is the first step, focusing on “What is the problem and what are the facts that underpin the problem definition?” The next step is to generate ideas on the defined problem. Nowadays, high levels of sophistication can be achieved in the “generating ideas” process (for example: TRIZ). But the essence of TRIZ proves that it is not a creative process but a methodology consisting of logical deductive and inductive process steps. In other words, it mostly remains in the domain of the known. It fits perfectly the ideas of the ancient Greeks: there is no term in the ancient Greek language corresponding to “to create” or “creator”. The association the ancient Greeks had was with a process of discovery. Discovery implies it is already there; you just have to find it. Brainstorming is an excellent tool for sophisticated problem solving (and there are problems enough in innovation), but not the most logical process step for creating a breakthrough.

An Idea is a Network

Brilliant insights were always considered to come like a lightning strike: a light bulb that turns on and suddenly sheds light on a solution or insight, the so-called “Eureka” moment. There is a nice TED talk, “Where good ideas come from”, by Steven Johnson. But nowadays psychologists challenge this view of creativity. As an example, Johnson refers in his talk to Howard Gruber’s work on Darwin. Based on the analysis of Charles Darwin’s journals, Johnson states that all the basic concepts of Darwin’s theory of natural selection were already described in his journals long before Darwin had his “Eureka” moment. His key statement: “An idea is not a single thing popping up in an illuminating moment, it is a network”.

Johnson concluded that new ideas and insights come from, among other things, interaction with people who have different points of view. This is one way of providing an experience that surprises and intrigues people. In his talk, he discusses the important role of the first coffee house, the Grand Cafe in Oxford. In such places scientists from different disciplines met and started discussions, which in the end led to the birth of the Enlightenment. I have often used this basic insight to advise innovation managers on making their organisation more creative.

The Importance of Coffee Machines

Figure: the Allen curve.

In essence you want to increase the chance that people with different backgrounds meet and discuss things in an unplanned, informal way. Location is an issue. In the late seventies Thomas Allen already revealed the relation between the probability of unplanned, informal interaction and distance. See figure.
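As a rough illustration, the sketch below draws the schematic shape of the Allen curve. The inverse-square decay and the scaling constant are the commonly quoted approximation, not Allen’s raw data.

```python
import numpy as np
import matplotlib.pyplot as plt

d = np.linspace(1, 100, 200)      # distance between desks in metres
p = np.minimum(1.0, 4.0 / d**2)   # hypothetical scaling; decays roughly as 1/d^2

plt.plot(d, p)
plt.xlabel("distance between people (m)")
plt.ylabel("probability of weekly communication")
plt.title("Allen curve (schematic)")
plt.show()
```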

Suppose you, as a manager, have a gut feeling that there is a high probability of finding novel ideas if two groups of people would interact more. The first step would be to share this “strategic intent” with the people in the groups and see whether some individuals from both groups are intrigued and show self-propelling behavior to explore this. Then apply Allen’s findings and co-locate the groups without trying to integrate them organizationally. If you integrate the groups, before you know it time and energy are spent on a classical group development process, the forming, storming, norming and performing thing, instead of on exploration. An important underlying enabler is to do this via informal interaction. In this way a coffee machine, and a nearby table where people can gather and interact, are very instrumental in fostering innovation in organizations.

Evaluating Deverticalisation via M&A in the Semiconductor Industry

Mergers & Acquisitions Amongst The Top 20 In The Semiconductor Industry (1987-2013)

In this post, the revenue development of companies with and without mergers & acquisitions is analysed for the top 20 in the semiconductor industry in the period 1987-2013. Mergers & acquisitions are often evaluated on their short-term financial impact. Here we try to get an impression of the longer-term effects, focusing on growth in revenues: we are interested in the long-term revenue development of companies with an M&A somewhere in the period 1987-2013. We thus measure an integral effect of pre-M&A performance, the M&A itself and post-M&A performance.

Mergers & Acquisitions (M&A) is a firm’s activity that deals with the buying, selling, dividing and combining of different companies and similar entities without creating a subsidiary or other child entity and without using a joint venture. The dominant rationale used to explain M&A is that acquiring firms seek improved financial performance. Given the scope of this analysis, financial performance amongst the top 20 is most likely improved by (1) increasing economies of scale, (2) increasing economies of scope, (3) increasing revenue or market share, and (4) increasing synergy.

The restriction to M&A amongst the top 20 hides a lot of acquisitions. Acquisitions are a normal element in this industry: “eat or be eaten”. For example, from 2009 until 2013 Intel acquired Wind River Systems, McAfee, Infineon (Wireless), Silicon Hive, Telmap, Mashery, Aepona, SDN, Stonesoft Corporation, Omek Interactive, Indisys, Basis, Avago Technologies. Only Infineon will be mentioned in this analysis, since it is a top 20 player.

The revenue dynamics in the top 20 in the period 1987-2013

Chart 1. The top 20 revenues in the semiconductor industry per company in the period 1987-2013, stacked from small to large for each year. Source of the data: IHS iSuppli, Dataquest.

The semiconductor industry has seen quite some changes in the last decades. Chart 1 shows the changes in revenues, but also in ranks, and is used to visualize the stability of the industry.

The chart was constructed in the following way. For every year, the revenues of the top 20 are stacked from nr. 20 at the bottom to nr. 1 at the top. The green top line, for example, is in 2013 the total revenue of the top 20 earned that year. The difference between the green and the orange line in 2013 (second from the top) is Intel’s revenue. The difference between the orange and the black line represents the revenue of Samsung.
Sometimes cross-overs are present between the years. These cross-overs occur when companies change position in the revenue ranking. For example, the third line from the top (Qualcomm) in 2013 took over that position from Texas Instruments between 2011 and 2012.
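A minimal sketch of this construction, using a hypothetical random revenue table instead of the IHS iSuppli / Dataquest data:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
years = np.arange(1987, 2014)
revenues = pd.DataFrame(
    rng.lognormal(mean=1.0, sigma=0.5, size=(len(years), 20)).cumsum(axis=0),
    index=years, columns=[f"co{i}" for i in range(20)])

# For each year, sort the top-20 revenues ascending and stack them cumulatively:
# line k is the sum of the k smallest revenues, and the top line is the total.
stacked = np.sort(revenues.values, axis=1).cumsum(axis=1)
for k in range(20):
    plt.plot(years, stacked[:, k])
plt.xlabel("year")
plt.ylabel("stacked revenue")
plt.show()
```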

In the period 1987-1995 the lines are quite parallel, the ranking is more or less stable and growth is exponential. The only disruption of the ranking order is Intel (green line) taking over the nr. 1 position from NEC Semiconductors in 1992. In 1996 the growth curve ends in a downturn. From then on the industry has to get used to a cyclic revenue pattern, a pattern that is typical for exponential growth systems hitting a constraint (see my blog on failing network systems), or, as business analysts call it, the industry is in a consolidation phase.

Chart 2. Similar to Chart 1, only now the ranking is visualized and not the size of the revenues.

The number of changes in ranking (the number of cross-overs) also increases as we enter the 21st century. An alternative to the graph above is to consider only the changes in rank and leave the revenue data out. The picture then looks like Chart 2. The chart looks even a bit messier, but this is caused by the many crossovers in the lower ranks. Although each change in rank counts as 1, the change in revenue underlying these lower ranks may be quite small and therefore less dramatic than it may appear from Chart 2. For readability reasons, we will use this chart to plot M&A.

A Measure of Industry Stability

Chart 3. The stability index for the ranking in revenues in the semiconductor industry in 1987-2013, including the normalized revenue development for the top 20

One can express the level of stability of a key parameter (e.g. revenue) distributed over a set of actors in a period (e.g. one year) in a single number; let’s call it the stability index.

The revenue stability index is defined in this analysis as the correlation between the ranked and sorted revenues of the top 20 in two consecutive years. The stability index is 1 in case the revenue ranking of all players is equal to that of the previous year. It equals 0 in case the ranking is fully random, which hopefully will never happen.
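A minimal sketch of this computation, assuming a revenue table with one row per year and one column per company (in the real analysis the membership of the top 20 itself changes from year to year, which requires a little more bookkeeping):

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
years = np.arange(1987, 2014)
revenues = pd.DataFrame(
    rng.lognormal(2.0, 0.6, size=(len(years), 20)).cumsum(axis=0),
    index=years, columns=[f"co{i}" for i in range(20)])

def stability_index(revenues: pd.DataFrame) -> pd.Series:
    """Spearman rank correlation of revenues between consecutive years:
    1 = ranking identical to the previous year, ~0 = ranking fully reshuffled."""
    out = {}
    for prev, cur in zip(revenues.index[:-1], revenues.index[1:]):
        rho, _ = spearmanr(revenues.loc[prev], revenues.loc[cur])
        out[cur] = rho
    return pd.Series(out, name="stability index")

print(stability_index(revenues).head())
```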

In Chart 3 the revenue stability index is shown over the period 1987-2013. One can observe a cyclic pattern, which suggests that the companies are influencing each other dynamically. Furthermore, a trend towards increasing dynamics is visible. This should be no surprise for the incumbents, of course. Additionally, the total revenue of the top 20 is plotted (red line). The correlation between the two curves is -0.45: the more revenue is generated from a consolidating market, the higher the instability. This phenomenon is known as “The Tragedy of the Commons” (see my blog, unfortunately in Dutch, “Het Managen van Grenzen aan de Groei” (“Managing Growth Boundaries”)). During the stage of exponential growth (in general a seller’s market), the competitive interaction at the supply side is limited. As soon as the market becomes a buyer’s market, competitive interaction at the supply side increases, and this interaction, together with the constrained market, leads to instability.

Underlying Drivers of M&A in the Semiconductor Industry

Chart 4. All red lines indicate transitions, like mergers, spin-offs and LBOs, that DIRECTLY affect the revenues of the companies involved. Small, intellectual-property or innovation-related transactions are not presented here.

The semiconductor industry shows a lot of mergers and acquisitions. Partial acquisitions, for which we lack data on the revenue of the acquired part, are kept out of the quantitative analysis. In Chart 4, the red lines indicate the M&A that will be discussed. We have identified the following underlying drivers in our analysis.

  • The semiconductor industry is the enabler of the electronification and digitization of the world. Nowadays there are not many data-processing products or systems that do not contain semiconductors. This market has grown from US$ 29.4B in 1987 to US$ 213B in 2013. In the growth phase the market size increased by 15-16% per year, in the consolidation phase by 7-8% per year.
  • The semiconductor industry is a capital-intensive industry. The high capital intensity gives the business a cyclic character. This also implies that quite some cash must be set aside during the good times in order to be able to survive the next downturn. The cash-to-sales ratio of some companies was at times higher than 30%. As we will see, this is attractive for private equity firms.
  • Scale is important. Only the top 3 have increased market share compared to the rest of the top 20. (See Chart 4a: market share of the top 3 in 1987, 2000 and 2013.)

  • Reduction of vertical integration. Initially, many corporations had their own semiconductor activities. Since the industry acquired a cyclic character with strong (financial) ups and downs during the consolidation phase, the parent companies were less and less prepared to absorb these cycles in their corporate results.
    • Semiconductor companies became independent from their parents and were no longer restricted by the parents’ corporate strategies.
    • An additional step in deverticalisation was the introduction of the fabless business model: companies that design and sell semiconductors without having a “fab”. In 2010, 7 out of the top 12 were fabless. In this way fixed assets were substantially reduced, making the companies less vulnerable during the downturns.
    • The reduction of vertical integration also makes it easier to re-allocate (parts of) the activities to other companies or network structures and in this way achieve economies of scale or acquire new competences.

The M&A categories amongst the top 20 in the semiconductor industry

Reshuffling in the semiconductor industry took place via different M&A categories: (1) spin-off, (2) horizontal merger, (3) LBO (leveraged buyout), (4) acquisition. For comparison, we also analyse (5) new entrants.

  1. Spin-off

    Chart 5. The spin-offs in the semiconductor industry (Siemens, Infineon, Qimonda and AT&T, Lucent Technologies, Agere).

    Spin-off is defined here as a type of corporate transaction forming a new company or entity.

    The following spin-offs are considered. Infineon Technologies AG, the former Siemens Semiconductor, was founded on April 1, 1999, as a spin-off of Siemens AG. As a first step, Lucent Technologies was composed out of AT&T Technologies and spun off in 1996; in 2000 the semiconductor activity of Lucent Technologies was spun off and became a new legal entity called Agere. Agere was acquired by LSI in 2007. ON Semiconductor was founded in 1999 as a spin-off of Motorola’s Semiconductor Products Sector; it continued to manufacture Motorola’s discrete, standard analog and standard logic devices. (A relatively small spin-off, not represented in the chart.)

  2. Mergers

    Chart 6. The main mergers in the chip industry (Hitachi-Mitsubishi-Renesas-NEC, SGS-Thomson-ST Microelectronics, Fujitsu-AMD-Spansion, Hyundai-LG-Hynix-SK Hynix).

    The mergers in this analysis are horizontal mergers. These mergers are difficult to implement. Integration often means higher efficiencies and laying off personnel, but also reallocation of management responsibilities, which all together make it difficult to achieve the synergetic advantages.

    In 2003 AMD and Fujitsu created a joint-venture company named Spansion, focused on flash memories; in 2005 AMD and Fujitsu sold their stakes. In 1996 Hyundai Semiconductors became independent via an initial public offering and merged in 1999 with LG Semiconductors; the name was changed to Hynix Semiconductor in 2001. In 1987 SGS-Thomson was formed as a merger of SGS Microelettronica and Thomson; the withdrawal of Thomson from SGS-Thomson in 1998 changed the name to STMicroelectronics. In 2002 the semiconductor operations of Mitsubishi and Hitachi were spun off and merged to form a new separate legal entity named Renesas; in 2010 Renesas merged with NEC Electronics Corp., a spin-off of NEC in 2002, to form Renesas Electronics.

  3. Leveraged Buy Out

    Chart 7. The leveraged buyouts in the chip industry: Philips and Freescale.

    In an LBO, private equity firms typically buy a stable, mature company that’s generating cash. It can be a company that is underperforming or a corporate orphan—a division that is no longer part of the parent’s core business. The point is that the company has lots of cash and little or no debt; that way the private equity investors can easily borrow money to finance the acquisition and can even use borrowed funds for more acquisitions once the company is private. In other words, the private equity firms leverage the strong cash position of the target company to borrow money. Then they restructure and improve the company’s bottom line and later either sell it or take it public. Such deals can produce returns of 30 percent to 40 percent.

    In 2006 Philips Semiconductors was sold by Philips to a consortium of private equity firms (Apax, AlpInvest Partners, Bain Capital, Kohlberg Kravis Roberts & Co. and Silver Lake Partners) through an LBO, forming a new separate legal entity named NXP Semiconductors. Philips was very late compared to other parent companies in disentangling from its semiconductor activities and, as NXP’s CEO Frans van Houten stated, the LBO was the only option left. In 2006 Freescale (formerly known as Motorola Semiconductors) agreed to be acquired by a consortium of private equity firms through an LBO. 2006 was a dramatic year for Freescale, since Apple decided to have its microprocessors supplied by Intel instead of Freescale.

  4. Acquisitions

    Chart 8. Acquisitions in the semiconductor industry.

    Acquisitions are part of life in the semiconductor world. As already stated, we only consider acquisitions amongst the top 20.

    In 2010 Micron Technology acquired Numonyx. In 2011 Intel took over Infineon Technologies’ wireless division, Texas Instruments acquired National Semiconductor, Qualcomm acquired Atheros Communications and ON Semiconductor acquired the Sanyo semiconductor division. In 2012 Samsung Electronics acquired Samsung Electro-Mechanics’ share of Samsung LED, and Hynix Semiconductor was acquired by the SK Group. In 2013 Intel acquired Fujitsu Semiconductor Wireless Products, Samsung Electronics sold its 4- and 8-bit microcontroller business to IXYS, Micron Technology acquired Elpida and, finally, Broadcom acquired Renesas Electronics’ LTE unit.

  5. New Entrants 

    Chart 9. The new entrants in the top 20.

    Even in this competitive, capital- and know-how-intensive industry there have been new entrants. The new entrants group is dominated by US companies. Marvell Technology Group, Ltd., a fabless semiconductor company, was founded in 1995. MediaTek Inc. is a fabless semiconductor company founded in 1997. Nvidia was founded in 1993.

    Other new entrants in the top 20 started earlier. Qualcomm, founded in 1985, has become nr. 3 in revenue in 2013. Micron Technology, Inc., founded in 1978, is nr. 4 in revenue in 2013. Samsung Electric Industries was established as an industry of the Samsung Group in 1969 in Suwon, South Korea; in 1974 the group expanded into the semiconductor business by acquiring Korea Semiconductor, and it now occupies the nr. 2 position.

Evaluation

Chart 10. Revenue development in the top 20 of the semiconductor industry from 1987 to 2013 in the different M&A categories as described above.

In Chart 10 the development of revenue is presented per M&A category. For the spin-offs (blue colors), FROM is the total revenue of the parent organisations and TO is the total revenue of the newly founded companies. The same format is applied to the mergers (green colors, FROM and TO) and the LBOs (orange colors, FROM and TO). The “others” category is a mix of companies for which we lack quantitative data on the M&A activity.

There were some new entrants in the industry between 1987 and 2013, and some of them succeeded in making it into the top 20. There were also new entrants in the top 20 that started a decade earlier; they occupy nrs. 2, 3 and 4 in the top 20 in 2013. The nrs. 1, 2, 3 and 4 all started as semiconductor companies or were embedded in a components conglomerate, like Samsung; these companies have a culture and a managerial mindset for the components business. The new entrants and Intel have grown substantially, to 50% of the total top-20 revenue in 2013. Most players in 1987 were conglomerates, where semiconductor activities were managed on their impact at corporate level and not on winning the semiconductor game. The companies that went through a spin-off, horizontal merger or LBO together deliver 25% of the revenue in 2013. Compared to the new entrants, we have to conclude that the companies that went through a deverticalisation process via spin-offs, mergers or LBOs have been part of a consolidation trajectory with, in absolute terms, no growth between 1995 and 2013.

It is tempting to speculate about the underlying causes, so here are some hypotheses:

  • The spin-offs were rather small companies, maybe too small to grow the business, although Infineon initially showed quite some growth. Analysts, however, report a lot of cash problems in this growth period. In the end the spin-offs did not lead to substantial growth.
  • In the mergers, especially the Japanese companies had declined in ranking during the period that they were still part of a conglomerate and then tried to regain scale by merging. Again the result in terms of growth is very disappointing.
  • There is evidence of poor post-acquisition performance of large acquirers (Harford, 2005).
  • There is a study of merger bidding contests showing that, in contests where beforehand each bidder had a fair chance of winning, the stock returns of the winning bidders are outperformed by those of the losing bidders. The opposite is true in cases with a predictable winner (Malmendier, Moretti, Peters, 2012).
  • The LBOs occurred very late in the deverticalisation stage. The performance three years after the LBO is not inspiring: both companies showed a severe loss of revenues. My impression, also based on an interview with NXP’s CEO van Houten, is that they were simply too late in considering M&A. For Philips all options were gone; even a last possibility, a merger with Freescale, was considered but rejected by Freescale. Fortunately, in the last few years NXP seems strong enough to recover from this transition.
  • Humans always seek a “why”; sometimes it is not there and it is just bad luck.
  • Spin-offs, mergers and LBOs are strategic interventions changing the footprint and structure of companies. Maybe these companies did not have a structure problem but a portfolio problem (markets, applications, products).

I guess the statements of Derek Lidow, president and CEO of market research firm iSuppli, in 1996 are spot on in explaining the strategic M&A decisions that led to no growth.

“….. the chip industry itself has been unable or unwilling to take the hard steps necessary for consolidation. Many chip companies are run by engineers, who tend to think in terms of technology rather than of how best to manage a product portfolio….”

“The real leverage in this kind of a deal is to do portfolio management,” he notes, which means managing groups of products by market segment or geographic region, for example, rather than by technology category. “That’s usually not done in the semiconductor industry.”

Sources

Harford, J., “What drives merger waves?”, Journal of Financial Economics, 77, 529-560, 2005

Malmendier, U., Moretti, E., Peters, F., “Winning by Losing: Evidence on the Long-Run Effects of Mergers”, NBER Working Paper 18024, 2012

How Do Networks Handle Stress?

Our society gets increasingly networked. Networks are everywhere: our technology is networked, our communication is networked, we work in networks, our money is taken care of in networks (I hope), our industries are networked, and so on. Never before have individuals and organisations had access to so much knowledge and technology. And that is thanks to networks.

“Normal Accidents”

But there is a drawback to networks. Networks are characterised by interacting nodes, allowing a lot of knowledge to be stored in the system. As nicely explained in RSA Animate’s video “The Power of Networks”, networks are a next step in handling complexity. Triggered by the 1979 Three Mile Island accident, where a nuclear accident resulted from an unanticipated interaction of multiple failures in a complex system, Charles Perrow was one of the first researchers to start rethinking our safety and reliability approaches for complex systems. In his Normal Accident Theory, Perrow refers to failures of complex systems as failures that are inevitable, and he therefore uses the term “normal accidents”.

Complexity and Coupling Drive System Failure

Perrow identifies two major variables, complexity and coupling, that play a major role in his Normal Accident Theory.

Complex systems:

  • Proximity
  • Common-mode connections
  • Interconnected subsystems
  • Limited substitutions
  • Feedback loops
  • Multiple and interacting controls
  • Indirect information
  • Limited understanding

Linear systems:

  • Spatial segregation
  • Dedicated connections
  • Segregated subsystems
  • Easy substitutions
  • Few feedback loops
  • Single-purpose, segregated controls
  • Direct information
  • Extensive understanding

Tight coupling:

  • Delays in processing not possible
  • Invariant sequences
  • Only one method to achieve goal
  • Little slack possible in supplies, equipment, personnel
  • Buffers and redundancies are designed-in, deliberate
  • Substitutions of supplies, equipment, personnel limited and designed-in

Loose coupling:

  • Processing delays possible
  • Order of sequences can be changed
  • Alternative methods available
  • Slack in resources possible
  • Buffers and redundancies fortuitously available
  • Substitutions fortuitously available

Although Perrow was, in 1984, very much focused on technical systems, his theory has a much broader application scope. Nowadays his insights also apply to organizational, sociological and economic systems. Enabled by digitization and information technology, the systems in our daily lives have grown in complexity. The enormous pressure in business to realize the value of products and services in the shortest possible time and at the lowest cost has driven our systems into formalization, reducing the involvement of human beings and their adaptivity relative to the complexity of the systems. Going through the list Perrow provided, one can understand that this “fast-and-cheap” drive generates complex and tightly coupled systems, maximizing resource and asset utilization and thereby closing in on the maximum stress a system can cope with. From Normal Accident Theory we learn that complexity and tight coupling (in terms of time, money, etc.) increase the system’s vulnerability to system failure, crises and collapses.

Building a Simple Simulation Tool to Simulate Networks Handling Stress

First I built a little simulator to play around with these mechanisms. It is based on MS Excel with some VBA code added, simulating a network of at most 20×20 nodes. In each iteration, nodes (one colored cell is a node) are added, but links between cells are added as well. Each node randomly receives a little bit of stress, which is added to the stress already built up in the previous iterations. Every iteration, some stress is drained out of the system. When there are only a few nodes in the system, most stress is drained away, so no system stress builds up. But after a while more and more nodes are alive in the system, and at a certain moment more stress is generated than the system can absorb. When the stress in a cell exceeds a certain threshold, the cell explodes and its stress is sent to all connected nodes. The distribution mechanism can be scaled between distributing the stress evenly over the connected nodes (real-life example: a flood) and transferring the full amount of stress to each of the connected nodes (real-life example: information). With this little simulator, I had a nice opportunity to see whether a crisis is predictable or not.
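The original simulator lives in Excel/VBA; the sketch below is a rough Python equivalent of the mechanics just described. All parameter values are hypothetical stand-ins, and only the “flood” distribution mode is implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MAX = 400          # at most 20x20 nodes
NODES_PER_IT = 2     # nodes added per iteration
LINKS_PER_IT = 3     # random links added per iteration
STRESS_PER_IT = 1.0  # average stress randomly added to each live node
DRAIN = 0.3          # fraction of each node's stress drained per iteration
THRESHOLD = 30.0     # stress level at which a node "explodes"

alive = np.zeros(N_MAX, dtype=bool)
stress = np.zeros(N_MAX)
links = np.zeros((N_MAX, N_MAX), dtype=bool)

history = []  # total system stress per iteration
for it in range(250):
    # grow the network: revive empty slots and add random links
    dead = np.flatnonzero(~alive)
    alive[dead[:NODES_PER_IT]] = True
    live = np.flatnonzero(alive)
    for _ in range(LINKS_PER_IT):
        if len(live) >= 2:
            a, b = rng.choice(live, 2, replace=False)
            links[a, b] = links[b, a] = True
    # add random stress to live nodes, then drain a fraction everywhere
    stress[alive] += rng.random(alive.sum()) * 2 * STRESS_PER_IT
    stress *= (1 - DRAIN)
    # cascade: overstressed nodes explode and pass their stress to neighbours
    over = np.flatnonzero(stress > THRESHOLD)
    while len(over) > 0:
        for n in over:
            nb = np.flatnonzero(links[n] & alive)
            if len(nb) > 0:
                stress[nb] += stress[n] / len(nb)  # "flood" mode: spread evenly
            stress[n] = 0.0
            alive[n] = False
            links[n, :] = links[:, n] = False
        over = np.flatnonzero(stress > THRESHOLD)
    history.append(stress[alive].sum())
```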

Start the video and click on the small square in the bottom right corner to enlarge.

Let’s check!

In advance I can set a number of parameters: how many nodes are added per iteration, how many links are created per iteration, the maximum stress allowed, the stress distribution mechanism, and how much stress is absorbed by the network’s eco-system. Of course, not everything is fixed: the links are placed randomly and the stress addition per iteration is randomized. What is intriguing, however, is that with the same settings I get very different results:

Example 1

In this example, beyond 60 iterations the system is no longer able to absorb all stress and keep the system stress at 0. Between iterations 60 and 100 the stress in the system grows. Around iteration 100 we get the first crisis: a lot of nodes are overstressed and die. The system stress shows a drawdown, but not a complete one; the system quickly recovers and moves into growth mode again, until it reaches another serious drawdown around iteration 170. Bumpy, but OK I guess.

Example 2

In this example, the first part of the graph is identical to Example 1. But in the first 100 iterations another network of nodes, with other connections than in Example 1, has been created. Another network that, as you can see, behaves dramatically differently. Around the 100th iteration we again get our first drawdown, but this is a serious one: it almost completely wipes out the network. A few nodes survive and the system rebuilds itself.
The complementary cumulative distribution of the drawdowns shows how often drawdowns of a given size have occurred in Example 1. The more the process is purely stochastic, the closer the identified drawdowns (peak-to-valleys) may be expected to stay to the line. Only in the tail is there sometimes a tendency to stay above the line.
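A minimal sketch of this analysis: extract the peak-to-valley drawdowns from the system-stress series (the `history` list from the simulator sketch above) and rank them on log-log axes.

```python
import numpy as np
import matplotlib.pyplot as plt

def drawdowns(series):
    """Sizes of all peak-to-valley declines in a time series."""
    out, i, n = [], 0, len(series)
    while i < n - 1:
        while i < n - 1 and series[i + 1] >= series[i]:   # climb to a local peak
            i += 1
        peak = series[i]
        while i < n - 1 and series[i + 1] < series[i]:    # descend to the next valley
            i += 1
        if series[i] < peak:
            out.append(peak - series[i])
    return np.array(out)

dd = np.sort(drawdowns(history))[::-1]   # `history` comes from the simulator sketch
rank = np.arange(1, len(dd) + 1)         # rank k = number of drawdowns >= dd[k-1]
plt.loglog(dd, rank, ".")
plt.xlabel("drawdown size")
plt.ylabel("number of drawdowns at least this large")
plt.show()
```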

However, in the system of Example 2 an almost complete collapse takes place, and you can see what Sornette calls a “Dragon-King”. And of course, at this value we are in an avalanche process that sweeps the stress overload through the network. According to Sornette, this is no longer a stochastic process, since it does not follow the trend line. This is his underpinning that there is something causal and thus predictable. But he is wrong. Yes, it is a lot of domino stones that in a cascade destabilize each other and lead to a bigger drawdown than the vertical axis indicates. But it is triggered by a stochastic process like all the others. As you can reason, this does not imply predictability. After the drawdown one can identify the crash, not before. Sornette seems to use an old trick to hit the target: first shoot the arrow, then draw the bull’s-eye around it, and finally claim you never miss and your predictions are spot on.

Figure: ten simulation runs with identical settings, plotted on top of each other.

In this graph, 10 simulation runs are depicted on top of each other. It shows that up to 100 iterations the system behaves more or less identically in all runs. However, in one of the runs a system has been built up during these 100 iterations that is capable, within boundaries, of keeping itself alive, while in all other runs the built network is almost completely destroyed and sometimes rebuilt; in one case all nodes died. There is no way that the graph of the system stress contains enough information to predict any system behavior after iteration 100.

If we run the system without allowing it to create any links, the system stabilizes at around 3000-3500. Any nodes added above the 3000-3500 level create a bubble and are compensated by nodes getting overstressed and killed. Since there are no links, no stress is transferred to other nodes; in other words, the domino effect does not occur. The system is in a dynamic equilibrium. When the build-up of links is allowed, and domino effects are possible, then as soon as the stability level of 3000-3500 is exceeded, the underlying network structure and the stochastic distribution of overstressed nodes determine the behavior and make the system chaotic and no longer manageable.

Can We Predict Crises in Networks?

Recently Didier Sornette gave a TED talk claiming he has found a way to predict crises in complex systems; see http://on.ted.com/Sornette. He shows examples from the financial world, health care and aerospace. This has the odeur of a grand achievement, and it may bring us a step closer to managing or maybe even preventing crises. After my initial enthusiasm (since if this works, we can stress out stuff even further at a lower risk), I went deeper into Sornette’s work. I became less and less convinced that this is sound scientific stuff. But if he is right, there is a lot of potential in his theory. So let’s dig deeper.
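The core of the method in the references below is fitting a log-periodic power law (LPPL) to a price or stress trajectory. Here is a minimal sketch of the model function itself; fitting it robustly, and turning a fit into a real-time prediction, is the hard part discussed next.

```python
import numpy as np

def lppl(t, tc, m, omega, A, B, C, phi):
    """Log-periodic power law:
    log p(t) = A + B*(tc - t)^m + C*(tc - t)^m * cos(omega*ln(tc - t) - phi),
    valid for t < tc, where tc is the estimated critical time of the crash."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)
```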

I also found a master’s thesis by Chen Li, University of Amsterdam, that tried to validate Sornette’s approach in depth. Here is the conclusion:
“In sum, the bubble detection method is capable of identifying bubble-like (superexponential growth) price patterns and the model fits the historical price trajectory quite well in hindsight. However, fitting the model into historical data and using it to generate prediction in real-time are two different things and the latter is much more difficult. From test results, we conclude that bubble warnings generated by the model have no significance in predicting a crash/deflation.” Sornette’s claim of “a method that predicts crises” is falsified. Calculating whether a system is overstressed and predicting a crisis are two different things.

The Answer Is No, But…

So, if we can’t predict the drawdowns, we have to avoid creating bubbles. But in techno-human systems we run into the phenomenon of the tragedy of the commons (an economics theory by Garrett Hardin, according to which individuals, acting independently and rationally according to each one’s self-interest, behave contrary to the whole group’s long-term best interests by depleting some common resource). Who is prepared to respect boundaries and limit system stress, while running the risk that others do not behave accordingly and overstress the system anyway? Nobody wants to be a sucker, so this bubble game will probably continue, except for a few well-organized exceptions (see my blog in Dutch (sorry) “Het managen van de groei”). Next to complicated social solutions, we can make the system more robust against crises by

  • Avoiding tight coupling and complexity in the design of a system
  • Collecting and analyzing data on “close-call” incidents and improving the system
  • Improving the interaction between operators (people working with or in the system) and the system, and applying human factors engineering

as described by Charles Perrow in 1984, among others.

Ruud Gal, July 25 2014

References

Charles Perrow, “Normal Accidents: Living with High-Risk Technologies”, 1984

Gurkaynak, R.S., “Econometric tests of asset price bubbles: taking stock”, Journal of Economic Surveys, 22(1): 166-186, 2008

Jiang, Z.-Q., Zhou, W.-X., Sornette, D., Woodard, R., Bastiaensen, K., Cauwels, P., “Bubble Diagnosis and Prediction of the 2005-2007 and 2008-2009 Chinese stock market bubbles”, http://arxiv.org/abs/0909.1007v2

Johansen, A., Ledoit, O. and Sornette, D., “Crashes as critical points”, International Journal of Theoretical and Applied Finance, Vol 3 No 1, 2000

Johansen, A. and Sornette, D., “Log-periodic power law bubbles in Latin-American and Asian markets and correlated anti-bubbles in Western stock markets: An empirical study”, International Journal of Theoretical and Applied Finance 1(6), 853-920

Johansen, A. and Sornette, D., “Fearless versus Fearful Speculative Financial Bubbles”, Physica A 337 (3-4), 565-585, 2004

Kaizoji, T., Sornette, D., “Market bubbles and crashes”, http://arxiv.org/ftp/arxiv/papers/0812/0812.2449.pdf

Lin, L., Ren, R.-E., Sornette, D., “A Consistent Model of ‘Explosive’ Financial Bubbles With Mean-Reversing Residuals”, arXiv:0905.0128, http://papers.ssrn.com/abstract=1407574

Lin, L., Sornette, D., “Diagnostics of Rational Expectation Financial Bubbles with Stochastic Mean-Reverting Termination Times”, http://arxiv.org/abs/0911.1921

Sornette, D., “Why Stock Markets Crash (Critical Events in Complex Financial Systems)”, Princeton University Press, Princeton NJ, January 2003

Sornette, D., “Bubble trouble: Applying log-periodic power law analysis to the South African market”, Andisa Securities Reports, March 2006

Sornette, D., Woodard, R., Zhou, W.-X., “The 2006-2008 Oil Bubble and Beyond”, Physica A 388, 1571-1576, 2009, http://arxiv.org/abs/0806.1170v3

Sornette, D., Woodard, R., “Financial Bubbles, Real Estate Bubbles, Derivative Bubbles, and the Financial and Economic Crisis”, http://arxiv.org/abs/0905.0220v1

Dan Braha, Blake Stacey, Yaneer Bar-Yam, “Corporate competition: A self-organized network”, Social Networks 33 (2011) 219-230

Chen Li, “Bubble Detection and Crash Prediction”, Master Thesis, University of Amsterdam, 2010

M.A. Greenfield, “Normal Accident Theory: The Changing Face of NASA and Aerospace”, Hagerstown, Maryland, 1998

http://en.wikipedia.org/wiki/Tragedy_of_the_commons


Roadmap Eindhoven Energy-Neutral 2045

In October 2013, Elke den Ouden (TU/e) and I started, commissioned by Mary-Ann Schreurs, alderman of Eindhoven, and Marc Eggermont, managing director of Woonbedrijf, on the creation of a roadmap that should bring the consumption of fossil fuels in Eindhoven down to zero by 2045. Of course this is just a dot on the horizon, and a very distant one, but it does help to steer in the right direction in the coming years.

Actually, the roadmap covers only a third of the challenge. A substantial part of fossil fuel consumption is caused by mobility (for example cars and airplanes), and roughly a third is caused by the fossil fuel consumed during the production, transport and also recycling of all the goods and food we consume in Eindhoven. Few people know, for example, that the piece of meat on your plate requires a great deal of fossil fuel. The roadmap concentrates on the built environment: houses, offices, businesses.

The roadmap was launched last week, on Wednesday 2 July, at the end of a workshop in which we translated the roadmap into actions for the coming years together with all kinds of stakeholders.

The areas of renewal


The first discussions with stakeholders in the early phase of the roadmap process already revealed three areas of substantive renewal. On the horizon it had been stated that Eindhoven must be energy-neutral in 2045. In the short term (bottom left in the sketch) it was clear that, although a number of people were already busy devising all kinds of solutions (preferably smart…), there were still others wondering what was going on and, in any case, not yet actively under way. On the left side of the path, all kinds of sustainable energy (partial) solutions were being devised, with a strong belief in a neighbourhood-by-neighbourhood approach. Below the path (bottom right in the sketch) there was also a group that began to realize that the energy flows in the form of gas and electricity to and from Eindhoven, but also within Eindhoven, might well require large, and therefore expensive, adjustments to the energy infrastructure in the coming decades. (The drawing, by the way, was made by Jan-Jaap Rietjens.)

Meaning, awareness and development of demand

When we started, Elke and I were not really experts in the field of sustainability. Far from it; what we do have is a lot of knowledge about how to arrive at a roadmap with a group of people and how to translate it into action. For both of us, certainly for me, it was a penetrating acquaintance. Before that time I had of course heard of sustainability, but I was not consciously adapting my behavior. When I began to realize that things will go terribly wrong with our planet, and thus with us, if we carry on like this, our family started to cycle more, eat less meat, have as few things as possible packed in plastic, use fewer plastic bags, collect plastic separately, and so on. But just as I was when I started this project, there are many who are not aware of what we will leave behind for our children and grandchildren if we carry on like this. The biggest challenge is to give sustainability meaning: passing on a planet that is good to live on for all living beings on this earth. Furthermore, many people do not know what actually burdens the environment heavily, and as long as you are not aware of that burden, you cannot steer towards more sustainability; awareness is the second key concept. And finally, the demand for solutions has to arise. If that demand does not arise, business will not invest, because companies are meant to create value and earn money in order to survive and develop further.

Value ladder

To make an energy system attractive to users and owners, a number of so-called value drivers (bottom row in the image) are important. A few important value drivers as examples: first of all, of course, the contribution to the reduction of fossil fuel use matters, but so does the economic value (does the investment pay itself back?). Many energy systems score badly on architectural value (“I don’t find it beautiful, I don’t want such a thing anywhere near my backyard”). When choices have to be made between systems, those systems will differ in their scores on the value drivers, and some users will make different choices than others. If you then ask why someone prefers one system, you hear a higher layer of value drivers. By repeating this for each layer, you arrive at a set of end values (related to the final goal) and also instrumental values (which say something about the way in which you want to reach the final goal). It is important to realize that people differ, and thus arrive at different choices from different end and instrumental values. This is an area that requires much more research and that can help match the current supply of energy systems much better to the needs of (different) people.

Energy systems

Energy is a highly complex theme. A great many different technologies are needed to bring fossil fuel consumption down to zero. The challenge is to build up knowledge about which system configurations are suitable for which kind of living or working environment.
Moreover, the field is still very much in technological flux. We do not yet have a really good solution for storing electrical energy: it can be done, but it is either very expensive or very heavy, and things can still go in many directions. So if we invest, how do we make sure that those investments do not have to be written off too quickly?

Energy infrastructure

In and on the soil of Eindhoven lie a great many infrastructures: gas, water, electricity, data, heat networks, fibre optics, sewerage, roads, railways. Infrastructures require large investments, but they also determine who does and who does not get access. The expectation is that a great deal of energy will be generated and consumed locally. However, solar panels only deliver energy on a clear day when the sun is up. How do we absorb those variations between day and night, between summer and winter? The same applies to heat and cold. And if everyone has an electric car at the door with a home charging point, that has gigantic consequences for the infrastructure, because suddenly a great deal of power has to be delivered.
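To get a feel for the order of magnitude, a hedged back-of-envelope sketch; the charger rating, simultaneity factor and household numbers below are assumptions for illustration, not grid data for Eindhoven:

```python
# Back-of-envelope sketch of the EV charging load on a neighbourhood feeder.
# All numbers are illustrative assumptions, not measured data.

households = 200             # assumed households on one low-voltage feeder
charger_kw = 11.0            # assumed home charger rating (3-phase, 16 A)
simultaneity = 0.4           # assumed fraction charging during the evening peak
baseline_kw_per_home = 1.0   # assumed conventional evening peak per household

ev_peak_kw = households * charger_kw * simultaneity
baseline_kw = households * baseline_kw_per_home

print(f"Conventional evening peak: {baseline_kw:.0f} kW")
print(f"Added EV charging peak:    {ev_peak_kw:.0f} kW")
print(f"Load multiplier:           {(baseline_kw + ev_peak_kw) / baseline_kw:.1f}x")
# With these assumptions the feeder peak grows roughly 5x, which is why
# home charging at scale forces grid reinforcement or smart charging.
```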

An energy-neutral solution for Eindhoven requires insight and knowledge in very many areas. For example: when it comes to generating energy with solar panels, it turns out that not every roof is suitable; take a look at the suitability of your own roof on the zonneatlas. There are also limitations to generation by means of wind. For rooftops there is doubt whether good solutions exist (but that race is not over yet; see for example IRWES). Then there is the option of large wind turbines, but because of the nearby airport the area where this is possible is limited too. In this way, for each layer, the current situation and the possibilities can be laid over the map of Eindhoven, and different energy systems will probably turn out to be possible or impossible.

Societal renewal


In innovation, the good ideas flow into application as part of a larger whole; the bad ideas should preferably be removed from the innovation stream at the lowest possible cost. Roughly speaking, there are three coarse steps. Is it a good idea in itself? One of the dilemmas is that a good idea means something different to everyone. If many parties all have to express themselves positively about an idea, and on top of that have to put work and money into it, it will be clear that the chance of the idea surviving quickly becomes much smaller (a small worked example follows below). That is why innovation in networks needs a lot of connection not only on the operational side, but also on the assessing side. In practice this is often not the case, and a potentially good idea wrongly perishes.
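To make that intuition concrete, a small worked example under the simplifying (and admittedly crude) assumption that each party judges the idea independently:

```python
# Sketch: if each of n parties independently approves an idea with
# probability p, the idea only survives if all of them approve.
# The independence assumption and the numbers are illustrative only.

def survival_probability(p: float, n: int) -> float:
    """Probability that all n parties approve."""
    return p ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} parties, each 80% positive -> {survival_probability(0.8, n):.0%} survives")
# 1 party -> 80%, 3 -> 51%, 5 -> 33%, 10 -> 11%:
# even broadly liked ideas die quickly when every party holds a veto.
```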

Finally, there is also the challenge of organising the renewal or innovation. Four clusters of parties are involved:
– governments
– business
– knowledge institutes
– citizens, users and owners.

What makes it complicated is that each party partly owns the energy challenge, but must also become part of the solution itself; indeed, must become part of devising the solution. It has become clear to me that the world of 2045 will look considerably different, technologically as well as socially and societally. Profound changes are needed, and we are only at the beginning.

If you want to read more, the references below list some literature that helped me a great deal in building insight.

It only remains for me to invite everyone to dig into this subject and join in! It belongs to all of us!

Ruud Gal

References:

Vision and roadmap: Eindhoven Energieneutraal in the built environment
Press release: launch of the roadmap
Report: Eindhoven Energieneutraal

Hurray, from now on we can predict crises better?… No such luck.

For some time now I have been interested in complex and complicated systems, in sustainability, and in the interaction between technology and people/society. It is above all the governance of these systems that intrigues me, and that also strongly overlaps with my own field, innovation management. In that respect the current era is a feast, because these domains contain plenty of unsolved problems of considerable societal relevance.

Recently I came across the work of Didier Sornette. Sornette holds the chair of Entrepreneurial Risks at ETH Zürich. In June 2013 he shared some of his thinking via TED.

Unlike Taleb (author of "The Black Swan" and "Antifragile"), who advises us above all to invest in things with nothing but upward potential (I am referring to "Antifragile"), Sornette argues that it is precisely the belief in the myth of upward potential that is one of the drivers of crises.

Sornette argues that our society holds a sacred belief in the myth that prosperity can be created by generating debt. Until 1980, the amount of money in circulation was kept in balance by tying it to productivity. The last time we had failed to do so (1929) had ended badly. But fifty years later the lesson had been forgotten: the amount of money in circulation became tied to consumption and to debt, and growth was no longer driven by productivity improvement (roughly 2-3% per year) but by growing debt (that is, by productivity improvements still to be realised in the future). And there it was: "the myth of the perpetual money generator".
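As a purely illustrative sketch of that imbalance (not of Sornette's own analysis): if consumption grows faster than productivity-based income, the shortfall must be financed by debt, and the debt-to-income ratio diverges. All growth rates below are assumed numbers:

```python
# Illustrative sketch: consumption outgrowing productivity-based income
# forces ever more debt. The rates are assumptions, not empirical values.

income = consumption = 100.0
debt = 0.0
productivity_growth = 0.025   # assumed ~2.5%/yr income growth
consumption_growth = 0.035    # assumed consumption growth outpacing income

for year in range(1, 31):
    income *= 1 + productivity_growth
    consumption *= 1 + consumption_growth
    debt += max(consumption - income, 0)   # shortfall financed with new debt
    if year % 10 == 0:
        print(f"year {year:2d}: debt/income = {debt / income:.2f}")
# The ratio keeps rising: growth "borrowed from the future" is not a
# perpetual money generator but an accumulating stress factor.
```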

Wages vs. consumption

The share of wages and of private consumption as a percentage of GDP for the United States, the European Union and Japan. Source: Michel Husson, http://hussonet.free.fr/toxicap.xls

The point is not to dredge up old news; the core of Sornette's work is that he can detect these "bubbles", and that without intervention such a bubble is inevitably followed by a severe crisis, which he can predict within a certain probability. These crises are euphemistically called phase transitions, and indeed they show a strong resemblance to phase transitions in physics. The mechanisms are:

  • the presence of one or more stress factors (e.g. GDP based on productivity (wages) vs. GDP based on consumption and debt)
  • the presence of positive feedback (self-reinforcing loops)

By tracking the ever-increasing stress, a crisis, a "dragon-king" as Sornette calls them, becomes predictable with a certain probability. His theory has proven capable of making good predictions not only in the financial world but also in other fields, such as earthquakes, landslides, epilepsy, birth, metal fatigue in rockets, and also the current overloading of our planet by humanity.
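As a toy illustration of how positive feedback produces the super-exponential growth that precedes such a phase transition, here is a minimal sketch; it is a stylised caricature of the mechanism, not Sornette's actual log-periodic models:

```python
# Toy sketch of positive feedback: growth whose *rate* grows with size,
# dx/dt = r * x**(1 + delta), diverges in finite time (a "finite-time
# singularity"), unlike plain exponential growth, which never does.
# This is a caricature of the mechanism, not Sornette's actual model.

r, delta, dt = 0.05, 0.5, 0.01
x_exp = x_fb = 1.0
t = 0.0

for step in range(int(60 / dt)):             # simulate 60 time units
    t = step * dt
    x_exp += r * x_exp * dt                  # exponential: dx/dt = r*x
    x_fb += r * x_fb ** (1 + delta) * dt     # self-reinforcing feedback
    if x_fb > 1e6:
        print(f"feedback system diverges near t = {t:.1f}")
        break
print(f"exponential system at t = {t:.1f}: x = {x_exp:.1f}")
# With these parameters the feedback system blows up around t = 40,
# while the exponential one has only grown to roughly x = 7.
```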


Source: W. Steffen, A. Sanderson, P. D. Tyson, J. Jäger, P. A. Matson et al., "Global Change and the Earth System", Executive Summary, IGBP Secretariat, Royal Swedish Academy of Sciences, 2004

Here are a number of stress factors, all showing ever-steeper growth: CO2, N2O and CH4 concentrations in the atmosphere, the ozone layer, surface temperature, floods, ocean ecosystems, the changing structure and biogeochemistry of coastal zones, loss of tropical rainforest, the amount of domesticated land, and declining biodiversity.

Sornette estimates that in the coming decades we will reach the critical limits and will then be forced to fall back deeply in a very short time, hopefully getting the chance to return to a level of environmental load that the earth can absorb (a sustainability equilibrium).

YouTube link, Didier Sornette: How we can predict the next financial crisis

His work gave me another series of "aha" moments with regard to sustainability; I have recorded them below.

My conclusions with regard to a transition towards a sustainable society:

  • It is an illusion to think that we can sustain our current way of living and of organising ourselves.
  • It is also an illusion to think that technology will solve this for us. Technology may push the moment of crisis back somewhat, but the fundamental problem of a society with self-reinforcing feedback loops calls for a fundamental political, social and economic breakthrough. Yet as a society we invest more in technology development than in social, and perhaps even political, innovation (see for example Elinor Ostrom's work on common pool resources).
  • Technology does determine the level of prosperity that is attainable given a sustainable load on our environment, but it will not save us unless we redesign our behaviour and our society on fundamentally different principles.

Sources: