From Corporate Strategy to the Kindergarten: Uncertainty-Based Approaches in Capacity Planning

Because of the local relevance, this article is first published in German.

Dr. Holm Gero Hümmler is a Diplom-Physiker and Diplom-Wirtschaftsphysiker and has been working for more than 15 years as a management consultant on questions of planning under future uncertainty, mostly for larger companies. From 1993 to 1997 he was a member of the city council and of the social affairs committee of the city of Nidderau.

At first glance, planning a corporate strategy has relatively little to do with municipal childcare: in the one case, the subject is a profit-oriented company that can decide relatively freely about its business areas and, if necessary, exit individual businesses entirely; in the other, a municipal service that is partly a genuine statutory obligation and partly at least expected by the citizens concerned. Profit orientation is out of the question here, as childcare is one of the few municipal services for which fees, where they are still charged at all, cannot cover the costs. Larger administrative units now formulate strategic goals even in the social sector, in the form of a mission, vision or guiding principles, but an overly entrepreneurial mindset is neither realistic nor desired in many of these areas.

However, one planning task from corporate strategy does reappear almost identically in the childcare of practically every municipality: capacity planning. The challenges that arise are remarkably similar, too:

  • Future demand volumes are, as with a product established in the market, foreseeable in principle, but subject to noticeable uncertainties.
  • Building up additional capacity represents an investment of considerable magnitude for the organization as a whole.
  • Even when funds are available, capacity cannot be built up or reduced arbitrarily fast, neither on the staffing side nor on the real estate side.
  • Flexible capacity, for example in containers or via bus transport to other districts, comes at a long-term price, both in terms of cost and in terms of the quality offered.

The future uncertainties initially appear more manageable in childcare because competitor activities largely drop out as a source of uncertainty. Private childcare offerings are in many cases seen as relief rather than competition, and larger private or church-run daycare centers are at least known to the municipality through the planning process and are usually coordinated with it. Significant uncertainty in the future development of supply is therefore likely to exist only for childminders and, where applicable, private childcare associations. In addition, there may be uncertainties on the part of the federal state or the district as school authority concerning the introduction of all-day programs at schools, which can have a massive impact on the future demand for after-school care places.

On the demand side, the population register statistics provide a good starting point for planning, but the uncertainties should not be underestimated. Particularly in the growing area of care for children under three, the lead time between birth figures and childcare demand is short. Moreover, social developments can cause the demand for full-day kindergarten places, care for the youngest children or after-school care to fluctuate far more strongly than the demand for the care of three- to five-year-olds, which is already taken up practically across the board.

A good childcare offering is also increasingly becoming a location factor. This applies, for one, to attracting companies, for which nearby childcare is itself an important factor in the competition for suitable employees. Only a few large companies are in a position to provide childcare themselves – small and medium-sized enterprises generally depend on a good municipal offering. In rural areas, but also for the individual districts within a metropolitan area, a good childcare offering is also a key to a demographically stable, well-mixed population in the long term. Alongside affordable housing, childcare is probably the most important location factor for young families.

Municipalities therefore have a clear interest in ensuring a good childcare offering even beyond their statutory duties, but at the same time, precisely because demand there is particularly uncertain, they must plan the required capacity carefully in order to handle tax money responsibly. How, then, can such planning adequately account for the existing uncertainties?

For exactly this kind of uncertainty, there are planning approaches that have proven themselves in corporate planning. They exploit the fact that, while the actual future development cannot be predicted, the realistic range of possible developments can usually be narrowed down quite well for planning problems of this type. An increase in the number of toddlers in a municipality will, with high certainty, be followed two to three years later by an increase in the number of kindergarten children. Likewise, the number of children requiring care will not double within a short time – unless newly built housing in the district in question is currently being occupied on a large scale by young families.

Scenario planning captures uncertainties in the form of consistent future paths that are plausible in themselves but clearly distinguishable from one another. First, the parameters along which future developments can differ are compiled (e.g., in- and out-migration of young families, number of births, share of toddlers requiring care, …). Then, for each parameter, different future developments are sketched, and these are finally combined into consistent scenarios, each of which describes one development for every parameter. The scenarios are meant to trace different paths through the space of possible future developments and, where possible, to touch the boundaries of what is realistic.

In its classic form, scenario planning is a purely qualitative instrument that is not meant to deliver planning figures. However, the approach can easily be transferred to quantitative planning if the parameters and their values are formulated in numbers. Depending on the question at hand, the scenarios can either be quantified for a specific point in time or modeled as developments over time.
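As an illustration, a minimal sketch of what such a quantified scenario model could look like; the scenario names and all numbers are purely hypothetical:

```python
# Sketch of quantified childcare-demand scenarios (all numbers hypothetical).
SCENARIOS = {
    "baseline":   {"births_per_year": 120, "net_family_inflow": 10, "care_rate_u3": 0.45},
    "growth":     {"births_per_year": 140, "net_family_inflow": 40, "care_rate_u3": 0.55},
    "stagnation": {"births_per_year": 105, "net_family_inflow": -5, "care_rate_u3": 0.40},
}

def u3_places_needed(s, years=5):
    """Project the under-three childcare places needed in each of the next years."""
    births = s["births_per_year"]
    demand = []
    for _ in range(years):
        births += s["net_family_inflow"]                      # migration shifts cohort size
        demand.append(round(3 * births * s["care_rate_u3"]))  # roughly 3 cohorts aged 0-2
    return demand

for name, s in SCENARIOS.items():
    print(f"{name:11s} {u3_places_needed(s)}")
```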

Since the scenarios only show individual, exemplary developments, an interactive model can offer the opportunity to play through the relationships and to test how the possible future developments worked out in the scenario planning interact with conceivable measures of one's own. Beyond that, the model makes it possible to vary the scenario assumptions directly. Questions of the following form can then be answered: Under which conditions will the planned capacity last until year X? By when does the decision between extending and rebuilding a kindergarten have to be made? When would a deviating development first lead to noticeable waiting lists for childcare places, and would there still be time to counteract? Is it worth creating multi-purpose rooms or a temporary facility, or is permanent utilization of a new daycare center foreseeable?

If spreads and corresponding probabilities are estimated for the individual parameters underlying the scenarios, a statistical analysis of the resulting risks becomes possible. This can be done, for example, with a Monte Carlo simulation, in which random numbers within the assumed spreads are generated over many runs and the results for the overall system are recorded. This allows answers to questions such as: With what probability will today's after-school care places still be sufficiently utilized despite additional all-day school classes? How many new places for under-threes are needed to avoid long waiting lists in 2022 with a probability of 95 percent?
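A minimal Monte Carlo sketch of this idea, again with hypothetical spreads for births, migration and take-up rate, estimating the probability that a given capacity will suffice:

```python
import random

def p_within_capacity(capacity=180, runs=10_000):
    """Monte Carlo sketch: probability that under-three demand stays within capacity."""
    hits = 0
    for _ in range(runs):
        births = random.gauss(120, 15)           # assumed spread of yearly births
        inflow = random.gauss(10, 20)            # assumed net inflow of young families
        care_rate = random.uniform(0.40, 0.60)   # assumed take-up rate for under-threes
        demand = 3 * (births + inflow) * care_rate
        if demand <= capacity:
            hits += 1
    return hits / runs

print(f"P(demand <= 180 places): {p_within_capacity():.1%}")
```

Varying the capacity parameter over several runs then answers the 95-percent question above directly.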

Models of this kind are not new – some large corporations have used them for a long time, even though the data and quantitative estimates required there are often much harder to generate and far less precise because of market uncertainties. In the public sector, similar approaches are established, for example, in demand planning for transport infrastructure. With its steadily growing importance for society, childcare deserves a comparably professional look into the future.

How Much Does a Disease Cost? A Case Study with Relevance to Very Different Markets

Generally, when a company does a market analysis, it is interested in the sales potential of a current or future product. For most markets, data is available from market research companies for that purpose, and companies will generally have procedures in place to base their analyses and forecasts on it.

That, however, generally does not include looking at the global impact of what creates the market. Automobile manufacturers, for example, have increasingly been seeing themselves as suppliers of mobility for more than two decades, and while the impact is still not visible everywhere, they have started developing technologies accordingly. When looking at market volumes, however, the focus is still usually on the market for a certain type of vehicle, not for a certain mobility need – and for the decisions of everyday business, that is absolutely reasonable.

For a pharmaceutical company, a market analysis usually starts with the number of patients affected by a certain medical condition. Shortly after that, however, comes the question of the share of those patients that is or will be treated with pharmaceuticals. That share is then broken up by treatment regimes or substance classes to identify the share of the market that is accessible to a specific product. As always, whether that procedure makes sense depends on what is relevant for the decisions at hand. If the decisions to be made involve investing in certain products in different markets to build a well-rounded portfolio, this is certainly the way to go.

There are, however, situations when quite a different perspective on the market is needed. In the healthcare industries, that is particularly true when dealing with regulators, health insurance systems and politics. Putting pharmaceuticals in relation to the cost of hospitalization can make the price of a novel drug look much less outrageous, and even extended inpatient treatments can have moderate cost in comparison to the societal effects of inability to work. In other businesses, there are also stakeholders with a quite different focus: Putting company sales into the perspective of the overall impact of the problems which the company’s products are helping to solve can provide meaning to employees, affirmation to customers, and visions of future growth to stockholders. For these purposes, the numbers don’t have to be exact, but they have to be reasonable and defensible, and it should be possible to make the way they have been derived transparent in discussions, even if not all details can be revealed.

The challenge in providing this type of information is that it is relatively far from what internal marketing, strategy or research departments and even external market research companies are usually asked to deliver. Information on less globalized market segments (like home-care nursing services in healthcare or bicycle repairs in traffic) and on societal impacts (like missed work days or logistics time lost in congestion) is usually not available as comprehensive market research data. There is usually some fragmentary information available from scientific studies or statistics offices, but that will hardly suffice to give a clear view of the overall numbers, let alone of effects like market dynamics and the potential impact of uncertain future developments.

In such a situation, large companies are in a unique position to derive that kind of information: They tend to have a profound understanding of how the market works and the best (even non-public) data available for the specific segments of the global market they are active in. Combining that with publicly available, but less detailed, data can allow them to provide unique value in communication with regulators, politicians and internal stakeholders alike.

To illustrate the process of deriving that information, let us look at a company in the pharmaceuticals/healthcare sector that seeks to derive the overall cost, and the potential future cost dynamics, of a disease treated with the company’s products. The company has good knowledge of the basic drivers of the market, in this case total patients, their access to different levels of medical care, and their split into current treatment modalities (e.g. surgery, different drug types, …), usually on a country level. The societal impact of undiagnosed and untreated cases can also be included in this approach. There is also a sense of possible future developments of these drivers.

However, in a case like this, there will be very limited cost data available beyond the company’s own products and their immediate competitors. A pharmaceutical manufacturer will know its own prices and volumes as well as those of competitors, including their market shares, but will usually have at best vague knowledge of things like hospitalization or home-care costs. There will be information available on these components, but it will usually be fragmentary, and it will generally be difficult to estimate overall cost and cost dynamics from this information alone. The information may, for example, be limited to single countries and to single or varying composites of cost factors.

The approach in this case is to leverage the information available within the company to provide a basic structure to which the fragmentary knowledge on other cost factors can be linked. For that purpose, an overall cost component tree is developed, of which the overall drivers (patients and treatment modalities) and the company’s own market segments constitute the known part.
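As a structural illustration only, such a component tree might be represented along the following lines; the components and the two known shares are entirely hypothetical:

```python
# Hypothetical cost component tree: known own segments plus components still to be estimated.
cost_tree = {
    "drug_treatment": {            # known from own market data and competitor shares
        "own_products": 0.12,
        "competitor_products": 0.18,
    },
    "hospitalization": None,       # unknown, to be estimated from fragmentary sources
    "outpatient_care": None,
    "home_care": None,
    "indirect_cost": {             # e.g. missed work days; also to be estimated
        "sick_leave": None,
        "early_retirement": None,
    },
}
```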

To fill the unknown parts of the component tree, a number of sources from market research companies, scientific researchers, regulatory bodies and statistical offices are evaluated. They provide either aggregate total cost figures related to patient numbers, which can vary greatly by country, or ratios of different cost components, which tend to be more stable. The sources will be of different scope, relevance and validity: each source usually includes only a subset of the cost components, and in many cases, components will be included in the form of different composites. In addition, the information derived will often be contradictory.

That, however, is more a strength than a weakness of this approach: The model with the data available constitutes an overdetermined system, in which the weaknesses of the empirical data can balance each other in an overall numerical optimization.

In many cases, percent shares of cost components will be more stable between different sources and markets than absolute numbers. Therefore, the overall structure of the cost component tree is described in the form of shares, which can then be applied to the scope and granularity of the different data sources.

Rather than trying to derive the overall component shares directly from the incomplete and incoherent information in the sources, a “current best assumption” set of overall shares is used as a starting point for a systematic optimization process. For these assumed shares, expected results can be derived for the data structure of each source. From the deviations of all shares in each source, an error parameter for that source is calculated, resulting in an overall error for the assumed shares.

In the graph above, for each data point, the first row indicates the cost share of the component within the data available from that source. The second row is the expected cost share within this available data, based on the assumed share distribution shown at the top. The third row is the deviation of actual and expected share for that source, written as simple decimals. The total error parameter is calculated by adding the squares of the individual errors.

Minimizing the sum of squared errors follows the idea of a statistical regression. To reduce the effect of single outliers in component values, which may be results of unreliable data, the absolute value of deviations can be used instead of the squared errors. In addition, weights can be applied to account for differences in validity or relevance of the different sources. The weights are applied to the total error parameter for that source. If sufficient data is available, separate instances of the model can be implemented for sets of regional markets (e.g. mature, evolving, developing), in which the weights on the sources are varied depending on their respective relevance.

In the final step, the set of assumed shares is optimized, minimizing the overall error parameter. The optimization problem is nonlinear, but generally continuous. In real cases, there will often be dozens of sources and more than ten cost components per segment/treatment modality, so a numerical or heuristic optimization of the shares will be needed. Mathematical tools for this task are available for most commonly used data platforms. To evaluate the stability of the optimization, multiple optimization runs should be done, varying the starting assumptions.
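A minimal sketch of this optimization step, with entirely hypothetical components, sources and weights; it uses a general-purpose numerical minimizer (here scipy’s SLSQP), but any comparable solver would do:

```python
import numpy as np
from scipy.optimize import minimize

COMPONENTS = ["drugs", "hospital", "outpatient", "home_care", "indirect"]

# Hypothetical sources: each reports shares only for the subset of components it covers,
# renormalized to that scope; weights reflect assumed validity/relevance of the source.
SOURCES = [
    {"shares": {"drugs": 0.35, "hospital": 0.65},                         "weight": 1.0},
    {"shares": {"hospital": 0.50, "outpatient": 0.20, "home_care": 0.30}, "weight": 0.5},
    {"shares": {"drugs": 0.20, "indirect": 0.80},                         "weight": 0.8},
]

def total_error(shares):
    """Weighted sum of squared deviations between each source and the assumed shares."""
    assumed = dict(zip(COMPONENTS, shares))
    error = 0.0
    for src in SOURCES:
        scope = list(src["shares"])
        scope_total = sum(assumed[c] for c in scope)
        for c in scope:
            expected = assumed[c] / scope_total   # assumed share within this source's scope
            error += src["weight"] * (src["shares"][c] - expected) ** 2
    return error

start = np.full(len(COMPONENTS), 1 / len(COMPONENTS))   # "current best assumption"
result = minimize(
    total_error, start,
    bounds=[(0.01, 1.0)] * len(COMPONENTS),
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
    method="SLSQP",
)
print(dict(zip(COMPONENTS, result.x.round(3))))
```

Repeating the run with different values of `start` is the stability check mentioned above.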

The resulting component cost shares can now be used to extrapolate from the sum of the known components to the unknown ones. This works with relatively high precision for segments/treatment modalities in which significant components are known, but it can also be used across segments at lower precision.

To project the model into the future, scenarios for market dynamics can be applied to the known own market data, for which future scenarios may already be in place and agreed upon in the company, but also to the component shares. That way, it is also possible to project the effect of developments like increasing labor cost in evolving markets or expiring patents for key pharmaceuticals onto the overall market.

Obviously, the approach cannot replace a detailed market analysis in any new segment in preparation of significant investment decisions. Beyond the look at overall market size, it can, however, also serve as a quick screening for further opportunities in adjacent segments, which can then be investigated in more detail.

Why Smart People Struggle with Strategy – Reading Recommendation in HBR

Harvard Business Review Online has some interesting thoughts by Roger Martin on strategy, uncertainty and the problems that people with top-notch academic credentials might have with them.

Martin certainly isn’t wrong when he points out that strategy is not about finding the right answer, but about choices for an uncertain future. His basic suggestions, having diverse teams including people who have experienced failure and treating each other with respect, sure are nothing to disagree with in principle, either.

Still, there is a stale aftertaste to the article. Blaming strategy consultants for having the need to feel right and for having that correctness validated is a bit absurd when in most strategy projects, that is exactly what the consultants are hired to do. The client pays for confidence and certainty, so for the consultant, pointing out that that doesn’t exist is a tough job. If the client is willing to accept and handle uncertainty, I am convinced most of my colleagues across a multitude of companies will be more than willing and able to incorporate that in their projects.

The first thing one has to do about uncertainty is to acknowledge that it is there. The next step is to use methods to still come to a strategic decision, without pretending to know exactly how the future will turn out. Those are actually the much more important issues than who the people working on a strategy are, or if they might possibly be too smart for the job…

 

On Forecasting New Product Sales, Experience, Artificial Intelligence and Statistics

Forecasting sales for new products is among the most difficult tasks in planning. There is nothing to extrapolate, and in the early stages, there is not even a finished product to present to customers and get their feedback. At the same time, sales forecasts for product innovations are inevitable. Some innovators say they cannot forecast, some say they do not forecast, but in the end, whoever does not calculate sales forecasts explicitly will do it somewhere, somehow, implicitly – often in a less thoughtful and therefore less careful way.

For the following case study, let us assume we are in an innovation-driven industry, which regularly develops new products. The current sales forecasts for product innovations have been done based on model assumptions and market research. Controlling has shown significant, unexplained discrepancies between planning and actual sales developments.

[Figure: new product forecast demo – controlling view]

There is a variety of methods available to forecast product innovation sales. For development products that are advanced enough to be presented to potential customers, conjoint analysis offers a tool to gauge perceived product advantage. While that product advantage is quite useful to determine what might be seen as a fair price, its connection to the achievable market share is obvious as a fact but not trivial in numbers. The Dirichlet Model of Buying Behavior links market penetration to market share and can thus be used to estimate the impact of marketing activities. It assumes market shares to be constant over the relevant period of time, but appears sufficiently tested to be considered valid at least for the peak market share to be reached by a product. Regarding the development of market share over time, the Fourt-Woodlock model differentiates between product trials and repeat purchases, whereas the Bass diffusion model describes the early phases of a product lifecycle in more mathematical terms, claiming to model the share of innovators and imitators among customers. For high repeat purchase rates and a vanishing share of imitators, the two models become increasingly similar.
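As a quick illustration of the kind of curve these models produce, here is a minimal sketch of the Bass model’s cumulative adoption share; the values of p and q are illustrative only, not calibrated to any market:

```python
import math

def bass_adoption(t, p=0.03, q=0.38):
    """Cumulative adoption share F(t) in the Bass diffusion model.

    p: innovation coefficient, q: imitation coefficient (illustrative values).
    """
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

# Adoption share at the end of years 1..8 after launch
print([round(bass_adoption(t), 3) for t in range(1, 9)])
```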

The problem with all of these models is that in spite of all their assumptions, they still contain quite a few free parameters. In practical use, these parameters have to be derived from market research, taken from textbooks or guessed, introducing a significant degree of arbitrariness into the forecasts. The problem becomes apparent in this quote attributed to mathematician John von Neumann: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” In fact, we will do just that, and the trick will be to find the proper elephant.

A totally different approach from these models would be to go by experience. What has been true for previous product launches, by one’s own company or by others, should have a good chance of being true for future new products. Obviously, while having personal experience with the introduction of new products will help in managing such a process, a realistic forecast will have to be based on more than the experience of a few products and, therefore, individual managers. Individuals tend to introduce various kinds of bias in their judgements, which makes personal experience invaluable for asking the right questions but highly problematic for getting unbiased, reliable answers.

On the other hand, there is usually plenty of “quantitative experience” available. The company will have detailed data about its own product launches in the target market and in similar markets, and market research should be able to provide market volumes and market shares for competitors’ past innovations. Aligned, scaled to peak values and visualized in a forecast tool, the market share curves from past data could, for example, look like this:

[Figure: new product forecast demo – research view]

The simplest way to estimate new product sales would be to find a suitable model product in past data and assume the sales of the new product will develop in roughly the same way. That is relatively close to what an experienced individual expert asked for a judgement would probably do, with or without realizing it. Unfortunately, this approach can be thwarted by a variety of factors influencing the success of new products:

[Figure: new product influences]

The varying influence of all these factors will make it, at best, difficult to find and verify a suitable model in past data. Besides, judging from one model product yields no error bars or other indication of the forecast’s trustworthiness.

Apparently, to get a reasonable forecast, we need a more complex system, which should be able to learn from all the experience stored in past data. The term “learn” indicates artificial intelligence, and in fact, AI tools like a neural network could be used for such a task: It could be trained to link resulting sales or market share curves to a set of input parameters specifying the mentioned influences. The disadvantage of a neural network in this context is that the way it reaches a certain conclusion remains largely opaque, which will not help the acceptance of the forecast. Try explaining to your top-level management that you have reached a conclusion with a tool without knowing how the tool came to that conclusion.

On the other hand, there is no need to model a new product exclusively from past data without any further assumptions. All the more analytical models and forecast tools cited above have their justification, and they define a well-founded set of basic shapes, which sales of new products generally follow. New product market shares will usually rise to a certain peak value in a certain time, after which they will either decrease or gradually level off, depending on market characteristics. The uptake curve to the peak value will be either r- or s-shaped, and can usually be well fitted by adjusting the parameters of the Bass model. The development after the peak is usually of lesser importance and depends strongly on future competitor innovations, which are more difficult to forecast. Often, relatively simple models will be able to describe the data with sufficient accuracy – if they use the right parameters.

Generally, all published sales forecast models use market research data from actual products to verify their validity and to tune their parameters. The question is to what extent that historical data actually relates to the products and markets we want to forecast. In this case, we are simply using model parameters to structure the information we will derive from recent market data from our own markets. Based on an analysis of the available data, we have selected the following set of parameters to structure the forecast (a sketch of how they combine into a share curve follows the list):

  • Peak market share
  • Time from product launch to peak market share
  • Bass model innovation parameter p
  • Bass model imitation parameter q
  • Post-peak change rate per time period
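Taken together, these parameters pin down a complete market share curve. A minimal sketch of how they might be combined, with illustrative values only:

```python
import math

def market_share_curve(peak_share, t_peak, p, q, post_peak_change, periods):
    """Sketch: market share per period from the five forecast parameters.

    Uptake follows a Bass-shaped curve scaled to reach peak_share at t_peak;
    afterwards the share changes by post_peak_change per period.
    """
    def bass(t):  # cumulative Bass adoption share
        e = math.exp(-(p + q) * t)
        return (1 - e) / (1 + (q / p) * e)

    curve = []
    for t in range(1, periods + 1):
        if t <= t_peak:
            share = peak_share * bass(t) / bass(t_peak)   # scaled uptake
        else:
            share = curve[-1] * (1 + post_peak_change)    # post-peak erosion or growth
        curve.append(share)
    return curve

# Illustrative parameter values only
print([round(s, 3) for s in market_share_curve(0.20, 5, 0.05, 0.5, -0.03, 10)])
```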

The list may look slightly different depending on the market looked at. Tuning these parameters to the full set of the available data would lead to the average product. On the other hand, we have to take into account the influencing factors displayed in the graph above. These influencing factors can be quantified, either as simple numbers by scoring or by their similarity to the new product to be forecasted. This leads us to the following structure of influence factors, forecast parameters and forecasted market share development:

[Figure: dependency network]

If the parameters were discrete numbers, this graph would describe a Bayesian network. In that case, forecasting could take the form of a probabilistic expert system like SPIRIT, which was an interesting research topic in the 1990s.

In our case, however, the parameters are continuous functions of all the influencing factors, which we approximate using simple, mostly linear, dependencies. These approximations are done jointly in a multidimensional numerical optimization. For example, rather than calculating peak market share as a function of product profile scoring, everything else being equal, we approximate it as a function of product profile, order to market, marketing effort and the other influencing factors simultaneously. The more market research data is available, the more detailed the functions can be. For most parameters, however, linear dependencies should be sufficient. As the screenshot from the case study tool shows, the multidimensional field leads to reasonable results, even if research data is missing in certain dimensions (right-hand graph).

[Figure: new product forecast demo – parameter fit view]
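A minimal sketch of such a joint approximation, assuming a handful of past launches with hypothetical factor scores and fitting one forecast parameter (peak market share) on all influence factors at once by ordinary least squares:

```python
import numpy as np

# Rows: past product launches; columns: scored influence factors
# (e.g. product profile, order to market, marketing effort) - hypothetical values.
X = np.array([
    [0.8, 1, 0.6],
    [0.5, 3, 0.4],
    [0.9, 2, 0.9],
    [0.3, 4, 0.2],
    [0.7, 1, 0.5],
])
peak_share = np.array([0.24, 0.11, 0.28, 0.05, 0.19])  # observed peak market shares

# Joint linear approximation: peak_share ~ X @ beta + intercept
A = np.column_stack([X, np.ones(len(X))])
beta, *_ = np.linalg.lstsq(A, peak_share, rcond=None)

new_product = np.array([0.6, 2, 0.7, 1.0])  # factor scores for the product to forecast
print("forecast peak share:", round(float(new_product @ beta), 3))
```

The same fit, repeated for each of the five parameters, yields the full set of dependencies described above.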

In addition, confidence intervals can be derived in the fitting process, leading to a well-founded, quantitative market share forecast implemented in an interactive model that can be used for all new products in the markets analyzed. Implemented in a planning tool, forecasts from that model could look as follows:

[Figure: new product forecast demo – forecast graph]

Forecast numbers will depend on the values of the different influence factors selected for the respective product innovation.

The approach presented can be implemented for a multitude of different markets and products, provided there is sufficient market research data available. While it uses known and well-researched models to structure the problem, the actual information used to put numbers into the forecast stems entirely from sales data on actual, marketed products. Besides this pure form, it can also be combined with more theory-based approaches, depending on the confidence decision makers in the company have in different theories.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

What if Your Flight is a Few Years Late? – Planning for Investments that Depend on Others

We apologize for the inconvenience. Unfortunately, the Federal Republic of Germany and its capital are unable to complete the country’s most prestigious infrastructure project anywhere near on time. Whoever was planning on doing business around Berlin Brandenburg Airport had better have a Plan B somewhere in the drawer.

Unfortunately, as even international media is pointing out by now, such delays are not an unusual problem for large public or semi-public development projects in Germany. More often than not, they come with a hefty extra bill for the taxpayer: Hamburg’s new concert hall, the rebuilding of the Stuttgart central train station, the Cologne subway, even the new headquarters for Germany’s foreign intelligence service are due to open years behind schedule and with a total cost far beyond the original price tag.

Various commentators, including Oxford project planning professor Bent Flyvbjerg, see a method behind the madness: Faced with a public that is generally hostile towards new technology or infrastructure, German politicians tend to be overly optimistic on cost and timeline to get their investments through the political process. The resulting problems will usually be inherited by their successors, while after successful completion, much of the positive aspects of the project will be attributed to the long-retired initial promoter. Positive examples cited often come from the UK, like the 2012 Olympics or the extension of Heathrow Airport. However, it is quite obviously possible to complete major infrastructure investments according to plan in Germany as well: The Frankfurt airport operator Fraport has a history of completing projects like the North-West runway or Terminal 2 on time and within a reasonable budget, at least once external political and administrative hurdles have been cleared. Planning is not perfect there either, as it never will be: Terminal 2 was designed to handle airliners even larger than today’s A380, which are nowhere in sight, while passengers boarding the many much smaller planes there today mostly have to use rather inconvenient bus gates. However, the simple fact that the capacities promised were available at the time they were promised is a key factor for Frankfurt’s continued success.

So, are the delays at Flughafen Berlin Brandenburg a problem of strategic planning? They certainly weren’t caused by mistakes in strategic planning. If professor Flyvbjerg is right, they were caused by a combination of intentionally overambitious aims, supported by the politically elected board members, and unprofessional project management. To a certain extent, the delays of course represent a problem for strategic planning. However, no strategy could have prepared the company to survive maintaining a complete international airport that does not produce revenues for years without getting help from the (public) ownership.

The most interesting strategic planning questions come up outside the airport operator. About 80 tenants were waiting to start their business in the new terminal. Airport shuttles and logistics companies were preparing for the new airport.

For most companies planning to do business with, at or around the new Berlin airport, the fact that there are delays isn’t even the worst part. Many will be growing or migrating from existing businesses, and most will be renting rather than investing in actual real estate, so they should in principle be able to adjust to a change of schedule, given reasonable advance notice. Therefore, the postponement from October 2011 to June 2012, announced in June 2010, should not have been a disaster for most companies affected, as they were still far from operational readiness.

However, until May 8, 2012, the official opening ceremony of the airport was still announced for May 24 and the beginning of regular flight operations for June 3. Less than four weeks before facilities for 70,000 passengers a day had to be running, the beginning of operations was postponed to August, then to March 2013, then to October 2013, then “until further notice”. Even for tenants and contracting parties of the airport operator, who can at least hope for some degree of compensation, such a process must be a threat to a company’s very existence. At this point, employees have been hired and must be paid, merchandise or material has been ordered and can only be canceled at additional cost, interest on investment must be paid, and equipment starts losing value even if it isn’t used.

So what can planning contribute to limit the damage from such a timeline change? Is it primarily a matter of operative project planning or does it involve strategic questions?

From the point of view of uncertainty, this is in fact a rather simple planning problem, as there is one dimension of uncertainty dominating the whole process, and with the official dates fixed, the only direction the timeline can change to is backwards. The planning process, therefore, becomes a combination of traditional project planning with uncertainty-based strategic elements. The following points have to be addressed:

  1. Basic Project Plan: Given the official external timeline, what should the own project plan look like? What are the internal deadlines? In the case of a tenant planning to run a retail business at the new terminal, when should the furniture be ordered, when the merchandise, when is the workforce hired and trained, when must the IT be ready?
  2. Resulting Business Plan: What does the financial side of the project plan look like? What investments have to be made, when do running expenses start? When can the first revenues be generated?
  3. Dependencies: At which points could external uncertainties, in this case the timeline of the airport, affect the project plan?
  4. Modeling:
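A minimal sketch of what such a model might look like for a hypothetical tenant at the new terminal, with purely illustrative figures; the uncertain opening date is the input, the cash position over time the output:

```python
# Sketch: cash position of a hypothetical terminal tenant as a function of the
# actual opening delay (all figures illustrative).
def cash_position(months_delay, horizon_months=36):
    fixed_cost = 40_000        # staff, interest, storage per month once ready to open
    monthly_profit = 70_000    # operating profit per month after opening
    upfront_investment = 500_000
    cash = [-upfront_investment]
    for month in range(1, horizon_months + 1):
        if month <= months_delay:
            cash.append(cash[-1] - fixed_cost)        # paying without revenues
        else:
            cash.append(cash[-1] + monthly_profit)    # regular operations
    return cash

for delay in (0, 6, 18):
    print(f"delay {delay:2d} months -> cash after 3 years: {cash_position(delay)[-1]:,}")
```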

Interesting Interview on Game Theory – Kenneth Binmore in The European

The German issue of The European ran an interview on game theory (and more) with mathematician-turned-economist Kenneth Binmore today. Large parts of it are taken from an English version that appeared recently. Frank Schirrmacher’s book, mentioned here in How Powerful is Game Theory? Part 1, is discussed, or rather refuted, with a little more attention paid to it in the German version. The interview goes way beyond business implications and addresses mostly social and philosophical aspects, but it should be interesting to read for anyone with an interest in decisions and their implications.

How Powerful is Game Theory? Part 2 – A Powerful Tool for Strategic Planning?

In part 1 of this article, we took a skeptical look at game theory and the claims made by Frank Schirrmacher in his book Ego. So if game theory is not a satanic game that destroyed the Soviet Union and makes our economies a playground for selfish rational crooks, if it is, as I concluded, only one of many decision support tools available to managers, is it even the powerful tool described in management textbooks?

To get a clearer picture of the actual value of the concept, we will take a closer look at the advantages claimed for game theory in strategic planning and at the drawbacks directly associated with them:

  • Game Theory Studies Interactions with other Market Participants. In fact, game theory is the only commonly cited tool to explicitly study not only the effect of other market participants on one’s own success, but also the impact one’s own decisions will have on others and on their decisionmaking. However,
    Game Theory Focuses Unilaterally on These Interactions. In some special markets and for certain usually very large players, in-market interactions account for the majority of the business uncertainty, but most factors that most companies are exposed to are not interactive. The development of the economy, political, social or technological changes are not influenced much by a single company. If a tool focuses entirely on interactions and other important factors are hard to incorporate, they tend to be left out of the picture. With a highly specialized, sophisticated tool, there is always a danger of defining the problem to fit the tool.
  • Game Theory is Logical. The entire approach of game theory is derived from very simple assumptions (players will attempt to maximize their payoff values, specific rules of the game) and simple logic. Solutions are reached mathematically, usually in an analytical way, but for more complex problems, numerical approximations can be calculated, as well. But
    Game Theory is also Simplistic. Most of the problems for which analytical solutions are generally known are extremely simple and have little in common with real-life planning problems. Over the years, game theory has been extended to handle more complex problems, but in many cases even formulating a problem in a suitable way means leaving out most of the truly interesting questions. In most cases, information is much more imperfect than the usual approach to imperfect information, probabilistic payoff values, suggests. Most real problems are neither entirely single shot nor entirely repetitive, and often, it is not just the payoff matrices but even the rules that are unclear. Any aspect that doesn’t fit the logic of player interaction either has to be investigated beforehand and accounted for in the payoff matrix, or has to be kept aside to be remembered in the discussion of results. The analytical nature of game theory makes it difficult to integrate with other planning concepts, even other quantitative ones.
  • Game Theory Leads to Systematic Recommendations. Game theory is not just an analytical tool meant to better understand a problem – it actually answers questions and recommends a course to follow. On the other hand,
    These Recommendations are Inflexible. If a tool delivers a systematic recommendation, derived in a complex calculation, that recommendation tends to take on a life of its own, separating from the many assumptions that went into it. However, whichever planning tool is employed, the results will only be as good as the input. If there are doubts about these assumptions, and in most cases of serious planning there will be, sensitivities to minor changes in the payoff matrices are still fairly easy to calculate, but testing the sensitivity to even a minor change in the rules means the whole game has to be solved again, every time.
  • Game Theory Leads to a Rational Decision. Once the payoff values have been defined, game theory is not corruptible, insensitive to individual agendas, company politics or personal vanity. Although including the irrational, emotional factors in a decision can help account for factors that are difficult to quantify, like labor relations or public sentiment, being able to get a purely rational view is a value in and of itself. The drawback is,
    Game Theory Assumes Everybody else to be Rational, as Well. Worse than that, it assumes everybody to do what we consider rational for them. While some extensions to game theory are meant to account for certain types of irrationality of other players, the whole idea really depends on at least being able to determine how others deviate from this expectation.

These factors significantly impact the applicability of game theory as a decision tool in everyday strategic planning. In that case, why is it taught so much in business schools? Why are many books on game theory extremely worthwhile reading material?

  • Game Theory Points in a Direction often Neglected. There are not that many other concepts around to handle interdependencies of different market participants. Just like having no other tool than a hammer makes many problems look like nails, not having a hammer at all tends to cause nails to be overlooked. Many dilemmas and paradoxes hidden in in-market interactions have only been studied because of game theory and will only be recognized and taken into account by knowing about them from game theory, even if the textbook solutions are hardly ever applicable to real life.
  • Game Theory Helps to Structure Interdependencies. Although the analytical solution may not lead to the ultimate strategy, even without seeking an analytical solution at all, trying to derive payoff matrices leads to insights about the market. Systematically analyzing what each player’s options are and how they affect each other is a useful step in many strategic processes, even if other factors are considered more influential and other tools are employed.
  • Game Theory Shows how the Seemingly Irrational may be Reasonable. Game theory shows how even very simple, well-structured games can lead to very complex solutions, sometimes solutions that look completely unreasonable at first sight. This helps to understand how decisions by other market participants that seem completely unreasonable may in fact hide a method behind the madness.

In short, while game theory probably doesn’t provide all the answers in most business decisions, it sure helps to ask some important questions. Even if it is not the most adequate everyday planning tool, it is a good starting point for thinking – which is not to be underestimated.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

A Reading Recommendation that Points Beyond Planning

There are some interesting thoughts on risk culture in an article posted by the folks from McKinsey & Company: Managing the people side of risk. Rather than addressing the planning perspective, the article focuses on the challenges uncertainty poses for a company’s organization and culture. That is interesting, considering that planning alone will not ensure that a company is actually able to respond to uncertain developments. Three requirements for a company’s culture are pointed out:

  • The fact that uncertainty exists must be acknowledged
  • It must be possible and actively encouraged to talk about uncertainty
  • Uncertainty must be taken seriously, and respective guidelines must be followed

While the article stays mostly on the surface and never indicates that there also is a strategic perspective to the issue, it hints at some anonymized real-life cases and contains an important beyond-strategy viewpoint.

The authors use the term risk instead of uncertainty, following the colloquial sense of associating risk with a negative impact rather than with a quantifiable probability.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

Bayes’ Theorem and the Relevance of Warning Signs

One of the interesting things about working in the field of business strategy is that the most inspiring thoughts relevant for one’s work are usually not found in textbooks. Dealing with spaceflight, science fiction and fringe technology, the io9 portal is about as geeky as they come and can hardly be considered a typical manager’s reading. It does, however, contain fascinating ideas like this one: How Bayes’ Rule Can Make You A Better Thinker. Being a better thinker is always useful in strategy, and the article points out a number of interesting thoughts about how what we believe to be true is always uncertain to a degree and about the fallacies resulting from that. Unfortunately, it only briefly touches on what Bayes’ Theorem actually says, although that’s really not that difficult to understand and can be quite useful, for example in dealing with things like warning signs or early indicators.

Obviously, Bayes’ Theorem as a mathematical equation is based on probabilities, and we have pointed out before that in management, as in many other aspects of real life, most probabilities we deal with can only be guessed. We’re not even good at guessing them. Still, the theorem has relevance even on the level of rough estimates that we have in most strategic questions, and in cases like our following example, they can fairly reasonably be inferred from previous market developments. Bayes’ Theorem can actually save you from spending money on strategy consultants like me, which should be sufficient to get most corporate employees’ attention.

So, here is the equation in its simplest form and the only equation you will see in this post, with P(A) being the probability that A is true and P(A | B) being the probability that A is true under the assumption that B is true. In case you actually want more math, the Wikipedia article is a good starting point. In case that’s already plenty of math, hang in there; the example is just around the corner. And in case you don’t want a business strategy example at all, the very same reasoning applies to testing for rare medical conditions. This equation really fits a lot of cases where we use one factor to predict another:

P(A | B) = P(B | A) × P(A) / P(B)

What does that mean? Suppose we have an investment to make, maybe a product that we want to introduce into the market. If we’re in the pharmaceutical business, it may be a drug we’re developing; it can also be a consumer product, an industrial service, a financial security we’re buying or real estate we want to develop. It may even be your personal retirement plan. The regular homework has been done, ideally, you have developed and evaluated scenarios, and assuming the usual ups and downs, the investment looks profitable.

However, we know that our investment has a small but relevant risk of catastrophic failure. Our company may be sued; the product may have to be taken off the market; your retirement plan may be based on Lehman Brothers certificates. Based on historical market data or any other educated way of guessing, we estimate that probability to be on the order of 5%.

That is not a pleasant risk to have in the back of your head, but help is on the way – or so it seems. There will be warning signs, and we can invest time and money to do a careful analysis of these warning signs, for example do a further clinical trial with the drug, hire a legal expert, market researcher or management consultant, and that analysis will predict the catastrophic failure with an accuracy of 95%. A rare event with a 5% probability being predicted with a 95% probability means that the remaining risk of running into that failure without preparation is, theoretically, 0.25%, or, at any rate, very, very small. The analysis of warning signs will predict a failure in 25% of all cases, so there will be false warnings, but they are acceptable given the (almost) certainty gained, right? Well, not necessarily. At first glance, the situation looks like this:

[Figure: decision situation 1]

If we have plenty of alternatives and are just using our analysis to weed out the most dangerous ones, that should do the job. However, if the only option is to do or not do the investment and our analysis predicts a failure, what have we really learned? Here is where Bayes’ Theorem comes in. The probability of failure, P(A), is 5%, and the probability of the analysis giving a warning, P(B), is 25%. If a failure is going to occur, the probability of the analysis predicting it correctly, P(B | A), is 95%. Enter the numbers – if the analysis leads to a warning of failure, the probability of that failure actually occurring, P(A | B), is still only 19%. So, remembering that all our probabilities are, ultimately, guesses, all we now know is that the negative result of our careful and potentially costly analysis has changed the risk of failure from “small but significant” to “still quite small”.
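The arithmetic behind that number, using exactly the figures from this example:

```python
p_failure = 0.05                 # P(A): prior probability of catastrophic failure
p_warning = 0.25                 # P(B): probability that the analysis issues a warning
p_warning_given_failure = 0.95   # P(B|A): accuracy of the warning if failure is coming

p_failure_given_warning = p_warning_given_failure * p_failure / p_warning
print(f"P(A|B) = {p_failure_given_warning:.0%}")   # -> 19%
```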

[Figure: decision situation 2]

If the investment in question is the only potential new product in our pipeline or the one piece of real estate we are offered or the one pension plan our company will subsidize, such a result of our analysis will probably not change our decision at all. We will merely feel more or less wary about the investment depending on the outcome of the analysis, which raises the question whether the analysis is worth doing at all. As a strategy consultant, I am fortunate to be able to say that companies will generally spend only about a tenth of a percent of an investment amount on consultants for the strategic evaluation of that investment. Considering that a careful strategic analysis should give you insights beyond just the stop or go of an investment, that is probably still money well spent. On the other hand, additional clinical trials, prototypes or test markets beyond what has to be done anyway, just to follow up on a perceived risk, can cost serious money. So, unless there are other (for example ethical) reasons to try to be prepared, it is well worth asking whether such an analysis will really change your decisions.

Of course, if the indicators analyzed are more closely related to the output parameters and if the probability of a warning is closer to the actual risk to be predicted, the numbers can look much more favorable for an analysis of warning signs. Still, before doing an analysis, it should always be verified that the output will actually make a difference for the decisions to be made, and even if the precise probabilities are unknown, at least the idea of Bayes’ Theorem can help to do that. Never forget the false positives.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

Seven Essences of Decision-Making

Background
One thing right away: Decisions are always made at a specific point in time and by people facing specific advantages and disadvantages. Thus, a decision can only be “right” at this particular point in time and for that person. Whether this decision will hold up over a longer period cannot be known when it is being made. This uncertainty concerning decision outcomes cannot be avoided by avoiding decisions as such. Not taking a decision also represents a decision – to do nothing. In the present economic moment dominated by instability, managers often decide to fly by “visual flight rules” – certainly the worst option.

Vancore helps its clients structure and implement breakthrough decisions. As a specialized management consultancy, we put the focus on the right issues to develop customer-led solutions that clients own and are passionate about. Ultimately, clients work with us because they achieve a significantly higher Return on Decision (RoD). How do I make the right decision? What do I need to consider? Whom do I have to consult at which time? Vancore has identified seven central aspects which considerably impact the quality of decisions.

1.    People
They might really exist: ingenious, single-minded decision-makers, people like Ferdinand Piëch and Jack Welch, who take so-called A1 decisions (i.e., they gather all important information and then decide on their own). Even if such a decision turns out to be right, who takes care of the next step? Think about it carefully: What kind of expert knowledge do you need? Which interest groups need to be included? Is there resistance in the way? Change management is not something to be mentioned only on the final page of a concept. Start making changes now – right at the beginning of the decision-making process. Gather the most brilliant people on the team and work on the decision together.

2.    Facts
NDF – numbers, data, facts. We live in a world in which everything needs to make sense by the numbers. The result is that CEOs and directors are often overwhelmed by a flood of data. Whoever offers more data is on the safe side, is well-prepared and wins the presentation battle by pulling out slide 126.
Well, this all sounds nice. But what about the relevance of all that data? This question is hardly ever raised. Only when decision-critical findings emerge from the information were the previous analysis and all the preparation useful.

Be courageous and, together with your team, address the question of relevance: Which information is really important? Where are the blank spots on the map? Do we know the real causes of past problems? Do we have to know them in order to be able to decide? Go ahead and create a reliable basis of previous and current findings. Learn from the past. Set limits and try to avoid data overload. Do not make careless decisions; make well-balanced ones.

3.    Systemic Procedure
A process is a logical sequence of certain steps. In the Western world, this sequence usually proceeds from left to right, for example from analysis via conception to implementation. Surprisingly, this logical sequence is followed less frequently the higher the respective decision is located in the corporate hierarchy. In addition, decision makers usually concentrate more on options and recommendations concerning content than on the process of decision-making. Often, lively discussions on side topics therefore end up being mixed with crucial debates on distribution partners, relocations, and expansion strategies.

It makes a lot of sense to define a structured process and to follow it when making a decision with considerable impact. In the end, one will notice that many topics, if seen in combination, reveal a very different nature than under single inspection. Above all, the works of Nobel Prize winner Daniel Kahneman show us that big decisions emerge in iterative processes from insight, criteria, and options. Such a successful process is always transparent, comprehensible, and robust enough to be repeated a number of times. One conclusion therefore remains: A process can only function well if it is adhered to accordingly.

4.    Insight
Many leadership circles resemble soccer stadiums: Holding on to the ball, i.e., the total time someone has the floor, is what counts. Options for passing the ball are often ignored. Scoring, i.e., the final decision, is often neglected that way. Force yourself and your management team to draw specific conclusions. Engaging in discussions is nice, but what are we to learn from them specifically? Where is the insight? Should we engage in the Chinese market? Which risks are connected to this decision? Which key capabilities are relevant? Does our portfolio need adjustment?

At the decision-making point, the leadership team needs to put its cards on the table by advocating comprehensible and communicable learning points. This is how you structure your decision step by step – seemingly random building blocks form a structured pyramid. The crucial thing is that you and your employees can trace the decision-making process – it is the basis of successful implementation.

5.    Authenticity
In the realm of “ringi seido” (the Japanese process of decision-making), management simply receives an impetus. Central problems are then handed down to the lower management levels to be solved. This triggers a circulation process including all the respective levels and departments in order to then seek a corporate consensus. The entire process is accompanied by “nemawashi”, a concept that aims at informally involving all necessary people before a decision is made. In consequence, the actual decision turns into a more or less formal step.
In the West, however, decision-making is usually carried out differently. Although key decision makers and opinion leaders are informed on a regular basis before a meeting, the actual decision is “fought out” in the meeting itself.
As consultants who steer this process, we notice that the really important topics are often discussed only indirectly or not at all. “As long as I do not hurt you, you will not hurt me” is the tacit motto governing many of these procedures. What is missing is the passionate discussion of issues, not of persons. Dissent and constructive criticism are healthy. Some companies, such as the BMW Group, have even included dissent as a value in their guidelines. It must be allowed to struggle over certain key decisions. Authenticity in the discussion culture is therefore a key factor. How often do you leave a meeting with the awkward feeling that the major roots of problems have not been targeted? This is when a team needs to be honest with itself: Are we really willing to confront the uncomfortable issues and people? Only by addressing and resolving the deeper issues can the decision-making process move forward.

6.    Common Ground
Everyone looks after themselves first. Unfortunately, this has to be the case, since performance systems often only consider the contribution of individuals or individual functions. However, the individual performance of single people hardly ever advances a corporation. Team processes, however, take time. If done well, they create intensive self-identification with the organization and increase the motivation of employees. Team building does not happen accidentally. A short retreat – “we go to the countryside, hammer nails into planks and milk cows” – can trigger such a process. But what is the real value? The crucial question is: How important are culture and values to you?

7.    Consistency
Often, we hear that companies invest 100 euros in conception, idea and strategy while only spending 10 euros on implementation planning and execution. We do not understand why this is the case. Peter Drucker’s saying still holds true: “Culture eats strategy for breakfast.” Paper is patient, as experience teaches us.

Success depends on consistent and stringent action – meaning sustainably consistent behavior. A strategic project needs to be prioritized within a transparent project portfolio with clear targets, milestones, and resources. This prevents the magical multiplication of “submarine” projects in the CEO’s office, i.e., opaque and costly ventures. In a world of limited resources, saying “yes” to a project therefore means saying “no” to many other things which one would nevertheless like to do.

Our Conclusion:
There is no single recipe for successful decision-making. This is why we also discourage you from following quick and easy checklists (see also Sibony, O., for McKinsey in Harvard Business Manager, September 2011).
It is possible, however, to make a decision which is right for oneself, although the final result cannot be projected. Just take our Seven Essences of Decision-Making to heart. Your company and your employees will appreciate it.

Reinhard Vanhöfen
Vancore Group GmbH & Co. KG