How Powerful is Game Theory? Part 1 – A Satanic Game?

In his new book Ego. Das Spiel des Lebens (The Game of Life), Frank Schirrmacher, the well-known German columnist and editor of the Frankfurter Allgemeine Zeitung, attributes both the collapse of communism and the behavior of humans in modern capitalism to a combination of game theory and advanced computing. According to Schirrmacher, game theory has turned humans into completely rational egoists who run the entire economy through IT-controlled financial markets.

While Ego is quite obviously meant to be an entertaining piece of sensationalism rather than a textbook, no business school lecture or book on strategic planning would be complete without a chapter on game theory. As early as 1944, in their classic Theory of Games and Economic Behavior, the founders of game theory, mathematician John von Neumann and economist Oskar Morgenstern, pointed out the concept’s applicability to corporate strategic decisions.

To estimate the impact game theory can actually have in corporate strategy (to be discussed in part 2 of this article) or in the steering of whole economies, let us take a very brief look at what game theory actually does. This is hardly the place for a complete introduction to the rather large field of game theory, so we will simply recall some important aspects needed to outline the capabilities and limitations of the concept. To refresh your knowledge in more depth, there is a multitude of resources on the web, from articles and videos to presentations and whole books. The Wikipedia article is also a good starting point on the various types and applications of game theory for readers with some memory of the basics.

Game theory describes a decision in the form of a game with clearly defined rules that can be modeled mathematically. A game consists of a specified number of players (typically two in the games cited as examples) who have to make decisions (typically just one per game), and there are predefined payoffs for each player, which depend on the decisions of all players combined. Two-player games with only one decision can be described in the form of a payoff matrix, in which the columns represent the options of one player and the rows the options of the other. Each matrix field contains the payoffs for each player. Game theory then derives, for each player, the decision leading to the highest payoff. Commonly cited examples of games are the prisoner’s dilemma, the chicken game and the battle of the sexes.
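
To make the payoff-matrix formalism concrete, here is a minimal sketch in Python (purely for illustration, not part of any tool discussed later) that encodes the classic prisoner’s dilemma and finds each player’s best response to every option of the other; a cell that is a best response for both players at once is a Nash equilibrium.

```python
# Minimal sketch: a two-player, one-decision game as a payoff matrix.
# Payoffs are (row player, column player); the usual prisoner's dilemma
# years in prison are written as negative payoffs.
options = ["cooperate", "defect"]
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

def best_responses(player):
    """Return the row player's (player=0) or column player's (player=1)
    best option for each fixed choice of the opponent."""
    best = {}
    for other in options:
        def pay(own):
            cell = (own, other) if player == 0 else (other, own)
            return payoffs[cell][player]
        best[other] = max(options, key=pay)
    return best

# A cell is a Nash equilibrium if each player's choice is a best
# response to the other's choice.
equilibria = [
    (r, c) for r in options for c in options
    if best_responses(0)[c] == r and best_responses(1)[r] == c
]
print(equilibria)  # [('defect', 'defect')], i.e. mutual defection
```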

Various types of complexity can be added to such a game. There can be more players, who may or may not have previous knowledge of the others’ decisions. Players can aim to cooperate and achieve the highest total payoff, or they can compete and even try to harm each other. There can be several consecutive or simultaneous decisions to make, and the game can be played once or repeatedly. Payoff chances can be the same for each player or different; they can be partially unknown or depend on probability. For games with several rounds, complex strategies can be derived. If a strategy only specifies probabilities for each option, while the actual decisions are made randomly according to these probabilities, it is called a mixed strategy. As the complexity of the game grows, optimizing strategies analytically becomes increasingly difficult.
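
As an illustration of a mixed strategy (again a hedged Python sketch, not part of any tool described here), consider matching pennies, a game with no equilibrium in pure strategies: the sketch evaluates the row player’s expected payoff and shows that playing each option with probability 0.5 leaves the opponent nothing to exploit, which is exactly the mixed-strategy equilibrium.

```python
# Matching pennies: the row player wins (+1) if both coins match,
# the column player wins (i.e. the row player gets -1) if they differ.
# There is no pure-strategy equilibrium, only a mixed one.
row_payoff = {
    ("heads", "heads"):  1, ("heads", "tails"): -1,
    ("tails", "heads"): -1, ("tails", "tails"):  1,
}

def expected_row_payoff(p_row_heads, p_col_heads):
    """Expected payoff for the row player, given the probability
    with which each player shows heads."""
    probs = {
        ("heads", "heads"): p_row_heads * p_col_heads,
        ("heads", "tails"): p_row_heads * (1 - p_col_heads),
        ("tails", "heads"): (1 - p_row_heads) * p_col_heads,
        ("tails", "tails"): (1 - p_row_heads) * (1 - p_col_heads),
    }
    return sum(row_payoff[cell] * p for cell, p in probs.items())

# With the mixed strategy p = 0.5, the row player's expected payoff is 0
# whatever the column player does; the strategy cannot be exploited.
for p_col in (0.0, 0.3, 0.5, 1.0):
    print(p_col, expected_row_payoff(0.5, p_col))  # always 0.0
```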

Based on these principles, can game theory really deliver what is attributed to it? How powerful is this tool? Starting with Schirrmacher’s book: can game theory be the decision machine for a whole economy that he describes?

First of all, Schirrmacher ascribes the economic collapse of the Soviet Union and the whole communist bloc to the superior use of game theory by the US. That would, however, mean that the Soviet Union’s dwindling economic strength was caused by at least some kind of influence from a methodically acting outside competitor. In fact, the omnipresent problems of socialist economies around the world – misallocation of resources, inefficiency, lack of motivation, corruption and nepotism – came from within the system. Trade restrictions were limited to goods with potential military significance, and at least the East German economy was even kept alive with credits from the West. The only area in which American influence really had a massive impact on the Soviet economy was the excessive transfer of resources to the military sector in the nuclear arms race.

But did the United States really need intricate decision models to try to stay ahead technologically while maintaining a roughly comparable number of weapons to the potential enemy’s? Obviously not. Did it take game theory to understand the Soviet concept of outnumbering any opponent’s weapons by roughly a factor of three? That was simply the Red Army’s success formula from World War II, easily observable from the 1950s onward. Game theory has to quantify outcomes as payoffs, often in the form of money, or at least in terms of utility. Can such a model help to predict the secret decision processes, more often than not driven by personal motives, in the inner circles of the Soviet leadership? Does it contribute anything more valuable than the output of classical political and military intelligence? There is a reason why the military was much more interested in game theory as a tool for battlefield tactics than for global strategy.

So if game theory contributed little or nothing to the end of communism, how about Schirrmacher’s second hypothesis? Has game theory turned our decision makers into greedy rational egoists ignoring all social responsibility? Game theory does indeed derive decisions from a payoff matrix, and in the simplest form the payoffs just correspond to profits. Commentators point out that game theory can lead to cooperative as well as competitive strategies, but cooperative strategies also aim at maximizing individual or shared payoffs.

The actual point is that game theory in no way implies that a decision maker must or even should aim to maximize profits (although if the decision maker is a manager paid by his company’s shareholders waiting for their dividends, there are good arguments that he should, with or without game theory). Game theory attempts to show a decision maker which strategy should maximize an abstract payoff. That payoff may be profit, or it may result from any other utility function. For military officers, the payoff may correspond to minimizing their own casualties or to the number of civilians evacuated from a danger zone. For a sales manager, it may be the number of products sold or customer satisfaction.

Even if game theory identifies a strategy as leading to the maximum payoff, that still does not mean the decision maker has to follow it. Even if the payoff is identical to profit, for example, game theory can be used to estimate how much short-term profit must be sacrificed to follow a more socially accepted strategy.

In short, game theory is simply one of many decision support tools available to managers and as firmly or loosely linked to profit maximization as any other of these tools.

Where game theory does, however, always rely on maximization of the assumed (monetary or other) payoff is in predicting the probable decisions of the other parties involved, be they competitors or cooperation partners. Without further information on the other parties’ intentions, game theory has to assume they will maximize their payoffs – otherwise there is no basis for any calculation. In a situation where everyone uses game theory, maximizing payoffs should therefore even help the competitors, because it makes one’s actions predictable. What that implies for the applicability of game theory in actual strategic planning will be discussed in part 2 of this article.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

The Role of Databases for Strategic Planning – Case Study in Palo/Jedox

After discussing the general capabilities and limitations of databases for strategic planning in The Role of Databases for Strategic Planning – Some General Remarks, we looked at a case study of a relational database accessed through a special data mining tool: The Role of Databases for Strategic Planning – Case Study in Qlikview/Oracle. This time, we will look at a case study that is very similar from the business perspective but based on a different database concept. The database we will be looking at is Palo or Jedox (all product names mentioned are trademarks of their respective owners), accessed not through an external tool but through the database’s standard access mechanisms. Palo is an open source OLAP database; an upgraded version under commercial license is sold under the manufacturer’s name, Jedox. The case study was done using the latest open source version, Palo 3.1.

Again, we are addressing the issue that a planning database will generally contain input from very different people in different parts of the company, and possibly even from outside partners. Large planning databases may contain some replies to “what-if” questions, but discussing and testing the implications of a large number of future developments with so many contributors will usually be impractical. In most cases, information contributors from local marketing or external market research companies will not have enough time to contribute to extended strategic work. Therefore, it will be necessary to use the multitude of planning figures in the database as the basis for calculating scenarios and strategic options that are defined and evaluated later, at the corporate strategy level.

As in the Oracle/Qlikview example, the case study involves a manufacturer of electronic components, systems and services, operating in different regions with a number of product lines. The existing planning database contains basic sales and cost figures for the company’s own products and only sales data for the main competitor products or local competitor groups. Planning figures cover some years of back data plus four years of forecast. For the case study, we assume that all the data is stored in one database cube. That is not the most efficient way of storing the data, as zeros will be stored for competitor cost data, but it is probably a realistic way such a database would have been set up, for reasons of simplicity. Considering the limited data volume of 60 products in 32 regions over seven years, memory will hardly be a serious concern in this case anyway. The dimensions of the case study cube are product, region, year and figure (the figures being units sold, net revenue, direct cost and overhead cost).
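
To make the cube structure tangible, here is a minimal Python sketch of how such a four-dimensional cube could be represented; the dimension members and values are simplified stand-ins and do not reflect the actual Palo data model or API.

```python
# Hedged sketch of the case study cube: four dimensions (product,
# region, year, figure) with one numeric value per coordinate.
# In Palo/Jedox the cube lives in the OLAP server; a plain dict keyed
# by coordinate tuples is enough to illustrate the idea.
cube = {}  # (product, region, year, figure) -> value

# Own products carry all four figures ...
cube[("ProductA", "Region01", 2012, "units_sold")] = 1200.0
cube[("ProductA", "Region01", 2012, "net_revenue")] = 360000.0
cube[("ProductA", "Region01", 2012, "direct_cost")] = 150000.0
cube[("ProductA", "Region01", 2012, "overhead_cost")] = 80000.0

# ... while competitor products only carry sales figures; the cost
# cells stay at zero, which is the storage inefficiency mentioned above.
cube[("CompetitorX", "Region01", 2012, "units_sold")] = 900.0
cube[("CompetitorX", "Region01", 2012, "net_revenue")] = 250000.0

def get(product, region, year, figure):
    """Read a cell, treating missing coordinates as zero (much as an
    OLAP cube effectively does for unfilled cells)."""
    return cube.get((product, region, year, figure), 0.0)

print(get("CompetitorX", "Region01", 2012, "direct_cost"))  # 0.0
```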

As we will be using a standard Palo user interface, it is fairly simple to write data back to the database. Therefore, calculated simulation results can be stored in the database as well, in a different cube to keep them separate from data accessed by other users. The new cube has the simulated business case as an additional dimension, the year dimension is extended to cover the extrapolation timeframe, and the figure dimension stores additional figures calculated in the course of the simulation. A plain database access screen therefore looks as follows (original forecast database on top, simulation database on the bottom):

The first question to answer in accessing data from a Palo/Jedox database is which interface platform to use. There are two main data interfaces provided: one is Palo Web, a browser-based data access and manipulation tool that also allows calculations and macros; the other comes in the form of plugins for either MS Excel or OpenOffice Calc. The plugins allow simple access to the data from both tables and macros, and the data can then be manipulated using the full functionality of the respective program. As OpenOffice is less common in companies, its plugin was not tested for this case study.

In determining which solution works best for the simulation, we have to keep in mind which tasks will have to be performed by the tool. The simulation has to access relatively large amounts of data simultaneously, then perform complex calculations based on interactive assumptions. Most of the calculations will have to be done in macros.

With the Palo Excel plugin, a set of almost identical database access functions can be called either from table cells when a table is recalculated or directly from a macro. As accessing many adjacent database entries from a table can be done simultaneously in an optimized way, this form of database access is much faster than access from a macro. In fact, reading the whole data volume characterized above into a table is a matter of seconds. Table recalculation has to be set to manual and managed by macros after that to keep the tool performing at reasonable speed, but that can be done quickly and almost invisibly to the user. Once the data is in Excel, the respective table can be copied to a Visual Basic array. The fast Visual Basic compiler, with reasonable editing and debugging support, allows the convenient development of all necessary macros. Running the extrapolation to the simulation timeframe or an interactive simulation in the described manner, writing the data back to Excel tables and updating the respective displays and graphs takes less than five seconds for our example and is thus easily fast enough for interactive work. Writing the calculated data back to the database takes a few minutes, as writing, in contrast to reading, has to be done cell by cell. This step can therefore not be part of the regular interactive work, but should rather be offered as an option for storing the results at the end of an interactive session.
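
The read-in-bulk, simulate-in-memory, write-cell-by-cell pattern described above can be summarized in a short sketch. Note that palo_read_area and palo_write_cell below are hypothetical placeholders for the plugin’s bulk read and single-cell write mechanisms, not actual Palo API names, and the real tool uses the Excel plugin with Visual Basic macros rather than Python.

```python
# Hedged sketch of the access pattern described above; the two helper
# functions are hypothetical stand-ins, not the real Palo plugin calls.

def palo_read_area(cube, coordinates):
    """Hypothetical bulk read: fetch many adjacent cells in one call."""
    return {coord: 100.0 for coord in coordinates}  # stand-in data

def palo_write_cell(cube, coordinate, value):
    """Hypothetical single-cell write back to the database."""
    print(f"writing {value} to {cube}{coordinate}")

def interactive_session(forecast_coords, simulate):
    # 1. Bulk read: fast, done once per session (a matter of seconds).
    data = palo_read_area("forecast", forecast_coords)
    # 2. Simulation in memory: extrapolation and scenario calculations
    #    work on the local copy and stay interactive.
    results = simulate(data)
    # 3. Cell-by-cell write-back is slow (minutes), so it is offered only
    #    as an explicit "store results" step at the end of the session.
    def store_results():
        for coordinate, value in results.items():
            palo_write_cell("simulation", coordinate, value)
    return results, store_results

coords = [("ProductA", "Region01", year, "units_sold") for year in range(2010, 2017)]
results, store = interactive_session(coords, lambda d: {c: v * 1.05 for c, v in d.items()})
store()  # the deliberate, slow step at the end of the session
```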

Palo Web provides a table calculation tool similar to Excel or OpenOffice Calc in a browser window:

Palo Web files are stored on the server rather than locally, which may be interesting if they are to be accessed by several users. Cell manipulation and data visualization capabilities are quite similar to Excel and relatively easy to adjust to, but have their peculiarities and restrictions in some details. The database access functions themselves are practically identical to the ones provided by the standard software plugins. An important difference is the macro engine: Palo Web offers macros in the web programming language PHP. With its C-like syntax, PHP is relatively easy for an experienced programmer to adjust to, and it is well documented online. Remarkably, when comparing calculation times with Excel’s rather fast Visual Basic compiler, no significant differences were found. A major drawback of Palo Web’s macro capability, however, is the development environment. The editor provides at least some basic support like automatic indentation of passages in curly brackets, but debugging is extremely inconvenient. For a developer experienced in Excel, designing the tool surface will also take longer because of differences in the details. After the case study development ran into stability issues with the PHP macro engine when writing larger sets of data to either a table or a database, a clear preference was given to the Palo for Excel plugin.

The extrapolation of the forecast data read from the Palo database to the full simulation timeline allows assumptions to be selected in a way similar to the one described in the Qlikview/Oracle example. The ability to recalculate the extrapolation for single products rather than the whole market is only needed if the extrapolation assumptions are to be varied for different products. If all products are to be extrapolated using the same assumptions, the calculation for the whole market is so fast that the user would gain no time by recalculating only one product. To account for product lifecycles, the default extrapolation is not linear but fits standardized lifecycle curves to the data. Using linear extrapolation instead may be reasonable for competitor data that covers whole product portfolios.
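
As an illustration of the lifecycle-based extrapolation, here is a hedged sketch with invented numbers: a simple bell-shaped lifecycle curve (peak size, peak year, width) is fitted to the known planning years and then read off for the strategic timeframe. The shape of the standardized curve and the fitting procedure in the actual tool may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit a simple bell-shaped product lifecycle to the
# known planning years and extrapolate it to the strategic timeframe.
def lifecycle(year, peak_units, peak_year, width):
    return peak_units * np.exp(-((year - peak_year) / width) ** 2)

years_known = np.array([2010, 2011, 2012, 2013, 2014, 2015, 2016])
units_known = np.array([150., 320., 610., 900., 1050., 1080., 980.])

params, _ = curve_fit(lifecycle, years_known, units_known,
                      p0=[1000.0, 2015.0, 3.0])

years_strategic = np.arange(2010, 2023)
units_extrapolated = lifecycle(years_strategic, *params)
print(dict(zip(years_strategic.tolist(), units_extrapolated.round(0))))
```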

Each simulation calculates the combined effects of a selection of possible future scenarios and the company’s own strategic options. The possibility of combining scenarios is a departure from classical scenario theory, made possible because scenarios are treated as deviations from a baseline plan (the extrapolated numbers from the original planning database). A simulation with its selection of strategies and scenarios can also be stored as a business case. A business case stores the simulation results in the large extrapolation/simulation data cube and the selected scenarios and strategies with their properties in smaller cubes:

The planning tool is designed for a continuous strategy process that is to be used for several years. Over the course of this time, the expectations for the future can change significantly: new scenarios can become conceivable, existing scenarios can be ruled out, and the expected results of scenarios can change. The scenarios and their effects are therefore variable and can be changed interactively. Scenario properties include a name, a verbal description and effects on sales potential, price, direct cost and overhead cost, which can be set globally or for individual products, markets, countries or regions. Each scenario can store a combination of up to 10 effects.
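
How such scenario (and strategy) effects combine with the extrapolated baseline can be illustrated by a hedged sketch: each selected scenario or strategy contributes a relative effect, and the simulated value is the baseline multiplied by all selected effects. The real tool applies effects per driver, product and region; the single sales figure per year below is a deliberate simplification.

```python
# Hedged sketch: scenarios and strategies as multiplicative deviations
# from the extrapolated baseline (one sales figure per year stands in
# for the full driver/product/region model).
baseline_sales = {2017: 1000.0, 2018: 1050.0, 2019: 1100.0}

# Relative effects per year, e.g. -10% demand in a recession scenario,
# +5% from a sales push strategy. Several can be selected at once.
selected_effects = [
    {"name": "recession scenario",  2017: -0.10, 2018: -0.15, 2019: -0.05},
    {"name": "sales push strategy", 2017: +0.05, 2018: +0.05, 2019: +0.05},
]

def simulate(baseline, effects):
    simulated = {}
    for year, value in baseline.items():
        for effect in effects:
            value *= 1.0 + effect.get(year, 0.0)
        simulated[year] = round(value, 1)
    return simulated

print(simulate(baseline_sales, selected_effects))
# {2017: 945.0, 2018: 937.1, 2019: 1097.2}
```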

The strategy definition is very similar to the scenario definition. The difference is that strategies can only affect the company’s own products directly and will only have an indirect effect on competitor products through the redistribution of market shares. Strategies can also include adding complete new lifecycles to account for product innovation or the acquisition of competitors.

Once a simulation has been calculated, various visualizations are possible and will be automatically generated in the tool. The simplest visualization is the timeline view for a selected figure in a selected product and region. This view allows a detailed look at all the information calculated in the simulation.

For strategic decisions, more aggregated views may be reasonable. Portfolios are aggregated displays of this kind that decision makers will be familiar with. Risk portfolios display the expected value vs. the associated risk for a figure. In this case, the associated risk can, for example, be defined by the spread of possible results over all scenarios for the selected strategy.
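
A minimal sketch of how one point in such a risk portfolio could be computed, assuming for illustration that the expected value is the mean over scenarios and the risk is the spread between the best and worst scenario (the actual tool may define both differently, and the numbers are invented):

```python
# Hedged sketch: one point of a risk portfolio for a selected strategy.
# Each entry is the simulated result of that strategy under one scenario,
# e.g. cumulative contribution margin in million EUR over the timeline.
results_by_scenario = {
    "baseline":     12.4,
    "recession":     6.1,
    "price war":     4.8,
    "demand boom":  17.9,
}

values = list(results_by_scenario.values())
expected_value = sum(values) / len(values)   # position on the value axis
risk = max(values) - min(values)             # spread as a simple risk measure

print(f"expected value: {expected_value:.1f}, risk (spread): {risk:.1f}")
# expected value: 10.3, risk (spread): 13.1
```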

As already mentioned in the Qlikview case study, it must be kept in mind that the purpose of the strategic simulation is not to provide exact numbers on what sales will be in the year 2022 given a certain scenario, but rather to make it clear to what extent that value can vary over all included scenarios. No simulation can eliminate uncertainty, but a good simulation will make the implications of uncertainty more transparent.

If the data that is to serve as the basis for such simulations is stored in a Palo/Jedox database, it is reasonable to make this database a part of the simulation. In this case, separate cubes in the same database can be used to store simulation results and assumptions. In the case study, both Palo Web and Palo’s MS Excel plugin have been found to be usable interfaces into which the simulation tool can be integrated, but the plugin has turned out to be the slightly faster and significantly more stable solution, with advantages in development effort as well.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

The Role of Databases for Strategic Planning – Case Study in Qlikview/Oracle

In The Role of Databases for Strategic Planning – Some General Remarks, we have looked at the increasing role databases seem to have in strategic planning and the difficulties that arise in including future uncertainties in this kind of planning.

It appears quite impractical to get all the contributors to include possible future uncertainties in their input. First of all, they would have to generate massive amounts of data, costing them significant time besides their everyday work, which is often in marketing, market research or sales rather than planning. Second, they will probably have very different ideas and approaches as to what might change in the future, making the results difficult to interpret. In addition, database structures tend to resist change, so adjusting them to possible new developments someone foresees can be a lengthy and resource-intensive process.

Uncertainty-based planning can, however, be implemented by building a simulation on existing, single-future planning from a database, making it possible to vary assumptions ex post and allowing the user to build different scenarios and strategies based on the information provided by all the different contributors to the database. The simulation should build on the infrastructure and user interfaces generally used to get information from the database, so the implementation will be quite different depending on the given framework.

For the case study, let us look at a manufacturer of electronic components, integrated systems of these components and services around these components for different industries. 32 sales offices define the regional structure, which is grouped into second-level and top-level regions. There are 21 product lines with individual planning, and market research gathers and forecasts basic data for 39 competitor product lines or competitor groups. Product lines are grouped into segments and fields of business. Strategically relevant figures in the database are units sold, net revenue and, available only for the company’s own product lines, direct cost and overhead cost. There are a few years of historical data besides forecasts for the coming four years, which is sufficient for the operative planning the database was intended for, but rather short for strategic decisions like the building of new plants or the development or acquisition of new products. The strategically relevant total data volume is therefore small compared to what controlling tends to generate, but rather typical for strategy databases.

In this case, we will look at a relational database (e.g. Oracle – all brands mentioned are trademarks of their respective owners) accessed through the Qlikview business intelligence tool. As Qlikview provides the only access to the data the planner generally uses, the actual database behind it, and to a certain extent even its data model, is largely interchangeable. The basic timeline view of our case study database looks as follows:

To effectively simulate the effects of the company’s decisions under different future developments, we have to extrapolate the data to a strategic timescale and calculate the effects of different external scenarios and the company’s own strategies. Qlikview, however, is a data mining tool, not a simulation tool. Recent versions have extended its interactive capabilities, introducing and expanding the flexibility of input fields and variables, but Qlikview was generally not developed for complex interactive calculations. Future versions may move even further in that direction, but the additional complexity needed to include a full-fledged simulation tool would be immense. To do such calculations, one has to rely on macros, but Qlikview macros are notoriously slow, their Visual Basic Script functionality is limited compared to actual Visual Basic, and the macro editing and debugging infrastructure is rather… let’s say, pedestrian. Some experts actually consider it bad practice to use macros in Qlikview at all.

The only way around this limitation is to move the actual calculations out of Qlikview. We use Qlikview to select the data relevant for the simulation (which it does very efficiently), export this data plus the simulation parameters as straight tables to MS Excel or Access, run the simulation there and reimport the results.

Now, why would one want to export and reimport data to do a calculation as a macro in Excel, in essentially the same language? VBScript in Qlikview is interpreted: one line of the macro is translated to machine code and executed, then the next line is translated and executed, using massive resources for translation, especially if the macro involves nested loops, as most simulations do extensively. VBA in MS Office is compiled: at the time of execution, the whole macro has already been translated into executable form, which makes it orders of magnitude faster. In fact, in our case study, the rather complex simulation calculations themselves consume the least amount of time. The slowest part of the tool functionality is the export from Qlikview, which is even slower than the reimport of the larger, extrapolated data tables. In total, the extrapolation of the whole dataset (which only has to be done once after a database update) takes well under a minute on a normal business notebook, which should be acceptable considering the database update itself from an external server may also take a moment. Changing extrapolations for single products or simulating a new set of strategies and scenarios is a matter of seconds, keeping calculation time well within a reasonable frame for interactive work.

In most cases, a strategic planner will want to work with the results of his interactive simulations locally. It is, however, also possible to write the simulation results back into equivalent structures in the Oracle database, another functionality Qlikview does not provide. In that case (as well as for very large amounts of data), the external MS Office tool invoked for the calculation is Access instead of Excel. Through the ODBC interface, Access (controlled in Visual Basic, started by Qlikview) can write data to the Oracle database, making simulation results accessible to the selected users.

The market model used for the extrapolation and simulation in the case study is rather generic. Units sold are modeled based on a product line’s peak sales potential and a standardized product lifecycle characteristic of the market. A price level figure connects the units to revenue; direct cost is extrapolated as a percentage of revenue and overhead cost as absolute numbers. The assumptions for the extrapolation of the different parameters can be set interactively. Market shares and contribution margins are calculated on the side, leading to the following view for the extrapolation:

Of course, depending on the market, different and much more complex market models may be necessary, but the main difference will lie in the external calculations and will not affect the performance visible to the user. Distribution-driven markets can include factors like sales force or brand recognition, whereas innovation-driven markets can be segmented according to very specific product features. Generally, the market model should be consistent with the one marketing uses in shorter-term planning, but it can be simplified for the extrapolation to strategic timescales and for interactive simulation.
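
As an illustration of the generic market model described above, here is a hedged sketch with invented numbers (the actual drivers and their extrapolation assumptions are set interactively in the tool):

```python
# Hedged sketch of the generic market model: units from peak sales
# potential and a standardized lifecycle, revenue via a price level,
# direct cost as a share of revenue, overhead as absolute amounts.
def product_line_year(peak_potential, lifecycle_factor, price_level,
                      direct_cost_share, overhead_cost):
    units = peak_potential * lifecycle_factor          # lifecycle_factor in [0, 1]
    revenue = units * price_level
    direct_cost = revenue * direct_cost_share
    contribution_margin = revenue - direct_cost - overhead_cost
    return {"units": units, "revenue": revenue,
            "direct_cost": direct_cost,
            "contribution_margin": contribution_margin}

# Example: a product line currently at 80% of its peak sales potential.
print(product_line_year(peak_potential=10000, lifecycle_factor=0.8,
                        price_level=250.0, direct_cost_share=0.55,
                        overhead_cost=400000.0))
# {'units': 8000.0, 'revenue': 2000000.0, 'direct_cost': 1100000.0,
#  'contribution_margin': 500000.0}
```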

If a strategic simulation is developed for a specific, single decision, scenarios with very specific effects including interferences between different scenarios and strategies can be developed in the project team and coded into the tool. In the present case study, a planning tool for long-term use in corporate strategy, both scenario and strategy effects are defined on the level of the market model drivers and can be set interactively. Ten non-exclusive scenarios for future developments can be defined, and for each scenario, ten sets of effects can be defined for selected groups of regions and products. Scenarios affect both own and competitor products and can have effects on sales potential, price, direct and overhead cost.

Strategy definitions are very similar to scenario definitions, but strategies can only affect sales potential and price of own product lines. To account for the development or acquisition of new products, they can also add a life cycle effect, which can either expand the market or take market shares from selected or all competitors in the respective segment.

As opposed to classical scenario theory, the scenarios in this case study are non-exclusive, so the impacts of different scenarios can come together. Strategies can also be combined, so a strategy of intrinsic growth could be followed individually or backed up with acquisitions. A predefined combination of scenarios and strategies can be stored under a business case name for future reference.

Various visualizations of the evaluated data are automatically generated for interactive product and region selections using the Qlikview tools. In this case study, the linear timeline graph displays the baseline planning in comparison to the timeline resulting from the current scenario and strategy settings for a selected figure. For certain figures, the maximum and minimum values over all scenarios for the selected strategy are also displayed.

Risk portfolios can display expected values vs. risk (variation over all scenarios) for a selected strategy. Qlikview makes it very convenient to create these portfolios for selected products and regions, aggregated over adjustable parts of the timeline. As desired, other types of portfolio views (e.g. market share vs. market size) can also be created.

In all these cases, it must be kept in mind that the purpose of the strategic simulation is not to provide exact numbers on what sales will be in the year 2022 given a certain scenario, but rather to make it clear to what extent that value can vary over all included scenarios. No simulation can eliminate uncertainty, but a good simulation will make the implications of uncertainty more transparent.

Qlikview does not support these simulations directly, but using the workaround of external calculation, the user-friendly interface Qlikview provides allows convenient selection of values and assumptions as well as quick and appealing visualization of results.

In an upcoming case study, we will look at the same business background implemented in a very different database and interface environment.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

The Role of Databases for Strategic Planning – Some General Remarks

Large databases, traditionally the domain of the financial departments, are increasingly entering the world of strategic planners. Under the label “business intelligence”, database software and data mining tools are marketed to strategic planners, and their acceptance is quite obviously on the rise. Contributing factors could be a change of generations among planners, the availability of more user-friendly tools, increasing technological experience among those contributing data (who often have a background in marketing rather than technology) and a narrowing cultural gap between strategic management and the technology people necessarily involved in setting up and running such databases.

The main driver behind the spread of strategic management information systems, decision support systems and strategic planning cockpits, however, is the decision makers’ insatiable hunger for definitive answers, clear recommendations and solid data. Where traditional strategic concepts like portfolios or SWOT analyses are highly aggregated and deliberately vague in their conclusions, a strategic database can assign aggregated discounted cashflow numbers to a selection of potential future products, based on data from product and region experts across the company. We have to be aware, however, that the origins of such information about the future remain essentially the same: extrapolation, projection, estimates and, more often than not, educated guesses.

Working with databases in strategic planning offers some obvious advantages:

  • Databases help to avoid the chaos of versions and formats that often occurs when strategic information is traded within the company using standard office tools like tables or presentations. The data can be located on a central server or even an external cloud under the control of the corporate IT experts and governed by corporate IT security guidelines. Adequate access rights for the different users can be set individually or by standard rules.
  • Database user interfaces and data mining programs provide convenient tools to aggregate and visualize the gathered data, speeding up the process of generating bite-size information for decision makers and potentially reducing the workload in planning departments typically short of resources.
  • The standardization of data going into the database and the tools employed to fill it force contributors to address a certain minimum of questions in their planning process, adhere to common conventions and summarize their results in a predefined form.
  • Everybody discussing a decision can argue based on one agreed set of data, representing the best available, up-to-date information from experts across the company’s network, which may include external sales partners, market researchers and consultants.

These advantages, however, come at a price:

  • The clarity of versions and formats is not so much the result of the database itself, but of the strictly implemented strategic planning process that necessarily comes with it. If the thoughts behind a changed estimate in the database or a quick summary for an executive still end up being communicated in spreadsheets sent by e-mail, the advantage is eroded and the database becomes just one more data format users have to deal with.
  • The reduced workload resulting from the use of business intelligence tools has to be compared to the additional resources needed to set up and run the systems. The needed expertise will often not be available within the company, and even for the most user-friendly tools, the actual planning cockpits will in many cases be programmed by external consultants.
  • While standardized data structures to be filled define a minimum of questions to be addressed in generating the data, they also discourage any planning going beyond that, which may not fit into the database. Such standardization is particularly detrimental to any qualitative, critical or out-of-the-box thinking that could be priceless as an indicator of possible yet unknown threats or as a source of ideas for future growth not included in current planning.
  • The uniform view of the future defined by a planning database tends to reduce the awareness that the actual future will always be uncertain. The fact that the one future (or, at best, the generic base/best/worst case structure) defined in the database has been built from the input of many contributors and has been agreed upon between different departments makes it particularly difficult to argue against the results and ask the necessary “what ifs”.

Some of these challenges can be addressed early in the process of setting up the database. Looking for synergies with database solutions already in use in the company, for example in controlling, can reduce the workload and accelerate the learning curve in the introduction phase. However, it may also introduce a bias towards processes and structures that are not ideal for information that contains estimates for an uncertain future rather than numbers from a well-accounted past. Leaving space for unstructured information within the database costs technical efficiency, but it may end up containing the one piece of information that avoids the need for parallel data exchange by e-mail or the decisive warning about an external threat that might otherwise have gone unheard. Asking in time whether external support is to work as a consultant or merely as a programmer can save time and effort later and can avoid implementing potentially inefficient structures.

It is important to be aware that databases, data mining tools and even strategic planning cockpits can be an interesting source of information to be taken into account in a decision, but they are not decision tools. Asking the many “what ifs”, evaluating alternative strategies, testing for different external scenarios or analyzing potential competitors’ strategies can be done including information from such a database, but these, the actually decisive steps of strategic planning, are not done by the database. In most cases, the user interfaces employed are optimized for visualizing what’s in the database and are not even very well suited for interactively calculating the effects of assumptions that go beyond the scope of the underlying data structures.

It is, however, possible to develop tools to interactively calculate the impact of many different “what ifs” on the agreed planning basis, draw all the necessary information from the database and even write results for different scenarios back to the database, usually in separate but linked structures. The implementation will depend on the framework used, which will usually be either relational databases or multidimensional cubes. Furthermore, it depends on whether a separate data mining interface is used to access and visualize the data and if it should also provide the interface to the simulation and calculation tool.

In the upcoming weeks, we will look at two case studies on such interactive planning tools linked to pre-existing databases, both allowing the same scenario and strategic alternative evaluations on the same data, but in different database environments. One will be a relational database accessed through a data mining tool, the other a multidimensional cube providing its own user interface. We will look at similarities and differences of the two implementations and suggest ways to work around their respective limitations.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

The Difficulty of Estimating Probabilities

The May issue of the German Harvard Business Manager shows the results of a survey among corporate leaders on the importance of various risk factors (Die Sorgen der Konzernlenker, page 22). The survey was done at the World Economic Forum in Davos and asked about the estimated probability and expected effects of 50 predefined global risks. In the article, the results are plotted as the number of answers, color-coded in 5×5 grids (probability horizontal, effects vertical).

The striking result is not so much the five main problems identified in the article, but the similarity of the distributions of answers, particularly on the probability axis. Whereas the effects of some risks are actually (although not dramatically) seen as larger or smaller than those of others, the probability estimates show a strong tendency towards the center of the scale. In other words, the 469 participating global leaders consider all but a handful of the 50 global risks about equally probable. In fact, there are very similar distributions of probability estimates for things that are already happening, like the proliferation of weapons of mass destruction, deeply rooted organized crime or a rise in chronic diseases (the inevitable consequence of rising life expectancy), and for largely imaginary risks like the vulnerability to geomagnetic storms or unwanted results of nanotechnology.

On the one hand, this shows a general problem with the use of standardized scales in surveys. There is, of course, a tendency towards the middle of the scale, and if asked for numbers, different participants might actually assign very different numbers to a “medium” probability. Even the same participant might think of very different numbers for a “medium” probability depending on the category the risk is assigned to.

But the strange probability distributions also show something else: we are simply not very good at estimating the probability of future events, especially if they are rare or even unprecedented. In fact, the only future events to which we can assign a probability with a certain degree of confidence are events that recur quite regularly and where there is no reason to expect the underlying pattern to change. And, of course, any complicated probability calculations, most notably the value-at-risk approach commonly used in risk management, are voodoo if they hide the fact that the input probabilities are wild guesses.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

Preparing for an Uncertain Future in a Small Company: Coffee Table Talk at the World Skeptics Congress

After my talk at the World Skeptics Congress, I had an interesting conversation with the owner of a small, technology-driven company. The whole conversation lasted no longer than a cup of coffee. The main question was how a small to mid-size company can leverage the principles for dealing with uncertainty that I had outlined at the end of my conference talk. Here are the results of our talk (plus some explanations) in a few quick points:

  1. Work with the knowledge that exists in your company. If you needed outside experts to tell you how your own market works, you would have probably gone out of business already. Outside help may, however, be useful (and sometimes necessary) to moderate the planning and decision process, to calculate financial impacts and to ask critical questions. The more technical and market expertise a business leader displays, especially if he or she is also the owner or founder of the company, the more hesitant many employees may be to bring up risks they see or feel looming on the horizon.
  2. Make it clear what the purpose of the analysis is. Are there strategic decisions to make – if so, what are the options? If you want to test the viability of an ongoing strategy, what are the areas in which you could make adjustments? In addition, define a reasonable time range you want to plan. If you plan to build a production facility, that time range will be much longer than if you develop mobile phone software.
  3. Look at uncertainties inside-out, going from effects to possible causes. The number of things that can happen in the world around you is infinite. The number of significantly different impacts on your business is rather small. Start with the baseline plan for your business – you have one, explicitly or implicitly. Look systematically at what could change, for example in a tree structure: sales or cost could be impacted. On the sales side, demand or your ability to supply could change. A change in demand could come from the market size or your market share. Market size can change via volume or price level. Get rid of branches in the tree that are (even after critical questions) unrealistic, have negligible impact or would not be affected by the strategic options or adjustments you are evaluating. If you can’t prepare for it, there’s no point in planning for it. Also, don’t continue into branches that have identical or very similar impact on your actual business.
  4. Identify key drivers of uncertainty and find possible values. Each branch of the tree gives you a driver of uncertainty. Compare their impact and get rid of the minor ones. You should end up with no more than ten key drivers (or ten for each market you are in, if there are several). Then assign two to five possible values each driver could take in the future. Think of normal as well as unusual developments, but try to avoid the generic base/best/worst assumptions.
  5. Condense possible developments into scenarios. The different values of the drivers open up a cone (see graph below) of possible future developments. Scenarios are roads through that cone. The real future will not be identical to any one scenario, but should be somewhere around or between them. A scenario contains one possible value for each driver. Start out by looking for reasonable combinations of values, then build scenarios around them. There should be at least three scenarios, and more than five or six are rarely necessary. At least one scenario should cover the center of the cone of possible developments mentioned above, but others should lead to the more extreme corners, as well.
    Isolate single factors that don’t fit into the scenario logic, either because they are independent of anything else (oil prices or tax rates sometimes fall into that category) or because there is a feedback (for example, in a local market, competitors may react to the strategy you choose). There should be no more than two or three such factors. Keep them separate, and at the end of the whole process check whether varying them within reasonable limits changes the results of your analysis.
  6. Derive the impact on your strategy and options. In a larger company, I would develop an interactive business plan simulation to do this, but in smaller companies, a grid of results with manual calculations or estimates should do (a minimal sketch of such a grid follows after this list). One axis of the grid holds your strategic options or your current strategy and possible adjustments; the other axis holds the scenarios. Write down (with some basic numbers!) where your company will be at the end of the planning period for each combination of strategy and scenario. If a strategy looks catastrophic in one scenario or survivable but bad in several scenarios, you may want to stay away from it. If you decide on a strategy and find that it runs into trouble in one of the scenarios, derive which of the scenario’s values in the key drivers could function as an early warning indicator.
  7. Do it! Here’s your main advantage over some of the multi-billion-Euro companies out there: Once you have come to a conclusion, actually implement it. Write down which steps you have to take to make it happen and check them off. If you have come up with early warning indicators, hang them on your office wall, put them on your computer desktop or into your calendar at regular intervals and test them. The only bad thing you can do with this analysis is to let it rot in your drawer.
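
For point 6, a minimal sketch of such a result grid could look like the following; the strategies, scenarios and numbers are purely invented for illustration.

```python
# Hedged sketch of the strategy-vs-scenario result grid from point 6.
# Numbers are invented end-of-planning-period results, e.g. cumulative
# profit in thousand EUR for each combination.
grid = {
    ("stay the course", "stable market"):   800,
    ("stay the course", "price erosion"):   150,
    ("stay the course", "new competitor"): -400,   # survivable but bad
    ("invest in niche",  "stable market"):  650,
    ("invest in niche",  "price erosion"):  500,
    ("invest in niche",  "new competitor"): 300,
}

strategies = sorted({s for s, _ in grid})
scenarios = sorted({sc for _, sc in grid})

# Print the grid so the weak spots of each strategy stand out.
print("strategy".ljust(18) + "".join(sc.ljust(16) for sc in scenarios))
for strategy in strategies:
    row = strategy.ljust(18)
    row += "".join(str(grid[(strategy, sc)]).ljust(16) for sc in scenarios)
    print(row)
```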

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

Risk Perception and Risk Management in Companies and Management Science – Talk at the 6th World Skeptics Congress in Berlin

The 6th World Skeptics Congress in Berlin in May 2012 had the topic “Promoting Science in an Age of Uncertainty”. While the main focus of the congress was on pseudoscience, creationism and alternative medicine, one of the aspects discussed was the rationality of human risk assessment. The Saturday afternoon session featured a talk by Professor Walter Krämer, mathematician and one of Germany’s most renowned authors on risk perception and statistics (“So lügt man mit Statistik”, “Die Angst der Woche: Warum wir uns vor den falschen Dingen fürchten”), as well as a presentation by Holm Hümmler.

Holm Hümmler’s talk focused on the question of whether the structures of a company and the teachings of business schools make people’s decisions more rational or just irrational in a different way than those of humans acting on their own private behalf. The talk touched on the role of financial risk management as a way of dealing with future uncertainties, Nassim Taleb’s popular book The Black Swan and the decisions leading up to the 2009 financial crisis. The complete presentation is available for download here:

6th World Skeptics’ Congress, May 19, 2012, Session 6, Risk Perception and Risk Management in Companies and Management Science – Observations about Managers Confronted with an Uncertain Future