Why Smart People Struggle with Strategy – Reading Recommendation in HBR

Harvard Business Review Online has some interesting thoughts by Roger Martin on strategy, uncertainty and the problems that people with top-notch academic credentials might have with them.

Martin certainly isn’t wrong when he points out that strategy is not about finding the right answer, but about making choices for an uncertain future. His basic suggestions – building diverse teams that include people who have experienced failure, and treating each other with respect – are nothing to disagree with in principle, either.

Still, the article leaves a stale aftertaste. Blaming strategy consultants for needing to feel right and to have that correctness validated is a bit absurd when, in most strategy projects, that is exactly what the consultants are hired to do. The client pays for confidence and certainty, so pointing out that such certainty does not exist is a tough job for the consultant. If the client is willing to accept and handle uncertainty, I am convinced most of my colleagues across a multitude of companies will be more than willing and able to incorporate it into their projects.

The first thing one has to do about uncertainty is to acknowledge that it is there. The next step is to use methods that still lead to a strategic decision without pretending to know exactly how the future will turn out. Those issues are far more important than who the people working on a strategy are, or whether they might possibly be too smart for the job…

 

On Forecasting New Product Sales, Experience, Artificial Intelligence and Statistics

Forecasting sales for new products is among the most difficult tasks in planning. There is nothing to extrapolate, and in the early stages, there is not even a finished product to present to customers and get their feedback. At the same time, sales forecasts for product innovations are inevitable. Some innovators say they cannot forecast, some say they do not forecast, but in the end, whoever does not calculate sales forecasts explicitly will do so somewhere, somehow, implicitly – often in a less thoughtful and therefore less careful way.

For the following case study, let us assume we are in an innovation-driven industry that regularly develops new products. The current sales forecasts for product innovations are based on model assumptions and market research. Controlling has shown significant, unexplained discrepancies between planned and actual sales developments.

[Screenshot: new product forecast demo – controlling view]

There is a variety of methods available to forecast product innovation sales. For development products that are advanced enough to be presented to potential customers, conjoint analysis offers a tool to gauge perceived product advantage. While that product advantage is quite useful to determine what might be seen as a fair price, its connection to the achievable market share is obvious in principle but not trivial to quantify. The Dirichlet Model of Buying Behavior links market penetration to market share and can thus be used to estimate the impact of marketing activities. It assumes market shares to be constant over the relevant period of time, but appears sufficiently tested to be considered valid at least for the peak market share to be reached by a product. Regarding the development of market share over time, the Fourt-Woodlock model differentiates between product trials and repeat purchases, whereas the Bass diffusion model describes the early phases of a product lifecycle in more mathematical terms, claiming to model the share of innovators and imitators among customers. For high repeat purchase rates and a vanishing share of imitators, the two models become increasingly similar.
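
As a brief illustration of the last point, here is a minimal sketch (in Python, with purely illustrative parameter values) of the cumulative Bass adoption curve; letting the imitation coefficient q go to zero collapses it to the exponential trial curve that also underlies the Fourt-Woodlock trial component.

    import numpy as np

    # Minimal sketch of the cumulative Bass diffusion curve, scaled to a peak
    # market share m; the values of p (innovation), q (imitation) and m are
    # illustrative assumptions, not results from the case study.
    def bass_cumulative(t, p=0.03, q=0.40, m=0.15):
        e = np.exp(-(p + q) * t)
        return m * (1.0 - e) / (1.0 + (q / p) * e)

    t = np.arange(0, 21)                            # periods since launch
    share = bass_cumulative(t)                      # s-shaped uptake towards the 15% peak
    trial_only = 0.15 * (1 - np.exp(-0.03 * t))     # q -> 0 limit: exponential trial curve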

The problem with all of these models is that in spite of all their assumptions, they still contain quite a few free parameters. In practical use, these parameters have to be derived from market research, taken from textbooks or guessed, introducing a significant degree of arbitrariness into the forecasts. The problem becomes apparent in this quote attributed to mathematician John von Neumann: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” In fact, we will do just that, and the trick will be to find the proper elephant.

A totally different approach from these models would be to go by experience. What has been true for previous product launches, by one’s own company or by others, should have a good chance of being true for future new products. Obviously, while having personal experience with the introduction of new products will help in managing such a process, a realistic forecast will have to be based on the experience of more than a few products and, therefore, of more than individual managers. Individuals tend to introduce various kinds of bias into their judgements, which makes personal experience invaluable for asking the right questions but highly problematic for getting unbiased, reliable answers.

On the other hand, there is usually plenty of “quantitative experience” available. The company will have detailed data about its own product launches in the target market and in similar markets, and market research should be able to provide market volumes and market shares for competitors’ past innovations. Aligned, scaled to peak values and visualized in a forecast tool, the market share curves from past data could, for example, look like this:
[Screenshot: new product forecast demo – research view]

The simplest way to estimate new product sales would be to find a suitable model product in past data and assume the sales of the new product will develop in roughly the same way. That is relatively close to what an experienced individual expert, asked for a judgement, would probably do, with or without realizing it. Unfortunately, this approach can be thwarted by a variety of factors influencing the success of new products:
[Diagram: influencing factors for new product success]
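
A minimal sketch of this “model product” idea, assuming the influence factors have already been scored for the new product and for past launches (the factor names and numbers below are purely hypothetical), could simply pick the past launch with the most similar scores:

    import numpy as np

    # Hypothetical influence-factor scores: product advantage, marketing effort,
    # order of market entry (illustrative values only).
    past_launches = {
        "product A": [0.8, 1.2, 1],
        "product B": [0.5, 0.9, 3],
        "product C": [0.7, 0.6, 2],
    }
    new_product = np.array([0.75, 1.1, 2])

    # Pick the past launch whose scores are closest to the new product's scores
    # and use its (scaled) market share curve as the forecast template.
    model_product = min(
        past_launches,
        key=lambda name: np.linalg.norm(np.array(past_launches[name]) - new_product),
    )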

The varying influence of all these factors will make it, at best, difficult to find and verify a suitable model in past data. Besides, judging from one model product yields no error bars or other indication of the forecast’s trustworthiness.

Apparently, to get a reasonable forecast, we need a more complex system, which should be able to learn from all the experience stored in past data. The term “learn” indicates artificial intelligence, and in fact, AI tools like a neural network could be used for such a task: it could be trained to link resulting sales or market share curves to a set of input parameters specifying the mentioned influences. The disadvantage of a neural network in this context is that the way it reaches a certain conclusion remains largely opaque, which will not help the acceptance of the forecast. Try explaining to your top-level management that you have reached a conclusion with a tool without knowing how the tool came to that conclusion.
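
For completeness, here is a sketch of how such a learned mapping could look, assuming the influence factors have been scored and the past curves reduced to a few parameters; all names and numbers below are invented, and a real model would need far more past launches than this.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Inputs: influence-factor scores of past launches (hypothetical values).
    X = np.array([[0.8, 1.2, 1], [0.5, 0.9, 3], [0.7, 0.6, 2], [0.9, 1.1, 1]])
    # Outputs: curve parameters of those launches, e.g. peak share and years to peak.
    y = np.array([[0.18, 4], [0.07, 7], [0.12, 5], [0.16, 4]])

    # Train a small neural network and predict parameters for a new product;
    # how it arrives at its prediction remains largely opaque.
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X, y)
    print(net.predict([[0.75, 1.1, 2]]))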

On the other hand, there is no need to model a new product exclusively from past data without any further assumptions. The more analytical models and forecast tools cited above all have their justification, and they define a well-founded set of basic shapes that sales of new products generally follow. New product market shares will usually rise to a certain peak value within a certain time, after which they will either decrease or gradually level off, depending on market characteristics. The uptake curve to the peak value will be either r- or s-shaped and can usually be fitted well by adjusting the parameters of the Bass model. The development after the peak is usually of lesser importance and depends strongly on future competitor innovations, which are more difficult to forecast. Often, relatively simple models will be able to describe the data with sufficient accuracy – if they use the right parameters.

Generally, all published sales forecast models use market research data from actual products to verify their validity and to tune their parameters. The question is to what extent that historical data actually relates to the products and markets we want to forecast. In this case, we are simply using model parameters to structure the information we will derive from recent market data from our own markets. Based on an analysis of the available data, we have selected the following set of parameters to structure the forecast:

  • Peak market share
  • Time from product launch to peak market share
  • Bass model innovation parameter p
  • Bass model imitation parameter q
  • Post-peak change rate per time period

The list may look slightly different depending on the market in question. Tuning these parameters to the full set of available data would yield the average product. On the other hand, we have to take into account the influencing factors displayed in the graph above. These influencing factors can be quantified, either as simple numbers obtained by scoring or by their similarity to the new product to be forecasted. This leads us to the following structure of influence factors, forecast parameters and forecasted market share development:
[Diagram: dependency network of influence factors, forecast parameters and market share development]

If the parameters were discrete numbers, this graph would describe a Bayesian network. In that case, forecasting could take the form of a probabilistic expert system like SPIRIT, which was an interesting research topic in the 1990s.

In our case, however, the parameters are continuous functions of all the influencing factors, which we approximate using simple, mostly linear, dependencies. These approximations are done jointly in a multidimensional numerical optimization. For example, rather than calculating peak market share as a function of product profile scoring, everything else being equal, we approximate it as a function of product profile, order of market entry, marketing effort and the other influencing factors simultaneously. The more market research data is available, the more detailed the functions can be. For most parameters, however, linear dependencies should be sufficient. As the screenshot from the case study tool shows, the multidimensional field leads to reasonable results, even if research data is missing in certain dimensions (right-hand graph).
[Screenshot: new product forecast demo – parameter fit view]
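
A minimal sketch of this kind of joint approximation for a single forecast parameter (peak market share), assuming hypothetical influence-factor scores and observed peaks for a handful of past launches, could use an ordinary least-squares fit; the full tool fits all forecast parameters in one multidimensional optimization, and nonlinear dependencies can be handled analogously with scipy.optimize.

    import numpy as np

    # Columns: product profile score, order of market entry, marketing effort
    # (hypothetical scores for six past launches).
    X = np.array([[0.8, 1, 1.2],
                  [0.6, 2, 0.9],
                  [0.7, 1, 0.5],
                  [0.5, 3, 1.0],
                  [0.9, 2, 1.1],
                  [0.4, 4, 0.6]])
    peak_share = np.array([0.18, 0.11, 0.12, 0.07, 0.16, 0.05])   # observed peaks

    # Joint linear approximation: peak share as a function of all factors at once.
    A = np.column_stack([np.ones(len(X)), X])          # add a constant term
    coef, residuals, *_ = np.linalg.lstsq(A, peak_share, rcond=None)

    new_product = np.array([1.0, 0.75, 2, 1.1])        # constant term + factor values
    predicted_peak = new_product @ coef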

In addition, confidence intervals can be derived in the fitting process, leading to a well-founded, quantitative market share forecast in an interactive model that can be used for all new products in the markets analyzed. Implemented in a planning tool, forecasts from that model could look as follows:

[Screenshot: new product forecast demo – forecast graph]

Forecast numbers will depend on the values of the different influence factors selected for the respective product innovation.

The approach presented here can be implemented for a multitude of different markets and products, provided sufficient market research data is available. While known and well-researched models are used to structure the problem, the actual information used to put numbers into the forecast stems entirely from sales data on actual, marketed products. Besides this pure form, the approach can also be combined with more theory-based approaches, depending on the confidence decision makers in the company have in different theories.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

Interesting Interview on Game Theory – Kenneth Binmore in The European

The German edition of The European ran an interview on game theory (and more) with mathematician-turned-economist Kenneth Binmore today. Large parts of it are taken from an English version that appeared recently. Frank Schirrmacher’s book, mentioned here in How Powerful is Game Theory? Part 1, is discussed – or rather refuted – with a little more attention in the German version. The interview goes well beyond business implications and mostly addresses social and philosophical aspects, but it should be interesting reading for anyone with an interest in decisions and their implications.

How Powerful is Game Theory? Part 2 – A Powerful Tool for Strategic Planning?

In part 1 of this article, we took a skeptical look at game theory and the claims made by Frank Schirrmacher in his book Ego. So if game theory is not a satanic game that destroyed the Soviet Union and makes our economies a playground for selfish rational crooks, if it is, as I concluded, only one of many decision support tools available to managers, is it even the powerful tool described in management textbooks?

To get a clearer picture of the actual value of the concept, we will take a closer look at the advantages claimed for game theory in strategic planning and at the drawbacks directly associated with them:

  • Game Theory Studies Interactions with other Market Participants. In fact, game theory is the only commonly cited tool to explicitly study not only the effect of other market participants on one’s own success, but also the impact one’s own decisions will have on others and on their decision-making. However,
    Game Theory Focuses Unilaterally on These Interactions. In some special markets and for certain usually very large players, in-market interactions account for the majority of the business uncertainty, but most factors that most companies are exposed to are not interactive. The development of the economy, political, social or technological changes are not influenced much by a single company. If a tool focuses entirely on interactions and other important factors are hard to incorporate, they tend to be left out of the picture. With a highly specialized, sophisticated tool, there is always a danger of defining the problem to fit the tool.
  • Game Theory is Logical. The entire approach of game theory is derived from very simple assumptions (players will attempt to maximize their payoff values, specific rules of the game) and simple logic. Solutions are reached mathematically, usually in an analytical way, but for more complex problems, numerical approximations can be calculated, as well. But
    Game Theory is also Simplistic. Most of the problems for which analytical solutions are generally known are extremely simple and have little in common with real-life planning problems. Over the years, game theory has been extended to handle more complex problems, but in many cases even formulating a problem in a suitable way means leaving out most of the truly interesting questions. In most cases, information is much more imperfect than the usual approach to imperfect information, probabilistic payoff values, suggests. Most real problems are neither entirely single-shot nor entirely repetitive, and often it is not just the payoff matrices but even the rules that are unclear. Any aspect that doesn’t fit the logic of player interaction either has to be investigated beforehand and accounted for in the payoff matrix, or has to be kept aside to be remembered in the discussion of results. The analytical nature of game theory also makes it difficult to integrate with other planning concepts, even other quantitative ones.
  • Game Theory Leads to Systematic Recommendations. Game theory is not just an analytical tool meant to better understand a problem – it actually answers questions and recommends a course to follow. On the other hand,
    These Recommendations are Inflexible. If a tool delivers a systematic recommendation, derived in a complex calculation, that recommendation tends to take on a life of its own, separating from the many assumptions that went into it. However, whichever planning tool is employed, the results will only be as good as the input. If there are doubts about these assumptions, and in most cases of serious planning there will be, sensitivities to minor changes in the payoff matrices are still fairly easy to calculate, but testing the sensitivity to even a minor change in the rules means the whole game has to be solved again, every time.
  • Game Theory Leads to a Rational Decision. Once the payoff values have been defined, game theory is not corruptible, insensitive to individual agendas, company politics or personal vanity. Although including the irrational, emotional factors in a decision can help account for factors that are difficult to quantify, like labor relations or public sentiment, being able to get a purely rational view is a value in and of itself. The drawback is,
    Game Theory Assumes Everybody Else to be Rational, as Well. Worse than that, it assumes that everybody will do what we consider rational for them. While some extensions to game theory are meant to account for certain types of irrationality in other players, the whole approach depends on at least being able to determine how others will deviate from this expectation.

These factors significantly impact the applicability of game theory as a decision tool in everyday strategic planning. If that is the case, why is it taught so widely in business schools? And why are many books on game theory still extremely worthwhile reading?

  • Game Theory Points in a Direction often Neglected. There are not that many other concepts around to handle interdependencies of different market participants. Just like having no other tool than a hammer makes many problems look like nails, not having a hammer at all tends to cause nails to be overlooked. Many dilemmas and paradoxes hidden in in-market interactions have only been studied because of game theory and will only be recognized and taken into account by knowing about them from game theory, even if the textbook solutions are hardly ever applicable to real life.
  • Game Theory Helps to Structure Interdependencies. Although the analytical solution may not lead to the ultimate strategy, even without seeking an analytical solution at all, trying to derive payoff matrices leads to insights about the market. Systematically analyzing what each player’s options are and how they affect each other is a useful step in many strategic processes, even if other factors are considered more influential and other tools are employed.
  • Game Theory Shows how the Seemingly Irrational may be Reasonable. Game theory shows how even very simple, well-structured games can lead to very complex solutions, sometimes solutions that look completely unreasonable at first sight. This helps one understand how decisions by other market participants that seem completely unreasonable may in fact hide a method behind the madness.

In short, while game theory probably doesn’t provide all the answers in most business decisions, it certainly helps to ask some important questions. Even if it is not the most adequate everyday planning tool, it is a good starting point for thinking – which is not to be underestimated.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

A Reading Recommendation that Points Beyond Planning

There are some interesting thoughts on risk culture in an article posted by the folks from McKinsey & Company: Managing the people side of risk. Rather than addressing the planning perspective, the article focuses on the challenges uncertainty poses for a company’s organization and culture. That is interesting, considering that planning alone will not ensure that a company is actually able to respond to uncertain developments. Three requirements for a company’s culture are pointed out:

  • The fact that uncertainty exists must be acknowledged
  • It must be possible and actively encouraged to talk about uncertainty
  • Uncertainty must be taken seriously, and respective guidelines must be followed

While the article stays mostly on the surface and never indicates that there also is a strategic perspective to the issue, it hints at some anonymized real-life cases and contains an important beyond-strategy viewpoint.

The authors use the term risk instead of uncertainty, following the colloquial sense of associating risk with a negative impact rather than with a quantifiable probability.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

Bayes’ Theorem and the Relevance of Warning Signs

One of the interesting things about working in the field of business strategy is that the most inspiring thoughts relevant for one’s work are usually not found in textbooks. Dealing with spaceflight, science fiction and fringe technology, the io9 portal is about as geeky as they come and can hardly be considered typical management reading. It does, however, contain fascinating ideas like this one: How Bayes’ Rule Can Make You A Better Thinker. Being a better thinker is always useful in strategy, and the article points out a number of interesting thoughts about how what we believe to be true is always uncertain to a degree and about the fallacies resulting from that. Unfortunately, it skips only very briefly over what Bayes’ Theorem actually says, although the theorem is really not that difficult to understand and can be quite useful, for example when dealing with warning signs or early indicators.

Obviously, Bayes’ Theorem as a mathematical equation is based on probabilities, and we have pointed out before that in management, as in many other aspects of real life, most probabilities we deal with can only be guessed. We’re not even good at guessing them. Still, the theorem has relevance even on the level of rough estimates that we have in most strategic questions, and in cases like our following example, they can fairly reasonably be inferred from previous market developments. Bayes’ Theorem can actually save you from spending money on strategy consultants like me, which should be sufficient to get most corporate employees’ attention.

So, here is the equation in its simplest form and the only equation you will see in this post, with P(A) being the probability that A is true and P(A | B) being the probability that A is true under the assumption that B is true. In case you actually want more math, the Wikipedia article is a good starting point. In case that’s already plenty of math, hang in there; the example is just around the corner. And in case you don’t want a business strategy example at all, the very same reasoning applies to testing for rare medical conditions. This equation really fits a lot of cases where we use one factor to predict another:

P(A | B) = P(B | A) · P(A) / P(B)

What does that mean? Suppose we have an investment to make, maybe a product that we want to introduce into the market. If we’re in the pharmaceutical business, it may be a drug we’re developing; it can also be a consumer product, an industrial service, a financial security we’re buying or real estate we want to develop. It may even be your personal retirement plan. The regular homework has been done, ideally you have developed and evaluated scenarios, and assuming the usual ups and downs, the investment looks profitable.

However, we know that our investment has a small but relevant risk of catastrophic failure. Our company may be sued; the product may have to be taken off the market; your retirement plan may be based on Lehman Brothers certificates. Based on historical market data or any other educated way of guessing, we estimate that probability to be on the order of 5%.

That is not a pleasant risk to have in the back of your head, but help is on the way – or so it seems. There will be warning signs, and we can invest time and money in a careful analysis of these warning signs, for example by doing a further clinical trial with the drug or by hiring a legal expert, market researcher or management consultant, and that analysis will predict the catastrophic failure with an accuracy of 95%. A rare event with a 5% probability being predicted with 95% accuracy means that the remaining risk of running into that failure without preparation is, theoretically, 0.25%, or, at any rate, very, very small. The analysis of warning signs will predict a failure in 25% of all cases, so there will be false warnings, but they are acceptable given the (almost) certainty gained, right? Well, not necessarily. At first glance, the situation looks like this:

[Diagram: decision situation 1]

If we have plenty of alternatives and are just using our analysis to weed out the most dangerous ones, that should do the job. However, if the only option is to do or not do the investment and our analysis predicts a failure, what have we really learned? Here is where Bayes’ Theorem comes in. The probability of failure, P(A), is 5%, and the probability of the analysis giving a warning, P(B), is 25%. If a failure is going to occur, the probability of the analysis predicting it correctly, P(B | A), is 95%. Enter the numbers – if the analysis leads to a warning of failure, the probability of that failure actually occurring, P(A | B), is still only 19%. So, remembering that all our probabilities are, ultimately, guesses, all we now know is that the warning produced by our careful and potentially costly analysis has changed the estimated risk of failure from “small but significant” to “still quite small”.
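
The arithmetic is easy to check; here is a minimal sketch using the probabilities from the example:

    # Numeric check of the example above, using the probabilities quoted in the text.
    p_failure = 0.05                  # P(A): prior probability of catastrophic failure
    p_warning = 0.25                  # P(B): probability that the analysis gives a warning
    p_warning_given_failure = 0.95    # P(B|A): probability of a warning if failure is coming

    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
    p_failure_given_warning = p_warning_given_failure * p_failure / p_warning
    print(f"P(failure | warning) = {p_failure_given_warning:.0%}")        # 19%

    # Probability of a failure that the analysis fails to announce
    p_unannounced_failure = p_failure * (1 - p_warning_given_failure)
    print(f"P(failure and no warning) = {p_unannounced_failure:.2%}")     # 0.25%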

[Diagram: decision situation 2]

If the investment in question is the only potential new product in our pipeline or the one piece of real estate we are offered or the one pension plan our company will subsidize, such a result of our analysis will probably not change our decision at all. We will merely feel more or less wary about the investment depending on the outcome of the analysis, which raises the question: is the analysis worth doing at all? As a strategy consultant, I am fortunate to be able to say that companies generally spend only about a per mille of an investment amount on consultants for the strategic evaluation of that investment. Considering that a careful strategic analysis should give you insights beyond the mere stop or go of an investment, that is probably still money well spent. On the other hand, additional clinical trials, prototypes or test markets beyond what has to be done anyway, just to follow up on a perceived risk, can cost serious money. So, unless there are other (for example ethical) reasons to try to be prepared, it is well worth asking whether such an analysis will really change your decisions.

Of course, if the indicators analyzed are more closely related to the output parameters and if the probability of a warning is closer to the actual risk to be predicted, the numbers can look much more favorable for an analysis of warning signs. Still, before doing an analysis, it should always be verified that the output will actually make a difference for the decisions to be made, and even if the precise probabilities are unknown, at least the idea of Bayes’ Theorem can help to do that. Never forget the false positives.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

Seven Essences of Decision-Making

Background
One thing right away: decisions are always made at a specific point in time and by people facing specific advantages and disadvantages. Thus, a decision can only be “right” at that particular point in time and for that person. Whether the decision holds up over a longer period cannot be projected when it is made. This uncertainty concerning decision outcomes cannot be avoided by avoiding decisions as such: not making a decision is also a decision – to do nothing. In the present economic climate dominated by instability, managers often decide to fly by “visual flight rules” – certainly the worst option.

Vancore helps its clients structure and implement breakthrough decisions. As a specialized management consultancy, we put the focus on the right issues to develop customer-led solutions that clients own and are passionate about. Ultimately, clients work with us because they achieve a significantly higher Return on Decision (RoD). How do I make the right decision? What do I need to consider? Whom do I have to consult, and when? Vancore has identified seven central aspects that considerably impact the quality of decisions.

1.    People
They might really exist: ingenious, single-minded decision-makers, people like Ferdinand Piëch and Jack Welch, who take so-called A1 decisions (i.e., they gather all important information and then decide on their own). Even if such a decision turns out to be right, who takes care of the next step? Think about it carefully: What kind of expert knowledge do you need? Which interest groups need to be included? Is there resistance in the way? Change management should not be mentioned only on the final page of a concept. Start making changes now – right at the beginning of the decision-making process. Gather the most brilliant people on the team and work on the decision together.

2.    Facts
NDF – numbers, data, facts. We live in a world in which everything needs to make sense by the numbers. As a result, CEOs and directors are often overwhelmed by a flood of data. Whoever offers more data is on the safe side, is well prepared and wins the presentation battle by pulling out slide 126.
Well, this all sounds nice. But what about the relevance of all that data? This question is hardly ever raised. Only when decision-critical findings emerge from the information were the previous analysis and all the preparation actually useful.

Be courageous and, together with your team, address the question of relevance: Which information is really important? Where are the white spots on the map? Do we know the real causes of past problems? Do we have to know them in order to decide? Go ahead and create a reliable basis of past and current findings. Learn from the past. Set limits and try to avoid data overload. Do not make careless decisions, but well-balanced ones.

3.    Systemic Procedure
A process is a logical sequence of certain steps. In the Western world, this sequence usually proceeds from left to right, for example from analysis via conception to implementation. Surprisingly, this logical sequence is followed less frequently the higher the respective decision is located in the corporate hierarchy. In addition, decision makers usually concentrate more on options and recommendations concerning content than on the process of decision-making. Often, lively discussions on side topics therefore end up being mixed with crucial debates on distribution partners, relocations and expansion strategies.

It makes a lot of sense to define a structured process and to follow this process when making a decision with considerable impact. In the end, one will notice that many topics, if seen in combination, reveal a very different nature than when inspected individually. Above all, the work of Nobel Prize winner Daniel Kahneman shows us that big decisions emerge iteratively from insight, criteria and options. Such a successful process is always transparent, comprehensible and robust enough to be repeated a number of times. One conclusion therefore remains: a process can only function well if it is actually adhered to.

4.    Insight
Many leadership circles resemble soccer stadiums: holding on to the ball, i.e. the total time someone has the floor, is what counts. Options for passing the ball are often ignored. Scoring, i.e. the final decision, is often neglected that way. Force yourself and your management team to draw specific conclusions. Engaging in discussions is nice, but what are we to learn from them specifically? Where is the insight? Should we enter the Chinese market? Which risks are connected to this decision? Which key capabilities are relevant? Does our portfolio need adjustment?

At the decision-making point, the leadership team needs to put its cards on the table by advocating comprehensible and communicable learning points. This is how you structure your decision step by step – seemingly random building blocks form a structured pyramid. The crucial thing is that you and your employees can trace the decision-making process – it is the basis of successful implementation.

5.    Authenticity
In the Japanese decision-making process of “ringi seido”, management merely provides an impetus. Central problems are then handed down to the lower management levels to be solved. This triggers a circulation process involving all relevant levels and departments in order to reach a corporate consensus. The entire process is accompanied by “nemawashi”, a concept that aims at informally involving all necessary people prior to making a decision. As a consequence, the actual decision becomes a more or less formal step.
In the West, however, decision-making is usually carried out differently. Although key decision makers and opinion leaders are regularly informed prior to a meeting, the actual decision is “fought out” in the meeting itself.
As consultants who steer this process, we notice that the really important topics are often discussed only indirectly or not at all. “As long as I do not hurt you, you will not hurt me” is the tacit motto that governs many of these procedures. What is missing is the passionate discussion of issues, not of persons. Dissent and constructive criticism are healthy. Some companies, such as the BMW Group, have even included dissent as a value in their guidelines. It must be permissible to struggle over certain key decisions. Authenticity in the discussion culture therefore represents a key factor. How often do you leave a meeting with the awkward feeling that the real roots of the problems have not been addressed? This is when a team needs to be honest with itself: are we really willing to confront the uncomfortable issues and people? Only addressing and resolving these deeper matters moves a decision-making process forward.

6.    Common Ground
Everyone looks after themselves first. Unfortunately, this is almost inevitable, since performance systems often only consider the contribution of individuals or individual functions. However, the individual performance of single people hardly ever advances a corporation. Team processes, on the other hand, take time. If done well, they create a strong identification with the organization and increase the motivation of employees. Team building does not happen accidentally. A short retreat – “we go to the countryside, hammer nails into planks and milk cows” – can trigger such a process. But what is the real value? The crucial question is: how important are culture and values to you?

7.    Consistency
Often, we hear that companies invest 100 euros in conception, ideas and strategy while spending only 10 euros on implementation planning and execution. We do not understand why this is the case. Peter Drucker’s saying still holds true: “Culture eats strategy for breakfast.” Paper is patient, as experience teaches us.

Success depends on consistent and stringent action – and action here means sustainably consistent behavior. A strategic project needs to be prioritized within a transparent project portfolio with clear targets, milestones and resources. This prevents the magical multiplication of “submarine” projects in the CEO’s office, i.e., opaque and costly ventures. In a world of limited resources, saying “yes” to a project therefore means saying “no” to many other things one would nevertheless like to do.

Our Conclusion:
There is no single recipe for successful decision-making. This is why we also discourage you from following quick and easy checklists (see also O. Sibony for McKinsey in Harvard Business Manager, September 2011).
It is possible, however, to make a decision that is right for oneself, even though the final result cannot be projected. Just take our Seven Essences of Decision-Making to heart. Your company and your employees will appreciate it.

Reinhard Vanhöfen
Vancore Group GmbH & Co. KG

 

How Powerful is Game Theory? Part 1 – A Satanic Game?

In his new book Ego. Das Spiel des Lebens (The Game of Life), Frank Schirrmacher, the famous German columnist and editor of the Frankfurter Allgemeine Zeitung, attributes both the collapse of communism and the behavior of humans in modern capitalism to a combination of game theory and advanced computing. According to Schirrmacher, game theory has turned humans into completely rational egoists, running the entire economy in IT-controlled financial markets.

While Ego is quite obviously meant to be an entertaining sensationalist story rather than a textbook, no business school lecture or book on strategic planning would be complete without a chapter on game theory. As early as their 1944 classic Theory of Games and Economic Behavior, the developers of game theory, mathematician John von Neumann and economist Oskar Morgenstern, pointed out the concept’s applicability to corporate strategic decisions.

To estimate the impact game theory can actually have in corporate strategy (to be discussed in part 2 of this article) or in the steering of whole economies, let us take a very brief look at what game theory actually does. This is hardly the place for a complete introduction to the rather large field of game theory. Therefore, we will simply recall some important aspects needed to outline the capabilities and limitations of the concept. To refresh your knowledge in more depth, there is a multitude of resources on the web, from articles and videos to presentations and whole books. The Wikipedia article is also a good starting point on the various types and applications of game theory for readers with some memory of the basics.

Game theory models a decision in the form of a game with clearly defined rules, which can be described mathematically. The games consist of a specified number of players (typically two in the games cited as examples) who have to make decisions (typically just one per game), and there are predefined payoffs for each player, which depend on the decisions of all players combined. Two-player games with only one decision can be described in the form of a payoff matrix, in which the columns describe the options of one player and the rows the options of the other. Each matrix field contains the payoffs for both players. Game theory then derives, for each player, the decision leading to the highest payoff. Commonly cited examples of such games are the prisoner’s dilemma, the chicken game or the battle of the sexes.
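
To make the payoff-matrix formalism concrete, here is a minimal sketch in Python for the prisoner’s dilemma with illustrative payoff numbers; it finds the pure-strategy equilibria by checking, for every combination of options, whether either player could gain by deviating alone.

    import numpy as np

    # Payoff matrices for a two-player, one-decision game (prisoner's dilemma).
    # Rows: player 1 cooperates / defects; columns: player 2 cooperates / defects.
    # The payoff numbers are illustrative assumptions.
    payoff1 = np.array([[-1, -3],
                        [ 0, -2]])
    payoff2 = np.array([[-1,  0],
                        [-3, -2]])

    def pure_nash_equilibria(p1, p2):
        """Option pairs from which neither player gains by deviating unilaterally."""
        equilibria = []
        for i in range(p1.shape[0]):
            for j in range(p1.shape[1]):
                if p1[i, j] >= p1[:, j].max() and p2[i, j] >= p2[i, :].max():
                    equilibria.append((i, j))
        return equilibria

    print(pure_nash_equilibria(payoff1, payoff2))   # [(1, 1)]: both players defect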

Various types of complexity can be added to such a game. There can be more players, who may or may not have previous knowledge of the others’ decisions. Players can have the aim of cooperating and achieving the highest total payoff, or they can compete and even try to harm each other. There can be different consecutive or simultaneous decisions to make, and the game can be played once or repeatedly. Payoff chances can be the same for each player or different; they can be partially unknown or depend on probability. For games with several rounds, complex strategies can be derived. If a strategy only specifies probabilities for each option, while the actual decisions are made randomly according to these probabilities, it is called a mixed strategy. With the complexity of the game, analytically optimizing strategies becomes increasingly difficult.

Based on these principles, can game theory really deliver what is attributed to it? How powerful is this tool? Starting with Schirrmacher’s book, can game theory be the decision machine for a whole economy he describes?

First of all, Schirrmacher ascribes the economic collapse of the Soviet Union and the whole communist bloc to the superior use of game theory by the US. That would, however, mean that the Soviet Union’s dwindling economic strength should have been caused by at least some kind of influence from a methodically acting outside competitor. In fact, the omnipresent problems of socialist economies around the world – misallocation of resources, inefficiency, lack of motivation, corruption and nepotism – come from within the system. Trade restrictions were limited to goods with a potential military significance, and at least the East German economy was even kept alive with credits from the West.

The only factor in which American influence really massively impacted the Soviet economy was the excessive transfer of resources to the military sector in the nuclear arms race. But did the United States really need intricate decision models to try to stay ahead technologically while maintaining at least roughly as many weapons as the potential enemy? Obviously not. Did it take game theory to understand the Soviet concept of outnumbering any opponent’s weapons by roughly a factor of three? That was simply the Red Army’s success formula from World War II and easily observable from the 1950s onward. Game theory has to quantify outcomes as payoffs, often in the form of money, or at least in terms of utility. Can such a model help to predict the secret decision processes, more often than not driven by personal motives, in the inner circles of the Soviet leadership? Does it contribute anything more valuable than the output of classical political and military intelligence? There is a reason why the military was much more interested in game theory as a tool for battlefield tactics than as an instrument of global strategy.

So if game theory contributed little or nothing to the end of communism, how about Schirrmacher’s second hypothesis? Has game theory turned our decision makers into greedy rational egoists ignoring all social responsibility? Indeed, game theory works for decisions to be made based on the payoff matrix, and in the simplest form, the payoffs just correspond to profits. Commentators point out that game theory can lead to cooperative as well as competitive strategies, but cooperative strategies will also be aiming at maximizing individual or shared payoffs.

The actual point is that game theory in no way implies that a decision maker must or even should aim to maximize profits (although if the decision maker is a manager paid by his company’s shareholders waiting for their dividends, there are good arguments that he should, with or without game theory). Game theory attempts to show a decision maker which strategy should maximize an abstract payoff. That payoff may be profit, or it may result from any other utility function. For military officers, the payoff may, for example, correspond to minimizing their own casualties or to the number of civilians evacuated from a danger zone. For a sales manager, it may be the number of products sold or customer satisfaction.

Even if game theory identifies a strategy as leading to the maximum payoff, that still does not mean the decision maker has to follow it. Even if the payoff is identical to profit, game theory can, for example, be used to estimate how much short-term profit must be sacrificed to follow a more socially accepted strategy.

In short, game theory is simply one of many decision support tools available to managers and as firmly or loosely linked to profit maximization as any other of these tools.

Where game theory does always rely on maximization of the assumed (monetary or other) payoff is in guessing the probable decisions of the other parties involved, be they competitors or cooperation partners. Without further information on the other parties’ intentions, game theory has to assume they will maximize their payoffs – otherwise there is no basis for any calculation. In a situation where everyone uses game theory, maximizing payoffs should therefore even help the competitors, because it makes one’s actions predictable. What that implies for the applicability of game theory in actual strategic planning will be discussed in part 2 of this article.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

The Role of Databases for Strategic Planning – Case Study in Palo/Jedox

After discussing the general capabilities and limitations of databases for strategic planning in The Role of Databases for Strategic Planning – Some General Remarks, we have looked at a case study in a relational database accessed through a special data mining tool: The Role of Databases for Strategic Planning – Case Study in Qlikview/Oracle. In this post, we will look at a case study that is very similar from the business perspective, but based on a different database concept. The database we will be looking at is Palo or Jedox (all product names mentioned are trademarks of their respective owners), accessed not through an external tool but through the database’s standard access mechanisms. Palo is an open-source OLAP database; an upgraded version under a commercial license is sold under the manufacturer’s name, Jedox. The case study was done using the latest open-source version, Palo 3.1.

Again, we are addressing the issue that a planning database will generally contain input from very different people in different parts of the company, and possibly even from outside partners. Large planning databases may contain some replies to “what-if” questions, but discussing and testing the implications of a large number of future developments with so many contributors will usually be impractical. In most cases, information contributors from local marketing or external market research companies will not have enough time to contribute to extended strategic work. Therefore, it will be necessary to use the multitude of planning figures in the database as the basis for calculating scenarios and strategic options that are defined and evaluated later, at the corporate strategy level.

As in the Oracle/Qlikview example, the case study involves a manufacturer of electronic components, systems and services, operating in different regions with a number of product lines. The existing planning database contains basic sales and cost figures for the company’s own products and only sales data for the main competitor products or local competitor groups. Planning figures cover a few years of back data plus four years of forecast. For the case study, we assume that all the data is stored in one database cube. That is not the most efficient way of storing the data, as zeros will be stored for competitor cost figures, but it is probably a realistic way such databases are set up, for reasons of simplicity. Considering the limited data volume of 60 products in 32 regions over seven years, memory will hardly be a serious concern in this case anyway. The dimensions of the case study cube are product, region, year and figure (the figures being units sold, net revenue, direct cost and overhead cost).

As we will be using a standard Palo user interface, it will be fairly simple to write data back to the database. Therefore, calculated simulation results can be stored in the database as well, in a different cube to keep them separate from the data accessed by other users. The new cube has the simulated business case as an additional dimension, the year dimension is extended by the extrapolation, and the figure dimension stores additional figures calculated in the course of the simulation. A plain database access screen therefore looks as follows (original forecast database on top, simulation database on the bottom):

The first question to answer in accessing data from a Palo/Jedox database is which interface platform to use. There are two main data interfaces provided: one is Palo Web, a browser-based data access and manipulation tool that also allows calculations and macros; the other comes in the form of plugins for either MS Excel or OpenOffice Calc. The plugins allow simple access to the data from both tables and macros, and the data can then be manipulated using the full functionality of the respective program. As OpenOffice is less common in companies, the respective plugin was not tested for this case study.

In determining which solution works best for the simulation, we have to keep in mind which tasks will have to be performed by the tool. The simulation has to access relatively large amounts of data simultaneously, then perform complex calculations based on interactive assumptions. Most of the calculations will have to be done in macros.

With the Palo Excel plugin, a set of almost identical database access functions can be performed either in table cells when a table is recalculated or directly from a macro. As many adjacent database entries can be accessed simultaneously from a table in an optimized way, this form of database access is much faster than access from a macro. In fact, reading the whole data volume characterized above into a table is a matter of seconds. Table recalculation has to be set to manual and managed by macros after that to keep the tool performing at reasonable speed, but that can be done quickly and almost invisibly to the user. Once the data is in Excel, the respective table can be copied to a Visual Basic array. The fast Visual Basic compiler, with reasonable editing and debugging support, allows the convenient development of all necessary macros. Running the extrapolation to the simulation timeframe or an interactive simulation in the described manner, writing the data back to Excel tables and updating the respective displays and graphs takes less than five seconds for our example and is thus easily fast enough for interactive work. Writing the calculated data back to the database takes a few minutes, as writing, in contrast to reading, has to be done cell by cell. This step can therefore not be part of the regular interactive work, but should rather be offered as a way of storing the results at the end of an interactive session.

Palo Web provides a table calculation tool similar to Excel or OpenOffice Calc in a browser window:

Palo Web files are stored on the server rather than locally, which may be interesting if they are to be accessed by several users. Cell manipulation and data visualization capabilities are quite similar to Excel and relatively easy to adjust to, but have their peculiarities and restrictions in some details. The database access functions themselves are practically identical to the ones provided by the standard software plugins. An important difference is the macro engine: Palo Web offers macros in the web programming language PHP. With its C-like syntax, PHP is relatively easy for an experienced programmer to adjust to, and it is well documented online. Remarkably, when comparing calculation times with Excel’s rather fast Visual Basic compiler, no significant differences were found. A major drawback of Palo Web’s macro capability, however, is the development environment. The editor provides at least some basic support, like automatic indentation of passages in curly brackets, but debugging is extremely inconvenient. For a developer experienced in Excel, designing the tool’s user interface will also take longer because of differences in the details. After the case study development ran into stability issues with the PHP macro engine when writing larger sets of data to either a table or a database, a clear preference was given to the Palo for Excel plugin.

The extrapolation of the forecast data read from the Palo database to the full simulation timeline allows assumptions to be selected in a way similar to the one described in the Qlikview/Oracle example. The ability to recalculate the extrapolation for single products rather than the whole market is only needed if the extrapolation assumptions are to be varied between products. If all products are to be extrapolated using the same assumptions, the calculation for the whole market is so fast that the user would gain no time by recalculating only one product. To account for product lifecycles, the default extrapolation is not linear but fits standardized lifecycle curves to the data. Using linear extrapolation instead may be reasonable for competitor data that covers whole product portfolios.
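
A minimal sketch of such a lifecycle-based extrapolation, assuming a Bass-shaped sales curve as the standardized lifecycle and a few invented planning figures, could look like this:

    import numpy as np
    from scipy.optimize import curve_fit

    def lifecycle(t, peak, p, q):
        """Per-period sales of a Bass-shaped lifecycle, scaled so its maximum equals 'peak'."""
        e = np.exp(-(p + q) * t)
        f = (p + q) ** 2 / p * e / (1 + (q / p) * e) ** 2   # Bass sales rate
        return peak * f / ((p + q) ** 2 / (4 * q))          # analytic maximum of f

    years = np.arange(7)                                        # back data plus forecast years
    sales = np.array([2.0, 5.0, 9.0, 12.0, 13.0, 12.0, 10.0])   # planning figures (invented)

    params, _ = curve_fit(lifecycle, years, sales, p0=[13, 0.05, 0.5],
                          bounds=([0.1, 1e-3, 1e-3], [100.0, 2.0, 2.0]))
    extended = lifecycle(np.arange(15), *params)                # extrapolation to the strategic horizon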

Each simulation calculates the combined effects of a selection of possible future scenarios and the company’s own strategic options. The possibility of combining scenarios is a deviation from classical scenario theory, possible because scenarios are treated as deviations from a baseline plan (the extrapolated numbers from the original planning database). A simulation with its selection of strategies and scenarios can also be stored as a business case. A business case stores the simulation results in the large extrapolation/simulation data cube and the selected scenarios and strategies with their properties in smaller cubes:

The planning tool is designed for a continuous strategy process, which is to be used for several years. Over the course of this time, the expectations for the future can change significantly. New scenarios can become thinkable, existing scenarios can be ruled out, and the expected results of scenarios can change. The scenarios and their effects are therefore variable and can be changed interactively. Scenario properties include a name, a verbal description and effects on sales potential, price, direct and overhead cost, to be set globally or for individual products, markets, countries or regions. Each scenario can store a combination of up to 10 effects.
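
A minimal sketch of this “deviation from the baseline” logic, with invented baseline figures and a single hypothetical scenario effect (a price-pressure scenario cutting net revenue by 5% from a given year onward):

    # Baseline: extrapolated net revenue per year for one product/region slice (invented numbers).
    baseline_revenue = {2019: 100.0, 2020: 110.0, 2021: 120.0, 2022: 128.0}

    def apply_effect(plan, factor, first_year):
        """Apply one multiplicative scenario effect from first_year onward."""
        return {year: value * (factor if year >= first_year else 1.0)
                for year, value in plan.items()}

    # Scenario "price pressure": -5% on revenue from 2020 onward; the effects of several
    # scenarios or strategies can be chained in the same way.
    scenario_case = apply_effect(baseline_revenue, factor=0.95, first_year=2020)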

The strategy definition is very similar to the scenarios. The difference is that strategies can only affect the company’s own products directly and will only have an indirect effect on competitor products through redistribution of market shares. Strategies can also include adding complete new lifecycles to account for product innovation or the acquisition of competitors.

Once a simulation has been calculated, various visualizations are possible and will be automatically generated in the tool. The simplest visualization is the timeline view for a selected figure in a selected product and region. This view allows a detailed look at all the information calculated in the simulation.

For strategic decisions, more aggregated views may be reasonable. Portfolios are aggregated displays that decision makers will be familiar with. Risk portfolios display the expected value vs. the associated risk for a figure. In this case, the associated risk can, for example, be defined by the spread of possible results over all scenarios for the selected strategy.
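
A minimal sketch of one way to compute such portfolio coordinates, assuming simulated results per scenario for two hypothetical strategies:

    import numpy as np

    # Simulated values of a figure (e.g. net revenue in the target year) per scenario,
    # for each strategy; all numbers are invented.
    results = {
        "expand capacity": [140.0, 95.0, 120.0, 110.0],
        "partner locally": [125.0, 105.0, 118.0, 112.0],
    }

    # Portfolio coordinates: expected value across scenarios vs. spread as a simple risk measure.
    portfolio = {
        strategy: (np.mean(values), np.max(values) - np.min(values))
        for strategy, values in results.items()
    }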

As already mentioned in the Qlikview case study, it must be kept in mind that the purpose of the strategic simulation is not to provide exact numbers on what sales will be in the year 2022 given a certain scenario, but rather to make it clear to what extent that value can vary over all included scenarios. No simulation can eliminate uncertainty, but a good simulation will make the implications of uncertainty more transparent.

If the data to be the basis for such simulations is stored in a Palo/Jedox database, it is reasonable to make this database a part of the simulation. In this case, separate cubes in the same database can be used to store simulation results and assumptions. In the case study, both Palo Web and Palo’s MS Excel plugin have been found to be usable interfaces to integrate the simulation tool into, but the plugin has turned out to be the slightly faster and significantly more stable solution, with advantages in development effort, as well.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH

The Role of Databases for Strategic Planning – Case Study in Qlikview/Oracle

In The Role of Databases for Strategic Planning – Some General Remarks, we have looked at the increasing role databases seem to have in strategic planning and the difficulties that arise in including future uncertainties in this kind of planning.

It appears quite impractical to get all the contributors to include possible future uncertainties in their input. First of all, they would have to generate massive amounts of data, costing them significant time alongside their everyday work, which is often in marketing, market research or sales rather than planning. Second, they will probably have very different ideas and approaches concerning what might change in the future, making the results difficult to interpret. In addition, database structures tend to resist change, so adjusting them to possible new developments someone foresees can be a lengthy and resource-intensive process.

Uncertainty-based planning can, however, be implemented by building a simulation on top of the existing, single-future planning from a database, making it possible to vary assumptions ex post and allowing the user to build different scenarios and strategies based on the information provided by all the different contributors to the database. The simulation should build on the infrastructure and user interfaces generally used to get information from the database, so the implementation will look quite different depending on the given framework.

For the case study, let us look at a manufacturer of electronics components, integrated systems of these components and services around these components for different industries. 32 sales offices define the regional structure, which is grouped in 2nd level and top level regions. There are 21 product lines with individual planning, and market research gathers and forecasts basic data for 39 competitor product lines or competitor groups. Product lines are grouped into segments and fields of business. Strategically relevant figures in the database are units sold, net revenue and, available only for own product lines, direct cost and overhead cost. There are a few years of historical data besides forecasts for the coming four years, which is sufficient for the operative planning the database was intended for, but rather short for strategic decisions like the building of new plants or the development or acquisition of new products. The strategically relevant total data volume therefore is small compared to what controlling tends to generate, but rather typical for strategy databases.

In this case, we will look at a relational (e.g. Oracle – all brands mentioned are trademarks of their respective owners) database accessed through the Qlikview business intelligence tool. As Qlikview defines the only access to the data generally used by the planner, the actual database behind it, and to a certain extent even its data model, will mostly be interchangeable. The basic timeline view of our case study database looks as follows:

To effectively simulate the effects of the company’s decisions under different future developments, we have to extrapolate the data to a strategic timescale and calculate the effects of different external scenarios and own strategies. Qlikview, however, is a data mining tool, not a simulation tool. Recent versions have extended its interactive capabilities, introducing and expanding the flexibility of input fields and variables, but generally, Qlikview has not been developed for complex interactive calculations. Future versions may move further in that direction, but the additional complexity needed to include a full-fledged simulation tool would be immense. To do such calculations, one has to rely on macros to create the functionality, but Qlikview macros are notoriously slow, their Visual Basic Script functionality is limited compared to actual Visual Basic, and the macro editing and debugging infrastructure is rather… let’s say, pedestrian. Some experts actually consider it bad practice to use macros in Qlikview at all.

The only way around this limitation is to move the actual calculations out of Qlikview. We use Qlikview to select the data relevant for the simulation (which it does very efficiently), export this data plus the simulation parameters as straight tables to MS Excel or Access, run the simulation there and reimport the results.
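In the case study, this external calculation is a VB macro in MS Excel or Access. The sketch below only illustrates the data flow of the roundtrip, using Python and CSV files with hypothetical names instead of the actual straight-table export.

import csv

# 1. Qlikview exports the selected data and the simulation parameters
#    as straight tables (file and column names are hypothetical).
with open("selected_data.csv", newline="") as f:
    rows = list(csv.DictReader(f))
with open("simulation_parameters.csv", newline="") as f:
    params = {r["name"]: float(r["value"]) for r in csv.DictReader(f)}

# 2. The external tool runs the actual simulation (here: a trivial placeholder).
growth = params.get("annual_growth_pct", 0.0)
for row in rows:
    row["simulated_revenue"] = float(row["net_revenue"]) * (1.0 + growth / 100.0)

# 3. The results are written back to a file for reimport into Qlikview.
with open("simulation_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)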

Now, why would one want to export and reimport data to do a calculation as a macro in Excel, in essentially the same language? VBScript in Qlikview is interpreted: one line of the macro is translated and executed, then the next line is translated and executed, using massive resources for translation, especially if the macro involves nested loops, as most simulations do extensively. VB in MS Office is compiled, which means that at the time of execution, the whole macro has already been translated. That makes it orders of magnitude faster. In fact, in our case study, the rather complex simulation calculations themselves consume the least amount of time. The slowest part of the tool functionality is the export from Qlikview, which is even slower than the reimport of the larger, extrapolated data tables. In total, the extrapolation of the whole dataset (which only has to be done once after a database update) takes well under a minute on a normal business notebook, which should be acceptable considering the database update itself from an external server may also take a moment. Changing extrapolations for single products or simulating a new set of strategies and scenarios is a matter of seconds, keeping calculation time well within a reasonable frame for interactive work.

In most cases, a strategic planner will want to work with the results of his interactive simulations locally. It is, however, also possible to write the simulation results back into equivalent structures in the Oracle database, another functionality Qlikview does not provide. In that case (as well as for very large amounts of data), the external MS Office tool invoked for the calculation is Access instead of Excel. Through the ODBC interface, Access (controlled in Visual Basic, started by Qlikview) can write data to the Oracle database, making simulation results accessible to the selected users.
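In the case study, this write-back is handled by Access and Visual Basic through ODBC. As a rough illustration of the same idea in Python, using the pyodbc package (the DSN, table and column names below are hypothetical):

import pyodbc

# Hypothetical DSN pointing to the Oracle database; table and columns are made up.
connection = pyodbc.connect("DSN=strategy_db;UID=planner;PWD=secret")
cursor = connection.cursor()

results = [
    ("Office Munich", "ProductLine A", 2020, "Base case / intrinsic growth", 3.4e6),
]

cursor.executemany(
    "INSERT INTO SIMULATION_RESULTS "
    "(SALES_OFFICE, PRODUCT_LINE, YEAR, BUSINESS_CASE, NET_REVENUE) "
    "VALUES (?, ?, ?, ?, ?)",
    results,
)
connection.commit()
connection.close()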

The market model used for the extrapolation and simulation in the case study is rather generic. Units sold are modeled based on a product line’s peak sales potential and a standardized product life cycle characteristic for the market. A price level figure connects the units to revenue; direct cost is extrapolated as a percentage of revenue and overhead cost as absolute numbers. The assumptions for the extrapolation of the different parameters can be set interactively. Market shares and contribution margins are calculated on the side, leading to the following view for the extrapolation:
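In addition to that interactive view, the core of such a generic market model fits into a few lines. The sketch below uses a made-up life cycle shape and purely illustrative parameter values:

# Generic market model sketch: units = peak sales potential * life cycle factor,
# revenue = units * price level, direct cost as % of revenue, overhead as absolute value.
# The life cycle shape and all numbers are illustrative assumptions.
life_cycle = {1: 0.2, 2: 0.5, 3: 0.8, 4: 1.0, 5: 0.9, 6: 0.7}  # share of peak by year on market

def extrapolate(peak_units, price, direct_cost_pct, overhead, years_on_market):
    plan = {}
    for year in years_on_market:
        units = peak_units * life_cycle.get(year, 0.0)
        revenue = units * price
        contribution = revenue * (1.0 - direct_cost_pct / 100.0) - overhead
        plan[year] = {"units": units, "revenue": revenue, "contribution": contribution}
    return plan

print(extrapolate(peak_units=50000, price=240.0, direct_cost_pct=45.0,
                  overhead=1.2e6, years_on_market=range(1, 7)))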

Of course, depending on the market, different and much more complex market models may be necessary, but the main difference will lie in the external calculations and will not affect the performance visible to the user. Distribution-driven markets can include factors like sales force or brand recognition, whereas innovation-driven markets can be segmented according to very specific product features. Generally, the market model should be consistent with the one marketing uses in shorter-term planning, but it can be simplified for the extrapolation to strategic timescales and for interactive simulation.

If a strategic simulation is developed for a specific, single decision, scenarios with very specific effects, including interactions between different scenarios and strategies, can be developed in the project team and coded into the tool. In the present case study, a planning tool for long-term use in corporate strategy, both scenario and strategy effects are defined on the level of the market model drivers and can be set interactively. Ten non-exclusive scenarios for future developments can be defined, and for each scenario, ten sets of effects can be defined for selected groups of regions and products. Scenarios affect both own and competitor products and can have effects on sales potential, price, direct and overhead cost.

Strategy definitions are very similar to scenario definitions, but strategies can only affect sales potential and price of own product lines. To account for the development or acquisition of new products, they can also add a life cycle effect, which can either expand the market or take market shares from selected or all competitors in the respective segment.

As opposed to classical scenario theory, the scenarios in this case study are non-exclusive, so the impacts of different scenarios can come together. Strategies can also be combined, so a strategy of intrinsic growth could be followed individually or backed up with acquisitions. A predefined combination of scenarios and strategies can be stored under a business case name for future reference.
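How such non-exclusive scenario and strategy effects can be combined on the level of the market model drivers may be sketched as follows; the driver names, selections and percentages are invented for illustration:

# Each scenario or strategy carries percentage effects on market model drivers
# for selected products/regions; all names and numbers are hypothetical.
scenarios = {
    "raw material shortage": {("ProductLine A", "sales_potential"): -5.0,
                              ("ProductLine A", "direct_cost"): 8.0},
    "new competitor entry":  {("ProductLine A", "price"): -4.0},
}
strategies = {
    "intrinsic growth": {("ProductLine A", "sales_potential"): 6.0},
    "acquisition":      {("ProductLine A", "sales_potential"): 15.0},
}

def combined_factor(driver_key, active_scenarios, active_strategies):
    """Multiply the effects of all active scenarios and strategies for one driver."""
    factor = 1.0
    for name in active_scenarios:
        factor *= 1.0 + scenarios[name].get(driver_key, 0.0) / 100.0
    for name in active_strategies:
        factor *= 1.0 + strategies[name].get(driver_key, 0.0) / 100.0
    return factor

# A stored "business case": a named combination of scenarios and strategies.
business_case = {"scenarios": ["raw material shortage", "new competitor entry"],
                 "strategies": ["intrinsic growth", "acquisition"]}
print(combined_factor(("ProductLine A", "sales_potential"),
                      business_case["scenarios"], business_case["strategies"]))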

Various visualizations of the evaluated data are automatically generated for interactive product and region selections using the Qlikview tools. In this case study, the linear timeline graph displays, for a selected figure, the baseline planning in comparison to the timeline under the current scenario and strategy settings. For certain figures, the maximum and minimum values over all scenarios for the selected strategy are also displayed.

Risk portfolios can display expected values vs. risk (variation across all scenarios) for a selected strategy. Qlikview makes creating these portfolios for selected products and regions, aggregated over adjustable parts of the timeline, very convenient. As desired, other types of portfolio views (e.g. market share vs. market size) can also be created.
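The numbers behind such a risk portfolio can be computed very simply; the sketch below uses the mean as a simple equal-weight stand-in for the expected value and the min–max spread over hypothetical scenario results for one strategy:

# Revenue for one strategy, aggregated over a selected part of the timeline,
# evaluated under each scenario combination (values are hypothetical).
scenario_results = {"base case": 104.0, "price pressure": 93.0,
                    "market upturn": 112.0, "shortage + upturn": 101.0}

values = list(scenario_results.values())
expected = sum(values) / len(values)   # x-axis: equal-weight stand-in for the expected value
spread = max(values) - min(values)     # y-axis: variation across scenarios

print(f"expected value: {expected:.1f}, variation across scenarios: {spread:.1f}")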

In all these cases, it must be kept in mind that the purpose of the strategic simulation is not to provide exact numbers on what sales will be in the year 2022 given a certain scenario, but rather to make it clear to what extent that value can vary over all included scenarios. No simulation can eliminate uncertainty, but a good simulation will make the implications of uncertainty more transparent.

Qlikview does not support these simulations directly, but using the workaround of external calculation, the user-friendly interface Qlikview provides allows convenient selection of values and assumptions as well as quick and appealing visualization of results.

In an upcoming case study, we will look at the same business background implemented in a very different database and interface environment.

Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH