The German issue of The European ran an interview on game theory (and more) with mathematician-turned-economist Kenneth Binmore today. Large parts of it are taken from an English version that appeared recently. Frank Schirrmacher’s book, mentioned here in How Powerful is Game Theory? Part 1, is discussed, or rather refuted, with a little more attention paid to it in the German version. The interview goes way beyond business implications and addresses mostly social and philosophical aspects, but it should be interesting to read for anyone with an interest in decisions and their implications.
In part 1 of this article, we took a skeptical look at game theory and the claims made by Frank Schirrmacher in his book Ego. So if game theory is not a satanic game that destroyed the Soviet Union and makes our economies a playground for selfish rational crooks, if it is, as I concluded, only one of many decision support tools available to managers, is it even the powerful tool described in management textbooks?
To get a clearer picture of the actual value of the concept, we will take a closer look at the advantages claimed for game theory in strategic planning and at the drawbacks directly associated with them:
- Game Theory Studies Interactions with other Market Participants. In fact, game theory is the only commonly cited tool that explicitly studies not only the effect of other market participants on one’s own success, but also the impact one’s own decisions will have on others and on their decision-making. However,
Game Theory Focuses Unilaterally on These Interactions. In some special markets and for certain, usually very large, players, in-market interactions account for the majority of business uncertainty, but most of the factors a typical company is exposed to are not interactive. The development of the economy and political, social or technological changes are hardly influenced by any single company. If a tool focuses entirely on interactions and other important factors are hard to incorporate, those factors tend to be left out of the picture. With a highly specialized, sophisticated tool, there is always a danger of defining the problem to fit the tool.
- Game Theory is Logical. The entire approach of game theory is derived from very simple assumptions (players will attempt to maximize their payoff values, specific rules of the game) and simple logic. Solutions are reached mathematically, usually in an analytical way, but for more complex problems, numerical approximations can be calculated, as well. But
Game Theory is also Simplistic. Most of the problems for which analytical solutions are generally known are extremely simple and have little in common with real-life planning problems. Over the years, game theory has been extended to handle more complex problems, but in many cases even formulating a problem in a suitable way means leaving out most of the truly interesting questions. In most cases, information is far more imperfect than the usual approach to imperfect information (probabilistic payoff values) suggests. Most real problems are neither entirely single-shot nor entirely repetitive, and often it is not just the payoff matrices but even the rules that are unclear. Any aspect that doesn’t fit the logic of player interaction either has to be investigated beforehand and accounted for in the payoff matrix, or has to be set aside and remembered in the discussion of results. The analytical nature of game theory makes it difficult to integrate with other planning concepts, even other quantitative ones.
- Game Theory Leads to Systematic Recommendations. Game theory is not just an analytical tool meant to better understand a problem – it actually answers questions and recommends a course to follow. On the other hand,
These Recommendations are Inflexible. If a tool delivers a systematic recommendation derived from a complex calculation, that recommendation tends to take on a life of its own, separating from the many assumptions that went into it. However, whichever planning tool is employed, the results will only be as good as the input. If there are doubts about these assumptions, and in most cases of serious planning there will be, sensitivities to minor changes in the payoff matrices are still fairly easy to calculate, but testing the sensitivity to even a minor change in the rules means the whole game has to be solved again, every time.
- Game Theory Leads to a Rational Decision. Once the payoff values have been defined, game theory is incorruptible: insensitive to individual agendas, company politics or personal vanity. Although including irrational, emotional factors in a decision can help account for influences that are difficult to quantify, like labor relations or public sentiment, being able to get a purely rational view is a value in and of itself. The drawback is,
Game Theory Assumes Everybody Else to be Rational, as Well. Worse than that, it assumes that everybody will do what we consider rational for them. While some extensions of game theory are meant to account for certain types of irrationality in other players, the whole idea really depends on at least being able to determine how others will deviate from this expectation.
These factors significantly limit the applicability of game theory as a decision tool in everyday strategic planning. Why, then, is it taught so widely in business schools? And why are many books on game theory extremely worthwhile reading material?
- Game Theory Points in a Direction often Neglected. There are not that many other concepts around to handle interdependencies of different market participants. Just like having no other tool than a hammer makes many problems look like nails, not having a hammer at all tends to cause nails to be overlooked. Many dilemmas and paradoxes hidden in in-market interactions have only been studied because of game theory and will only be recognized and taken into account by knowing about them from game theory, even if the textbook solutions are hardly ever applicable to real life.
- Game Theory Helps to Structure Interdependencies. Although the analytical solution may not lead to the ultimate strategy, even without seeking an analytical solution at all, trying to derive payoff matrices leads to insights about the market. Systematically analyzing what each player’s options are and how they affect each other is a useful step in many strategic processes, even if other factors are considered more influential and other tools are employed.
- Game Theory Shows how the Seemingly Irrational may be Reasonable. Game theory shows how even very simple, well-structured games can lead to very complex solutions, sometimes solutions that look completely unreasonable at first sight. This helps in understanding how decisions by other market participants that appear absurd may actually hide a method behind the madness.
In short, while game theory probably doesn’t provide all the answers in most business decisions, it certainly helps to ask some important questions. Even if it is not the most adequate everyday planning tool, it is a good starting point for thinking, which is not to be underestimated.
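To illustrate just how simple the games with generally known analytical solutions are, here is a classic prisoner’s dilemma solved by plain enumeration of pure-strategy Nash equilibria. The payoff numbers and code are my own sketch, not taken from any case discussed above:

```python
# A deliberately simple illustration: a prisoner's dilemma solved by
# enumerating pure-strategy Nash equilibria. Payoff numbers are chosen
# for this sketch only. Strategies: 0 = cooperate, 1 = defect.
# Payoffs are (row player, column player).
PAYOFFS = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # both defect
}

def pure_nash_equilibria(payoffs):
    """Return all strategy pairs from which no player gains by deviating alone."""
    equilibria = []
    for (row, col), (p_row, p_col) in sorted(payoffs.items()):
        row_cannot_improve = all(p_row >= payoffs[(r, col)][0] for r in (0, 1))
        col_cannot_improve = all(p_col >= payoffs[(row, c)][1] for c in (0, 1))
        if row_cannot_improve and col_cannot_improve:
            equilibria.append((row, col))
    return equilibria

print(pure_nash_equilibria(PAYOFFS))  # [(1, 1)]: mutual defection
```

The only equilibrium is mutual defection, even though both players would be better off cooperating: a tiny example of a solution that looks unreasonable at first sight but has a method behind the madness.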
Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH
There are some interesting thoughts on risk culture in an article posted by the folks from McKinsey & Company: Managing the people side of risk. Rather than addressing the planning perspective, the article focuses on the challenges uncertainty poses for a company’s organization and culture. That is interesting, considering that planning alone will not ensure that a company is actually able to respond to uncertain developments. Three requirements for a company’s culture are pointed out:
- The fact that uncertainty exists must be acknowledged
- It must be possible and actively encouraged to talk about uncertainty
- Uncertainty must be taken seriously, and respective guidelines must be followed
While the article stays mostly on the surface and never indicates that there also is a strategic perspective to the issue, it hints at some anonymized real-life cases and contains an important beyond-strategy viewpoint.
The authors use the term risk instead of uncertainty, following the colloquial sense of associating risk with a negative impact rather than a quantifiable probability.
Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH
One of the interesting things about working in the field of business strategy is that the most inspiring thoughts relevant for one’s work are usually not found in textbooks. Dealing with spaceflight, science fiction and fringe technology, the io9 portal is about as geeky as they come and can hardly be considered typical manager’s reading. It does, however, contain fascinating ideas like this one: How Bayes’ Rule Can Make You A Better Thinker. Being a better thinker is always useful in strategy, and the article points out a number of interesting thoughts about how what we believe to be true is always uncertain to a degree and about the fallacies resulting from that. Unfortunately, it just very briefly skips over what Bayes’ Theorem actually says, although that’s really not that difficult to understand and can be quite useful, for example in dealing with things like warning signs or early indicators.
Obviously, Bayes’ Theorem as a mathematical equation is based on probabilities, and we have pointed out before that in management, as in many other aspects of real life, most probabilities we deal with can only be guessed. We’re not even good at guessing them. Still, the theorem has relevance even on the level of rough estimates that we have in most strategic questions, and in cases like our following example, they can fairly reasonably be inferred from previous market developments. Bayes’ Theorem can actually save you from spending money on strategy consultants like me, which should be sufficient to get most corporate employees’ attention.
So, here is the equation in its simplest form, and the only equation you will see in this post, with P(A) being the probability that A is true and P(A | B) being the probability that A is true under the assumption that B is true. In case you actually want more math, the Wikipedia article is a good starting point. In case that’s already plenty of math, hang in there; the example is just around the corner. And in case you don’t want a business strategy example at all, the very same reasoning applies to testing for rare medical conditions. This equation really fits a lot of cases where we use one factor to predict another:

P(A | B) = P(B | A) × P(A) / P(B)
What does that mean? Suppose we have an investment to make, maybe a product that we want to introduce into the market. If we’re in the pharmaceutical business, it may be a drug we’re developing; it can also be a consumer product, an industrial service, a financial security we’re buying or real estate we want to develop. It may even be your personal retirement plan. The regular homework has been done, ideally, you have developed and evaluated scenarios, and assuming the usual ups and downs, the investment looks profitable.
However, we know that our investment has a small but relevant risk of catastrophic failure. Our company may be sued; the product may have to be taken off the market; your retirement plan may be based on Lehman Brothers certificates. Based on historical market data or any other educated way of guessing, we estimate that probability to be on the order of 5%.
That is not a pleasant risk to have in the back of your head, but help is on the way, or so it seems. There will be warning signs, and we can invest time and money in a careful analysis of these warning signs, for example do a further clinical trial with the drug, hire a legal expert, market researcher or management consultant, and that analysis will predict the catastrophic failure with an accuracy of 95%. A rare event with a 5% probability being predicted with 95% accuracy means that the remaining risk of running into that failure unprepared is, theoretically, only 0.25% (the 5% failure probability times the 5% chance of a missed warning), or, at any rate, very, very small. The analysis of warning signs will predict a failure in 25% of all cases, so there will be false warnings, but they are acceptable given the (almost) certainty gained, right? Well, not necessarily.
If we have plenty of alternatives and are just using our analysis to weed out the most dangerous ones, that should do the job. However, if the only option is to do or not do the investment and our analysis predicts a failure, what have we really learned? Here is where Bayes’ Theorem comes in. The probability of failure, P(A), is 5%, and the probability of the analysis giving a warning, P(B), is 25%. If a failure is going to occur, the probability of the analysis predicting it correctly, P(B | A), is 95%. Enter the numbers – if the analysis leads to a warning of failure, the probability of that failure actually occurring, P(A | B), is still only 19%. So, remembering that all our probabilities are, ultimately, guesses, all we now know is that the negative result of our careful and potentially costly analysis has changed the risk of failure from “small but significant” to “still quite small”.
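The arithmetic is short enough to sketch in a few lines of Python (the variable names are mine; the probabilities are the ones from the example):

```python
# Bayes' Theorem with the numbers from the example.
p_failure = 0.05                 # P(A): prior risk of catastrophic failure
p_warning = 0.25                 # P(B): probability that the analysis warns
p_warning_given_failure = 0.95   # P(B | A): chance a real failure is preceded by a warning

# P(A | B) = P(B | A) * P(A) / P(B)
p_failure_given_warning = p_warning_given_failure * p_failure / p_warning
print(f"Risk of failure given a warning: {p_failure_given_warning:.0%}")  # 19%
```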
If the investment in question is the only potential new product in our pipeline, or the one piece of real estate we are offered, or the one pension plan our company will subsidize, such a result of our analysis will probably not change our decision at all. We will merely feel more or less wary about the investment depending on the outcome of the analysis, which raises the question: is the analysis worth doing at all? As a strategy consultant, I am fortunate to be able to say that companies will generally spend only about a per mille of an investment amount on consultants for the strategic evaluation of that investment. Considering that a careful strategic analysis should give you insights beyond just the stop or go of an investment, that is probably still money well spent. On the other hand, additional clinical trials, prototypes or test markets beyond what has to be done anyway, just to follow up on a perceived risk, can cost serious money. So, unless there are other (for example ethical) reasons to try to be prepared, it is well worth asking whether such an analysis will really change your decisions.
Of course, if the indicators analyzed are more closely related to the output parameters and if the probability of a warning is closer to the actual risk to be predicted, the numbers can look much more favorable for an analysis of warning signs. Still, before doing an analysis, it should always be verified that the output will actually make a difference for the decisions to be made, and even if the precise probabilities are unknown, at least the idea of Bayes’ Theorem can help to do that. Never forget the false positives.
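To make that check concrete, one can look at how the posterior risk depends on how specific the warning is. Only the 25% warning rate comes from the example above; the other rates are purely illustrative assumptions of mine:

```python
# Posterior failure risk for different warning rates P(B). The prior risk (5%)
# and accuracy (95%) are kept from the example; the alternative warning rates
# are illustrative assumptions.
def posterior(prior, accuracy, warning_rate):
    """P(A | B) = P(B | A) * P(A) / P(B)."""
    return accuracy * prior / warning_rate

for p_warning in (0.25, 0.10, 0.06):
    risk = posterior(prior=0.05, accuracy=0.95, warning_rate=p_warning)
    print(f"P(B) = {p_warning:.0%}  ->  P(A | B) = {risk:.0%}")
```

The closer the warning rate gets to the 4.75% of warnings that actually precede failures, the more a warning really tells us: at a 6% warning rate, the posterior risk is already close to 80%.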
Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH
The May Issue of the German Harvard Business Manager shows the results of a survey among corporate leaders on the importance of various risk factors (Die Sorgen der Konzernlenker, page 22). The survey was done at the World Economic Forum in Davos and asked about the estimated probability and expected effects for 50 predefined global risks. In the article, the results are plotted as the number of answers color coded in 5×5 grids (probability horizontal, effects vertical).
The striking result is not so much the five main problems identified in the article as the similarity of the distributions of answers, particularly on the probability axis. Whereas the effects of some risks are actually (although not dramatically) seen as larger or smaller than those of others, the probability estimates show a strong tendency towards the center of the scale. In other words, the 469 participating global leaders consider all but a handful of the 50 global risks about equally probable. In fact, there are very similar distributions of probability estimates for things that are already happening, like the proliferation of weapons of mass destruction, deeply rooted organized crime or the rise of chronic diseases (the inevitable consequence of rising life expectancy), versus largely imaginary risks like the vulnerability to geomagnetic storms or unwanted results of nanotechnology.
On the one hand, this shows a general problem with the use of standardized scales in surveys. There is, of course, a tendency towards the middle of the scale, and when asked for numbers, different participants might assign very different values to a “medium” probability. Even the same participant might think of very different numbers for a “medium” probability depending on the category the risk is assigned to.
But the strange probability distributions also show something else: We are simply not very good at estimating the probability of future events, especially if they are rare or even unprecedented. In fact, the only future events that we can assign a probability to with a certain degree of confidence are events that recur quite regularly and where there is no reason to expect the systematics to change. And, of course, any complicated probability calculations, most notably the value-at-risk approach commonly used in risk management, are voodoo if they hide the fact that the input probabilities are wild guesses.
Dr. Holm Gero Hümmler
Uncertainty Managers Consulting GmbH