“A great many people think they are thinking when they are merely rearranging their prejudices.”
– William James
While this was originally intended as nothing more than a brief cogitation on a multitude of concepts and notions that impressed themselves upon my mind over the past twelve months or so, I quickly realized that these deliberations had long festered in my subconscious, and that now was as good a time as any to elaborate on them for (hopefully) the benefit of others as well.
Learning To Think, Not Compute
The word “knowledge” is related to the Greek “gnosis”, while “compute” comes from the Latin “computare”. One implies “to think”, the other “to calculate”. Often, and unfortunately, the two are as separate in practice as they are linguistically.
Mathematicians, statisticians and other practitioners of numerically engaged professions (investors, economists, analysts, etc.) like to compute, but engage in little if any skeptical thought about their own disciplines or anything too closely related to them, while philosophers and academics like to think, but do very little to test the practical applicability of their thought experiments. Logically, neither is ideal.
For brevity, we will deal with the former rather than the latter in this particular case. Empirical data on reasoning suggests that people are usually rational in principle, but err in practice. This may have some implications for my inference above; however, something of far greater importance here, I believe, comes from Jean Piaget’s Theory of Cognitive Development, which describes how reasoning develops within us from infancy to adulthood through a sequence of stages.
Neo-Piagetian thought holds that increasing self-awareness (among other factors) is critical to cognitive development. John Locke’s 1689 An Essay Concerning Human Understanding elaborates on this by arguing that “personal identity depends on consciousness, not on substance, nor on the soul”. The individuals who would benefit most from such introspection (at least in my personal experience) are doubtless investors and those in closely related professions, which suffer from a critical mass of “empty suits” owing to an overdeveloped sense of what Albert Bandura called self-efficacy in his social cognitive theory.
This overly cultivated self-efficacy, properly defined as one’s belief in one’s ability to succeed in specific situations, coupled with Locke’s definition of a madman as someone who “reasons rightly from false premises”, has contributed much to our erroneous thinking through the centuries.
Admittedly, I, like countless others, have fallen for such logical fallacies in the past. But there is hope after all: becoming an empirical skeptic about not only one’s own self-efficacy but especially theories, formulas and dogmas in general allows for much clearer thought and reasoning, and for the suspension of belief necessary to avert similar pitfalls in the future.
We (The Human Species) Cannot Predict
To illustrate the truth of this statement, I point to a Canadian Taxpayer Federation survey conducted in late 1997. It found that participants, on average, mailed in their tax forms about a week later than they initially presumed, despite having no misconceptions about how long the forms had taken them to mail in the past. Their underlying reasoning was simply that they could (and would) get it done more quickly next time around.
Such results illustrate what Daniel Kahneman and Amos Tversky came to call the “Planning Fallacy”. The term arose from research the pair conducted in the late 1970s, which uncovered the tendency of people and organizations to underestimate how much time they need to complete a given task, even when past experience with similar tasks clearly indicates time overruns.
A more recent scholarly paper, Expert Judgements: Financial Analysts Versus Weather Forecasters, published in the Journal of Psychology & Financial Markets by Tadeusz Tyszka & Piotr Zielonka in 2002, also found an overconfidence bias among a group of analysts asked to predict events, while showing that this same group attached less importance to probability.
This is merely a small snippet of the abundant empirical evidence indicating that individuals simply do not possess the wherewithal or the tools to predict events, even those seemingly falling within a reasonable measure of their personal control, as in the first example given, let alone events in the near future (days and weeks). It is simply too tough.
This leads us directly to the very irksome condition prevailing today across the financial/economic/political complex: forecasts, projections and estimates ad infinitum. Given all the empirical data and prognosticators’ own dreary history with such attempts (to say nothing of the untold harm such presumptions have caused), we nonetheless completely discount the past and remain irrationally upbeat about our predictive capabilities. It is at such times that I sincerely doubt that humans are indeed the most intelligent of all the mammals ;)
Very clearly, mathematical constructs such as “R-Squared” and others can get one into heaps of trouble if taken too seriously. What is the prescription for our mental block in this matter? Common sense, or rather uncommon sense: acting under the presumption of uncertain knowledge, as J.M. Keynes long ago prescribed, and becoming more cognizant of our predicament by knowing that we don’t know.
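As a toy illustration (not from the original essay, and with made-up data), a short Python sketch shows how a seemingly impressive R-squared can emerge from a regression between two entirely independent random walks, the classic “spurious regression” caution against taking the statistic at face value:

```python
import numpy as np

# Two *independent* random walks: any fitted relationship between them is spurious.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(500))
y = np.cumsum(rng.standard_normal(500))

# Ordinary least squares of y on x, then the R-squared of that fit.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r_squared = 1 - residuals.var() / y.var()

print(f"R-squared between two unrelated series: {r_squared:.2f}")
```

Because trending series drift together by chance, this number is frequently large even though the two series share no causal link, which is precisely why the statistic deserves skepticism rather than reverence.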
Ambiguity & Backfire Effects
Ambiguity, by definition, implies a lack of information. The ambiguity effect denotes a form of cognitive bias whereby decision-making is directly affected by this lack of information: conclusions are reached in favor of the outcome containing the greatest amount of information over an outcome where the end is inherently unknown.
The effect was elucidated by the famous (or rather infamous) Daniel Ellsberg in early 1961, in a paper for the Quarterly Journal of Economics entitled Risk, Ambiguity, and the Savage Axioms.
The most plausible explanation for the effect came from Prof. Jonathan Baron and Deborah Frisch in their 1988 paper Ambiguity & Rationality, published in the Journal of Behavioral Decision Making. Their research suggested that people have a heuristic tendency to avoid options where information is missing or incomplete. This doubtless carries enormous consequences insofar as capital allocation is concerned.
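Ellsberg’s classic setup makes the bias concrete. The sketch below (my own illustration, with hypothetical numbers) compares betting on red from an urn with a known 50/50 mix against an urn whose composition is unknown, modeled here with a uniform prior over possible mixes. A simulation shows the two bets have the same expected payoff, yet experimental subjects overwhelmingly prefer the known urn:

```python
import random

random.seed(42)
PAYOFF, TRIALS = 100, 100_000   # hypothetical $100 payoff for drawing red

# Urn A: known mix, exactly half the balls are red.
def draw_urn_a():
    return random.random() < 0.5

# Urn B: unknown mix; our ignorance is modeled as a uniform prior
# over every possible proportion of red balls.
def draw_urn_b():
    p_red = random.random()
    return random.random() < p_red

ev_a = sum(draw_urn_a() for _ in range(TRIALS)) * PAYOFF / TRIALS
ev_b = sum(draw_urn_b() for _ in range(TRIALS)) * PAYOFF / TRIALS
print(f"Expected payoff, urn A: ${ev_a:.2f}  urn B: ${ev_b:.2f}")
```

Both averages converge to roughly $50, so the widespread preference for urn A cannot be justified by expected value; it is the missing information itself that repels us.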
If we are in fact as infatuated with precision (or what seems like precision) as these studies depict, this would certainly explain today’s proliferation of complex, Greek-lettered algebraic formulas throughout finance (via academia), deployed to justify otherwise irrational decisions.
Note that valuation is inherently approximate, not precise; many attempts to obfuscate this matter have necessarily ended in disaster.
In this context, it is appropriate to end with the Backfire Effect. The term was coined by Brendan Nyhan and Jason Reifler, but the description can be traced back to Prof. David Redlawsk’s research on motivated reasoning.
The effect explains how some individuals (indeed, most individuals), when confronted with evidence that conflicts with their beliefs, come to hold their original position even more staunchly. Hence the term “digging in one’s heels”, which prevails in business.
Once again, the cause of this effect in individuals can be traced directly back to heuristics. Heuristics themselves are nothing more than a loosely assembled store of collective information, readily accessible and drawn upon unconsciously to ease the process of decision-making in everyday life.
These techniques may be more readily recognized by their nicknames, including rules of thumb, educated guesses and intuitive judgement.
Empirical data on motivated reasoning, and on the small sub-category of it mentioned here, would certainly go toward explaining why “intellectual frauds” (to borrow a term from Nassim Nicholas Taleb ;) such as modern portfolio theory and beta as a measure of risk continue to be propagated by academia in spite of profound real-life findings to their detriment. This is a prime example of “digging in one’s heels” in action.
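For readers unfamiliar with the statistic under critique, beta is simply the covariance of an asset’s returns with the market’s, divided by the market’s variance. The short sketch below (my own illustration, using simulated returns with an assumed “true” beta of 1.2) shows the computation, and incidentally why critics object: beta captures only co-movement with the market, not the many other ways capital can be lost:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily returns: one year of market data, plus an asset
# constructed to co-move with the market (true beta ~ 1.2) plus noise.
market = rng.normal(0.0005, 0.01, 252)
asset = 1.2 * market + rng.normal(0.0, 0.015, 252)

# Beta: covariance of asset and market returns over market variance.
beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
print(f"Estimated beta: {beta:.2f}")
```

The estimate lands near 1.2 here by construction; with real data, the number is a noisy summary of past co-movement, which is a narrow thing to call “risk”.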
Simply knowing that we are susceptible to this belief-reinforcing effect, which breeds biases and generally degrades our thinking, is enough to make us counter-argue our positions more often, thereby putting us on the path to clearer, more accurate thought, from which even the most learned among us could benefit.