Over a year into the pandemic, a lack of resources in science communication, the exploitation of poor epistemics and questionable global governance – of vaccine distribution, for instance – are still resulting in thousands of avoidable deaths per day. Even in western democracies, politicians still struggle to understand the role of aerosol-based transmission and, with it, the vital improvements that better ventilation could easily deliver. Meanwhile, scepticism around vaccines – whilst it may be receding – remains long-standing collateral damage of the disorder in our information landscape, the so-called infodemic.
Limits of rationality
Of all human traits, rationality is arguably the one we cherish most, because we consider it a defining distinction between us and other animals. Unfortunately, this also comes with a recurrent overconfidence in our ability to be rational, leading us to trust our intuition, believe our gut feelings and listen to common sense – all of which can work against rationality. In addition, nobody is born with rationality or inherent logic. It is therefore up to society, through the accumulation of knowledge, to endow individuals with the ability to think objectively. As such, the future of humanity’s capacity for collective problem-solving requires scaling up the number of citizens well-equipped with external thinking strategies, including logic and the scientific method.
A possible first step could be to correct the populist impression that our technologically driven era is too advanced for ‘non-productive’ armchair philosophy. After all, the modern computer era was not instigated by engineers trying to build a gadget, but rather by a group of philosophers who were literally thinking about thinking. It was the foundational crisis in logic in the late 19th century that led a series of philosophers and mathematicians to question the very act of “processing information”. In doing so, they found useful loopholes in logic and set the right questions, to which Kurt Gödel, Alan Turing, Alonzo Church and others would bring answers – answers that are the precursors of much that we hold today, in the form of laptops and smartphones1.
Deduction vs. induction
Another useful step might be to stress how logic and the scientific method, while still an endless work in progress, can be viewed through two of their most important components: deduction and induction. Simply put, deduction is “top-down” logic: how to infer a conclusion from a general principle or law. This covers launching a space rocket, curing a well-known disease, or applying a law in court. Induction, by contrast, is “bottom-up” logic: how to infer, based on observations, the laws that explain how those observations happen. This may be describing the laws of gravity, discovering the cure for a new disease or defining a law for society to abide by – all of which require an inductive mindset.
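To make the contrast concrete, here is a minimal sketch – an illustration of mine, not anything prescribed in the text – using the gravity example: deduction applies a known law to predict an observation, while induction recovers the law from noisy observations.

```python
import random

g = 9.81  # the "law": gravitational acceleration in m/s^2

# Deduction ("top-down"): from the known law, predict an observation.
def fall_distance(t: float) -> float:
    return 0.5 * g * t ** 2  # d = g * t^2 / 2

# Induction ("bottom-up"): from noisy measurements, infer the law.
observations = [(t, 0.5 * 9.81 * t ** 2 + random.gauss(0, 0.05))
                for t in (0.5, 1.0, 1.5, 2.0)]

# Least-squares estimate of g for the model d = (g/2) * t^2.
g_hat = (sum(2 * d * t ** 2 for t, d in observations)
         / sum(t ** 4 for t, _ in observations))

print(f"deduced distance at t = 2 s: {fall_distance(2.0):.2f} m")
print(f"induced estimate of g: {g_hat:.2f} m/s^2")
```

The deductive half is a one-line application of the rule; the inductive half must weigh noisy evidence, which is exactly where the probabilistic machinery discussed below comes in.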
Deductive logic was historically the first to be established through algorithms. While today this term is primarily associated with technology, it should be stressed that it was originally derived from the name of a thinker, al-Khwarizmi. He was mostly trying to help lawyers by writing step-by-step rules that they could apply to reach comparable results2. Far from being a tool to render the decision-making process obscure, algorithms, like written law, were historically a tool for transparency. We feel safer if we know we will be judged according to a well-defined rule or law, rather than according to the fluctuating mood of an autocrat.
Inductive processes are harder than deductive ones. Even though medieval thinkers such as Ibn al-Haytham (Alhazen), Jabir ibn Hayyan (Geber) and, of course, Galileo left early traces of progress in formalising the scientific method used today, we still do not have a widely adopted algorithm for induction as we do for deduction. Important attempts to provide algorithms for induction were made by Bayes and Laplace3. The latter even produced an important, yet highly overlooked, “Philosophical Essay on Probabilities”, decades after formalising the laws of probability (in the form of a course given at the then nascent École Normale and École Polytechnique). Reading Laplace’s essay today, one finds pioneering ideas about what can go wrong with induction – something modern cognitive psychologists refer to as cognitive biases.
The problem with deduction
Once we look at the details, many cognitive biases amount to an excessive use of a deductive mindset in situations where an inductive one is more appropriate. The most common is confirmation bias: our brain would rather seek facts that confirm the hypothesis it already holds than expend the mental effort to challenge it. There is also the other, less common, extreme of excessive relativism, where we refuse any causal interpretation, even when the data justify one explanation far better than the existing alternatives.
To compensate for the weaknesses of the human mind, scientists devised heuristics to better perform induction: hypothesis testing, controlled experiments, randomised trials, modern statistics and so on. Bayes and Laplace went even further and gave us an algorithm to perform induction – Bayes’ equation. It can be used to show that first-order logic, where statements are either true or false, is a special case of the laws of probability, where useful room is left for uncertainty. Whilst the language of deduction mostly answers questions starting with “why” with a pre-defined “because”, rigorous induction requires a more probabilistic analysis that adds a “how much” to weigh each possible cause.
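In symbols – this is the standard statement of the rule, added here as an illustration – Bayes’ equation weighs a hypothesis H against data D:

\[
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
\]

The special case is then easy to see: a hypothesis assigned a prior of exactly 0 or 1, as in true/false logic, cannot be moved by any amount of data, since the posterior stays at 0 or 1. Induction operates in the space in between, where the posterior P(H | D) is precisely the “how much”.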
Philosopher Daniel Dennett4 describes some of our greatest scientific and philosophical revolutions as “strange inversions of reasoning”. Darwin inverted the intuitive logic by showing that complex beings (e.g., humans) do not need an even more complex creator in order to emerge. Turing showed that complex information processing does not require the agent performing it (i.e., the computer) to be aware of anything beyond simple mechanical logical instructions. I would like to argue that what Dennett calls strange inversions of reasoning are historical moves from a deductive (and somewhat creationist) framework to an inductive one. The more complex the problem, the less useful a “why” is and the more a “how much” is needed.
Induction as a societal tool
While scientists were busy devising logic and the scientific method over the past millennia, the larger part of society realised the limits of the deductive mindset that comes with either autocracy, where a monarch sets the rule, or theocracy, where God, often a comfortable shield for the monarch, sets the rule. This led to the progressive development of democracy, where the aggregation of opinions helps society perform a better and more robust collective induction and, in principle, establish more effective rules. Yet democracy rests on the hope that a significant fraction of society is well-informed and acting in its own interest.
Today, this assumption is under greater threat than ever before. For the first time in human history, we are producing information dissemination tools that have the broadcast power of the most dystopian propaganda machine yet the fine-grained personalisation of individual door-to-door campaigning – for better or for worse. The digital tools we enjoy today are largely the outcome of automating deduction (programming), which happened mostly during the past century. As we enter a new phase of automation, this time data-driven, it is important to stress that, beyond the gadgets and the technology, we are trying to automate induction – and, while doing so, to better understand what induction is and how to do it right.
Keeping this in mind when we design our courses on data science, or when we communicate the advances of artificial intelligence to the public, might help produce a new generation of citizens who are not only able to build or use these tools, but also to join the larger conversation on the future of reasoning. A conversation in which induction, deduction, society’s diet of information and appropriate collective decision-making are empowered, not corrupted, by the very digital tools that were invented as mere side products of the human endeavour – our endeavour to understand and automate what we cherish the most: our ability to think.