srijansanket

In pursuit of truth, rationality and happiness

We all like to think that we make logical choices when it really matters, but reality disagrees. In this post, we will explore the biases in human thinking, why we have them, how our biases are exploited, and what all of this means for us.

You wake up in the morning, grab your phone for your daily dose of social media, and there it is again… another post that makes no sense. You go to the office and see a colleague making a mistake that should never have seen the light of day. You see people defending social segregation and hierarchies and wonder how some people's reality can be so different from yours.


You attribute it to a thousand different reasons, from ineptness to social cliques to outright malice. But often, even the best of us make seemingly out-of-character mistakes when confronted with a difficult decision, or a situation in which we are highly invested.



But, I am a rational person…


We tend to think of information as a sharp divide between facts and opinions. It seems reasonable that if we are dealing with facts, then all of us should come to the same conclusions from those facts. In the domain of strictly deductive logic, that does hold true. However, we do not always think deductively.


Humans create their own subjective reality. A lot of our judgement is based on our experiences, our academic learning, what we have been told by authority figures, and what we think is acceptable to society. Even identical experiences are internalized differently by people, based on their pre-existing beliefs.


Our thinking is “coloured” by our biases, and we can achieve “real” rationality only when we are aware of these biases and take steps to counter them.


Before we dive deeper into why and how we get biased, let’s play a simple game to illustrate the assertion above:


Excuse me, what the fish?


The above game should have illustrated a few things:

  • The same input (like the image in game 4 earlier) can result in different interpretations. Some of you would have seen the old couple first. Others might have focused on the musicians. Some might even have noticed the lady at the gate first

  • We are creatures of habit. Familiar things, things we can repeat while half awake, become unfamiliar and difficult to process if we deviate from the norm

  • Even an incomplete skeleton can help us reconstruct the original intention based on patterns we are used to


Humans fundamentally learn by identifying patterns in sensory data. It is the reason why familiar things trigger old memories, why some languages are easier to learn than others, why a lot of our morality is based on anthropocentric (humans are the most important) and anthropomorphic (we give human-like attributes to non-human organisms and objects) considerations, and why we get better at certain things with practice.


We are very good at pattern recognition too. As proof, humans can read what’s written below, but machines apparently can’t.


However, the same experiences and patterns are interpreted differently by different people, because none of us see the full picture. We all take bits and pieces of an experience and try to make sense of them, like in the parable of the blind men and the elephant.



That doesn’t sound right… If I didn't notice something, it probably wasn’t very important


A lot of times, you don’t get a choice. We are simply not physically built to be perfect learners.

The journey from reality to learning is a lossy process, and learning is the cornerstone of building rationality.


Culprit 1: Limited information

The amount of sensory data we get from our surroundings is too much for us to fully process. Therefore, the brain ignores a large amount of that data and creates shortcuts to group similar things together while learning.


Just think of the different kinds of dogs you know about and how different they look, and yet we can still identify them all as dogs. If you see a picture of a dog breed you have never seen before, it is very likely that you will still be able to identify it as a dog.

In effect, we have distilled the essence of “what is a dog” so that we can apply it successfully to a wide variety of cases.


Obviously, what you fail to notice, you will fail to act upon.


Culprit 2: Limited processing power

Our brain cannot process all the available information efficiently and focuses on some parts of the information more than others. Which part of the information you retain changes with who you are, why you are processing the information, and the shock/recall value of the data.


In fact, excessive information can paradoxically hamper decisions by making you reluctant to dive deep, sometimes leading you to choose a sub-optimal, less well-understood option just because it seems simpler.


Culprit 3: Limited time

There are many ways we can overcome the first two limitations. We can repeat an experience to get better and gain familiarity. We can spend more time analysing the data to avoid missing things. We can offload some processing (depending on the nature of that processing) to other people and/or machines.


However, the total time we have for learning, for reinforcement, and for our actions is still limited. Our endeavours are therefore time-bound. We allocate our time across the multitude of options we have based on the expected value we will get out of each (note that the “value” here need not be tangible).


Doing more of one thing, or doing something better, may therefore mean doing less of another, and this trade-off is something that will always exist.



The above limitations are encompassed in the idea of bounded rationality: we can only make rational choices within the imperfect bounds of our decision-making capabilities. Hence, we usually aim to satisfice (make a decision that is “good enough”) rather than optimize for the absolute best-case scenario.
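To make the distinction concrete, here is a toy sketch in Python (the options, scores, and “good enough” threshold are all invented for illustration). An optimizer pays the evaluation cost for every option; a satisficer stops at the first acceptable one.

```python
# Toy contrast between optimizing and satisficing.
# Assume evaluating an option is expensive (research, comparisons, trials...).
def evaluate(option):
    return option["value"]  # stand-in for a costly evaluation

options = [{"name": f"plan {i}", "value": v}
           for i, v in enumerate([55, 72, 64, 91, 70, 88])]

# Optimizing: evaluate everything, then pick the absolute best.
best = max(options, key=evaluate)  # 6 evaluations -> plan 3 (value 91)

# Satisficing: accept the first option that clears a "good enough" bar.
GOOD_ENOUGH = 70
chosen = next(o for o in options if evaluate(o) >= GOOD_ENOUGH)
# 2 evaluations -> plan 1 (value 72)

print(best["name"], chosen["name"])  # plan 3, plan 1
```

The satisficer gives up some value (72 vs 91) in exchange for a fraction of the evaluation cost, which is exactly the trade-off that bounded rationality forces on us.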



I understand. I’ll be extra careful when I make big decisions. I know it won’t be perfect, but…


It is not just about the big decisions. Big incidents usually play the role of the straw that breaks the camel's back, forcing us to re-evaluate what we have learnt. However, small decisions and interactions, repeated relearning of ideas, and the reward/punishment cycle from the small risks we take daily are the major contributors to building and reinforcing our belief systems. You are more likely to lose your confidence from being told for years that you are inept than from one disastrous mistake you once made.


Time and repetition strengthen a belief and bring it closer to us, until it transforms from an active thought into an instinctive, axiomatic (something we take to be true without proof) value.

The big problem is that we hardly ever question our internalized values. We are happy as long as the “facts” we see in the world align with them. However, if we encounter a fact that contradicts our own beliefs, we tend to reject the fact instead of rejecting the value. The more important the belief, the stronger the rejection.


This rejection also happens when our actions contradict our beliefs (ask any Karen). We often end up in denial, and the fallout usually arrives as a change in behaviour or as aggression.


This mental conflict is termed cognitive dissonance.


Moreover, apart from individual decision making, how we take in and process information also influences how we build communities and interact with other stakeholders. The rise of social media and recommendation algorithms, which show us more of what we want to see, has already created echo chambers and cliques, amplifying the dangers of not questioning the information that we get and what we end up concluding because of it.
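The reinforcing loop behind those echo chambers is easy to demonstrate. Below is a toy, self-reinforcing recommender in Python (the topics are hypothetical): items are shown in proportion to past clicks, and every click makes the item more likely to be shown again.

```python
# Toy "show more of what was clicked" loop. Early random clicks snowball.
import random
from collections import Counter

random.seed(1)
topics = ["politics", "sports", "science", "music", "cooking"]
affinity = Counter({t: 1 for t in topics})  # start with no real preference

for _ in range(200):
    # Pick a topic to show, with probability proportional to past clicks.
    r = random.uniform(0, sum(affinity.values()))
    for topic, weight in affinity.items():
        r -= weight
        if r <= 0:
            break
    affinity[topic] += 1  # the shown item gets clicked and reinforced

print(affinity.most_common())
# A couple of topics typically end up dominating the feed: the echo chamber
# emerges from the feedback loop alone, without any malicious intent.
```

This is a rich-get-richer process: whichever topics win the first few coin flips get shown more, clicked more, and shown even more.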


This has profound real-world implications for fields like politics, equality, and justice, in addition to those conventionally linked with rational decision making (like stock markets and economics). Let’s discuss these briefly below.


Political campaigns made easy - The Cambridge Analytica scandal

Cambridge Analytica was a British political consulting firm that harvested millions of users’ personal data via Facebook and built their psychological profiles from a series of questions. This data (and the users’ friend networks) was used in electoral campaigns in the US, including Donald Trump’s presidential campaign.


The firm was also allegedly involved in the Brexit campaign [Cambridge Analytica: ex-director says firm pitched detailed strategy to Leave.EU]


The pitch was built on targeting specific users with a combination of negative posts and reports about competing candidates, and promoting posts that showed the client candidates in line with views the users already held. The result, by general consensus, was two of the most surprising electoral upsets in recent times.


Whether or not these upsets were due to this targeting, we could be looking at a way to systematically use fake news, targeted posts, and community biases to achieve personal ends. A method to influence millions of people at once, in a way they might not even notice, is arguably more terrifying than weapons of war.


All humans are equal, but some humans are more equal than others

If we are already more likely to believe new ideas that align well with our existing thoughts, think about what happens when dozens of other people echo similar sentiments.

In the benign case, this leads to the formation of interest groups and fraternities. In a much worse scenario, it leads to fanaticism and the formation of dangerous cults.


After all, we don’t want to acknowledge that we have been wrong for so long. Therefore, even when the facts point against our beliefs, we just need an excuse not to believe them, even if that excuse is a rumour, fake news, or an attempt to discredit the facts. If that doesn’t work, a diversion to another topic is also welcome.


This is exactly how news about racism, casteism, social inequality, police brutality, and the subversion of laws by people in power gets countered. When people have doubts, they are less likely to do drastic things that challenge the status quo, like protesting and marching for a cause.


With repeated echoing of the same sentiments, the people involved develop blind spots to the obvious. Ideas that are clearly discriminatory or dangerous are reinterpreted as a much-needed change, an unfortunate compromise, or an honest mistake. After all, “how can this many people be wrong, and all of them good people?”


“There is no such thing as an impartial jury because there are no impartial people”

We already saw some of the challenges we face when making a non-emotional, seemingly logical decision. But a lot of the time, we choose not to operate in that thinking mode and prefer to let our emotions flow. Most often, this happens when we see something that hits close to our core values, the values we hold as sacred and fundamental.


Let’s consider a news piece:

A husband murdered his wife? How brutal, give the highest punishment.

What? He did so because the wife cheated on him and was planning to run off with his money. So sad, maybe he deserves a pardon.

Wait, she cheated because he neglected and abused her? The animal deserves the harshest punishment.


Reading each line can drastically change your outlook on the case and how you feel about giving the accused a tough punishment. If you managed not to change your judgement at each line, great job! In a lot of cases though, people judge based on how the first reports made them feel.


With news sensationalization becoming the norm rather than the exception, this also results in cases where an initial accusation can condemn someone as a social pariah and presumed criminal even before the facts are presented. On the other hand, guilty people with a manufactured sob story can gain public support and get away scot-free or with reduced charges. This is even more pertinent in systems where a public jury decides guilt.



Uh oh, that doesn’t sound good... What can I do to avoid being blindsided?


There are a lot of cognitive biases. Here we mention some of the major ones that are either very strong or very easy to fall prey to. Some of these descriptions are taken from Wikipedia; links to the relevant pages are at the end of the post.

Hindsight bias or “I knew it all along”

This is the inclination to see past events as clear and predictable when analyzing them later. That clarity gives us the illusion that earlier events were more predictable than they actually were, and that we could have played a larger or different role than we did if we had just looked closer.


Self serving bias or “I am successful because I worked hard. I failed because I was unlucky”

Humans have a tendency to claim more responsibility for their successes than for their failures. This bias also manifests as a tendency to ignore issues and causes we subconsciously do not support, rationalized as not having the power to do anything worthwhile. It can also show up as interpreting ambiguous data as more beneficial to ourselves than it actually is.


Affinity bias or “birds of a feather, flock together”

The tendency to be biased toward people like ourselves. On a larger scale, it culminates in the formation of cliques and echo chambers.


Fundamental attribution error (FAE) or correspondence bias or “they behaved like that because that is how they are”

People tend to overemphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior.

Edward E. Jones and Victor A. Harris' (1967) study illustrates the FAE. Despite being told that the stance of the target's speech (pro-Castro or anti-Castro) had been assigned to the writer, participants ignored these situational pressures and attributed pro-Castro attitudes to the writer's own viewpoint when the speech argued for them.


Confirmation bias or “see, 5 of the 20 data points support what I say, hence I am correct”

The proclivity to search for or interpret information in a way that confirms one's preconceptions. In addition, individuals may discredit information that does not support their views. This differs from self-serving bias: confirmation bias is about searching for the patterns we want to see (and finding them) in actual data, rather than misinterpreting the data to seem more favourable.


Recency bias or “look at the last 5 microseconds, the trend has changed”

Recency bias is a skew towards short-term thinking. Impacted individuals put more emphasis on recent events and give less weight to those that have happened in the past.


Anchoring Bias or “the price mentioned was 50, how can I bargain it down to 30, 49 is good”

We set the first piece of information we see as a baseline and make incremental changes around that baseline, instead of evaluating things from fundamentals. This shows up in shopping, negotiations, corporate process improvements, and many other cases. Often, the reason consultants apparently succeed where the client failed is simply first-principles thinking.


Information bias or “look at the 200 page analysis I did about a situation I can’t control”

The illusion of control is so important to us that we often seek information about situations even when our decisions and control are restricted and limited.


Survivorship Bias or “all entrepreneurs start from a garage and become billionaires”

This is a form of selection bias in which we concentrate on the people or things that made it past some selection process and overlook those that did not, typically because of their lack of visibility.


Bizarreness effect or “Did you hear about that weird thing that happened?”

People remember outlandish and weird material better than the usual boring stuff.


Dunning–Kruger effect or “Nobody knows more about X than me, maybe in the history of the world”

Does this one really need an explanation??


If humans are so easily hoodwinked, let’s offload this to the machines!


Machines are not free from human biases either. Keep in mind that programs are written by humans, and to keep their systems manageable, these programmers have to make assumptions. We therefore bake some of our biases into the systems as well.


Here’s an easy assumption: people necessarily have a last/family name. The author of this article is an exception, with 2 given names and no last name (yep, it is blank on the passport). None of the 10000 systems I have registered with agree on that point though.


[Here is a nice article listing other assumptions we have about names. If you want examples for this list, you can find them here]
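To see how such an assumption calcifies into software, here is a minimal sketch of a hypothetical registration validator in Python (the function and messages are invented for illustration):

```python
# A hypothetical form validator with the "everyone has a last name"
# assumption baked in as a hard requirement.
def validate_registration(first_name: str, last_name: str) -> list:
    errors = []
    if not first_name.strip():
        errors.append("First name is required")
    if not last_name.strip():
        errors.append("Last name is required")  # <- the baked-in bias
    return errors

print(validate_registration("Srijan", "Sanket"))   # [] -> accepted
print(validate_registration("Srijan Sanket", ""))  # rejected, though valid
```

One innocuous-looking `if` statement, and every person without a family name is locked out of the system.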


This gets even more complicated when we talk about machine learning and deep learning systems. These systems are not explicitly instructed on how to solve problems; they are given what to solve and when to stop (i.e., when the solution is good enough). They then calibrate themselves on the training data to achieve the “best fit”.


This immediately identifies a few areas of concern. Are we solving for the right thing? How do we define “good enough”? What if the training data is itself biased?
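The third concern is the easiest to demonstrate. Below is a toy sketch (pure Python, with invented neighbourhoods and rates) of how skewed data collection alone can make a model “learn” a bias that does not exist in reality:

```python
import random

random.seed(42)

# Two neighbourhoods with the SAME true offence rate (10%), but "A" is
# patrolled 4x as heavily, so offences there are recorded far more often.
TRUE_OFFENCE_RATE = 0.10
RECORDING_CHANCE = {"A": 0.8, "B": 0.2}

def make_record(neighbourhood):
    offended = random.random() < TRUE_OFFENCE_RATE
    recorded = offended and random.random() < RECORDING_CHANCE[neighbourhood]
    return neighbourhood, recorded

training_data = [make_record(n) for n in "AB" * 5000]

# "Train" the simplest possible model: recorded-offence rate per neighbourhood.
model = {}
for n in ("A", "B"):
    records = [recorded for hood, recorded in training_data if hood == n]
    model[n] = sum(records) / len(records)

print(model)  # roughly {'A': 0.08, 'B': 0.02}
# The model "predicts" that residents of A are ~4x more likely to offend,
# even though the true rates are identical. It has learnt the patrol
# pattern, not criminality.
```

Any model trained on this data, however sophisticated, inherits the skew in how the data was collected.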


Indeed, it was noticed that many facial recognition systems were designed primarily for white faces and performed noticeably worse for people of colour. Multiple police departments are experimenting with AI (artificial intelligence) to profile people's likelihood of engaging in crime. Surprise surprise, that likelihood is based on arrest data which is disproportionately skewed towards minorities.


If the training data is not curated properly, these systems will, on their own, transform correlation into causation in an opaque manner. Sometimes, those correlations are representative of the very issues in the system that we are trying to fix by introducing a non-human arbiter.


Got it. Careful thinking good, heuristics bad. Wait, where does happiness come in?


Heuristics are not necessarily bad. Allowing cognitive biases enables faster decisions, which can be desirable when timeliness is more valuable than accuracy. What we must take care of is not being blindsided or manipulated through one of the traits we are susceptible to.


Meanwhile, being rational is often confused with being cold and unfeeling, all machine-like and unemotional. That is not strictly true. Most of us try to maximize our happiness, and, us being who we are, our happiness is tied up with our emotions and their associated outcomes.


Sometimes, taking a tangible loss can still result in an intangible gain and make us happy (hint: sending gifts and helping people at a cost to ourselves). Defining what is rational must therefore take our emotions into consideration; it cannot be a blind pursuit.


Life is not an optimization problem. As long as we don’t leave regrets, a choice of “good enough” can be the best choice :)



I am tired from all this reading, lemme go now.


Okay, but only because you asked nicely!


Here are a few places to get started if you want to explore more about cognitive biases and rationality.

  1. List of cognitive biases - Wikipedia link

  2. More on pattern recognition - Wikipedia link

  3. Subitizing - Wikipedia link

  4. What was I thinking - A nice article published in the New Yorker in 2008

  5. Thinking, Fast and Slow - a book by Nobel prize winner Daniel Kahneman, whose work on the irrationality of humans paved a lot of new paths in behavioural economics

  6. An MIT Technology Review article on bias in Artificial Intelligence


PS: Wikipedia citations are a great rabbit-hole to dive into, to reach relevant papers & articles :)



