Welcome back! So far in this series of articles, I have pondered over people going through experiences and observing. But observing a lot of data does not necessarily lead to learning. In actuality, it depends on what you want to do with that data, with that knowledge. When we see a pattern in the data, we can generalize and form a theory. On my way to work, I observe that on the days I get stopped at the first traffic signal, I often get stopped at most of the following signals. So I arrive at various conclusions: most probably the signals are badly synchronized, or this route has too many short stretches that make it impossible to catch the next green signal, and so on.
All of us have some ability to discover such trends and analyze the data to arrive at conclusions. This can be called 'cognitive ability'. Although natural cognitive ability differs from person to person, most of us have adequate ability to meet normal work-life demands. Even then, we repeatedly hear comments about poor problem-solving ability, wrong conclusions and the inability to assess trends. The question that follows is: what blocks our ability to comprehend data, analyze it and generalize?
The reasons behind these lie within the data itself. In my last article, I shared with you how participants view the data. Let us now move a step further. The participants now have to first find the right data and then discover the causal connection. However, simulations, like real life, pose some problems. In some cases, there is an excess of data; this is a problem in real life too. Just recall the following story from the Mahabharat.
Guru Dronacharya tried to test whether his students had learnt archery. He put out a pan with still water in it. Right above the pan was a rotating target in the form of a small bird. Each student had to look into the water and aim at the eye of the bird. As the students came forward to attempt this one by one, Dronacharya asked them what they could see. Most of them gave an elaborate list of what they saw in the water. It was only Arjun who focused only on the target and, soon, only on the eye. Similarly, when a lot of data is available in front of managers, they have to ignore the unnecessary data. In fact, some smart and cunning subordinates exploit this inability, creating a 'fog' of data so that the real bad news stays hidden.
Unfortunately, while too much data is a challenge, a lack of data is also a challenge. This, too, is a real-life situation. All of us feel insecure when we do not have enough data. All of us are scared these days because no one is sharing any concrete data about how COVID-19 will unfold. If there were data, such as 'these epidemics last for n months', we would feel more secure. In such cases, we have to make an 'educated guess', and for that, prior experience or knowledge is useful. But sometimes we have less data because we do not understand the value of the data. In our business simulation, one has to pay (virtually) for some reports. The participants try to save this cost, and later repent.
Additionally, there is ambiguity in data, and sometimes 'false' data. For these issues, one needs to develop critical thinking: forming a theory, checking whether the data points match it, and asking questions when they do not align. However, most often, we tend to believe data from certain sources, in certain formats, and so on. The sources may be falsifying data on purpose, or, given their restricted view, the data they share may be incomplete or false. Managers in particular have to deal with this; they have to recognize the symptoms of falsified data.
Another challenge with data is drawing a conclusion from very few data points. I have worked with HR managers who used surveys to analyze employee engagement or for 360-degree assessments, and I often struggled to explain to them that they should not conclude from very few data points; it is quite likely that such a conclusion will go wrong. We form such opinions all the time. But what if the sample itself is very small? Then you must defer the conclusion, and, even if you do conclude, always review your assumption as more experience is gathered.
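To make this concrete, here is a minimal sketch in Python using made-up numbers (not from any real survey): it simulates a hypothetical engagement survey whose true average score is 3.5 out of 5, and shows how an average estimated from a tiny sample can swing far from that value, while a larger sample settles down.

```python
# Hypothetical illustration only: tiny samples give unstable averages.
import random

random.seed(42)
TRUE_MEAN = 3.5  # assumed "true" company-wide engagement score (1-5 scale)

def simulated_response():
    # One hypothetical survey response, clipped to the 1-5 scale.
    return min(5.0, max(1.0, random.gauss(TRUE_MEAN, 1.0)))

for sample_size in (3, 10, 100, 1000):
    responses = [simulated_response() for _ in range(sample_size)]
    estimate = sum(responses) / len(responses)
    print(f"n = {sample_size:4d}: estimated average engagement = {estimate:.2f}")

# With n = 3 the estimate can land well away from 3.5; only as the sample
# grows does it settle near the true value -- hence the advice to defer
# conclusions drawn from very few data points.
```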
The challenge is that people do not realize these pitfalls in data. They trust whatever is in front of them and try to use it (or get overwhelmed by it). Analyzing such data requires a skill that can be acquired only with practice: the investment of effort and the use of the right tools.
So far, we have looked at the pitfalls in 'measurable' data points. We also observe many things that are abstract; they, too, are 'data points'. Now, let us look at the pitfalls in using the available data.
The first step is to analyze the data. Assuming you have chosen the right data, i.e. good data adequate to form a conclusion, the real challenge is to assess a causal connection and then generalize. Assessing the cause behind the data requires a deeper understanding of the links between phenomenon and result. There can be multiple trends that look similar to the trend we are analyzing yet lack a causal connection. The rooster crows in the morning, but it is not the reason for the sunrise; quite likely it is not even crowing because of the sunrise, it just wants to invite the hen. When we analyze engagement data, performance data or data about various behaviors, we have to arrive at a proper causal connection. People often form the wrong connection because they have not developed the simple techniques of organizing data, asking 'why', looking for influencing factors and being ready to challenge the prevailing assumption. People tend to accept a particular theory and then try to align their data analysis with it. Simply put, either a wrong theory is formed or a wrong theory is endorsed. And if the theory is proposed by someone in authority, it is rarely questioned.
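As an illustration, here is a minimal sketch in Python with hypothetical numbers (not from the article): two observed variables, rooster crowing and sky brightness, are both driven by a hidden common factor, the time remaining before dawn, so they correlate strongly even though neither causes the other.

```python
# Hypothetical illustration only: correlation without causation.
import random
from statistics import correlation  # available in Python 3.10+

random.seed(0)

# Hidden common driver: minutes remaining until dawn (made-up numbers).
minutes_to_dawn = [random.uniform(0, 120) for _ in range(200)]

# Both observed variables respond to the hidden driver, not to each other.
rooster_crows_per_hour = [max(0.0, 10 - 0.08 * m + random.gauss(0, 0.5))
                          for m in minutes_to_dawn]
sky_brightness = [max(0.0, 1.0 - m / 120 + random.gauss(0, 0.05))
                  for m in minutes_to_dawn]

r = correlation(rooster_crows_per_hour, sky_brightness)
print(f"correlation = {r:.2f}")  # strongly positive, yet no causal link
```

The high correlation here would tempt a naive analyst to say the crowing brightens the sky; asking 'why' and looking for the influencing factor (time of day) exposes the error.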
As you form the theory, you also need to consider the fact that your causal connection could be wrong. It is, thus, important to get it reviewed. It could be a formal review, or it could be a review by sounding out your peers. In any case, it is essential to share the theory and get comments. The scientific community shares its findings, which are then tested by many other researchers. As more and more researchers contribute their findings, the nuances of the theory get clarified and it becomes a rich and useful learning resource. However, all of us suffer from a fear of failure, and this fear makes us reluctant to seek feedback. If you want to train your subordinates, insist on regular reviews, so that they can learn through generalization. Our mistakes also help us learn; they bring out perspectives that we might not have considered. Unfortunately, we are victims of a culture where mistakes are considered 'horrible'. Consequently, there is such a taboo around sharing mistakes that a big opportunity to learn is lost.
Just as we could start the learning cycle through observation, by discussing someone else's experience, we could also start it by picking a theory that already exists, learning more about it through study and beginning to use it. This is where knowledge of such theories is critical. The theory could come from well-known books, empirical studies, documented experience and so on. The learning cycle will continue if we use this theory to address a real-life problem, and this usage will require experimentation or innovation. So, in my next article, we will explore why people don't learn through experimentation.
How do you like these articles? Do write to me with your opinions and suggestions.