The difference between knowledge and understanding


This answer is from a LinkedIn discussion forum. Judge for yourself, but I thought it was about as complete as it could get:

  • Knowing: having the ability to cite chapter and verse with reference to an issue / area / industry, enabling one to present oneself as something of a subject matter expert
  • Understanding: when the full depth and breadth of a given subject matter is inseparably interwoven / integrated within one’s core, resulting in seemingly instinctive thoughts and awareness

CRITICAL LOSS and FAILURE: refers to the impact, at personal and group level (of immeasurable scope), when an individual with true understanding is unable or unwilling to consistently translate that vast understanding into effective action.

Snake Oil Sales: refers to an individual without true understanding, yet portraying oneself as having a vast understanding of the subject matter.

*NOTE: Both Critical Loss and Snake Oil Sales inflict unequalled damage upon the greater group. Both equate to dysfunctional conditions and damage, and thus need to be identified and handled immediately by truly effective leadership.

We have all come across individuals, organisations and scenarios such as the above; if you have anything further to add, please feel free to comment.

3 Responses to The difference between knowledge and understanding

  1. Hi David, what you define as knowledge and understanding are both no more than information at different depths. Knowledge is strongly related to (emotional and personal) experience. It is the ability to recognize a pattern and then imagine actions that will change the situation towards a more desirable one. Some think that has to mean a detailed plan, and that is another fallacy. A detailed as-is analysis and detailed plans that won’t be changed are the reason for most failures. Big data analysis will make this worse, as past data do not allow one to predict the future. Regards, Max

  2. Thanks for your “take” Max. Of course this is a much bigger issue than is contained in this article but something that began with “In the beginning there was information…” may have been pretty scary!!!

    I have been pondering the deeper aspects, intermittently, for a while and have been working up to penning something a bit “deeper”. What do you reckon to the thought that, as we develop and adapt tools that enable us to recognise and decipher information, we “convert” it to data from which, through analysis and an appropriate “problem statement”, we can extract the basis of new knowledge?

    I agree re “big data”! The big fear is that we continue to attempt to predict based upon the past and, with this mindset, restrict ourselves to recognising (seen) rather than striving to identify (unseen) patterns.

    I’m sure it was Dave Snowden (Cynefin) who talked about knowledge being the means by which we inform and not a higher order of information:

    “Human knowledge requires contextual stimulation”

    David

    • Thanks for the reply. Obviously data is another dimension. Data is just numbers, and information is related data-sets in a contextual model. We do not decipher information. We create information by assuming working models and creating tools that gather data according to those models. Data gathering with larger and longer predictive power is only possible in a small segment of classical physics; above and below that scale it becomes really difficult. In quantum physics, the complete subject of ‘renormalization’ to get rid of infinities has to do with limiting the mathematical model to realistic values by ‘synchronizing’ them with values that have been measured, i.e. the electron’s energy. I won’t even go into the problem of gauge invariance, which is both a tool to verify data and a way to structure information. But it is a very valuable approach, even while we still don’t know if it is right.

      Predictability at the quantum mechanics level and at the complex systems level diminishes radically with time and distance. In both cases it is a probability of interaction (roughly 1/137 for the electromagnetic coupling between electrons and photons); for complex adaptive systems it is a probability distribution under the Gauss curve. So while there is a mathematical tool (a model) that works excellently and extremely precisely, such as QED, it fails to allow prediction of which photon will interact with which electron. Statistical accuracy in complex adaptive systems (CAS) predicts nothing except a global average, regardless of the model and measurement chosen. You get a Gauss distribution when the sampling is good. It still doesn’t predict anything.

      And here we enter another domain of this subject:

      Prediction in the long term is useless and impossible because of the multiplication of probabilities and the (usually unknown) large number of interacting entities. There is no simple cause-and-effect chain because of it. Matching actions with before-and-after patterns of information models can create a library of knowledge that allows small modifications. The smaller the patterns and the smaller the actions, the more accurate the knowledge will be. Individual agents in the system will do fine. The larger the data pattern and the action that is supposed to cause a distant-future desirable pattern, the more likely failure becomes.
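      The “multiplication of probabilities” above can be sketched numerically. This is a toy illustration with hypothetical numbers, not anything from the discussion: even when each link in a cause-and-effect chain is individually quite likely, the probability that the whole chain unfolds as planned decays exponentially with its length.

```python
def chain_probability(step_probability: float, steps: int) -> float:
    """Probability that every step in an independent cause-and-effect
    chain occurs as planned (assumes independence, a simplification)."""
    return step_probability ** steps

# Even if each step is 90% predictable, a 20-step chain is nearly hopeless.
for steps in (1, 5, 10, 20):
    print(steps, round(chain_probability(0.9, steps), 4))
```

      With 90% per-step reliability, twenty steps leave only about a 12% chance that the full chain plays out, which is the arithmetic behind preferring many small, verifiable modifications over one grand distant-future plan.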

      Any sequence of past patterns does not predict that they will reappear in exactly the same way. This depends on the level of abstraction: it is fairly easy to predict whether it will rain tomorrow in Vienna; exactly when and how much is impossible. Looking at complexity simply means that you make some model of a real-world construct and wonder how predictable it is. Reducing complexity may help to make it more predictable, but at the same time you are changing the system, which means you have to relearn all the patterns that may have worked in the past. You are entering a new world.

      It is far better not to agonise over complexity but to accept that we only have to modify our level of abstraction, learn how to make many small modifications when the patterns are right, and continue to verify at the higher abstractions whether the patterns are changing in a desirable way. Complex adaptive systems are assemblies of individually acting agents, and each agent’s actions influence the overall direction of pattern changes. As agents (much like quantum entities) are usually interconnected in many ways, they influence each other with some probability. Thus complexity is not bad; it is the driver of emergence and the enabler of resilience. Remove the complexity and the system becomes predictable, but it stops evolving, and the tension with the changing rest of the world will destroy it.
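      The claim that statistics predicts only a global average can be seen in a toy agent model (entirely my own sketch, not from the comment): thousands of independent agents each take random ±1 steps. The population mean stays tightly near zero, yet no individual agent’s position is predictable at all.

```python
import random

def simulate(n_agents: int, n_steps: int, seed: int = 42) -> list:
    """Each agent performs an independent random walk of +1/-1 steps;
    return the final position of every agent."""
    rng = random.Random(seed)
    return [sum(rng.choice((-1, 1)) for _ in range(n_steps))
            for _ in range(n_agents)]

positions = simulate(n_agents=10_000, n_steps=100)
mean = sum(positions) / len(positions)
print(f"population mean: {mean:.2f}")      # aggregate: tightly near zero
print(f"five sample agents: {positions[:5]}")  # individuals: scattered widely
```

      The aggregate is Gauss-distributed and its mean is stable across runs, but knowing that tells you nothing about where agent number 7 ends up, which is the sense in which a statistically excellent model still “predicts nothing” about individuals.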

      Our inability to predict is, in my humble opinion, the true understanding. Proclaiming predictability is either arrogance or, yes, snake oil sales.
