An Analyst Toolkit: From Big Data to Big Knowledge

Data and knowledge

I recently read that big data is to organizations what sex is to teenagers: everyone talks about it, but few are actually doing it. My experience, however, has taught me otherwise: many public- and private-sector organizations implement big data tools to improve their value chains, increase engagement with clients and stakeholders, improve HR management, and enhance strategic decision-making.

What few organizations are actually doing, however, is utilizing “big knowledge”: the aggregation of insights generated by a constellation of analysts — and the next “big thing” after the mastery of big data.

Big knowledge sometimes starts with big data. There is no doubt that big data tools can sift through huge quantities of information and surface what would otherwise go unnoticed. But big data is merely one of many starting points for disparate groups, or even masses, of analysts to generate the insights that are later turned into big knowledge.

What separates big knowledge from big data is the challenge of making sense not just of an influx of information, but of a flood of insights coming from many different analytic sources. Organizations utilizing big knowledge thus face the following questions: Are we fully utilizing the insights generated, or are some of them getting lost along the way? And can we even generate true knowledge out of this aggregation of insights?

As if these questions were not complicated enough, there is another set of variables to take into consideration: the identity of those creating the insights and the context of their generation. Unlike “ordinary” data (which deals with objective facts), insights are interpretations of reality and are therefore subjective in nature. It is thus important to contextualize the aggregation of preliminary insights so that new layers of analysis can be built upon them. Indeed, ignoring the basic conditions under which insights were initially generated may replicate biases, flawed reasoning or simple mistakes.
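The article does not prescribe a format for this contextual metadata, so the following is only a minimal sketch, in Python, of what a contextualized insight could look like; every field name here is a hypothetical choice for illustration, not a documented schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Insight:
    """One analyst's interpretation, stored with the context it was produced in."""
    text: str                                    # the insight itself
    analyst: str                                 # who generated it
    created_at: datetime                         # when it was generated
    location: str                                # where the analyst sits
    discipline: str                              # professional background / field
    sources: list = field(default_factory=list)  # data the insight rests on

# Two insights drawn from the same source but generated in different contexts
# (all values are invented for illustration):
a = Insight("Supply risk in region X is rising", "analyst_1",
            datetime(2017, 3, 1), "London", "economics", ["report_42"])
b = Insight("Supply risk in region X is stable", "analyst_2",
            datetime(2017, 3, 2), "Singapore", "logistics", ["report_42"])

# The apparent contradiction between a and b only becomes interpretable once
# the context shows it may stem from different disciplines, not different data.
```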

Understanding the context of insight generation is especially important when analysts examine similar or interconnected issues while dispersed across locations or fields. Organizations facing this challenge include intelligence agencies, large consulting firms, research firms, risk assessment departments of large multinational corporations and many more. Such entities need to consistently generate new and impactful insights that transcend the mundane. They must also constantly examine the broader relevance of knowledge generated for a specific purpose. Doing this manually, through human analytic capacity alone, is simply too labor-intensive, not to mention constrained by the limits of human cognition.

So how does one effectively digest analytic products produced in multiple contexts? How does one identify the blind spots, contradictions or patterns that may emerge? In what structured way can one “connect the dots” between multitudes of analytic products to produce new sets of insights? And how can one assess whether the context in which an analysis was conducted (e.g., time, location, culture, professional background) influenced its conclusions?

This is where big knowledge comes into play: an analytic toolkit that enhances cognitive capabilities rather than making them redundant. Even as artificial intelligence and big data tools mature, there will be an extended (and perhaps indefinite) period in which human-in-the-loop analytics are needed to pick up where software falls short. The most interesting future use cases lie in illuminating unknown or hidden connections, making such a toolkit a kind of “force multiplier” for the analytic community. It will allow analytic communities to map knowledge, interconnect insights, uncover new relationships and ideas, and identify the relevant experts for the task at hand.
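The article describes these capabilities only at a conceptual level, so the sketch below is an illustration rather than the author’s actual method: a naive way a toolkit could begin to “connect the dots,” reusing the hypothetical Insight records from the earlier sketch and assuming a hypothetical extract_topics function (in practice, an NLP component) that pulls topic keywords out of an insight’s text:

```python
from collections import defaultdict
from itertools import combinations

def build_topic_index(insights, extract_topics):
    """Map each topic to the insights that mention it."""
    index = defaultdict(list)
    for ins in insights:
        for topic in extract_topics(ins.text):
            index[topic].append(ins)
    return index

def candidate_connections(index):
    """Yield pairs of insights that share a topic but were produced in
    different contexts -- the 'dots' a human analyst may want to connect."""
    for topic, group in index.items():
        for a, b in combinations(group, 2):
            if a.discipline != b.discipline or a.location != b.location:
                yield topic, a, b

def rank_experts(index, topic):
    """Crude proxy for 'who knows this topic': insight counts per analyst."""
    counts = defaultdict(int)
    for ins in index.get(topic, []):
        counts[ins.analyst] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```

Even this toy version makes the division of labor visible: the software proposes candidate connections and experts at scale, while judging whether a flagged pair is a genuine blind spot or contradiction remains a human-in-the-loop task.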

We need to move beyond mere data and bring about the next revolution in knowledge management: the big knowledge revolution. We need to develop tools that enhance the ability of individuals and organizations to reach a deeper level of analysis, thus improving decision-making processes. This will enable us to move from “known unknowns” to “unknown unknowns”; that is, to acquire real-time understanding of previously obscured connections and relationships, and to use artificial intelligence to point analysts in the right direction. Such tools will be the ultimate analytic resources for creating synergy between data, information and knowledge; software and hardware; and human experts to produce “big” knowledge.

About the author

Dr. Shay Hershkovitz
Wikistrat Chief Strategy Officer
Director of Analytic Community

This article was originally published on LinkedIn.
