Introduction to Cognitive Computing

Can a computer develop an ability to think and reason without human intervention?


Cognitive computing, a combination of cognitive science and computer science, refers to self-learning systems that use machine learning models to mimic the way the human brain works. Eventually, this technology should make it possible to build automated IT systems that solve problems without human assistance. Cognitive computing models provide a realistic roadmap toward artificial intelligence.

Cognition comes from the human brain. So what’s the brain of cognitive systems?

Evolution of Computing

Cognitive computing represents the third era of computing. In the first era (the 19th century), Charles Babbage, known as the 'father of the computer', introduced the concept of a programmable computer. Designed to tabulate polynomial functions, his machine was used for navigational calculations. The second era (the 1950s) brought digital programmable computers such as ENIAC and ushered in the age of modern computing and programmable systems. Cognitive computing, the third era, relies on deep learning algorithms and big data analytics to provide insights.

Thus the brain of a cognitive system is the neural network, the fundamental concept behind deep learning. A neural network is a system of hardware and software modeled on the human central nervous system, used to estimate functions that depend on a large number of unknown inputs.

[Figure: Neural network for image recognition]
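To make the idea of "estimating a function from unknown inputs" concrete, here is a minimal sketch of a feed-forward neural network with one hidden layer, written in plain NumPy. It learns the XOR function as a stand-in for a real image-recognition task (which would use the same principle with far more inputs and layers); the layer sizes and learning rate are illustrative choices, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and labels (stand-in for image pixels and classes).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Randomly initialized weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(10000):
    # Forward pass: the network's current estimate of the target function.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: error signals for each layer (backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

After training, the loss should have dropped substantially from its initial value, showing the network has fit the function from examples alone, with no hand-written rules.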

What are the features of a cognitive computing solution?

With the present state of cognitive computing, basic solutions can play the role of an assistant or virtual advisor. Siri, Google Assistant, Cortana, and Alexa are well-known examples of such personal assistants. For cognitive computing to be implemented in commercial and widespread applications, the Cognitive Computing Consortium has recommended the following features for computing systems:

1. Adaptive

They must learn as information changes, and as goals and requirements evolve. They must resolve ambiguity and tolerate unpredictability. They must be engineered to feed on dynamic data in real time or near-real time.

2. Interactive

Similar to a brain, a cognitive solution must interact with all elements in the system: processors, devices, cloud services, and users. Cognitive systems should interact bidirectionally: they should understand human input and provide relevant results using natural language processing and deep learning. Some intelligent chatbots, such as Mitsuku, have already achieved this.

3. Iterative and stateful

They must aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They must 'remember' previous interactions in a process and return information that is suitable for the specific application at that point in time.
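A tiny sketch of what "iterative and stateful" means in practice: the assistant below remembers what was said in earlier turns and asks a clarifying question when the request is incomplete. The scenario and slot names ("city", "date") are illustrative assumptions, not from any particular product.

```python
class TravelAssistant:
    """Toy stateful dialog: fills in required slots across turns."""

    REQUIRED = ("city", "date")

    def __init__(self):
        self.state = {}  # information remembered across turns in one session

    def handle(self, **slots):
        # Merge new information with what the user said in previous turns.
        self.state.update({k: v for k, v in slots.items() if v})
        missing = [s for s in self.REQUIRED if s not in self.state]
        if missing:
            # Problem statement is incomplete: ask a clarifying question.
            return f"Could you tell me the {missing[0]}?"
        return f"Booking a trip to {self.state['city']} on {self.state['date']}."

bot = TravelAssistant()
print(bot.handle(city="Paris"))       # incomplete -> asks for the date
print(bot.handle(date="2024-05-01"))  # remembers 'Paris' from the first turn
```

The second call succeeds only because the system retained context from the first, which is exactly the behavior this feature requires.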

4. Contextual

They must understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task, and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

Cognitive computing is the natural next step in the evolution of computing that began with automation. It sets a benchmark for computing systems: reaching the level of the human brain. But it has limitations: AI is difficult to apply in situations involving a high level of uncertainty, rapid change, or creative demands, and the complexity of a problem grows with the number of data sources. Aggregating, integrating, and analyzing such unstructured data is challenging, so a complete cognitive solution must combine many coexisting technologies to deliver deep domain insights.

