Introducing Deep Learning - a paradigm in which systems make accurate predictions, assessments and prioritized actionables through hierarchical representations of data.
Medical science has developed a number of diagnostic tests to identify specific types of cancer. The tests can also determine the severity of cancer and evaluate the efficacy of treatment. These are complex procedures such as bone marrow aspiration and biopsy, computed tomography, endoscopy, colonoscopy, breast Magnetic Resonance Imaging (MRI), etc. They are carried out based on the symptoms shown by a person. It is unfortunate that the symptoms can sometimes go undetected until it is too late. Can we change this to improve our fight against cancer? Can we develop a system which, instead of detecting cancer, predicts it with a high degree of accuracy? That would be revolutionary. And it would go beyond the capabilities of even the most qualified, experienced and proficient teams of oncologists. Imagine a system that can predict cancer by analyzing tissue scans. Such a system would have the potential to tell the type of cancer a person is likely to develop - along with the time frame in which the cancer would develop, say 2 years or 5 years from now. What kind of deep learning is required to build such systems that go beyond normal human capability?
Address Real Challenges
For a moment, let’s set aside the question of how to create such a system. Let’s take the idea a step further, to get a sense of the benefits it would deliver. For one, the system would help prioritize treatment based on urgency, age, impact and a variety of other factors. More importantly, it would help ensure that treatment is made available when required and that patients can prepare for it well in advance.
Using adequate genetic information, tissue scans, test results and other data, a system could be taught to predict cancer, aiding doctors and patients in managing it better. The system could be trained to scan the data presented to it and build a sophisticated body of learning that predicts independently. Now imagine extending the same methodology to predict and identify students who may need enhanced learning environments, market fluctuations, power outages, the type of support a specific customer may need, the fair price for a used car, or when a network will crash. The use cases are overwhelming. Who doesn’t need accurate predictions, assessments and prioritized actionables?
The Soul of a ‘Thinking’ System
Creating such a system appears to be a daunting task, because in each of these scenarios the number and variety of variables to be considered in real time is gigantic. But that is exactly the complexity customers expect such systems to absorb. They don’t want to know how complex a system is, or what the solution to the problem is. They simply want to know when systems - such as power supply, IT assets, mobile networks or a transport system - will suffer a breakdown and how soon they can be restored.
Enterprises will, in addition, want to know the root cause and impact of such outages on performance. This will enable remediation to be prioritized based on business imperatives.
So what drives the need for such systems?
In essence, what we need is data mining and analytics that enable systems to learn and figure things out. For example, a public security system should be able to identify known offenders on the run from a simple facial scan. The system then uses business rules and intelligence to make recommendations and prioritize action. In our facial recognition example, when a large number of offenders are identified, the system makes intelligent recommendations to optimize the deployment of law enforcement resources.
We can draw a parallel in a business scenario. In a conventional IT network, a single link could go down and generate a huge number of events across databases, applications, servers and storage. Typically, IT would try to isolate the impact in each of the affected domains rather than address the root cause, because the systems don’t provide the complete picture. But a system built around the tenets of Deep Learning would immediately seek out the root cause and suppress action on all other events. In other words, a Deep Learning System would direct resources in an optimized, efficient and cost-effective manner.
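To make this concrete, here is a minimal sketch in Python of the kind of root-cause suppression described above. The dependency map, component names and event list are hypothetical assumptions for illustration, not a description of any actual product:

```python
# Hypothetical dependency map: component -> the component it depends on.
# In a real system this topology would be discovered, not hard-coded.
DEPENDS_ON = {
    "app-server": "database",
    "database": "storage",
    "storage": "network-link",
    "network-link": None,  # bottom of the stack
}

def find_root_causes(failed_components):
    """Walk each failed component down the dependency chain; the deepest
    failed dependency it reaches is the likely root cause."""
    failed = set(failed_components)
    roots = set()
    for component in failed:
        current = component
        while DEPENDS_ON.get(current) in failed:
            current = DEPENDS_ON[current]
        roots.add(current)
    return roots

# One link failure floods the console with downstream symptom events
events = ["app-server", "database", "storage", "network-link"]
roots = find_root_causes(events)
print("act on:", roots)                                    # {'network-link'}
print("suppress:", [e for e in events if e not in roots])  # the symptoms
```

All four events trace back to the failed network link, so action on the other three is suppressed and resources go where they matter.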
The Arrival of Deep Learning
Deep Learning is a new paradigm in machine learning in which computers teach themselves to solve problems. It leverages algorithms that make high-level sense of data such as images, sound, text and other forms of input. The system then recognizes patterns, generates models, and develops non-linear relationships using a combination of Supervised and Unsupervised Learning (see Quick Learning Primer) through hierarchical representations of data.
Quick Learning Primer
What are Supervised and Unsupervised Learning?
Supervised Learning: The system is given labels under which data is classified. The data provided to the system’s algorithms is fully labelled.
Unsupervised Learning: This is a self-learning system that analyzes raw, unlabelled data and begins to observe patterns. It then clusters the data, placing items with similar patterns into buckets. The clusters form high-level representations. (A sketch contrasting the two modes follows this primer.)
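To ground the primer, here is a minimal Python sketch contrasting the two modes. The choice of scikit-learn, and the toy tumor-size features, are assumptions made purely for illustration; the article names no particular library or dataset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Supervised: every training row arrives with a label to learn from.
# Hypothetical features: [tumor_size_mm, patient_age]
X = np.array([[2, 45], [3, 50], [14, 60], [18, 62], [1, 30], [16, 55]])
y = ["benign", "benign", "malignant", "malignant", "benign", "malignant"]
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[15, 58]]))  # predicts a label it was taught to use

# Unsupervised: no labels at all; the algorithm buckets similar points
# into clusters on its own, forming the higher-level groupings above.
data = np.array([[1, 2], [1, 4], [0, 2], [10, 11], [10, 13], [11, 12]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(km.labels_)  # e.g. [1 1 1 0 0 0] - two discovered groupings
```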
Some of the leading tools learnt to recognize animals using unsupervised learning. The method is akin to how our brain functions: the brain receives inputs of different kinds from multiple sources, detects the shapes and boundaries of objects, and finally comes to a conclusion about each object. The brain performs this object identification within nanoseconds, and its ability to detect patterns forms the basis for building pattern recognition into machines, so that systems that emulate the brain can be developed. The mind is capable of prioritizing and organizing the intrinsically unorganized information in its surroundings. A good example of this is how a child, without ever being taught the formal concepts of language such as grammar or sentence structure, begins to communicate in it, largely by listening to parents use the language.
Electronic Perception + Learning = New Applications
The capabilities of Deep Learning Systems are dramatic. They, in fact, duplicate the abilities of human vision, thinking and deduction through electronic ‘perception’ and an ‘understanding’ of images. They can process raw pixels and independently derive meaning and insights from them. This has several applications: a Deep Learning System could, for example, flag a suspicious region in a tissue scan or pick a known offender’s face out of a crowd.
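As one illustration of such a pixel-consuming system, the sketch below stacks convolutional layers so that each builds a higher-level representation of the image than the one before. The use of PyTorch and the 28x28 input are assumptions made for the sketch, not a reference to any specific product:

```python
import torch
import torch.nn as nn

# Each stage 'perceives' at a higher level than the last: edges,
# then shapes, then a final judgement over the whole image.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low level: edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # mid level: shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 2),  # high level: two class scores
)

pixels = torch.rand(1, 1, 28, 28)  # one raw 28x28 grayscale image
print(model(pixels))               # unnormalized scores per class
```

Trained on labelled scans, the two output scores could stand for hypothetical classes such as ‘benign’ and ‘malignant’; untrained, the sketch simply shows raw pixels flowing through a hierarchy of representations.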
As IT operations and infrastructure grow in complexity and become more resource intensive, the need for Deep Learning Systems is becoming more pressing. Today’s interdependent, multi-vendor models, spread across computing systems, networking devices, databases and applications that require frequent upgrades and tools, would benefit tremendously from Deep Learning.
Customers don’t care about the complexity and dynamic nature of models and infrastructure. They want their systems to work. They want their systems to produce real insights in real time – those which their experience and knowledge cannot easily produce. For example, a utility would want to know when its grid will fail so that it can do an impact analysis, be proactive about remediation, and develop operational flexibility based on predictions and not on generalizations.
The technology behind Deep Learning relies upon intelligent analytical algorithms, machine learning, event pairing, event sequences, event conditions and a variety of other event-related factors. Based on these, the system uses business logic and event rules to construct the which/when/where/what/how that lets the organization assess impact, prioritize action and deploy resources in a manner optimized for cost and efficiency.
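A minimal sketch of how event rules might construct that which/when/where/what/how is shown below. The rule table, field names and impact weights are hypothetical assumptions; a real system would derive them from learned models and business imperatives:

```python
# Hypothetical rule table: event type -> (business impact, action)
EVENT_RULES = {
    "grid-failure": (10, "dispatch field crew"),
    "db-slowdown":  (6,  "scale database tier"),
    "disk-warning": (3,  "schedule replacement"),
}

def prioritize(events):
    """Attach an impact score and a 'how' to each event, then order
    the list so the costliest business impact is handled first."""
    scored = []
    for event in events:
        impact, action = EVENT_RULES.get(event["what"], (1, "log and watch"))
        scored.append({**event, "impact": impact, "how": action})
    return sorted(scored, key=lambda e: e["impact"], reverse=True)

incoming = [
    {"what": "disk-warning", "where": "dc-east", "when": "09:02"},
    {"what": "grid-failure", "where": "sub-04",  "when": "09:03"},
]
for e in prioritize(incoming):
    print(e["impact"], e["what"], "->", e["how"])
```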
Tomorrow’s organizations will keenly follow developments in the area of Deep Learning. Indeed, some have already begun to lay the foundation required to create such systems. It is only a matter of time before we have machines that resemble human thinking more closely than ever before.
Ramkumar Balasubramanian, Head - Infrastructure Automation, Wipro, Ltd.
Ramkumar Balasubramanian heads Infrastructure Automation for Wipro BOTWORKS within Managed Services, the Global Infrastructure Division of Wipro. Managed Services is the services arm of Wipro, focusing on providing Infrastructure Management Services. As part of his role, he is responsible for conceptualizing innovative services and taking them to market.
Ramkumar has 22 years of experience in the industry. He has been with Wipro for the last 10 years and is a distinguished member of the technical staff. Prior to his tenure at Wipro Infotech, Ramkumar worked with HP as lead architect for the HP Worldwide Catalog Management project based out of the US. He has also worked with three startup companies, designing products and filing patents on them.
Ramkumar holds an MS degree from Madurai Kamaraj University.