

Just How Smart Are Smart Machines? 


The number of sophisticated cognitive technologies that might be capable of cutting into the need for human labor is expanding rapidly. But linking these offerings to an organization’s business needs requires a deep understanding of their capabilities.

If popular culture is an accurate gauge of what’s on the public’s mind, it seems everyone has suddenly awakened to the threat of smart machines. Several recent films have featured robots with scary abilities to outthink and manipulate humans. In the economics literature, too, there has been a surge of concern about the potential for soaring unemployment as software becomes increasingly capable of decision making. Yet managers we talk to don’t expect to see machines displacing knowledge workers anytime soon — they expect computing technology to augment rather than replace the work of humans. In the face of a sprawling and fast-evolving set of opportunities, their challenge is figuring out what forms the augmentation should take. Given the kinds of work managers oversee, what cognitive technologies should they be applying now, monitoring closely, or helping to build?

To help, we have developed a simple framework that plots cognitive technologies along two dimensions. (See “What Today’s Cognitive Technologies Can — and Can’t — Do.”) First, it recognizes that these tools differ according to how autonomously they can apply their intelligence. On the low end, they simply respond to human queries and instructions; at the (still theoretical) high end, they formulate their own objectives. Second, it reflects the type of tasks smart machines are being used to perform, moving from conventional numerical analysis to performance of digital and physical tasks in the real world. The breadth of inputs and data types in real-world tasks makes them more complex for machines to accomplish.

By putting those two dimensions together, we create a matrix into which we can place all of the multitudinous technologies known as “smart machines.” More important, this helps to clarify today’s limits to machine intelligence and the challenges technology innovators are working to overcome next. Depending on the type of task a manager is targeting for redesigned performance, this framework reveals the various extents to which it might be performed autonomously and by what kinds of machines.
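To make the framework concrete, here is a minimal Python sketch that encodes the matrix as a lookup table. The row and column labels come from the sections that follow; the two filled-in cells are illustrative placements, not the article's exact chart.

    # Levels of autonomy (rows) and cognitive task types (columns).
    LEVELS = ["support for humans", "repetitive task automation",
              "context awareness and learning", "self-awareness"]
    TASKS = ["analyzing numbers", "analyzing words and images",
             "performing digital tasks", "performing physical tasks"]

    matrix = {(level, task): [] for level in LEVELS for task in TASKS}
    matrix[("repetitive task automation", "performing digital tasks")].append(
        "robotic process automation")   # placement assumed from the discussion below
    matrix[("context awareness and learning", "performing physical tasks")].append(
        "collaborative robots")         # placement assumed from the discussion below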


Four Levels of Intelligence


Clearly, the level of intelligence of smart machines is increasing. The general trend is toward greater autonomy in decision making — from machines that require a highly structured data and decision context to those capable of deciphering a more complex context.


Support for Humans


For decades, the prevailing assumption has been that cognitive technologies would provide insight to human decision makers — what used to be known as “decision support.” Even with IBM Corp.’s Watson and many of today’s other cognitive systems, most people assume that the machine will offer a recommended decision or course of action but that a human will make the final decision.


Repetitive Task Automation


It is a relatively small step to go from having machines support humans to having the machines make decisions, particularly in structured contexts. Automated decision making has been gaining ground in recent years in several domains, such as insurance underwriting and financial trading. It typically relies on a fixed set of rules or algorithms, so performance doesn't improve without human intervention; in practice, people monitor system performance and fine-tune the algorithms.
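As a rough illustration, the sketch below shows what rule-based automated underwriting might look like in Python. The fields, thresholds, and outcomes are invented for the example and stand in for a real carrier's rule set.

    # A fixed rule set: the decision logic never improves unless a human edits it.
    def underwrite(applicant):
        if applicant["age"] < 18:
            return "decline"           # assumed rule: below minimum insurable age
        if applicant["prior_claims"] > 3:
            return "refer_to_human"    # assumed rule: too risky to approve automatically
        if applicant["credit_score"] >= 650:
            return "approve"
        return "refer_to_human"

    print(underwrite({"age": 42, "prior_claims": 1, "credit_score": 700}))  # approve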


Context Awareness and Learning


Sophisticated cognitive technologies today have some degree of real-time contextual awareness. As data flow more continuously and voluminously, we need technologies that can help us make sense of the data in real time — detecting anomalies, noticing patterns, and anticipating what will happen next. Relevant context might include location, time, and a user's identity, which can be used to make recommendations (for example, the best route to work based on the time of day, current traffic levels, and the driver's preference for highways versus back roads).
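A toy version of that route recommendation might look like the following Python sketch; the context fields and the routing logic are assumptions made up for the example.

    # Recommend a route from the current context (all fields hypothetical).
    def recommend_route(context):
        rush_hour = context["hour"] in range(7, 10) or context["hour"] in range(16, 19)
        if rush_hour and context["traffic_level"] > 0.7:
            return "back roads"        # avoid congested highways at peak times
        return "highway" if context["prefers_highways"] else "back roads"

    print(recommend_route({"hour": 8, "traffic_level": 0.9, "prefers_highways": True}))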

One of the hallmarks of today’s cognitive computing is its ability to learn and improve performance. Much of the learning takes place through continuous analysis of real-time data, user feedback, and new content from text-based articles. In settings where results are measurable, learning-oriented systems will ultimately deliver benefits in the form of better stock trading decisions, more accurate driving time predictions, and more precise medical diagnoses.
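One common mechanism behind this kind of improvement is online learning, in which a model is updated incrementally as each batch of data arrives. The Python sketch below uses scikit-learn's partial_fit interface on synthetic data, which stands in for real-time feeds such as trades or drive times.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier()                       # a simple incremental learner
    classes = np.array([0, 1])
    rng = np.random.default_rng(0)

    for _ in range(100):                          # each loop is a new batch of data
        X = rng.normal(size=(32, 4))              # synthetic real-time features
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic measurable outcome
        model.partial_fit(X, y, classes=classes)  # the model improves as data accrue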


Self-Awareness


So far, machines with self-awareness and the ability to form independent objectives reside only in the realm of fiction. With substantial self-awareness, computers may eventually gain the ability to work beyond human levels of intelligence across multiple contexts, but even the most optimistic experts say that general intelligence in machines is three to four decades away.


Four Cognitive Task Types


A straightforward way to sort out tasks performed by machines is according to whether they process only numbers, text, or images — the building blocks of cognition — or whether they know enough to take informed actions in the digital or physical world.


Analyzing Numbers


The root of all cognitive technologies is computing machines’ superior performance at analyzing numbers in structured formats (typically, rows and columns). Classically, this numerical analysis was applied purely in support of human decision makers. People continued to perform the front-end cognitive tasks of creating hypotheses and framing problems, as well as the back-end interpretation of the numbers’ implications for decisions. Even as analysts added more visual analytics displays and more predictive analytics in the past decade, people still did the interpretation.

Today, companies are increasingly embedding analytics into operational systems and processes to make repetitive automated decisions, which enables dramatic increases in both speed and scale. And whereas it used to take a human analyst to develop embedded models, “machine learning” methods can produce models in an automated or semiautomated fashion.
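The pattern is roughly as follows: a model is produced from historical data with little or no hand-crafting and is then called inline for every new case. The sketch below uses scikit-learn with synthetic data; the decision labels are invented for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    X_hist = rng.normal(size=(1000, 5))                    # historical cases
    y_hist = (X_hist.sum(axis=1) > 0).astype(int)          # historical outcomes
    model = RandomForestClassifier().fit(X_hist, y_hist)   # model built automatically

    def decide(case):
        # The operational system calls this for every transaction, at machine speed.
        return "accept" if model.predict(case.reshape(1, -1))[0] == 1 else "reject"

    print(decide(rng.normal(size=5)))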


Analyzing Words and Images


A key aspect of human cognition is the ability to read words and images and to determine their meaning and significance. But today, a wide variety of technological tools, such as machine learning, natural language processing, neural networks, and deep learning, can classify, interpret, and generate words. Some of them can also analyze and identify images.

The earliest intelligent applications for words and images used text, image, and speech recognition to allow humans to communicate with computers. Today, of course, smartphones "understand" human speech and text and can recognize images. These capabilities are hardly perfect, but they are widely used in many applications.

Analyzing words and images on a large scale constitutes a different category of capability. One such application involves translating large volumes of text across languages. Another is to answer questions as a human would. A third is to make sense of language well enough to summarize it or to generate new passages.

IBM Watson was the first tool capable of ingesting, analyzing, and "understanding" text well enough to respond to detailed questions. However, it doesn't deal with structured numerical data, nor can it understand relationships between variables or make predictions. It's also not well suited for applying rules or analyzing options on decision trees. That said, IBM is rapidly adding new capabilities from our matrix, including image analysis.

There are other examples of word and image systems. Most were developed for particular applications and are slowly being modified to handle other types of cognitive situations. Digital Reasoning Systems Inc., for example, a company based in Franklin, Tennessee, that developed cognitive computing software for national security purposes, has begun to market intelligent software that analyzes employee communications in financial institutions to determine the likelihood of fraud.

Another company, IPsoft Inc., based in New York City, processes spoken words with an intelligent customer agent programmed to interpret what customers want and, when possible, do it for them.

IPsoft, Digital Reasoning, and the original Watson all use similar components, including the ability to classify parts of speech, to identify key entities and facts in text, to show the relationships among entities and facts in a graphical diagram, and to relate entities and relationships with objectives. This category of application is best suited for situations with much more — and more rapidly changing — codified textual information than any human could possibly absorb and retain.
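The open-source spaCy library is a convenient way to see the first two of those components in action; it is used here as a stand-in for the proprietary systems named above, not as a description of what those vendors actually run. (Assumes pip install spacy and the en_core_web_sm model.)

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("IPsoft, based in New York City, sells an intelligent customer agent.")

    for token in doc:
        print(token.text, token.pos_)   # parts of speech
    for ent in doc.ents:
        print(ent.text, ent.label_)     # key entities, e.g., IPsoft/ORG, New York City/GPE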

Image identification and classification are hardly new. “Machine vision” based on geometric pattern matching technology has been used for decades to locate parts in production lines and read bar codes. Today, many companies want to perform more sensitive vision tasks such as facial recognition, classification of photos on the Internet, or assessment of auto collision damage. Such tasks are based on machine learning and neural network analysis that can match particular patterns of pixels to recognizable images.
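A minimal neural-network sketch in PyTorch illustrates the idea of matching patterns of pixels to classes. The architecture and the two-class collision-damage labeling are invented for the example; a real system would be trained on large sets of labeled photos.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # filters learn local pixel patterns
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 2),                            # e.g., "damage" vs. "no damage"
    )
    logits = model(torch.randn(1, 3, 224, 224))      # one synthetic RGB photo
    print(logits.softmax(dim=1))                     # class probabilities (untrained)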

The most capable machine learning systems have the ability to "learn" — their decisions get better with more data, and they "remember" previously ingested information. For example, as Watson is introduced to new information, its reservoir of knowledge expands. Other systems in this category get better at their cognitive task by having more data for training purposes. But as Mike Rhodin, senior vice president of business development for IBM Watson, noted, "Watson doesn't have the ability to think on its own," and neither does any other intelligent system created thus far.

Performing Digital Tasks

One of the more pragmatic roles for cognitive technology in recent years has been to automate administrative tasks and decisions. In order to make automation possible, two technical capabilities are necessary. First, you need to be able to express the decision logic in terms of “business rules.” Second, you need technologies that can move a case or task through the series of steps required to complete it. Over the past couple of decades, automated decision-making tools have been used to support a wide variety of administrative tasks, from insurance policy approvals to information technology operations to high-speed trading.
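Those two capabilities can be sketched in a few lines of Python: a table of business rules plus a loop that walks each case through the required steps. The step names and the single rule below are assumptions made up for the example.

    RULES = {"auto_approve": lambda case: case["amount"] < 1000}  # assumed business rule
    WORKFLOW = ["validate", "decide", "notify"]                   # assumed steps

    def process(case):
        for step in WORKFLOW:
            if step == "validate":
                assert "amount" in case                 # reject malformed cases
            elif step == "decide":
                case["status"] = ("approved" if RULES["auto_approve"](case)
                                  else "escalated")     # fixed decision logic
            elif step == "notify":
                print(f"case {case['id']}: {case['status']}")
        return case

    process({"id": 7, "amount": 450})   # prints: case 7: approved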

Lately, companies have begun using "robotic process automation," which uses work flow and business rules technology to interface with multiple information systems as if it were a human user. Robotic process technology has become popular in banking (for back-office customer service tasks, such as replacing a lost ATM card), insurance (for processing claims and payments), information technology (for monitoring system error messages and fixing simple problems), and supply chain management (for processing invoices and responding to routine requests from customers and suppliers).
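In outline, a software "robot" for the lost-ATM-card task might look like the sketch below. The crm and card_system client objects and every method on them are hypothetical; real RPA tools drive the actual screens and interfaces of existing systems.

    # A software robot spanning several systems the way a human clerk would.
    def replace_lost_atm_card(customer_id, crm, card_system):
        account = crm.look_up(customer_id)          # hypothetical CRM call
        card_system.cancel_card(account.card_id)    # hypothetical card-system call
        new_card = card_system.issue_card(account)
        crm.log_interaction(customer_id, f"replacement card issued: {new_card}")
        return new_card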

The benefits of process automation can add up quickly. An April 2015 case study at Telefónica O2, the second-largest mobile carrier in the United Kingdom, found that the company had automated over 160 process areas using software “robots.” The overall three-year return on investment was between 650% and 800%.
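To unpack that figure: an ROI of 650% to 800% means three-year benefits of roughly 7.5 to 9 times the three-year cost. The normalized numbers below are invented purely to show the arithmetic; only the percentage range comes from the case study.

    cost = 1.0                         # normalized three-year cost of the robots
    for benefit in (7.5, 9.0):         # assumed benefit multiples of cost
        roi = (benefit - cost) / cost * 100
        print(f"benefit {benefit}x cost -> ROI {roi:.0f}%")   # 650% and 800%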


Performing Physical Tasks


Physical task automation is, of course, the realm of robots. Though people love to call every form of automation technology a robot, a useful working definition is narrower; one of Merriam-Webster's definitions of robot is "a machine that can do the work of a person and that works automatically or is controlled by a computer."

In 2014, companies installed about 225,000 industrial robots globally, more than one-third of them in the automotive industry. However, robots often fall well short of expectations. In 2011, the founder of Foxconn Technology Co., Ltd., a Taiwan-based multinational electronics contract manufacturing company, said he would install one million robots within three years, replacing one million workers. However, the company found that employing only robots to build smartphones was easier said than done. To assemble new iPhone models in 2015, Foxconn planned to hire more than 100,000 new workers and install about 10,000 new robots.

Historically, robots that replaced humans required a high level of programming to do repetitive tasks, and for safety reasons they had to be segregated from human workers. However, a new type of robot, often called a "collaborative robot," can work safely alongside humans and can be programmed simply by having a human move its arms.

Robots have varying degrees of autonomy. Some, such as remotely piloted drone aircraft and robotic surgical instruments and mining equipment, are designed to be manipulated by humans. Others become at least semiautonomous once programmed but have limited ability to respond to unexpected conditions. As robots get more intelligence, better machine vision, and increased ability to make decisions, they will integrate other types of cognitive technologies while also having the ability to transform the physical environment. IBM Watson software, for example, has been installed in several different types of robots.


The Great Convergence


Slowly but surely, the worlds of artificially intelligent software and robots seem to be converging, and the boundaries between different cognitive technologies are blurring. In the future, robots will be able to learn and sense context, robotic process automation and other digital task tools will improve, and smart software will be able to analyze more intricate combinations of numbers, text, and images.

We anticipate that companies will develop cognitive solutions using the building blocks of application program interfaces (APIs). One API might handle language processing, another numerical machine learning, and a third question-and-answer dialogue. While these elements would interact with each other, determining which APIs are required will demand a sophisticated understanding of cognitive solution architectures.
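The sketch below illustrates this kind of composition with three hypothetical HTTP services. The endpoint URLs and JSON fields are invented; a real architecture would substitute a vendor's actual APIs.

    import requests  # assumes the requests package is installed

    BASE = "https://cognitive.example.com"   # hypothetical cognitive platform

    def answer(question):
        parsed = requests.post(f"{BASE}/language/parse",
                               json={"text": question}).json()     # language API
        scored = requests.post(f"{BASE}/ml/score",
                               json={"features": parsed}).json()   # machine-learning API
        reply = requests.post(f"{BASE}/qa/answer",
                              json={"candidates": scored}).json()  # question-answering API
        return reply["answer"]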

This modular approach is the direction in which key vendors are moving. IBM, for example, has disaggregated Watson into a set of services — a "cognitive platform," if you will — available by subscription in the cloud. Watson's original question-and-answer services have been expanded to include more than 30 other types, including "personality insights" to gauge human behavior, "visual recognition" for image identification, and so forth. Other vendors of cognitive technologies, such as Cognitive Scale Inc., based in Austin, Texas, are also integrating multiple cognitive capabilities into a "cognitive cloud."

Despite the growing capabilities of cognitive technologies, most organizations investigating them are starting with small projects in a specific domain. But others have much bigger ambitions. For example, Memorial Sloan Kettering Cancer Center, in New York City, and the University of Texas MD Anderson Cancer Center, in Houston, Texas, are taking a "moon shot" approach, marshaling cognitive tools like Watson to develop better diagnostic and treatment approaches for cancer.



Designing a Cognitive Architecture
