
Sunday, October 1, 2017

Reshaping Business With Artificial Intelligence (Part 2)



Figure 5
Organizations expect to create competitive advantage from AI — but also anticipate increased competition. 

Executives simultaneously recognize that their organization is unlikely to be the sole beneficiary of AI in its markets. Respondents expect that both new entrants and incumbents will similarly see the potential for benefits. Three-quarters of respondents foresee new competitors using AI to enter their markets, while 69% expect current competitors to adopt AI in their businesses. Furthermore, they realize that suppliers and customers in their business ecosystem will increasingly expect them to use AI.

4. Disparity in Adoption and Understanding

Despite high expectations, business adoption of AI is at a very early stage: There is a disparity between expectation and action. Although four in five executives agree that AI is a strategic opportunity for their organization, only about one in four has incorporated AI in some offerings or processes. Only one in 20 has extensively incorporated AI in their offerings or processes. (See Figure 6.)



Figure 6
Only about a quarter of all organizations have adopted AI so far.


The differences in adoption can be striking, particularly within the same industry. For example, Ping An, which employs 110 data scientists, has launched about 30 CEO-sponsored AI initiatives that support, in part, its vision “that technology will be the key driver to deliver top-line growth for the company in the years to come,” says the company’s chief innovation officer, Jonathan Larsen. Yet in sharp contrast, elsewhere in the insurance industry, other large companies’ AI initiatives are limited to “experimenting with chatbots,” as a senior executive at a large Western insurer describes his company’s AI program.

Organizations also report significant differences in their overall understanding of AI. For example, 16% of respondents strongly agreed that their organization understands the costs of developing AI-based products and services. And almost the same percentage (17%) strongly disagreed that their organization understands these costs. Similarly, while 19% of respondents strongly agreed that their organization understands the data required to train AI algorithms, 16% strongly disagreed that their organization has that understanding.

Combining survey responses to questions around AI understanding and adoption, four distinct organizational maturity clusters emerged: Pioneers, Investigators, Experimenters, and Passives.

  • Pioneers (19%): Organizations that both understand and have adopted AI. These organizations are on the leading edge of incorporating AI into both their organization’s offerings and internal processes.

  • Investigators (32%): Organizations that understand AI but are not deploying it beyond the pilot stage. Their investigation into what AI may offer emphasizes looking before leaping.

  • Experimenters (13%): Organizations that are piloting or adopting AI without deep understanding. These organizations are learning by doing.

  • Passives (36%): Organizations with neither adoption nor much understanding of AI.

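The report does not say how these clusters were derived beyond combining responses on understanding and adoption. Purely as an illustration, the sketch below groups hypothetical survey respondents on those two dimensions with k-means; the scores, cluster labels, and choice of algorithm are assumptions, not the study's actual methodology.

```python
# Hypothetical sketch only: the study does not disclose its clustering method.
# Each respondent is scored on two assumed 0-1 dimensions:
# how well the organization understands AI, and how far it has adopted it.
import numpy as np
from sklearn.cluster import KMeans

responses = np.array([
    [0.90, 0.80],  # high understanding, high adoption  -> "Pioneer"-like
    [0.80, 0.20],  # high understanding, low adoption   -> "Investigator"-like
    [0.30, 0.70],  # low understanding, high adoption   -> "Experimenter"-like
    [0.10, 0.10],  # low understanding, low adoption    -> "Passive"-like
    [0.85, 0.75], [0.75, 0.15], [0.25, 0.65], [0.15, 0.05],
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(responses)
print(kmeans.labels_)           # cluster assignment for each respondent
print(kmeans.cluster_centers_)  # centroid of each cluster on the two dimensions
```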
If expectations and sense of opportunity are so high, what prevents organizations from adopting AI? Even in industries with extensive histories of integrating new technologies and managing data, barriers to AI adoption can be difficult to overcome. In financial services, for example, Simon Smiles, chief investment officer, ultra high net worth at UBS, puts it this way: “The potential for larger-scale financial institutions to leverage technology more actively, including artificial intelligence, within their business, and to harness their data to deliver a better client experience to the end user, is huge. The question there is whether these traditional institutions will actually grab the opportunity.” Taking advantage of AI opportunities requires organizational commitment to get past the inevitable difficulties that accompany many AI initiatives.

These differences are less about technological limitations and much more about business. In aggregate, respondents ranked competing investment priorities and unclear business cases as more significant barriers to AI implementation than technology capabilities. Evans of Airbus makes the critical distinction: “Well, strictly speaking, we don’t invest in AI. We don’t invest in natural language processing. We don’t invest in image analytics. We’re always investing in a business problem.” Airbus turned to AI because it solved a business problem; it made business sense to invest in AI instead of other approaches.

Smiles at UBS notes that organizations do not all face the same challenges. With respect to incumbents and fintech startups, he says: “There is a bifurcation between the groups that have the scale needed to develop incredibly valuable platforms and those unencumbered by legacy business models and systems to arguably have the better model going forward, but don’t have the clients and accompanying data to capitalize fully on the opportunity.” Differences like these lead to differences in rates of AI adoption.

Barriers to Adoption


The clusters of organizations demonstrate how barriers to AI differ and affect rates of adoption. (See Figure 7.) Pioneers have overcome issues related to understanding: three-quarters of these companies have identified business cases for AI. Senior executives are leading organizational AI initiatives. Their biggest hurdles are grappling with the practicalities of developing or acquiring the requisite AI talent and addressing competing priorities for AI investment. They are also much more likely to be attuned to the security concerns resulting from AI adoption. Passives, by contrast, have yet to come to grips with what AI can do for them. They have not identified solid business cases that meet their investment criteria. Leadership may not be on board. Technology is a hurdle. Many are not yet even aware of the difficulties in sourcing and deploying talent with AI expertise.

Figure 7
While AI talent limits Pioneers, Passives don’t yet discern a business case for AI.

Our clustering also reveals nuanced differences in understanding among the clusters.


  • Business potential: AI may change how organizations create business value. Pioneers (91%) and Investigators (90%) are much more likely to report that their organization recognizes how AI affects business value than Experimenters (32%) and Passives (23%). Evans at Airbus reports that “there was no question of value; it was trying to address an in-service issue on one of our aircraft.”

  • Workplace implications: Integrating the capabilities of humans and machines is a looming issue. AI stands to change much of the daily work environment. Pioneers and Investigators better appreciate that the presence of machines in the workplace will change behavior within the organization. Julie Shah, an associate professor of aeronautics at MIT, says, “What people don’t talk about is the integration problem. Even if you can develop the system to do very focused, individual tasks for what people are doing today, as long as you can’t entirely remove the person from the process, you have a new problem that arises — which is coordinating the work of, or even communication between, people and these AI systems. And that interaction problem is still a very difficult problem for us, and it’s currently unsolved.”

  • Industry context: Organizations operate in regulatory and industry contexts; respondents from Experimenter and Passive organizations do not feel that their organization appreciates how AI may affect industry power dynamics.

5. The Need for Data, Training, and Algorithms

Perhaps the most telling difference among the four maturity clusters is in their understanding of the critical interdependence between data and AI algorithms. Compared to Passives, Pioneers are 12 times more likely to understand the process for training algorithms, 10 times more likely to understand the development costs of AI-based products and services, and 8 times more likely to understand the data that’s needed for training AI algorithms. (See Figure 8.)

Figure 8
Organizations have different levels of understanding for AI-related technology and business context.


Most organizations represented in the survey have little understanding of the need to train AI algorithms on their data so they can recognize the sort of problem patterns that Airbus’s AI application revealed. Less than half of respondents said their organization understands the processes required to train algorithms or the data needs of algorithms.

Generating business value from AI is directly connected to effective training of AI algorithms. Many current AI applications start with one or more “naked” algorithms that become intelligent only upon being trained (predominantly on company-specific data). Successful training depends on having well-developed information systems that can pull together relevant training data. Many Pioneers already have robust data and analytics infrastructures along with a broad understanding of what it takes to develop the data for training AI algorithms. Investigators and Experimenters, by contrast, struggle because they have little analytics expertise and keep their data largely in silos, where it is difficult to integrate. While over half of Pioneer organizations invest significantly in data and training, organizations from the other maturity clusters invest substantially less. For example, only one-quarter of Investigators have made significant investments in AI technology, the data required to train AI algorithms, and processes to support that training.
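To make the “naked algorithm” idea concrete, here is a minimal sketch of a generic, untrained model that only becomes useful once it is fit on company-specific historical data. The file name, feature columns, and label are hypothetical placeholders, and the model choice is an assumption rather than anything the report prescribes.

```python
# Sketch: a generic ("naked") algorithm gains its value from training
# on company-specific data. All file and column names below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

history = pd.read_csv("maintenance_history.csv")                # assumed internal dataset
X = history[["sensor_temp", "vibration", "hours_in_service"]]   # assumed feature columns
y = history["failed_within_30_days"]                            # assumed label column

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)  # the "naked" algorithm
model.fit(X_train, y_train)          # training on company data is where the value is created
print(model.score(X_test, y_test))   # quality reflects the training data as much as the algorithm
```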

Misunderstandings About Data for AI


Our research revealed several data-related misconceptions. One misunderstanding is that sophisticated AI algorithms alone can provide valuable business solutions without sufficient data. Jacob Spoelstra, director of data science at Microsoft, observes:

I think there’s still a pretty low maturity level in terms of people’s understanding of what can be done through machine learning. A mistake we often see is that organizations don’t have the historical data required for the algorithms to extract patterns for robust predictions. For example, they’ll bring us in to build a predictive maintenance solution for them, and then we’ll find out that there are very few, if any, recorded failures. They expect AI to predict when there will be a failure, even though there are no examples to learn from.

No amount of algorithmic sophistication will overcome a lack of data. This is particularly relevant as organizations work to use AI to advance the frontiers of their performance.
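A simple way to surface the problem Spoelstra describes is to check, before any modeling, whether the historical data actually contains enough recorded failures to learn from. The sketch below assumes the same hypothetical maintenance dataset and column names as above; the threshold is illustrative, not a rule from the article.

```python
# Sketch: count recorded failures before committing to a predictive maintenance model.
# File name, column name, and threshold are all hypothetical.
import pandas as pd

history = pd.read_csv("maintenance_history.csv")
failure_counts = history["failed_within_30_days"].value_counts()
print(failure_counts)

n_failures = failure_counts.get(1, 0)
if n_failures < 50:  # illustrative cutoff
    print(f"Only {n_failures} recorded failures: too few examples "
          "for an algorithm to extract a robust failure pattern.")
```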

Some forms of data scarcity go unrecognized: Positive results alone may not be enough for training AI. Citrine Informatics, a materials-aware AI platform helping to accelerate product development, uses data from both published experiments (which are biased toward successful experiments) and unpublished experiments (which include failed experiments) through a large network of relationships with research institutions. “Negative data is almost never published, but the corpus of negative results is critical for building an unbiased database,” says Bryce Meredig, Citrine’s cofounder and chief science officer. This approach has allowed Citrine to cut R&D time in half for specific applications. W.L. Gore & Associates, Inc., developer of Gore-Tex waterproof fabric, similarly records both successful and unsuccessful results in its push to innovate; knowing what does not work helps it to know where to explore next.

Sophisticated algorithms can sometimes overcome limited data if the data’s quality is high, but bad data is simply paralyzing. Data collection and preparation are typically the most time-consuming activities in developing an AI-based application, much more so than selecting and tuning a model. As Airbus’s Evans says:
For every new project that we build, there’s an investment in combining the data. There’s an investment sometimes in bringing in new sources to the data platform. But we’re also able to reuse all of the work that we’ve done in the past, because we can manage those business objects effectively. Each and every project becomes faster. The upfront costs, the nonrecurring costs, of development are lower. And we’re able to, with each project, add more value and more business content to that data lake.

Pioneer organizations understand the value of their data infrastructure to fuel AI algorithms.

Additionally, companies sometimes erroneously believe that they already have access to the data they need to exploit AI. Data ownership is a vexing problem for managers across all industries. Some data is proprietary, and the organizations that own it may have little incentive to make it available to others. Other data is fragmented across data sources, requiring consolidation and agreements with multiple other organizations in order to get more complete information for training AI systems. In other cases, ownership of important data may be uncertain or contested. Getting business value from AI may be theoretically possible but pragmatically difficult.

Even if the organization owns the data it needs, fragmentation across multiple systems can hinder the process of training AI algorithms. Agus Sudjianto, executive vice president of corporate model risk at Wells Fargo & Co., puts it this way:
A big component of what we do is dealing with unstructured data, such as text mining, and analyzing enormous quantities of transaction data, looking at patterns. We work on continuously improving our customer experience as well as decision-making in terms of customer prospecting, credit approval, and financial crime detection. In all these fields, there are significant opportunities to apply AI, but in a very large organization, data is often fragmented. This is the core issue of the large corporation — dealing with data strategically.
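The fragmentation Sudjianto describes usually has to be resolved before any training can start: data scattered across source systems must be pulled into one consolidated view per customer. The sketch below is an assumption-laden illustration of that consolidation step; the file names, column names, and join keys are hypothetical placeholders, not any real institution's systems.

```python
# Sketch: consolidating fragmented data sources into one table for model training.
# All source files, columns, and keys are hypothetical.
import pandas as pd

accounts     = pd.read_csv("core_banking_accounts.csv")   # assumed source system
transactions = pd.read_csv("card_transactions.csv")       # assumed source system
complaints   = pd.read_csv("call_center_notes.csv")       # assumed unstructured text source

# Aggregate transactions per customer, then join everything on a shared customer key.
spend = (transactions.groupby("customer_id")["amount"]
         .sum().rename("total_spend").reset_index())

training_table = (accounts
                  .merge(spend, on="customer_id", how="left")
                  .merge(complaints[["customer_id", "note_text"]], on="customer_id", how="left"))

print(training_table.head())   # one consolidated view ready to feed model training
```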

Make Versus Buy

The need to train AI algorithms with appropriate data has wide-ranging implications for the traditional make-versus-buy decision that companies typically face with new technology investments. Generating value from AI is more complex than simply making or buying AI for a business process. Training AI algorithms involves a variety of skills, including understanding how to build algorithms, how to collect and integrate the relevant data for training purposes, and how to supervise the training of the algorithm. “We have to bring in people from different disciplines. And then, of course, we need the machine learning and AI people,” says Sudjianto. “Somebody who can lead that type of team holistically is very important.”

Pioneers rely heavily on developing internal skills through training or hiring. Organizations with less experience and understanding of AI put more emphasis on gaining access to outsourced AI-related skills, but this triggers some problems. (See Figure 9.)
