Shyam's Slide Share Presentations

VIRTUAL LIBRARY "KNOWLEDGE - KORRIDOR"

This article/post is from a third-party website. The views expressed are those of the author. We at Capacity Building & Development may not necessarily subscribe to them completely. The relevance and applicability of the content is limited to certain geographic zones. It is not universal.

TO VIEW MORE CONTENT ON THIS SUBJECT AND OTHER TOPICS, please visit KNOWLEDGE-KORRIDOR, our Virtual Library.

Thursday, June 22, 2017

Presidential Elections 2017


PRESIDENTIAL ELECTION INDIA

We have two excellent candidates this time for the position of the President of India. Either of them will make a very good President.

We already know who is going to win.
...
Even promotion and campaigning are not needed, as the members of the electoral college will anyway vote blindly as per their party's instructions. I really doubt these voters know anything about the candidates' credibility and credentials other than their being Dalits.

Both political parties, the Congress and the BJP, are projecting their being Dalit as their only qualification, which is wrong.

One has been a great lawyer, member of the parliament and a governor.
The other has been a successful career diplomat, a member of the parliament, minister and speaker of the Lok Sabha.

Their caste, which is just incidental, is being promoted and projected as their main virtue. When Pranab Mukherjee was selected as the presidential candidate, no one said he was a Brahmin.
Being a 'Chatur Brahmin', it seems, is a liability to be hidden.

By selecting a Dalit as a candidate, both political parties are giving the impression that they are giving alms to beggars (if not throwing crumbs).

Dalits are not beggars.....

If they are so worried about the Dalits,

Why have they not made a Dalit the Prime Minister?

Are there no Dalits among the elected MPs of both parties who are capable of handling this position?
The concern of both parties for Dalits is opportunistic and self-serving, and it is for display only.

It is time the people of India created a third political dispensation that can treat Dalits as normal human beings and create a situation where Brahmins do not have to hide their caste and Dalits do not have to display theirs,

or a political thought process that doesn't promote caste labels.

CASTE PROFILING IS THE WORST KIND OF CORRUPTION ANY POLITICAL PARTY CAN INDULGE IN....

HOW CAN WE CALL THIS PARTY CORRUPTION-FREE...

Quite naturally, your views are needed to keep this discussion going.


Wednesday, June 21, 2017

China in quantum breakthrough as 'unhackable' experimental satellite sends first message. 06-22



  • Scientists in China used 'quantum satellite' to send entangled photons 1,200 km
  • The satellite produces entangled photon pairs which form an encryption key
  • These photons will theoretically remain linked over great distances
  • This means that any attempts to listen in will be detected on the other side. 
In a major breakthrough for quantum teleportation, scientists in China have successfully transmitted entangled photons farther than ever before, achieving a distance of more than 1,200 km (745 miles) between suborbital space and Earth.

Entangled photons theoretically maintain their link across any distance, and have potential to revolutionize secure communications – but, scientists have previously only managed to maintain the bond for about 100 km (62 miles).

Using the ‘quantum satellite’ Micius, the scientists were able to communicate with three ground stations in China, each more than 1,000 km (621 miles) apart. 


In quantum physics, entangled particles remain connected so that actions performed on one affect the behaviour of the other, even if they are separated by huge distances, as illustrated in an artist's impression accompanying the original article.

The 1,300-pound satellite is equipped with a laser, which the scientists subjected to a beam splitter.

This gave the beam two distinct polarized states.

One of these beams was then used to transmit entangled particles, and the other used to receive the photons. 

Pairs of entangled photons fired to ground stations can then form a ‘secret key.’
Theoretically, any attempts to breach this type of communication would be easily detectable. 
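
The detectability claim can be illustrated with a toy, purely classical simulation of an intercept-resend attack. This is a conceptual sketch only, not real quantum mechanics and not code from the research: when an eavesdropper measures and re-sends the photons, roughly a quarter of the matching-basis bits no longer agree, so the two parties can spot the intrusion by comparing a sample of their key.

```python
import random

def measure_pair(eavesdrop: bool):
    # Entangled photons give perfectly correlated outcomes when Alice and
    # Bob happen to measure in the same basis (simplified here to "X" or "Z").
    alice_basis = random.choice("XZ")
    bob_basis = random.choice("XZ")
    bit = random.randint(0, 1)              # shared outcome if undisturbed
    alice_bit, bob_bit = bit, bit
    if eavesdrop:
        # Intercept-resend attack: Eve measures in a random basis and re-sends.
        # Whenever she guesses the wrong basis, Bob's result becomes random.
        if random.choice("XZ") != bob_basis:
            bob_bit = random.randint(0, 1)
    return alice_basis, bob_basis, alice_bit, bob_bit

def error_rate(eavesdrop: bool, trials: int = 100_000) -> float:
    errors = matches = 0
    for _ in range(trials):
        a_basis, b_basis, a_bit, b_bit = measure_pair(eavesdrop)
        if a_basis == b_basis:               # keep only matching-basis rounds
            matches += 1
            errors += a_bit != b_bit
    return errors / matches

print(f"error rate without eavesdropper: {error_rate(False):.2f}")  # ~0.00
print(f"error rate with eavesdropper:    {error_rate(True):.2f}")   # ~0.25
```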

The satellite launched from the Jiuquan Satellite Launch Center last year, and the new findings mark a promising step forward in the two-year mission, which could be followed by a fleet of other satellites if all goes well, according to Nature.
To overcome the complications of long-distance quantum entanglement, scientists often break the line of transmission up, creating smaller segments that can then repeatedly swap, purify, and store the information along the optical fiber, according to the American Association for the Advancement of Science. 

The researchers sought to prove that particles can remain entangled across great distances – in this case, nearly 750 miles.

Earlier efforts to demonstrate quantum communication have shown this can be done up to just over 180 miles, and scientists hope that transmitting the photons through space will push this even farther.

When travelling through air and optical fibres, photons get scattered or absorbed, Nature explains, posing challenges to the preservation of the fragile quantum state.
But, photons can travel more smoothly through space.

Achieving quantum communication at such distances would enable the creation of secure worldwide communications networks, allowing two parties to communicate using a shared encryption key.

In quantum physics, entangled particles remain connected so that actions performed on one affect the behaviour of the other, even if they are separated by huge distances.

So, if someone were to attempt to listen in on one end, the disruption would be detectable on the other. 

Over the course of the two-year mission, the researchers in China will conduct a Bell test to prove the existence of entanglement at such a great distance.

And, they will attempt to ‘teleport’ quantum states, according to Nature, meaning the quantum state of the photon will be rebuilt in a new location.

Researchers from Canada, Japan, Italy, and Singapore have also revealed plans to conduct quantum experiments in space, including one proposed aboard the International Space Station.

This experiment would attempt to create a reliable and efficient means for teleportation.

By achieving quantum teleportation, the researchers say they could create a telescope with an enormous resolution.

‘You could not just see planets,’ said Paul Kwiat, a physicist at the University of Illinois at Urbana–Champaign involved with the NASA project, ‘but in principle read licence plates on Jupiter’s moons.’

This working heart tissue is made from spinach 06-21




Researchers from the Worcester Polytechnic Institute (WPI) have transformed a spinach leaf into functional heart tissue. The team’s goal was to recreate human organ tissue down to the fragile vascular networks of blood vessels it can’t survive without. Scientists had previously attempted to 3D print intricate vascular networks without success. This breakthrough could mean that the delicate vascular systems of plants are the key.


To create the heart tissue, the scientists at WPI revealed the leaf’s cellulose frame by stripping away the plant cells. Then, they “seeded” the frame with human cells, causing tissue growth on the frame. Finally, they were able to pump microbeads and fluids through the veins to illustrate the functioning concept.

Repairing Damage, Creating Replacements


Although other scientists have been able to create small-scale artificial samples of human tissue, those samples required integration with existing blood vessels. The large-scale creation of working tissue infused with the vascular vessels critical to tissue health had proven impossible.


Because the technique could help people grow layers of stronger, healthier heart muscle, the team suggests that it could eventually be used to treat heart attack patients or others whose hearts have difficulty contracting. The researchers have also experimented with parsley, peanut hairy roots, and sweet wormwood as they believe the technique could make use of different kinds of plants to repair other types of tissues. For example, wood cellulose frames could one day help us repair human bones.


“We have a lot more work to do, but so far this is very promising,” Glenn Gaudette, a professor of biomedical engineering at WPI, told The Telegraph. “Adapting abundant plants that farmers have been cultivating for thousands of years for use in tissue engineering could solve a host of problems limiting the field.”



How HIV-1 puts itself to sleep 06-21

Read about the antisense ASP RNA acting as viral latency factor.


Image credit: Shyam's Imagination Library

Upon infection of a new cell, the HIV-1 genome integrates into the genome of the host cell, and in this form HIV-1 is known as a provirus. Under proper cellular conditions, the HIV-1 provirus produces the transactivator Tat that drives efficient expression of the viral genome, leading to the production of new viral particles.

Alternatively, the provirus remains silent in a state known as latency. In our study, we demonstrated that HIV-1 encodes an antisense transcript (ASP) that recruits the cellular Polycomb Repressor Complex 2 (PRC2) to the proviral 5’LTR. PRC2 promotes nucleosome assembly at the 5’LTR, leading to transcriptional silencing and proviral latency.

While active regulation of proviral expression by Tat has long been known, latency was thought to be a passive event caused primarily by the absence of key cellular and viral transcription factors. Our study demonstrated that – on the contrary – HIV-1 also regulates the establishment and maintenance of latency through ASP, and therefore it controls all aspects of its destiny.

The impetus for this study came about – as is often the case – quite serendipitously. Our lab became interested in the presence of antisense transcription in human retroviruses, HTLV-1 and HIV-1 – a research area that was relatively unexplored. There was some evidence in the literature that these antisense transcripts play a role in viral expression, but the mechanism was yet to be described. During an informal discussion, a friend and colleague – Dr. Rosa Bernardi – brought to our attention that many cellular antisense transcripts suppress the expression of their cognate sense transcript by tethering chromatin-modifying protein complexes to their promoter regions, and by inducing nucleosome formation and transcriptional silencing.

This inspired us to use RNA immunoprecipitation (RIP) assays to test whether the HIV-1 antisense RNA (ASP) interacts with members of the PRC2 complex. However, our initial efforts were repeatedly unsuccessful. Discouraged by these negative results, we decided to focus on other projects in the lab. After a few months we revisited these experiments, and we realized that there was a problem in the design of the RT-PCR portion of the RIP assay. After making the necessary modifications to the RT-PCR assay, we were finally able to demonstrate specific interaction between the ASP RNA and two components of the PRC2 complex. This important result encouraged us to further pursue this line of studies.

The “eureka” moment came shortly after that when functional studies showed that over-expression of the ASP RNA in vivo suppresses acute viral replication and promotes the establishment and maintenance of latency.

Our current efforts are focused on defining the structural and functional determinants of the ASP RNA. Since this transcript contains an open reading frame, we are also investigating the expression and function of the ASP protein.


Figure legend

The HIV-1 ASP RNA acts as a viral latency gene: it interacts with the cellular Polycomb Repressor Complex 2 (PRC2), and recruits it to the HIV-1 5’LTR. There, PRC2 catalyzes trimethylation (Me3) of lysine 27 (K27) on histone H3. The deposition of this repressing epigenetic mark leads to the assembly of the nucleosome Nuc-1, turning off transcription from the HIV-1 5’LTR, and promoting viral latency.

View at the original source

Wednesday, June 14, 2017

Mastering the Art of Communication: What Big Data Can Tell Us 06-15





Image credit: Shyam's Imagination Library

There’s plenty of anecdotal evidence about what makes a good communicator, but Noah Zandan is more interested in the science behind it. That’s why he co-founded Quantified Communications, a firm that helps business leaders remake and refine their messages.

Zandan spoke recently to Cade Massey, Wharton practice professor of operations, information and decisions and co-director of the Wharton People Analytics Initiative, about how he applies research to the art of communication. Massey is co-host of the Wharton Moneyball show on Wharton Business Radio on SiriusXM channel 111, and this interview was part of a special broadcast on SiriusXM for the Wharton People Analytics Conference.

An edited transcript of the conversation appears below.

Cade Massey: Let’s understand what Quantified Communications is and how you got going in that direction.

Noah Zandan: The idea behind it is that communications has always been considered an art. How people talk to each other, how executives communicate, how we relate to other people, how we connect to the world around us, has always kind of been this art. Academics have been studying it for years, which is really exciting, and what we are trying to do at Quantified Communications is bring some of that research and apply it to a business environment. We work with corporations and organizations to really help their leadership, help the people moving the message of the business to deliver that message, and do it in a way where they are using objective data to know whether or not it works.

Massey: What is your background?

Zandan: I studied economics in college. Econometrics. I showed up on Wall Street, bright-eyed, and realized pretty quickly as I got further and further into Wall Street that we were modeling everything — obviously looking at risk and trying to make $1 billion decisions — off of data. But there was a missing factor from our model, and that was the people: The way that the executives communicate, the way they told the story, how confident they were was really one of the critical success factors on Wall Street. But there was no data behind it, and I’m an econ guy. [I thought,] “This isn’t rational.” I started looking and found some amazing research. Folks like James Pennebaker at the University of Texas, people who have been measuring this stuff for years, but nobody in the business environment knew this existed.

“We thought visionaries would really be complex thinkers, but in fact what they’re really concerned with is making things simple and breaking it down into steps.”

And so from there, we started. Our co-founder [Peter Zandan] has a Ph.D. in evaluation research and started finding all of this great stuff and then built a big database and a big platform to measure it. All of the big presidential speeches, all of the TED talks, media interviews — you name it, we’ve tried to go find it.

Massey: What are you doing with it?

Zandan: Well first, you have to be able to process it. So you’ve got to tag it; you’ve got to organize it; you’ve got to make sure that it’s useful. The New York Times calls it being a data janitor. It is a huge part of the job for a data scientist. We spent a long time doing that, and then we had to go understand it. Was it successful or not? Did it accomplish its purpose? Did the audience react to it in the appropriate way? Go out and ask a bunch of people what they think. Do you trust this person if they did this? Do you believe them? Do you want to engage with them more? And then measure the factors of the communication. What types of words did they use? Were they making eye contact? What were they doing with their hands? Then you can understand the factors that correlate with success.

Massey: How did you decide what factors to look for?

Zandan: Again, academic research. Folks in academics have been doing this for years. One of the best guys out there is Albert Mehrabian of UCLA. He created this model called the Three Vs — verbal, voice and visual. It breaks down someone’s communication into some of the important elements, and he did a bunch of research as to how those are correlated with whether or not I like you. You go talk to communications folks and researchers, and they understand eye contact, facial movement, features.

There are factors behind all this stuff.

Massey: As you said, it’s historically been an art. What is the disparity between what you’re bringing to this conversation versus what’s been in the conversation before? When you come to these academics with this unbelievable database and say, “I’ve run some tests of these ideas,” are they saying, “This is different than anything we’ve seen before?” Or is this just a bigger version of what they’ve done?

Zandan: I would probably say it’s just a different use case. The academics are doing it from a great research standard, really thinking about how to apply it for research validation. What we’re doing is trying to bring it in a more applied way — looking at how leaders can communicate, really thinking carefully about what their purposes and audience types are. And then we can also go a little bit further, in that we can build predictive models and just run them over and over, given that we’re a business and not held to kind of the research standards.

Massey: One question you’ve looked at is, what do visionary communicators or visionary leaders do? Can you give us a recap of your findings?

“If you think about Elon Musk talking about Tesla, he always talks about what it’s like to drive in the car, what it’s like to look at the car, how the doors work.”

Zandan: We looked at hundreds of transcripts of visionary leaders. It was just a linguistic analysis. We didn’t look at their faces or voices or things like that. What we identified was what separates these people who we consider to be visionaries, everybody from Amelia Earhart to FDR to Elon Musk to TED Talks on innovation. What separates them from the average communicator? What distinguishes them from a factor model perspective?

There were three main findings that we had. One: We thought visionaries would talk a lot about the future, but in fact they talked about the present. Two: We thought visionaries would really be complex thinkers, but in fact what they’re really concerned with is making things simple and breaking it down into steps. Three: We thought that visionaries would be really concerned with their own vision, but in fact they’re more concerned with getting their vision into the minds of their audience.

Massey: What does that mean?

Zandan: That means using second-person pronouns and using a lot of perceptual language, talking about look, touch and feel. It really brings the audience into the experience with you. So if you think about Elon Musk talking about Tesla, he always talks about what it’s like to drive in the car, what it’s like to look at the car, how the doors work. It’s really less about the future of energy and transport as some kind of theoretical vision; he brings it down to earth and makes it tangible.
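
For readers curious what measuring this kind of linguistic feature might look like in practice, here is a rough, hypothetical sketch of two of the signals mentioned above, second-person pronouns and perceptual words. The word lists and scoring are invented for illustration; this is not Quantified Communications' methodology.

```python
import re

SECOND_PERSON = {"you", "your", "yours", "yourself"}      # hypothetical word list
PERCEPTUAL = {"see", "look", "touch", "feel", "hear", "taste"}  # hypothetical word list

def audience_focus_features(transcript: str) -> dict:
    """Return the rate of second-person pronouns and perceptual words in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    total = len(words) or 1
    return {
        "second_person_rate": sum(w in SECOND_PERSON for w in words) / total,
        "perceptual_rate": sum(w in PERCEPTUAL for w in words) / total,
    }

print(audience_focus_features(
    "When you step into the car, you feel the acceleration and see the road ahead."
))
```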

Massey: One thing that jumps out to me about that research is the present tense versus the future, especially when you’re talking about visionary leaders. You would have expected that to go the other way. Why do you think they are so much more effective?

Zandan: We saw it highly correlated with credibility. I think that people think if you’re talking so much about the future, then it’s going to be less credible. People aren’t going to believe you as much. So, you really want to [apply it to] today.

“The data can lead you down a path of replication. We don’t want to do that, because so much of what you communicate is your personality.”

Massey: How do you apply this research for your clients?

Zandan: What we often get asked to do is help people improve their communications, use the technology, use the analytics, allow them to make data-driven decisions on how to better impact their audiences. The No. 1 question Oprah Winfrey gets when the lights go off after her interviews with all of these amazing world leaders and celebrities is, “How did I do?” That’s what these people want to know. We can answer that not in a way that their team is going to — which is, “Hey, boss, you did great.” We can actually give them a lot of truth in the data — talk about how they are perceived, talk about how they can get better, and give them a very prescriptive plan to better impact their audiences and achieve their purposes.

Massey: When you work with people in that role, what data do you collect?

Zandan: We look at text, audio or video, which we can take in. We’ll break those down into the elements. So text is what you say, the words. For audio, we’ll look at the words as well as your voice. And then for video, which is our favorite, you’ve got the face and the gestures. You break down all of those into different behavioral patterns, you measure all of them, you benchmark them against what they would consider to be a measure of success. That could be themselves, that could be someone who is best in class, that could be a competitor they aspire to be. And then you could give them a road map for how to achieve that. We’ll give them some guidance on that, but a lot of times they know. The White House came to us and said, “We want to replicate one of Obama’s best speeches. We know which one was our favorite, and we want to understand the different factors behind that.”
Massey: Can you speak about what you found?

Zandan: No. But the speech was a eulogy in Arizona, which they considered to be one of the best ones he has given during his tenure.

Massey: Let’s put it this way, did you find anything interesting when you looked at that kind of speech from that level? That’s really championship-level rhetoric.

Zandan: Of course. You uncover stuff, but what’s worth saying here is that there is also the other side of the equation, which is authenticity. I am not President Obama. I do not speak like President Obama. If I did, it would seem very strange to an audience. Everybody has their authentic tone. We work really hard to measure authenticity. It’s one of the hardest problems.

Massey: Being able to do something like that would be a real advance.

Zandan: It would. And there is obviously authenticity to the way you deliver the message, and there are words that are considered authentic. But what we’re careful on is we don’t want to push people to be something that they’re not. The data can lead you down a path of replication. We don’t want to do that, because so much of what you communicate is your personality.

View at the original source

Choose staff wisely when planning a digital transformation 06-13





Plenty of large businesses are, justifiably, embracing innovation of all kinds. But, cautions HPE's Craig Partridge, consider whether IT staff from old-school backgrounds (and their "think conservatively" cultural values) are the right people for a successful digital transition.

Every business wants to enhance what it does to make its products more valuable to customers (and thus more profitable to the company) and work more efficiently (that is, save money). So just about every enterprise organization is motivated to augment or create a digital strategy.

It’s one thing for a business to say, “Let’s exploit new technologies to gain competitive advantage.” Reaching that goal—or at least avoiding being left behind—takes a strategic plan, a dose of shiny new technology, and most important, attention to the human beings who create and implement the plan.

In a Hewlett Packard Enterprise Discover presentation, “Thriving in the Age of Digital Disruption,” HPE’s Craig Partridge, worldwide director of data center platforms consulting, shared real-life lessons of digital transformation based on customer use cases and successful projects. In the one-hour, high-speed session, Partridge detailed a blueprint highlighting the elements needed for success.

And regardless of the many technologies and business processes that may be involved, there’s one key lesson to take away from the exercise: Choose the right people for the job, and value your staff for their diverse abilities. Doing so creates tension, Partridge said. But that isn’t a bad thing.

Digital disruption is about data

Disruption might take the form of a car manufacturer that wants to build out a connected car. It may be a bank aiming to give customers a good mobile digital experience. Perhaps it’s a sports stadium that recognizes that attending a game now includes mobility and Wi-Fi, not just a hot dog. Or the Rio airport, which during the Olympics had to digitize its services to accommodate an extra 2 million passengers.

Most of these projects are powered by emerging technologies like the Internet of Things, cloud, machine learning, and data analytics.

Technologically speaking, the “edge” is about data: how you collect it, how you analyze it, and how you use it for competitive advantage. Each of us generates a huge amount of unstructured data, especially with our mobile devices. Nowadays, the "machine edge" (smart sensors and machine-to-machine communication) is adding even more data. “Going forward, I see people combining those two data sets to create a good experience,” Partridge said.

In the past, cloud computing discussions have focused on core-out issues: What should IT move out of the data center? Today, the conversation is about what data to bring in and how best to do so. That encourages a different viewpoint. “Hybrid IT is what powers that new experience at the edge,” Partridge said. And IT has to change the operating model to work in that new way.  

As organizations put together software-defined agendas to accelerate how and where they deliver services, the first step is recognizing that not every traditional business application needs to be changed or disrupted. Some big transactional systems don't need to be mobile. Other systems need to be bulldozed and replaced.

The drive to improve digital experiences is also forcing organizations to work with partners in the value chain (especially with API-based tools). It means adopting concepts like continuous integration and the DevOps agenda, cloud management tool sets, and open cloud stacks, all with quick feedback and quick iteration. This kind of thinking does not come naturally to many large IT shops.
Yet “new” often translates into “We haven’t figured this out yet.” (If it were otherwise, it wouldn’t be much of a disruption, right?) HPE has created blueprints for the business process to help organizations succeed—after all, you’d rather learn from others’ mistakes than your own, right?

Foster the people

“The No. 1 reason projects succeed or fail is people,” said Partridge, echoing sentiments long understood by developers and IT professionals, if not their managers. People processes, politics, and governance have a huge effect on project outcomes, even when you don’t think you are dealing with a so-called peopleware problem.

“Brokering the supply chain sounds like a technical issue,” Partridge noted. “What people miss is that it requires an organization shift.” A business’s CIO now has to place demand appropriately across the supply chain, which sometimes is in other parts of the organization.

Less obvious to many enterprise development teams are cultural issues. They spent years creating an organization based on repeatable processes and infrastructure, such as reliability, approval-based plans, and a waterfall development model that’s measured in months.

That predictability and resilience are strengths. “These are big deals to IT,” Partridge said. “We can’t lose that DNA. These systems of record need to maintain that integrity.”

But the new systems that are part of the digital disruption move a lot faster. Innovation-optimized projects emphasize flexibility, working on small teams that are business-centric and close to the customer, with short-term goals and a willingness to embrace uncertainty. “That technical documentation is six months old, so it’s out of date,” one DevOps consultant said to me during the conference, just in passing.

The development process for imagining disruption requires a different mind-set. Central IT pros can generally learn new tech, but learning new values and mind-sets can be much more challenging. “We can be retrained, but we have habits ingrained from years of work,” Partridge said.
For example, when the automobile manufacturer launched its digital transformation project, it initially staffed the team from its central IT department, whose "cadence didn't lend itself to rapid iterative development,” Partridge said.

The company ended up starting over with a new IT group that operated in parallel with the existing central IT team. Although that might seem like a recipe for bickering and dysfunction, Partridge characterized the relationship as one of “creative tension,” because the friction led both teams to come up with ideas that helped one another. 

Digital transformation: Lessons for leaders

  • “New” often translates into “We haven’t figured this out yet.”
  • No matter how brilliant the idea is, success depends on putting the right personnel in place and supporting them properly. 
  • Value existing systems, and recognize what doesn’t benefit from changing. 

Tuesday, June 13, 2017

A Favorite Subject Returns to Schools: Recess. 06-14



After playtime was dropped amid focus on academic performance, educators now take playground breaks seriously




Kindergarten students take to the playground at Oak Point Elementary, in Oak Point, Texas, where recess went from 30 minutes a day to one hour a day. Photo: Brandon Thibodeaux for The Wall Street Journal

Three kindergarten girls looked close to taking a spill as they sat on the high back of a bench on a playground at Oak Point Elementary. Feet away, several administrators looked on, not making a move to stop them, because at this school outside of Dallas, playtime is revered.

“As long as they’re safe, we allow kids to be kids,” said Daniel Gallagher, assistant superintendent for educational services in the Little Elm Independent School District.

That’s the mantra in this small school district, where schoolchildren are transitioning from one daily 30-minute recess to one hour a day, taken in four 15-minute increments. School officials say children are better focused with more unstructured breaks and do better in school.

School districts throughout the country are reassessing recess—with some bringing back the pastime or expanding it, citing academic and health benefits.

On Tuesday, the Minneapolis school board is expected to consider moving from a recommended 20 minutes of daily recess to a required 30 minutes daily. And in Florida, parents are hoping the governor will soon sign an education bill that includes a required 20 minutes of daily recess for elementary-school students in traditional public schools.

In the past year, the state of Rhode Island and school districts in Dallas and Portland, the Jefferson Parish Public School System in Louisiana, and the Orange County and Manatee County school districts in Florida are among those to have implemented a daily-recess requirement.

About 21% of school districts required recess daily for elementary-school students in the 2013-2014 school year, according to the latest study from the Centers for Disease Control and Prevention and Bridging the Gap Research Program. That’s an increase from 16% of school districts with the requirement in 2006-2007.
It’s a change after years of recess taking a back seat to testable core subjects like math and reading, with a noticeable decline in playtime after the rollout of the now-defunct 2002 No Child Left Behind education law that put more focus on holding schools accountable for academic performance.

The Center on Education Policy, a national research group, found in a 2007 report that 58% of school districts increased time spent teaching English language arts, while 45% increased math time, after the 2002 education law. Meanwhile, 20% of school districts decreased the amount of time spent on recess, by an average of 50 minutes a week. (The CDC recommends at least 20 minutes of daily recess for elementary-school students.)

Supporters of daily recess often point to a 2013 study by the American Academy of Pediatrics, which says in part that “recess serves as a necessary break from the rigors of concentrated, academic challenges in the classroom.” The study also found that “safe and well-supervised recess offers cognitive, social, emotional, and physical benefits that may not be fully appreciated when a decision is made to diminish it.”

“Recess resets their brain,” said Lowell Strike, superintendent in the Little Elm district, where children have recess as long as the wind chill is at least 13 degrees and the heat index is no higher than 103.

But there has been some pushback. Some school administrators and lawmakers have spoken against state bills to mandate recess, saying it takes away flexibility from schools. This year, the Arizona School Boards Association opposed a bill in the state that would have required 50 minutes of daily recess in elementary schools.

“We are absolutely not against school recess,” said Chris Kotterman, the association’s director of governmental relations. “But when it comes to how the school day should be structured, it should be left up to the local school board. We generally try to keep state policy mandates to a minimum.”
Parents in areas around the country are advocating for daily recess.

Angela Browning is among “recess moms” in Florida pushing for a statewide recess mandate. She said the group has successfully pushed for daily recess in a few Florida school districts, including Orange County Public Schools, where her three children attend school. Ms. Browning said she got active several years ago upon finding out from her children that their school didn’t offer daily recess.

“I was stunned,” she said. “Children learn on the playground—leadership skills, social skills, negotiating skills. With all the testing, recess, along the way, got squeezed out.”

Orange County Public Schools started requiring 20 minutes of recess daily for students in kindergarten through fifth grade in the 2016-17 school year.

Recess requirements are usually decided at the campus level, and to a lesser extent at the district level. Studies have found that a majority of schools offer some type of recess, but not always regularly nor with set timespans—and sometimes in conjunction with school lunch. Those who linger over lunch get less playtime.

The CDC advises against taking recess in conjunction with lunch breaks and physical-education classes, saying that it should be unstructured and on a regular schedule.

In Little Elm, teacher Nicole Beal said she has seen firsthand the benefits of her kindergarten students having recess breaks during the school day.

“Their reading is better, they’re more focused,” she said. “Getting outside, it’s a nice break.”
When the children were asked who likes the extra playtime, every hand shot up.

View at the original source

Brain Architecture: Scientists Discover 11 Dimensional Structures That Could Help Us Understand How the Brain Works 06-14




Scientists studying the brain have discovered that the organ operates on up to 11 different dimensions, creating multiverse-like structures that are “a world we had never imagined.”

By using an advanced mathematical system, researchers were able to uncover architectural structures that appear when the brain has to process information, before they disintegrate into nothing.

Their findings, published in the journal Frontiers in Computational Neuroscience, reveal the hugely complicated processes involved in the creation of neural structures, potentially helping explain why the brain is so difficult to understand and tying together its structure with its function.

The team, led by scientists at the EPFL, Switzerland, were carrying out research as part of the Blue Brain Project—an initiative to create a biologically detailed reconstruction of the human brain. Working initially on rodent brains, the team used supercomputer simulations to study the complex interactions within different regions.

In the latest study, researchers honed in on the neural network structures within the brain using algebraic topology—a system used to describe networks with constantly changing spaces and structures. This is the first time this branch of math has been applied to neuroscience.

"Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures—the trees in the forest—and see the empty spaces—the clearings—all at the same time," study author Kathryn Hess said in a statement.

In the study, researchers carried out multiple tests on virtual brain tissue to find brain structures that would never appear just by chance. They then carried out the same experiments on real brain tissue to confirm their virtual findings.

They discovered that when they presented the virtual tissue with stimulus, groups of neurons form a clique. Each neuron connects to every other neuron in a very specific way to produce a precise geometric object. The more neurons in a clique, the higher the dimensions.

In some cases, researchers discovered cliques with up to 11 different dimensions.
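
The clique-to-dimension correspondence is easy to illustrate in code. The toy sketch below uses an ordinary undirected random graph, not the Blue Brain Project's directed connectivity data or its algebraic-topology pipeline: a group of n all-to-all connected neurons is treated as an (n-1)-dimensional object.

```python
# Toy example: count maximal cliques in a random "connectivity" graph and
# report their dimensions (a clique of n nodes ~ an (n-1)-dimensional simplex).
from collections import Counter

import networkx as nx

g = nx.erdos_renyi_graph(n=60, p=0.3, seed=0)   # hypothetical neuron graph

dimension_counts = Counter(len(clique) - 1 for clique in nx.find_cliques(g))

for dim in sorted(dimension_counts):
    print(f"dimension {dim}: {dimension_counts[dim]} maximal cliques")
```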

The structures assembled formed enclosures for high-dimensional holes that the team have dubbed cavities. Once the brain has processed the information, the clique and cavity disappears.

(Figure: the left shows a digital copy of a part of the neocortex, the most evolved part of the brain; on the right is a representation of the structures with different dimensions. The black hole in the middle symbolizes a complex of multi-dimensional spaces, or cavities.)

"The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner," said one of the researchers, Ran Levi.

"It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates," he said.

Henry Markram, director of Blue Brain Project, said the findings could help explain why the brain is so hard to understand. "The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly,” he said.

"We found a world that we had never imagined. There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions."

The findings indicate the brain processes stimuli by creating these complex cliques and cavities, so the next step will be to find out whether or not our ability to perform complicated tasks requires the creation of these multi-dimensional structures.

In an email interview with Newsweek, Hess says the discovery brings us closer to understanding “one of the fundamental mysteries of neuroscience: the link between the structure of the brain and how it processes information.”

By using algebraic topology, she says, the team was able to discover “the highly organized structure hidden in the seemingly chaotic firing patterns of neurons, a structure which was invisible until we looked through this particular mathematical filter.”

Hess says the findings suggest that when we examine brain activity with low-dimensional representations, we only get a shadow of the real activity taking place. This means we can see some information, but not the full picture. “So, in a sense our discoveries may explain why it has been so hard to understand the relation between brain structure and function,” she explains.

“The stereotypical response pattern that we discovered indicates that the circuit always responds to stimuli by constructing a sequence of geometrical representations starting in low dimensions and adding progressively higher dimensions, until the build-up suddenly stops and then collapses: a mathematical signature for reactions to stimuli.

“In future work we intend to study the role of plasticity—the strengthening and weakening of connections in response to stimuli—with the tools of algebraic topology. Plasticity is fundamental to the mysterious process of learning, and we hope that we will be able to provide new insight into this phenomenon,” she added.



The waning days of Indian IT workers being paid to do nothing 06-13





The bench, the Indian IT industry’s resource bank, is thinning.

For long considered a key strength of India’s tech majors, the bench is losing its relevance even as just-in-time contract hiring is gaining popularity. More companies are hiring techies on relatively short, fixed-term contracts, rather than employing them full-time even when there are no projects.
Automation, creeping unionism, and a global closing of borders for techies have in recent times accelerated this process. So much so that the average IT company’s bench strength has progressively fallen from between 8% and 10% of the billable employees to between 4% and 5% now, human resources (HR) experts believe.

So what exactly is happening?

What is the bench?

In the IT industry, the bench refers to the section of a company’s employees that isn’t working on any project for the time being but remains on the rolls and receives regular salary.

“The best way to answer this question is with an analogy…In football or cricket there are only 11 players allowed on the pitch/ground. So there are 5/6 players out as subs ready to come on in case of injuries. These players usually sit (or at least used to) on a bench and hence the expression ‘Sitting on the bench,'” Bhavish Parkala, a developer with General Electric, posted on Quora.

It is, indeed, a bank of personnel. (Interestingly, the term “bank” itself comes from the Italian banchiere, the foreign exchange dealers of 14th century Italy. They were called so because they “did their business literally seated on ‘benches’ behind tables in the street,” Niall Ferguson writes in his book The Ascent of Money.)

This bank could consist of fresh graduates or senior techies. A person could spend anywhere between a couple of weeks to up to six months on the bench.

“At Infosys, we have a small percentage of employees on bench at all times…This is a planned period where the employee gets time to learn as well as focus on some internal initiatives,” Richard Lobo, EVP & head of HR at Infosys, told Quartz in an email.

This is largely an Indian phenomenon. After all, Indians tend to prefer secure full-time jobs over contract positions. In other places, people don’t hesitate to take up short-term projects, says Alka Dhingra, assistant general manager at staffing firm TeamLease Services. So, companies don’t have to hire full-time employees and then bench them when there are no projects.

The bench, like a hologram, looks different from every angle.

For IT firms, it is often an important factor their clients consider. A strong bench is an indication that the firm has ready resources and can begin execution immediately. But having too many people on the bench doesn’t reflect well either. It would mean employees are underutilised, and this would impact the profitability of the firm. Firms rarely speak about their bench size and are always working towards high utilisation rates.

As for employees, some say it is fun initially—you get paid to do nothing and have the opportunity to learn new skills, prepare for projects or for competitive exams. But fatigue sets in soon enough. “I spent around 16 months (on the) bench. Initially it was like getting paid to enjoy the life…It takes some time to figure out the company is not responsible for your growth,” Indira Raghavan, a techie, wrote on Quora last year. In recent times, with layoff fears lingering, not having a project to work on could be reason enough to lose your job.

Meanwhile, companies have been actively working at improving utilisation rates. “Earlier, 30% of employees were on the bench and the utilisation ratio was about 70%. This has now gone up to 80-81%,” Kris Lakshmikanth, the managing director at recruitment firm Head Hunters India, said.
Vishal Sikka, CEO & MD of Infosys, India’s second-largest IT services company, said last year that he was pushing automation to reduce bench strength. “Despite being here (at Infosys) for 18 months, I can’t still find an answer around the idea of a bench,” Sikka had told the Business Standard newspaper (paywall). Zero Bench is an initiative Infosys launched in 2015 to help employees find short-term assignments. Under this, employees who have tasks to perform can post their requirements based on their projects and benched employees can sign up to help finish the task. According to data from Infosys’s annual reports, the company’s utilisation rate (which refers to the part of the workforce actively working on projects and not on the bench) for the financial year 2012 (pdf) was 76.6% and this went up to 81.7% in the financial year 2017 (pdf).
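
The arithmetic behind those utilisation figures is straightforward. The sketch below uses made-up headcounts; definitions of "billable" and "bench" vary between firms, so this is only illustrative.

```python
# Illustrative only: hypothetical headcounts, not any company's actual numbers.
def utilisation_rate(deployed_on_projects: int, total_billable: int) -> float:
    """Share of billable employees currently working on client projects."""
    return deployed_on_projects / total_billable

total_billable = 10_000       # hypothetical billable workforce
deployed = 8_170              # hypothetical staff currently on projects

util = utilisation_rate(deployed, total_billable)
print(f"utilisation: {util:.1%}, bench: {1 - util:.1%}")  # 81.7%, 18.3%
```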

Infosys isn’t alone. “All these (IT) companies have a resource management team which links bench people and internal teams to different projects and mobilises the bench people to different projects,” Dhingra of Teamlease said.

The beginning of the end

Now, IT companies are increasingly seeking “just-in-time” employees on contract, and industry experts see this as a trend that will replace the bench.

The idea behind having a bench was to ensure that employees are available to start working on projects as soon as the customer assigns a task to the IT firm. Instead, now they are seeking techies who can come on board in quick time only for specific projects, after which they either move on to other jobs, join the same company’s next project or, at times, get absorbed into the company as a full-time employee.

HR experts believe contract employees are a better alternative to the bench. They are as effective in terms of deployment, they help cut down costs, the company can pick professionals with better skills, and, finally, they help companies avoid mass layoffs and subsequent protests.

In their latest annual reports, IT firms Wipro and Cognizant say that “profitability could suffer” if “favorable utilisation rates” are not maintained.

“All IT companies are under cost pressures now,” Lakshmikanth says. So, hiring techies only when they’re needed makes sense. Most companies pay contract workers more than they would other full-time employees—about 20% more, according to Lakshmikanth—but this would still be economical compared to having a large bench.

“Contingent hiring would be the way forward across the industry…Typically, contract employees will be the virtual just-in-time bench,” said Thammaiah BN, managing director of recruitment services company Kelly Services India.

While Indian techies still prefer full-time jobs, acceptance for contract-based employment is rising. It is seen as an opportunity to first get one’s foot in and then get absorbed into a company. Then there’s the advantage of getting good brand names on one’s resumes, learning new technologies, and gaining knowledge and experience, Dhingra pointed out.

Besides, it is good money. IT firms pay contract employees more than the regular employees. So, for someone who is benched, between jobs, or just out of college, contract work is a good deal. Of course, there are drawbacks, too. “Recession? Layoff? Contract employees are the worst hit,” says Shashank CG, a software engineer. 

View at the original source

Artificial intelligence: here’s what you need to know to understand how machines learn 06-13


From Jeopardy winners and Go masters to infamous advertising-related racial profiling, it would seem we have entered an era in which artificial intelligence developments are rapidly accelerating. But a fully sentient being whose electronic “brain” can fully engage in complex cognitive tasks using fair moral judgement remains, for now, beyond our capabilities.

Unfortunately, current developments are generating a general fear of what artificial intelligence could become in the future. Its representation in recent pop culture shows how cautious – and pessimistic – we are about the technology. The problem with fear is that it can be crippling and, at times, promote ignorance.

Learning the inner workings of artificial intelligence is an antidote to these worries. And this knowledge can facilitate both responsible and carefree engagement.

The core foundation of artificial intelligence is rooted in machine learning, which is an elegant and widely accessible tool. But to understand what machine learning means, we first need to examine how the pros of its potential outweigh its cons.

Data are the key

Simply put, machine learning refers to teaching computers how to analyse data for solving particular tasks through algorithms. For handwriting recognition, for example, classification algorithms are used to differentiate letters based on someone’s handwriting. Housing data sets, on the other hand, use regression algorithms to estimate in a quantifiable way the selling price of a given property.
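
As a minimal sketch of those two families of algorithms, using scikit-learn and tiny made-up data sets purely for illustration:

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: tell letter "a" from letter "b" using two handwriting features.
X_letters = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y_letters = ["a", "a", "b", "b"]
classifier = LogisticRegression().fit(X_letters, y_letters)
print(classifier.predict([[0.15, 0.85]]))        # -> ['a']

# Regression: estimate a selling price from floor area (m^2) and room count.
X_houses = [[50, 2], [80, 3], [120, 4], [200, 5]]
y_prices = [150_000, 220_000, 310_000, 480_000]
regressor = LinearRegression().fit(X_houses, y_prices)
print(regressor.predict([[100, 3]]))             # -> an estimated price
```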



Machine learning, then, comes down to data. Almost every enterprise generates data in one way or another: think market research, social media, school surveys, automated systems. Machine learning applications try to find hidden patterns and correlations in the chaos of large data sets to develop models that can predict behaviour.

Data have two key elements – samples and features. The former represents individual elements in a group; the latter amounts to characteristics shared by them.

Look at social media as an example: users are samples and their usage can be translated as features. Facebook, for instance, employs different aspects of “liking” activity, which change from user to user, as important features for user-targeted advertising.

Facebook friends can also be used as samples, while their connections to other people act as features, establishing a network where information propagation can be studied.



Outside of social media, automated systems used in industrial processes as monitoring tools use time snapshots of the entire process as samples, and sensor measurements at a particular time as features. This allows the system to detect anomalies in the process in real time.

All these different solutions rely on feeding data to machines and teaching them to reach their own predictions once they have strategically assessed the given information. And this is machine learning.

Human intelligence as a starting point

Any data can be translated into these simple concepts and any machine-learning application, including artificial intelligence, uses these concepts as its building blocks.

Once data are understood, it’s time to decide what to do with this information. One of the most common and intuitive applications of machine learning is classification. The system learns how to put data into different groups based on a reference data set.

This is directly associated with the kinds of decisions we make every day, whether it’s grouping similar products (kitchen goods against beauty products, for instance), or choosing good films to watch based on previous experiences. While these two examples might seem completely disconnected, they rely on an essential assumption of classification: predictions defined as well-established categories.

When picking up a bottle of moisturiser, for example, we use a particular list of features (the shape of the container, for instance, or the smell of the product) to predict – accurately – that it’s a beauty product. A similar strategy is used for picking films by assessing a list of features (the director, for instance, or the actor) to predict whether a film is in one of two categories: good or bad.

By grasping the different relationships between features associated with a group of samples, we can predict whether a film may be worth watching or, better yet, we can create a program to do this for us.

But to be able to manipulate this information, we need to be a data science expert, a master of maths and statistics, with enough programming skills to make Alan Turing and Margaret Hamilton proud, right? Not quite.

We all know enough of our native language to get by in our daily lives, even if only a few of us can venture into linguistics and literature. Maths is similar; it’s around us all the time, so calculating change from buying something or measuring ingredients to follow a recipe is not a burden. In the same way, machine-learning mastery is not a requirement for its conscious and effective use.
Yes, there are extremely well-qualified and expert data scientists out there but, with little effort, anyone can learn its basics and improve the way they see and take advantage of information.

Algorithm your way through it

Going back to our classification algorithm, let’s think of one that mimics the way we make decisions. We are social beings, so how about social interactions? First impressions are important and we all have an internal model that evaluates in the first few minutes of meeting someone whether we like them or not.

Two outcomes are possible: a good or a bad impression. For every person, different characteristics (features) are taken into account (even if unconsciously) based on several encounters in the past (samples). These could be anything from tone of voice to extroversion and overall attitude to politeness.

For every new person we encounter, a model in our heads registers these inputs and establishes a prediction. We can break this modelling down to a set of inputs, weighted by their relevance to the final outcome.

For some people, attractiveness might be very important, whereas for others a good sense of humour or being a dog person says way more. Each person will develop her own model, which depends entirely on her experiences, or her data.

Different data result in different models being trained, with different outcomes. Our brain develops mechanisms that, while not entirely clear to us, establish how these factors will weigh against each other.

What machine learning does is develop rigorous, mathematical ways for machines to calculate those outcomes, particularly in cases where we cannot easily handle the volume of data. Now more than ever, data are vast and everlasting. Having access to a tool that actively uses this data for practical problem solving, such as artificial intelligence, means everyone should and can explore and exploit this. We should do this not only so we can create useful applications, but also to put machine learning and artificial intelligence in a brighter and not so worrisome perspective.
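
A toy version of that weighted first-impression model might look like the sketch below. The feature names and weights are invented; a real machine-learning system would learn the weights from data rather than having them hand-set.

```python
import math

def first_impression(features: dict, weights: dict, bias: float = -1.0) -> float:
    """Weighted sum of inputs squashed into a probability of a 'good' impression."""
    score = bias + sum(weights.get(name, 0.0) * value
                       for name, value in features.items())
    return 1 / (1 + math.exp(-score))   # logistic squashing to the range (0, 1)

# Hypothetical, hand-set weights; a learning algorithm would fit these from samples.
weights = {"politeness": 2.0, "humour": 1.5, "extroversion": 0.5}
new_person = {"politeness": 0.9, "humour": 0.4, "extroversion": 0.2}

print(f"P(good impression) = {first_impression(new_person, weights):.2f}")
```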

There are several resources out there for machine learning although they do require some programming ability. Many popular languages tailored for machine learning are available, from basic tutorials to full courses. It takes nothing more than an afternoon to be able to start venturing into it with palpable results.

All this is not to say that the concept of machines with human-like minds should not concern us. But knowing more about how these minds might work will give us the power to be agents of positive change in a way that can allow us to maintain control over artificial intelligence and not the other way around.

Monday, June 12, 2017

Figuring out superconductors 06-13


Physicists create antiferromagnet that may help develop, monitor key materials.


 
   

From the moment when physicists discovered superconductors — materials that conduct electricity without resistance at extremely low temperatures — they wondered whether they might be able to develop materials that exhibit the same properties at warmer temperatures.

The key to doing so, a group of Harvard scientists say, may lie in another exotic material known as an antiferromagnet.

Led by physics professor Markus Greiner, a team of physicists has taken a crucial step toward understanding those materials by creating a quantum antiferromagnet from an ultracold gas of hundreds of lithium atoms. The work is described in a May 25 paper published in the journal Nature.

“We have created a model system for real materials … and now, for the first time, we can study this model system in a regime where classical computers get to their limit,” Greiner said. “Now, we can poke and prod our antiferromagnet. It’s a beautifully tunable system, and we can even freeze time to take a snapshot of where the atoms are. That’s something you won’t be able to do with an actual solid.”

But what, exactly, is an antiferromagnet?

Traditional magnets, the kind that you can stick to your refrigerator, work because the electron spins in the material are aligned, allowing them to work in unison. In an antiferromagnet, however, those spins are arranged in a checkerboard pattern. One spin may be pointed north, while the next is pointing south, and so on.
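
To make the checkerboard picture concrete, here is a small illustrative sketch in Python/NumPy of an idealized classical spin lattice with alternating spins. It is a cartoon of the pattern described above, not a model of the ultracold quantum gas used in the experiment.

```python
import numpy as np

# An idealized classical checkerboard of spins: neighbouring sites
# alternate between "up" (+1) and "down" (-1).
L = 6
rows, cols = np.indices((L, L))
spins = np.where((rows + cols) % 2 == 0, 1, -1)

# A ferromagnet has a large net magnetization; in a perfect antiferromagnet
# the spins cancel, while the "staggered" magnetization is maximal.
net_magnetization = spins.mean()                      # 0.0 for this lattice
staggered = (spins * (-1) ** (rows + cols)).mean()    # 1.0 for the checkerboard
print(net_magnetization, staggered)
```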

Understanding antiferromagnets is important, Greiner and physics professor Eugene Demler said, because experimental work has suggested that, in the most promising high-temperature superconductors — a class of copper-containing compounds known as cuprates — the unusual state may be a precursor to high-temperature superconductivity.

Currently, Demler said, the best cuprates display superconductivity at about minus 160 degrees Fahrenheit, which is cold by everyday standards, but far higher than for any other type of superconductor. That temperature is also warm enough to allow practical applications of cuprate superconductors in telecommunications, transportation, and in the generation and transmission of electric power.

“This antiferromagnet stage is a crucial stepping-stone for understanding superconductors,” said Demler, who led the team providing theoretical support for the experiments. “Understanding the physics of these doped antiferromagnets may be the key to high-temperature superconductivity.”
To build one, Greiner and his team trapped a cloud of lithium atoms in a vacuum and then used a technique they dubbed “entropy redistribution” to cool them to just 10 billionths of a degree above absolute zero, which allowed them to observe the unusual physics of antiferromagnets.

“We have full control over every atom in our experiment,” said Daniel Greif, a postdoctoral fellow working in Greiner’s lab. “We use this control to implement a new cooling scheme, which allows us to reach the lowest temperatures so far in such systems.”

View at the original source

Sunday, June 11, 2017

Mitigating offensive search suggestions with deep learning 06-12



Image credit : Shyam's Imagination Library


The humble search bar is the window through which most Internet users experience the web. Deep learning is set to enhance the capabilities of this simple tool such that search engines can now anticipate what the user is looking for whilst moderating offensive suggestions before the query is complete.

A lack of contextual and deeper understanding of the intent behind search queries often leads to inappropriate or offensive suggestions. Striving for a safer and saner web, the Microsoft team began its research into deep learning techniques to help detect and automatically prune such suggestions.

Our paper titled “Convolutional Bi-Directional LSTM for Detecting Inappropriate Query Suggestions in Web Search” received the Best Paper Award at the recent Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) 2017. It was selected from amongst a record-breaking 458 submissions at the leading international conference on knowledge discovery and data mining. Winning the award has been a humbling experience for the team, and it validates our efforts to establish deep learning as a powerful weapon against online vitriol.

Here’s a brief overview of what we found:

The Challenge

Human beings intuitively detect offensive language and derogatory speech online. Algorithms, however, struggle to clearly distinguish toxic language from benign comments. Systems often misjudge the level of toxicity or hate in certain phrases that require a deeper understanding of cultural norms, slang and context.

The problem is made harder by challenges unique to search queries, such as a lack of sufficient context, natural-language ambiguity, and the presence of spelling mistakes and variations. For example, Marvin Gaye’s classic hit ‘If I Should Die Tonight’ could be deemed offensive simply because it contains the phrase ‘should die tonight’. Similarly, algorithms frequently misclassify offensive words and phrases as ‘clean’ simply because they are misspelled or expressed as euphemisms. For example, the phrase ‘shake and bake’ is both a registered trademark of a popular food brand and street code for preparing illegal drugs on the move.

Safeguarding the Online Experience

The impact of inappropriate language and offensive speech online cannot be overstated. Internet access is ubiquitous and users cover diverse age groups and cultural backgrounds. Making search queries, online communication, and instant messaging safe for children, minorities, and sensitive communities is essential to preserve the integrity of the digital world.

Inappropriate suggestions on search queries or offensive comments on news articles could cause significant harm to vulnerable groups such as children and marginalized communities. Unsuitable suggestions could tarnish the reputation of corporations, inadvertently help someone cause harm to themselves with risky information, or lead to legal complications with authorities and regulators. Problems such as intimidation, threats, cyber bullying, trolling, explicit and suggestive content, and racist overtones need to be curtailed to help keep the Internet open and safely accessible to everyone.

The Solution

Conventional solutions to this problem have typically involved using –
  1. A manually curated list of patterns involving such offensive words, phrases and slangs or
  2. Classical Machine Learning (ML) techniques which use various hand-crafted features (typically words etc.) for learning the intent classifier or
  3. Standard off-the-shelf deep learning model architectures such as CNN, LSTMs or Bi-directional LSTMs (BLSTMs).
In our current work, we propose a novel deep learning architecture called “Convolutional Bi-Directional LSTM (C-BiLSTM)”, which combines the strengths of Convolutional Neural Networks (CNNs) with Bi-directional LSTMs (BLSTMs). Given a query, C-BiLSTM uses a convolutional layer to extract a feature representation for each query word; these representations are fed to the BLSTM layer, which captures the sequential patterns across the entire query and outputs a richer representation encoding them. The query representation thus learnt passes through a deep, fully connected network that predicts the target class: offensive or clean. C-BiLSTM does not rely on hand-crafted features, is trained end-to-end as a single model, and effectively captures both local features and their global semantics.
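
For readers who want a feel for this kind of architecture, here is a minimal Keras sketch of a C-BiLSTM-style model: a convolutional layer ahead of a bidirectional LSTM, followed by fully connected layers. Vocabulary size, query length, and all layer dimensions are placeholder values chosen for illustration; they are not the configuration used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 50_000   # placeholder vocabulary size
MAX_QUERY_LEN = 12    # placeholder maximum query length (in words)

model = models.Sequential([
    layers.Input(shape=(MAX_QUERY_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    # Convolutional layer: extracts a local feature representation per word position
    layers.Conv1D(filters=128, kernel_size=3, padding="same", activation="relu"),
    # Bi-directional LSTM: captures sequential patterns across the whole query
    layers.Bidirectional(layers.LSTM(64)),
    # Fully connected layers: predict the target class (offensive vs. clean)
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```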

Applying the technique to 79,041 unique real-world search queries, along with their class labels (inappropriate/clean), showed that this novel approach is significantly more effective than conventional models based on patterns or on classical ML techniques using hand-crafted features. C-BiLSTM also outperformed standard off-the-shelf deep learning models such as CNN, LSTM and BLSTM when applied to the same dataset. Our final C-BiLSTM model achieves a precision of 0.9246, a recall of 0.8251 and an overall F1 score of 0.8720.
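
As a quick check, the reported F1 score is simply the harmonic mean of the reported precision and recall:

```python
precision, recall = 0.9246, 0.8251
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # ~0.872, consistent with the reported F1 score
```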

Although the focus of the paper is detecting offensive terms in Query Auto Completion (QAC) in search engines, the technique can be applied to other online platforms as well. Comments on news articles can be cleaned up and inappropriate conversations can be flagged for abuse. Trolling can be detected and a safe search experience can be enabled for children online. This technique could also help make chatbots and autonomous virtual assistants more contextually aware, culturally sensitive, and dignified in their responses. More details about the technique and its implementation can be found in the paper itself.

Final Thoughts

The deep learning technique detailed in this study could be a precursor to better tools to fight online vitriol. When APIs based on this system are applied to social media platforms, email services, chat rooms, discussion forums, and search engines, content can be passed through multiple filters to ensure users are not exposed to offensive material.

Curtailing offensive language can transform the Internet, making it safely accessible to a wider audience. Making the web an open, safe and secure place for ideas and innovations is a core part of our mission to empower everyone, everywhere through the power of technology.



View at the original source

Saturday, June 10, 2017

How to start building your next-generation operating model 06-10

Each company’s path to a new operating model is unique. But successful transformations are all constructed with the same set of building blocks.



A North American bank took less than two years to shift 30 percent of its in-branch customer traffic to digital channels and dramatically reduce its brick-and-mortar footprint. A European cruise line redesigned and relaunched five core products in nine months to increase digital conversions by three to five times and sales by 150 percent.

These companies have been able to transform because they have developed next-generation operating models that provide the speed, precision, and flexibility to quickly unlock new sources of value and radically reduce costs. The operating model of the future combines digital technologies and process-improvement capabilities in an integrated, sequenced way to drastically improve customer journeys and internal processes.

Lean management has already played a significant role in putting in place processes, capabilities, and tools to improve how businesses operate. But the digital age has increased both the opportunities for businesses who know how to react and the difficulty of getting it right. For one thing, tasks performed by humans are more complex, whether it’s accessing information in multiple formats from multiple sources or responding to changing market and customer dynamics at ever-increasing speeds. And as an increasing number of tasks become automated or are taken over by cognitive-intelligence capabilities, companies will need to take many of the lessons learned from lean management and update them. Like a sprinter who needs all her muscles to be finely tuned and working in concert to reach top speeds, fast-moving institutions must have a system to continually synchronize their strategies, activities, performance, and health.
But how? Many institutions understand the need to change how they work and have embarked on numerous initiatives, yet few have been able to get beyond isolated success cases or marginal benefits.
We have found that companies that successfully build next-generation operating models do two things well. They focus on putting in place the building blocks that drive change across the organization, and they select a transformation path that suits their situation. These practices don’t apply only to companies that have yet to start their digital transformation. In our experience, even companies that are well along their transformation journey can pivot to putting in place a next-generation model that delivers massive value while significantly reducing costs.

Building blocks of the next-generation operating model

Whatever the path companies choose to develop their next-generation operating model (a subject we return to later), we have found there is a set of building blocks of change that successful leaders put in place. Think of them as the mechanics of change—elements needed to underpin the development of the operating model. Given the dynamic nature of digitization and the fast pace of change, it’s important not to think about perfecting the implementation of each building block before the operating model can function. The process is highly iterative, with elements of each building block tested and adapted to grow along with the model through a constant evolutionary cycle.

Building Block #1: Autonomous and cross-functional teams anchored in customer journeys, products, and services

Successful companies constantly rethink how to bring together the right combination of skills to build products and serve customers. That means reconfiguring organizational boundaries and revisiting the nature of teams themselves, such as creating more fluid structures in which day-to-day work is organized into smaller teams that often cut across business lines and market segments. This approach includes empowering teams to own products, services, or journeys, as well as to run experiments. These organizations are also becoming nimble in how they build skills across their teams by making “anchor hires” for key roles, setting up rotational and “train the trainer” programs, and committing to ongoing (often weekly) capability building and training for key roles.

Many insurers, for example, are dismantling traditional claims and underwriting units and reconstructing them to embed subject-matter experts such as lawyers and nurses into service groups. In the best companies, these teams also work side by side every day with technologists to design the tools and technology to improve efficiency and effectiveness.

Iteration is crucial to making this approach work. Leaders test various team configurations and allow flexibility in response to changing customer needs. One credit-card company, for example, shifted its operating model in IT from alignment around systems to alignment with value streams within the business. Cross-functional teams were pulled together to work on priority journeys and initiatives to deliver on the value stream. These changes dramatically simplified the operating model, lowered direct leadership expenses, and contributed to a 200 percent increase in software-development productivity within three months.

Building Block #2: Flexible and modular architecture, infrastructure, and software delivery

Technology is a core element of any next-generation operating model, and it needs to support a much faster and more flexible deployment of products and services. However, companies often have trouble understanding how to implement these new technologies alongside legacy systems or are hampered by outdated systems that move far too slowly.

To address these issues, leaders are building modular architecture that supports flexible and reusable technologies. Business-process management (BPM) tools and externally facing channels, for example, can be shared across many if not all customer journeys. Leading technology teams collaborate with business leaders to assess which systems need to move faster. This understanding helps institutions decide how to architect their technology—for example, by identifying which systems should be migrated to the cloud to speed up builds and reduce maintenance.

This approach both accelerates development and prioritizes the use of common components, which in turn leads to development efficiency and consistency. Another important reason for building more flexible architecture is that it enables businesses to partner with an external ecosystem of suppliers and partners.

Similarly, leaders are investing heavily in DevOps and combining people, process, and technology changes to automate software testing, security, and delivery processes as well as infrastructure changes.

Building Block #3: A management system that cascades clear strategies and goals through the organization, with tight feedback loops

The best management systems for next-generation operating models are based on principles, tools, and associated behaviors that drive a culture of continuous improvement focused on customer needs. Leading companies embed performance management into the DNA of an organization from top to bottom, and translate top-line goals and priorities into specific metrics and KPIs for employees at all levels. They make visible the skills and processes needed for employees to be successful, put clear criteria in place, and promote the sharing of best practices.

The best institutions are evolving their management systems to create feedback mechanisms within and between the front line, back-office operations, and the product teams that deliver new assets. They are also using their management systems to harvest the surfeit of data generated by day-to-day activities to create user-friendly dashboards and reports, some of them in real time.

Performance management is becoming much more real time, with metrics and goals used daily and weekly to guide decision making. These metrics are supported by joint incentives—not just for individuals—that are tailored to each level of the organization and reinforce behaviors to support customers regardless of organizational boundaries.

One North American insurer struggled to make the predictive analytics models developed by central teams relevant to its front-line claims adjusters, who therefore failed to adopt the new capability. Knowing it was leaving significant value on the table, the company established daily feedback sessions between the central development team and the claims adjusters and embedded analytics specialists into customer-service teams to develop better insights into customer issues.

The teams created shared goals based on customer value that were consistent with the organization’s strategy and the daily work of adjusters. Under this new management system, the analytics specialists and claims adjusters shortened cycle times and dramatically improved the effectiveness of assignment. This freed up time for leaders to coach, problem solve, and iterate on the next opportunities for the teams to pursue.

Building Block #4: Agile, customer-centric culture demonstrated at all levels and role modeled from the top

Successful companies prioritize speed and execution over perfection. That requires agility in delivering products to customers and quickly learning from them, as well as willingness to take appropriate risks. The best organizations have already made agility a cornerstone of how they work beyond IT. One credit-card company brought together law and compliance personnel to sit in with marketing teams to intervene early in processes and have daily conversations to identify and resolve issues. Law and compliance functions have also begun to adopt agile methodologies to change their own work. As functions and teams collaborate, they are on track to reduce effective time to market by 90 percent for some core processes while also reducing operational risk.

Critical to success is leading the change from the top and building a new way of working across organizational boundaries. Senior leaders support this transformation as vocal champions, demonstrating agility through their own choices. They reinforce and promote rapid iteration and share success stories. Importantly, they hold themselves accountable for delivering on value quickly, and establish transparency and rigor in their operations. Many manage the change aggressively, often changing performance incentives, mothballing outdated processes, assembling communication campaigns to reinforce culture, and writing informal blogs. At one asset-management company, the top team jettisoned its legacy budgeting process and asked leaders to be aggressive about capturing more value. They established an ongoing process for redistributing funding to the highest-value experiments that were working.

Defining the path for your organization

There is no one way to develop a next-generation operating model. It depends on a company’s existing capabilities, desired speed of transformation, level of executive commitment, and economic pressure. We have seen four paths that leading companies take to drive their transformation, though organizations often move to a different path as their capabilities mature. These paths offer a guide for the first 12 months of a transformation journey.

An innovation outpost is a dedicated unit set up to be entirely separate from the historical culture, decision-making bureaucracy, and technical infrastructure of the main business. It creates inspiring products that illuminate the digital art of the possible (sometimes with questionable economic impact), and hatches new business models in informal settings such as over foosball tables. This path has traditionally been popular as a first move, but is now less common.

One retailer with an ineffective online business chose to open such an outpost. It introduced next-gen analytics, focused on customer experience rather than technology, and drove the mobile interface. Staying largely separate from the main business, the outpost created a buzz around innovation, attracted better talent, and repatriated many of its creations into the broader organization.

This path works well when there is limited alignment among executives on the importance and value of transformation, a need to move very quickly in response to market pressures, and significant legacy culture challenges to overcome. However, it is less effective as the “tip of the spear” for changing the culture or building sustainable capabilities, and often yields a low return on investment.

A fenced-off digital factory is a group of groundbreakers that works in partnership with businesses and functions (such as IT infrastructure and security, legal, compliance, and product development) while enjoying a high degree of autonomy. It typically houses specialized capability groups in technologies such as robotics or analytics, and deploys them to support the development of specific journeys in concert with business and functional partners. It both models a new way of working and integrates developed capabilities into the main business. As such, it focuses internally on integrating with and shifting the culture of the organization.
This is the most common starting point, as it balances the need for incubation with that of broader transformation. One European bank built a digital factory in a building on a campus. Each of the lower floors is dedicated to a separate journey, while the top floor is dedicated to creating reusable components and utilities (such as customer identification and verification, or e-signature) that the other journeys can deploy in a modular way.
Business and functional colleagues come together to work with teams in the factory. Each of these teams develops products and services, moves them quickly from prototype to deployment, and then transfers them into the main business. As part of the management system, the team continues to monitor and iterate the product or service based on economic performance and customer feedback.
This path works well when there is a broad-based belief in and commitment to transformation, and a need to incubate a critical mass in internal capabilities. Many organizations have used this approach to attract digital talent, combat large-project inertia within IT groups, and speed transformation. Culture change is slower within the rest of the organization, but it happens over time as business and functional specialists partner with the factory for each journey. It can, however, also create a “have and have not” split within the business if not managed appropriately, and can require significant initial C-suite support and funding. (For more on the digital factory, see “Scaling a transformation culture through a digital factory,” forthcoming on McKinsey.com.)
A business-unit accelerator is a scaled-down digital factory that incubates a transformation inside a business unit to tackle local customer journeys and business functions. The business unit builds its own skills, such as process-redesign and robotics capabilities, and has control over specific capabilities and investments. This means it doesn’t need central funding or organization-wide agreement on a host of issues to get going.
One North American bank shifted to a business-unit accelerator model after the first few years of its transformation. It found that this move gave it more control and a closer connection to business strategy and the customer—benefits that outweighed centralized scale and capability building. The bank invested heavily in talent and tools with the aim of building a reputation among customers as a digital business that happens to produce banking products and experiences.
This path works well for organizations with large business units that operate independently. It’s also a good starting point when one business unit is particularly far ahead in its thinking and belief, or where digital services have disproportionate value-creation potential. However, companies that choose this model must mitigate several risks. When business units choose their own digital tools and processes, for instance, complexity and costs increase for IT teams managing maintenance, licensing, and enterprise architecture. This model can also make it harder to build and share capabilities across the organization since the skills developed are specific to the business unit.
A full-scale evolution is a comprehensive transformation in which the enterprise reorganizes itself almost entirely around major journeys. This is the natural operating model for many digital natives, as technology, digital services, and product delivery are basically inextricable. Companies focus on specific digital initiatives that deliver on business priorities, deploying specialized talent and cross-functional teams to support each one. The model is highly attuned to the customer, and rapidly develops, tests, and iterates on new products or services. Team members may be managed through a center of excellence or by business-unit leaders. This path is the aspiration for many incumbents, especially those that deliver services rather than physical products.
In one European bank undergoing a full-scale evolution, agile has become the default way for people to work, with colleagues from multiple functions, including IT, sitting side by side. Results are measured by value streams (the sources of the value being generated) and journeys, flowing from the customer need back to the performance of the bank. Prioritization and resourcing take the form of active daily and weekly conversations about the next most important thing to work on. This approach is initially almost like shock treatment, but it offers important benefits, allowing companies to shake up the traditional management system and achieve culture change quickly and at scale. The organization builds agile skills broadly, identifies high and low performers, and pinpoints valuable and missing skills.
This path works well when there is a broad and top-down organizational mandate for change. Given the time it takes to move the needle, there should be no pressing near-term economic imperative. Companies that choose this model need to mitigate several risks, such as ensuring that best practices are shared across the operating model rather than being confined to individual teams. In addition, organizations must share any scarce resources across business functions to drive impact, and ensure coordination with IT as it seeks to keep up with the technical architecture.

No-regret steps leaders should take

Every organization’s transformation journey will be different. However, a simple set of immediate, no-regret steps can help leaders shape their first set of priority decisions and provide clarity on the way forward. These often include:
  • Creating clarity on enterprise strategy and on where digital services can quickly enable sustainable value creation. (For more on this, see “The next-generation operating model for the digital world.”)
  • Challenging the board to be explicit about the importance of the transformation and its support for investment; or, as a board, making this decision and challenging the executive team for a bold vision.
  • Building top-team excitement and belief in change through visits to leading digital natives or incumbents pursuing their own transformation paths.
  • Assessing the maturity of the management system using benchmarking against other organizations to identify strengths to build on and risks to mitigate.
  • Investing in targeted capability building, especially for the top 50 leaders in the organization. Exploring core concepts such as digitization, agile, design thinking, and advanced analytics can create a shared vocabulary and spur action.
  • Making an honest, objective assessment of talent and capabilities within the organization, benchmarked against peers and cross-sector leaders. Disruption often comes from outside an industry rather than within.
  • Surveying the cross-sector landscape for ideas and inspiration. It’s easier than ever to learn from others, and a rapid inventory of ideas can shed light on potential execution challenges to resolve.
  • Assessing the level of change that the organization can realistically absorb in the near and long term given its other priorities.

Most companies recognize the need for a next-generation operating model to drive their business forward in the digital age. But how well they actually develop it makes all the difference between reinventing the business and just trying to do so.

View at the original source