Last week, I was at the AI Summit in New York (as co-chair and presenter), and I am happy to report that everyone is now comfortable and excited about artificial intelligence.
Okay, sorry, that’s a skewed sample of people who naturally would be comfortable and excited about AI — data scientists, AI developers, AI vendors, and the like. For mainstream business leaders and professionals, comfort with and acceptance of AI is a tad more muddled.
Maybe there are fewer misgivings as AI develops and proves its worth, but people are still nervous about it. One of the most pronounced factors holding back artificial intelligence adoption is fear of the unknown: justifiable concerns about bias, mistrust of data, and reluctance to hand control over to machines have made decision-makers wary. Of course, real money — and lots of it — is on the line, and, ultimately, there is fear that AI may be more fad than substance.
That lingering suspicion about AI was recently encapsulated in a study published in Harvard Business Review by Rebecca Karp and Aticus Peterson, both with Harvard. “Based on our ongoing research with dozens of companies, AI solutions most frequently fail to gain adoption because leaders worry how the deployment of AI might affect their company,” the co-authors note. “They fear the new technology might displace work, disrupt workplace dynamics, or require new skills to master, and they hesitate.”
There’s the matter of throwing money at a new approach, but then letting it wither. “Walking up to the edge of deploying new technology only to lose your nerve — wasting time and resources — isn’t the solution,” Karp and Peterson state. “Rather, leaders need to strategically pace the deployment of AI technologies. Too often, organizations spend significant resources developing or acquiring transformative innovations, but don’t think enough about how to successfully deploy them.”
Industry experts across the spectrum of professions agree that AI stirs mixed emotions within the executive ranks. “Often, one of the largest hurdles in AI adoption is ignorance and fear of the unknown,” says Elad Tsur, founder of BlueTail (later sold to Salesforce, now known as Salesforce Einstein), and now founder and CEO of Planck.
“There are two diametrically opposed forces keeping AI at bay: fear and irrational exuberance,” agrees Danny Tobey, partner with global law firm DLA Piper. “People don’t understand AI, so they worry about its unintended consequences, which leads many people to bury their heads in the sand when they could be creating value for the enterprise.”
Conversely, buying into the hype also leads to crushed expectations, Tobey continues. “There’s so much excitement around AI that some people have unrealistic expectations about what it can and cannot do. They are working from the science fiction view of AI as truly autonomous thinking machines with creative capability, but the reality is AI’s power today is deep but narrow. It can look for patterns in data to solve problems, but it doesn’t yet know what a problem is.”
It’s going to take time until nervousness around AI dissipates — and that may be when it is no longer “AI,” but a standard piece of a process. “Until AI is fully incorporated as a standard into all business applications, there will remain, for many organizations, a fear of the power and complexity of the technology,” says Sharad Varshney, CEO of OvalEdge. “Many business users may be wary of adopting AI technologies because they feel overwhelmed by the proposition of using them for critical business tasks. That is why, in my opinion, increased integration is fundamental.”
The key is helping business leaders understand that “AI is fully manageable,” Varshney continues. “There is a misconception that when you incorporate AI into your IT infrastructure, you somehow lose control of that aspect. Instead, the opposite is true. While AI and machine learning enable technology to grow and develop independently, ultimate control always remains with the administrators. AI technologies support specific business processes and are designed to achieve these outcomes based on users’ instructions.”
AI fears can gradually be alleviated by demonstrating value — rather than peril — to the business. “For example, when it came to AI-based facial recognition systems, skepticism was initially very high,” Tsur relates. “Even with an accuracy rating north of 99%, people doubted that AI could match or outperform human ability. However, a recent National Institute of Standards and Technology [NIST] research study on facial recognition technology found that beyond improvements in statistical accuracy, AI also eliminates potential distractions present in the monotonous manual review of repetitive tasks.” While there have been concerns about bias in facial-recognition systems as well, this is being corrected, Tsur adds. “It is possible to train facial recognition models and create processes to address all groups, including those with physical disabilities or religious coverings that limit typical data intake.”
There are two fundamental rules when deploying AI: “garbage in, garbage out,” and “correlation doesn’t necessarily imply causation,” says Andrew (AJ) Tibbetts, intellectual property attorney with Greenberg Traurig. “Both add up to an overarching rule against a company merely putting data through a model and then blindly trusting the output. Before AI can be reliably used, the problem has to be well understood, sufficient data collected in view of that comprehensive understanding of the problem, and the data prepared for AI processing. A comprehensive understanding of the problem also helps double-check the output of the AI. The adage that correlation is not causation may be well known, but it can be overlooked by some who are impressed by the promise of AI and racing toward roll-out. If you fully understand the problem to which you are applying the AI, you can more easily sanity-check the answer you get from it, avoiding downstream misunderstandings.”
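Tibbetts’ two rules can be sketched in a few lines of code. This is a minimal, hypothetical illustration — the field names, thresholds, and the stand-in “model” are all assumptions for the sake of the example — showing input validation (guarding against garbage in) and a domain sanity check on the output (guarding against blind trust):

```python
# Hypothetical sketch of Tibbetts' advice: validate inputs before scoring
# ("garbage in, garbage out") and sanity-check outputs against what you
# know about the problem domain. All names and thresholds are illustrative.

def validate_input(record: dict) -> bool:
    """Reject records with missing or out-of-range fields before scoring."""
    required = ("income", "debt")
    if any(record.get(k) is None for k in required):
        return False
    return record["income"] >= 0 and record["debt"] >= 0

def sanity_check(score: float) -> bool:
    """Flag outputs that contradict simple domain invariants.

    A well-understood problem lets us encode checks like: a risk score
    must be a probability between 0 and 1.
    """
    return 0.0 <= score <= 1.0

def score_with_checks(model, record: dict):
    """Run the model only on validated input, and never trust it blindly."""
    if not validate_input(record):
        return None, "rejected: bad input"
    score = model(record)
    if not sanity_check(score):
        return None, "flagged: implausible output"
    return score, "ok"

# A stand-in "model" (a plain function) for demonstration only:
toy_model = lambda r: min(1.0, r["income"] / (r["income"] + r["debt"] + 1))
score, status = score_with_checks(toy_model, {"income": 50000, "debt": 10000})
```

The point is not the toy scoring function but the wrapper: every call passes through checks derived from a comprehensive understanding of the problem, so implausible answers are caught rather than acted on.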
Keep humans in the loop, Tibbetts also advises. “AI can be particularly helpful in making recommendations or making initial decisions that can be countermanded by a human operator. There is always a risk that an AI system could misidentify a pattern or trend, and thus a risk that a system acting on its own could make a wrong decision. As such, while an AI system may be able to evaluate credit applications for whether an applicant demonstrates sufficient creditworthiness, having a human double-check the recommendation can be critical.”
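The human-in-the-loop pattern Tibbetts describes can be sketched as follows. This is a hypothetical illustration using his credit-application example — the threshold, field names, and reviewer identifiers are invented for the sketch — in which the model only ever produces a recommendation, and a human reviewer sets the final decision and may countermand it:

```python
# Hypothetical human-in-the-loop sketch: the AI recommends, a human decides.
# The 650 threshold and all names here are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    recommendation: str            # what the model suggests
    final: Optional[str]           # set only after human review
    reviewed_by: Optional[str] = None

def recommend(credit_score: int) -> Decision:
    """The AI step: produce a recommendation, never a final decision."""
    rec = "approve" if credit_score >= 650 else "decline"
    return Decision(recommendation=rec, final=None)

def human_review(decision: Decision, reviewer: str,
                 override: Optional[str] = None) -> Decision:
    """The human step: accept the recommendation or countermand it."""
    decision.final = override or decision.recommendation
    decision.reviewed_by = reviewer
    return decision

# The reviewer accepts the model's suggestion...
accepted = human_review(recommend(700), "analyst-1")
# ...or overrides it, which is the whole point of keeping a human in the loop.
overridden = human_review(recommend(700), "analyst-2", override="decline")
```

The design choice worth noting is that `final` starts out unset: nothing downstream can act on a decision until a named human has reviewed it.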