Are insurance customers ready for generative AI? | Insurance Business America
There is a “fundamental misunderstanding” about what ChatGPT and AI can do
Insurance companies are increasingly interested in exploring the benefits of generative artificial intelligence (AI) tools like ChatGPT for their business.
But are customers willing to use this technology as part of the insurance experience?
A new survey commissioned by software company InRule Technology shows customers aren’t thrilled to encounter ChatGPT in their insurance journey: nearly three in five (59%) say they somewhat or completely distrust generative AI.
Even as cutting-edge technology aims to improve the experience for insurance customers, most respondents (70%) said they still prefer interacting with a human.
Generation gap in attitudes towards AI
InRule’s survey, conducted via Dynata with PR firm PAN Communications, found striking generational differences in customer attitudes towards AI.
Most Boomers (71%) do not use or are not interested in chatbots like ChatGPT. For Gen Z, that number drops to just a quarter (25%).
Younger generations are also more likely to believe that AI automation helps improve privacy and security through tighter compliance (40% of Gen Z vs. 12% of Boomers).
In addition, the survey revealed the following:
- 67% of Baby Boomers believe automation reduces interaction between people, compared to 26% of Gen Z.
- 47% of Boomers find automation impersonal, compared to 31% of Gen Z.
- A data leak would deter 70% of Boomers and make them less likely to return as customers, but the same is only true for 37% of Gen Z.
Why do customers distrust AI and ChatGPT?
Danny Shayman, product manager for AI and machine learning (ML) at InRule, isn’t surprised that customers are wary of generative AI. Chatbots have been around for years with mixed results, he stressed.
“In general, interacting with chatbots is a frustrating experience,” Shayman said. “Chatbots can’t do anything for you. They can do a rough semantic search of existing documentation and pull out some answers.
“But you could talk to a human and explain it for 15 seconds, and a capable human could do it for you.”
In addition, AI-driven tools rely on high-quality data to serve customers effectively. Where that data is lacking, users may see poor results when interacting with generative AI, degrading the customer experience.
“If anything in that record is wrong, inaccurate, or misleading, the customer is often frustrated. They feel like they’re spending an hour going nowhere,” said Rik Chomko, CEO of InRule Technology.
The Chicago-based company provides process automation, machine learning, and decision-making software for more than 500 financial services, insurance, healthcare, and retail companies. Its clients include Aon, Beazley, Fortegra and Allstate.
“I believe [ChatGPT] will be better technology than what we’ve seen in the past,” Chomko told Insurance Business. “But we still run the risk of someone assuming [the AI is right], thinking that a claim will be accepted and then finding out that is not the case.”
The Risks of Connecting ChatGPT to Automation
According to Shayman, there is a fundamental misunderstanding among consumers about how ChatGPT works.
“There’s a huge gap between generating writing that says something and actually doing that thing. People have been working on connecting APIs so ChatGPT can plug into a system to do something,” he said.
“But in the end, there’s a gap between the tool’s ability to generate text and the ability to perform tasks efficiently and accurately.”
Shayman also warned of a significant risk for companies setting up automation around ChatGPT.
“If you’re an insurer and you’ve set up ChatGPT so that someone can come along and ask for a quote, ChatGPT can write the policy, send it to the policy database and create the appropriate documentation,” he said. “But that depends heavily on ChatGPT getting the quote right.”
Ultimately, insurance companies still need human oversight of AI-generated text – whether for insurance quotes or customer service.
“What happens when someone knows they’re interacting with a ChatGPT-based system and understands that by making minor changes to prompts you can make it change its output?” asked Shayman.
“If you’re trying to set up automation around a generative language tool, you need validation of its output and security mechanisms to ensure that nobody can make it do what the user wants rather than what the company wants.”
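Shayman’s point about validating output can be made concrete. The sketch below shows one way an insurer might gate an AI-generated quote before any automated system acts on it: the model’s raw text is parsed, checked for required fields, and compared against an independently computed rate table, with any mismatch routed to human review. Everything here – the `validate_quote` function, the rate table, the tolerance – is an illustrative assumption, not InRule’s or any vendor’s actual API.

```python
import json

# Hypothetical rate table the insurer computes independently of the model.
# A real system would use actuarial pricing, not a hard-coded dict.
RATE_TABLE = {"auto-basic": 1200.0, "auto-full": 2400.0}

TOLERANCE = 0.10  # allow 10% deviation before escalating to a human


def validate_quote(raw_model_output: str) -> tuple[bool, str]:
    """Gate an AI-generated quote before it reaches the policy database."""
    try:
        quote = json.loads(raw_model_output)
    except json.JSONDecodeError:
        return False, "unparseable output -> route to human review"

    for field in ("coverage", "annual_premium"):
        if field not in quote:
            return False, f"missing field '{field}' -> route to human review"

    coverage = quote["coverage"]
    if coverage not in RATE_TABLE:
        return False, f"unknown coverage '{coverage}' -> route to human review"

    expected = RATE_TABLE[coverage]
    premium = float(quote["annual_premium"])
    if abs(premium - expected) > TOLERANCE * expected:
        # A prompt-manipulated model could emit an arbitrarily low premium;
        # the independent check, not the model, decides what is acceptable.
        return False, "premium outside actuarial bounds -> route to human review"

    return True, "ok"
```

A prompt nudged into producing “auto-full for $100” would fail the bounds check and land in a review queue instead of the policy database – the validation layer, not the language model, holds the authority.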
What do you think of InRule Technology’s insights into customers and ChatGPT? Share your comments below.