Keep humans in the loop
Keeping human employees in genAI workflows can slow operations and erode the efficiency gains that justified adopting genAI in the first place. Even so, Taylor said, a little spot checking by a human can be effective.
He cited the example of a chatbot that told an Air Canada customer they could buy a ticket immediately and get a bereavement credit later, which is not the airline’s policy. A Canadian civil tribunal ruled that the airline was responsible for reimbursing the customer because the chatbot was presented as part of the company’s website.
“Although having a human in the loop may not be technically feasible while the chat is occurring, as it would defeat the purpose of using a chatbot, you can certainly have a human in the loop immediately after the fact, perhaps on a sampling basis,” Taylor said. “[The person] could check the chatbot to see if it is hallucinating, so that [any hallucination] can be quickly detected, [allowing the company] to reach out to affected users and also tweak the solution to prevent (hopefully) such hallucinations happening again.”
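Taylor’s sampling-based review can be as simple as routing a random slice of completed conversations to a human reviewer. Here is a minimal sketch of that idea in Python; the transcript source and reviewer hook are hypothetical stand-ins, not part of any particular chatbot platform.

```python
import random

# Hedged sketch of post-hoc, sampling-based human review: pull a random
# slice of finished chatbot transcripts and have a person judge each one.
# The reviewer callable and the "hallucination" verdict label are
# assumptions; wire them to your own review tooling.

SAMPLE_RATE = 0.05  # review roughly 5% of conversations; tune to risk tolerance

def spot_check(transcripts, reviewer):
    """Send a random sample of transcripts to a human; return the flagged ones."""
    sample = [t for t in transcripts if random.random() < SAMPLE_RATE]
    return [t for t in sample if reviewer(t) == "hallucination"]
```

Flagged transcripts then drive the two follow-ups Taylor names: contacting affected users and tweaking the solution so the same hallucination is less likely to recur.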
Prepare to geek out with regulators
Another compliance consideration with genAI is that CIOs will need to explain far more technical detail to regulators than they historically have.
“The CIO needs to be prepared to share a fairly significant amount of information, such as talking through the entire workflow process,” said Three Arc’s Anzelc. “‘Here is what our intent was.’ Listing all of the underlying information, detailing what actually happened and why it happened. Complete data lineage. Did genAI go rogue and pull data from some internet source or even make it up? What was the algorithmic construction? That’s where things get really hard.”
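The “complete data lineage” Anzelc describes is easier to produce if every genAI call is logged as a structured record at the time it happens. The sketch below assumes a simple append-only JSON Lines log; the field names are illustrative, not a regulatory schema.

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class GenAIAuditRecord:
    """One genAI call, captured with enough context to reconstruct lineage."""
    prompt: str              # exact input sent to the model
    model_version: str       # pinned model identifier used for this call
    retrieved_sources: list  # documents or URLs supplied to the model via retrieval
    output: str              # raw model response, before any post-processing
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

def log_call(record: GenAIAuditRecord, path: str = "genai_audit.jsonl") -> None:
    """Append one record per line; an append-only file keeps the trail auditable."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

With records like these, a question such as “did genAI pull data from some internet source or make it up?” becomes a query over the log rather than guesswork.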
After an incident, enterprises have to make quick fixes to avoid repeats of the problem. “It could require redesign or adjustment to how the tool operates or the way inputs and outputs flow. In parallel, fix any gaps in monitoring metrics that were uncovered so that any future issues are identified more swiftly,” Anzelc said.
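One way to close the kind of monitoring gap Anzelc mentions is a metric computed over the human spot-check verdicts, with an alert threshold. This is an assumed setup building on the sampling sketch above; the verdict labels and alert hook are placeholders.

```python
ALERT_THRESHOLD = 0.02  # page someone if >2% of reviewed chats hallucinated

def hallucination_rate(reviews):
    """reviews: list of (transcript_id, verdict) pairs from human spot checks."""
    if not reviews:
        return 0.0
    bad = sum(1 for _, verdict in reviews if verdict == "hallucination")
    return bad / len(reviews)

def check_and_alert(reviews, alert):
    """Call the alert hook when the hallucination rate crosses the threshold."""
    rate = hallucination_rate(reviews)
    if rate > ALERT_THRESHOLD:
        alert(f"Hallucination rate {rate:.1%} exceeds {ALERT_THRESHOLD:.0%} threshold")
```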
It’s also crucial to figure out a meaningful way to calculate the impact of an incident, she added.
“This could be financial impact to customers, as was the case with Air Canada’s chatbot, or other compliance-related issues. Examples include the potentially defamatory statements made recently by X’s chatbot Grok, or employee actions such as the [Texas A&M University-Commerce] professor who failed an entire class because a generative AI tool incorrectly stated that all assignments had been generated by AI and not by human students,” Anzelc said.
“Understand additional compliance implications, both from a regulatory perspective as well as the contracts and policies you have in place with customers, suppliers, and employees. You will likely need to re-estimate impact as you learn more about the root cause of the issue.”