Chris Hillman, international data science director at data management firm Teradata, has recently seen more attention directed towards the cost of data science and AI teams, as businesses seek to demonstrate value from their investments in emerging technology.
However, he believes data scientists are technically capable of building AI models; it is more often business stakeholders who thwart successful AI projects, either because they do not understand how the models work or because they fail to turn model recommendations into action.
“In the data science world, everything’s a technical problem and we solve it with tech,” Hillman explained. “But I fully believe that a lot of the reason this stuff isn’t going into the business processes is basically a cultural, political, or people problem — not a technical problem.”
Teradata’s experience building models for a range of international clients suggests:
- Business executives must understand AI to champion and achieve project success.
- Executives learn better through use case examples rather than “data science 101” courses.
- Businesses should conduct impact assessments before AI projects start.
Culture, politics, and people: hurdles to AI project success
Hillman argues that the failure of AI projects can often be caused by business stakeholders:
- Not trusting the AI model’s results because they weren’t part of the process.
- Failing to take model outputs and convert those into real processes and actions.
As long as the data is provided to a data science and AI team, Hillman explained, the problem is not technical. The difficulty more often lies in business stakeholders understanding the technology and turning AI outputs into business actions.
Business execs should be engaged in the AI development process
As long as the data is there, Hillman’s team can successfully train, test, and evaluate AI models.
“We write the output of that model somewhere, and that’s job done,” he said. “Production is that model running every month and sticking something in a table somewhere.”
However, this is where it can fail.
“It falls down because business owners have got to be in the process,” Hillman added. “They’ve got to take that score and decide, ‘what is the signal?’ If I’m saying something’s a 90% probability of fraud, what does that actually mean?
“If the signal is to block the payment, and they decide to do that, someone’s got to do that. In a lot of companies, that means having at least three if not four teams involved; the data engineers and data scientists, the business owners and the application developers.”
This can turn into a dysfunctional process in which teams fail to communicate effectively, and the AI never influences business processes or creates the desired value.
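To make the handoff Hillman describes concrete, here is a minimal, purely illustrative sketch in Python (not a description of Teradata's or any client's system): the data science team writes a fraud-probability score to a table, and a rule agreed with the business owners converts that score into an action such as blocking the payment. The threshold, function name, and action labels are hypothetical.

```python
# Hypothetical sketch: turning a model's fraud-probability score into a business action.
# The threshold, labels, and routing are illustrative assumptions, not a real system.

FRAUD_BLOCK_THRESHOLD = 0.90   # agreed with business owners, not set by the data science team alone

def decide_action(fraud_probability: float) -> str:
    """Map a model score to the action the business has agreed to take."""
    if fraud_probability >= FRAUD_BLOCK_THRESHOLD:
        return "block_payment"   # application team blocks the transaction
    elif fraud_probability >= 0.50:
        return "manual_review"   # routed to a fraud analyst
    return "approve"             # no intervention

# Example: the monthly scoring job wrote 0.93 for a transaction
print(decide_action(0.93))  # -> "block_payment"
```

The design point, in Hillman's framing, is that the threshold and the resulting actions are business decisions: the data engineers and data scientists produce the score, but the business owners and application developers must agree what the signal means and who acts on it.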
Business owners must understand how AI models work
The rise of AI means all business execs must know how these models are created and function, Hillman said.
“They should understand the output, because they need to guide the process,” he explained. “They are the ones who need to ask: ‘What does it mean to my customer or my business processes?’”
While a technical understanding of the algorithms may not be necessary, business executives should grasp the basic mathematics involved, such as the probabilistic nature of AI models. In particular, they need to understand why a model's accuracy will differ from what is expected of traditional business intelligence reporting tools.
“If I went to the finance director with a report and they asked ‘how accurate is it?’ and I said, ‘about 78% accurate,’ I’d probably be kicked out,” Hillman said. “But for an AI model to be 78% accurate, that’s good. When it’s more than 50% accurate, you’re already winning.
“We’ve had some customers put in requirements saying, ‘we want this model, and we want 100% accuracy with no false positives.’ And we have to tell them, ‘well, we can’t do it, because that’s impossible.’ And if you do get that kind of model, you’ve done something wrong.”
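As a back-of-the-envelope illustration of Hillman's point (the figures below are invented, not drawn from any client engagement), a fraud model can be roughly 78% accurate and still produce false positives, which is why a requirement of 100% accuracy with no false positives is unrealistic:

```python
# Hypothetical outcomes from scoring 1,000 transactions; numbers are for illustration only.
true_positives  = 130   # fraud correctly flagged
false_positives = 90    # legitimate payments wrongly flagged
true_negatives  = 650   # legitimate payments correctly passed
false_negatives = 130   # fraud missed

total = true_positives + false_positives + true_negatives + false_negatives
accuracy = (true_positives + true_negatives) / total

print(f"Accuracy: {accuracy:.0%}")            # -> Accuracy: 78%
print(f"False positives: {false_positives}")  # some legitimate customers are still flagged

# A real-world fraud model reporting 100% accuracy with zero false positives is more likely
# to indicate a problem with the evaluation (e.g., data leakage) than a genuinely perfect model.
```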
Use cases: effective tools when training business execs in AI models
Hillman does not believe business owners should be put through “data science 101” courses, which could be “useless” to them in practice. Instead, he said AI use cases are a far more effective way to show business people how AI models work.
“I think the use case driven approach is definitely better for the people on the business side because they can relate to it and then you can get involved in the conversation,” he said.
Tips for ensuring your AI project actually gets up and running
Hillman offered several recommendations for business owners to ensure their AI projects make it from idea and proof of concept through to production:
Conduct an impact assessment
An impact assessment should be conducted upfront. This evaluation should include key considerations, such as why the business is pursuing the AI project and the fleshed-out business benefits.
“I very rarely see that in the original specs,” Hillman noted.
Rather, impact assessments are often commenced only once a project is underway or after the technical work is done, which can contribute to projects being shelved and never making it into production.
Choose the right use cases
Although transformer models were gaining popularity prior to ChatGPT, the hype caused by OpenAI’s launch of the chatbot led to businesses kicking off generative AI projects to remain relevant. This has resulted in some use-case selections that may be misguided.
Hillman often asks businesses whether they can “build a report instead,” as there are typically easier ways of achieving business objectives than creating an AI model. He said AI models usually fail to launch because of the lack of an impact assessment or because the use case was wrong.
Have a strong business sponsor
AI projects are better off when they have a strong business sponsor driving them forward. A business champion can ensure the potential impact of an AI project is understood by other teams in the business and that they work together to embed AI outputs into their processes.
“IT might own the budget for the tech, and somebody else might own the data, and the security and privacy side of it, but really, the driver always has to come from the business side,” Hillman said.