Maintaining institutional memory is hard enough – AI knowledge systems will make it worse


AI can hallucinate many things, but are we headed for a world where tools like ChatGPT and Gemini are crafting standard operating procedures or employee handbooks? Have we begun to entrust AI not just with our menial and creative tasks, but also with the responsibility of maintaining companies’ viability?

God, I hope not.

If the last few years are anything to go by, someone – backed by a fund with more money than sense, most likely – is going to try. Since I started writing for ITPro, I’ve covered the AI Bill of Rights, automation of the back office, accessibility and AI, and a host of other related topics. What has struck me during that reporting is just how much all these innovations feel at once temporary and permanent.

Let me explain: one of the hardest things to do when it comes to keeping an organization going is maintaining institutional memory. We see this all the time across a variety of industries: when companies forget why they were successful in the first place, they become vulnerable to shifting geopolitical winds and market chaos, outsourcing the very things that gave them a competitive edge.

It’s similar in food, where counterfeiting of products like parmigiano-reggiano is a multi-billion-dollar industry. Some of this is self-serving industry isolation, of course, but a lot of it is about protecting what is successful. Lastly, we see this in common tech industry tactics like outsourcing tech support to a country that is much cheaper – and has much worse labour laws.

Paradoxically, that’s why this AI boom feels more Metaverse than cell phone or wireless internet. It feels as if it’s teetering on a knife edge because too many people see it as a cure-all. I know – shock, surprise – a journalist who doesn’t trust many forms of AI. But we’re starting to see this skepticism in other industries too, including questions over the usefulness of AI-generated code. At the same time, AI feels like a permanent fixture, one that requires constant education about what can and can’t be used. Like all tech innovation, that will be decided, in a lot of ways, by the courts and company mergers.

There are plenty of people working in the AI space, and its various offshoots, who are aware of this precarious ethical ground. The question is whether those people are the ones with decision-making power. AI, with all the surety thrown around by its main salespeople, also seeds distrust. Just look at higher education, where more and more students are using AI and more and more instructors are making assumptions about those very same students. The more AI creeps into the back office, the more chaos it seems to cause.

This isn’t an argument for the human touch that centers around pointless – and often classist/ableist – return-to-office (RTO) mandates. This isn’t ‘water cooler’-style institutional memory about the eating habits of the CEO, or who got the best company Christmas gift.

This is about understanding, on a fundamental level, what software can’t know. For example, a lot of accessibility needs are met through handshake agreements between bosses and employees, primarily because legalistic systems of governance have meant that having a disability, in tech and otherwise, is viewed with suspicion. Simply put, AI cannot replicate human decency.

Institutional memory also rears its head in other areas. A chatbot can’t maintain a relationship with a client, nor can it be trusted to hire without introducing additional biases or to understand the work culture of a decades-old company.

Now, the AI evangelists will tell you that LLMs can’t do this yet and it’s only a matter of time – but when it comes to whether we ever need to replace this function, I’m not so sure. The argument that an LLM’s biases can be trained out more easily than a human’s holds at least a little water but, in the long run, if we’re letting AI into the foundations of our workplaces, we’re headed for the commercial version of an earthquake. Part of the problem with institutional memory is that it has lots of warts and dangers and needs constant adaptation and conversation in order to be successful. If the explosion of AI content – generative and otherwise – has shown anything, it’s that AI really likes to maintain the status quo, and our world’s status quo is a beautiful dumpster fire.

My favourite example of this is transcription software. More and more, tools like Otter and Zoom have homed in on business use. Here’s the problem: AI transcription likes to hallucinate, telling me things were said that weren’t – and an AI assistant accidentally left in a Zoom room can wreak unintentional havoc.

AI isn’t new and it isn’t going away tomorrow. But I think its insistence on touching every part of our corporate ecosystem is a key development that requires pushback. Much of the work we do is hidden in plain sight, yet LLMs are bound to lack the ability – to take a quote from Katherine Johnson in the movie Hidden Figures – “to look beyond the numbers”. Institutional memory is needed, not just to maintain the good, but to root out the bad.

