Machine identities – the digital credentials used by machines to authenticate one another and securely communicate – are becoming increasingly attractive targets for cybercriminals.
This is because they’re one of the fastest-growing groups within organizations’ identity and access management (IAM) programs, significantly outnumbering human identities. The disparity is only expected to widen with the continued growth of cloud usage, automation, AI, integrations, and bots, notes Steve Wessels, Director Analyst at Gartner.
These non-human identities (NHIs) are also becoming ever more powerful actors within businesses, with the ability to control critical processes, access sensitive data and perform tasks with autonomy. Long gone is the concept of simple server credentials: today machine identities encompass a vast array of AI agents, IoT devices, cloud workloads, application programming interfaces (APIs), and even autonomous robotic systems.
As AI systems become more agentic, the concept of identity takes on deeper dimensions, says Watson. It’s no longer just about credentials, but rather the capabilities, goals and degrees of autonomy tied to each identity. “We may even need to consider psychological analogues or disordered states in machines that could impact behavior, which raises new questions about trust and control.”
The complexity of control
Tracking and managing machine identities at this scale requires strong governance, continuous logging of all plans and interactions, and clear agency tracking, especially in hybrid human-AI systems. Simply put, transparency and accountability are crucial.
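To make that concrete, the kind of record such logging and agency tracking implies might look like the rough sketch below. The record shape, field names and the `ActionRecord`/`AuditLog` classes are illustrative assumptions, not any particular product’s schema:

```python
# Minimal sketch of an agency-tracking audit record for hybrid human-AI systems.
# All names here (ActionRecord, AuditLog, the fields) are illustrative assumptions.
import json
import time
import uuid
from dataclasses import dataclass, asdict, field

@dataclass
class ActionRecord:
    machine_identity: str        # which NHI acted (service account, agent ID, workload)
    acting_on_behalf_of: str     # the human or parent system that delegated the task
    plan_id: str                 # links the action back to the plan that produced it
    action: str                  # what was done
    target: str                  # what it was done to
    timestamp: float = field(default_factory=time.time)
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class AuditLog:
    """Append-only log so every plan and interaction stays traceable to an identity."""
    def __init__(self):
        self._records = []

    def append(self, record: ActionRecord) -> None:
        self._records.append(record)

    def export(self) -> str:
        return json.dumps([asdict(r) for r in self._records], indent=2)

# Example: an AI agent renewing a certificate on behalf of a platform engineer.
log = AuditLog()
log.append(ActionRecord(
    machine_identity="agent://cert-renewer-07",
    acting_on_behalf_of="human://j.doe@example.com",
    plan_id="plan-2024-renewals",
    action="renew_certificate",
    target="api.internal.example.com",
))
print(log.export())
```

The point of the sketch is simply that every machine action carries its own identity, the human or system it acts for, and the plan it belongs to, so accountability can be reconstructed later.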
Common management issues include the sheer scale and diversity of NHIs, with different categories having different characteristics and challenges. “Some are relatively long-lived and fixed, while others are fast moving and ephemeral. They act in different ways, in different environments, with different rights and privileges,” explains Mark Child, Associate Research Director, European Security at analyst firm IDC.
Then there’s also the opacity of some AI decision-making (“so-called black boxes,” says Watson), the emergent behaviors of agentic AI and swarms, and ensuring these identities remain aligned with human values and ethical boundaries, “therefore preventing issues like shadow AI or unintended power seeking,” she adds.
Prime targets
The reality is that while existing IAM tools can likely address some of the requirements of NHI management, machine IAM represents the least mature part of most organizations’ IAM programs.
“Due in part to the significant growth in machine identities, there are considerable gaps in required tools for discovery, visibility, observability, governance and monitoring,” Wessels says.
This makes machine identities increasingly attractive targets to exploit for three key reasons, he says.
Firstly, they’re typically poorly governed. “Many organizations still lack formal lifecycle management and oversight for machine identities and their access,” he says. Secondly, they’re often overlooked in audits, as unlike human identities they often go unmanaged post-deployment, making them ideal for long-term persistence and lateral movement.
“Finally, they offer privileged or high-value access, especially in DevOps pipelines, cloud automation scripts and API integrations,” he points out.
With many machine identities able to conduct privileged actions, compromising them lets attackers do far more than they could through a basic user identity. Nested privileges enabled by multiple overlapping or intersecting service accounts can obfuscate over-provisioning of access, notes Child, highlighting the need for improved visibility into the creation and provisioning of NHIs.
“Admins often don’t know all the NHIs in their environment, where they’re acting, or what access rights and privileges they have. That can be a huge benefit for threat actors, for example with living off the land approaches,” he says.
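To see why nested or overlapping service accounts can hide over-provisioning, consider a rough sketch that resolves transitive role membership into each account’s effective permission set. The role graph, permission strings and account names are invented purely for illustration:

```python
# Sketch: resolving nested role membership to reveal an NHI's effective permissions.
# The role graph and permission names below are purely illustrative.

# role -> directly granted permissions
ROLE_PERMISSIONS = {
    "ci-runner": {"repo:read"},
    "deployer": {"cluster:deploy"},
    "platform-admin": {"cluster:deploy", "secrets:read", "iam:manage"},
}

# role -> roles it inherits from (the nesting that obscures real access)
ROLE_INHERITS = {
    "deployer": ["ci-runner"],
    "platform-admin": ["deployer"],
}

def effective_permissions(role, seen=None):
    """Walk the inheritance graph and collect every permission a role ends up with."""
    if seen is None:
        seen = set()
    if role in seen:                      # guard against cycles in the role graph
        return set()
    seen.add(role)
    perms = set(ROLE_PERMISSIONS.get(role, set()))
    for parent in ROLE_INHERITS.get(role, []):
        perms |= effective_permissions(parent, seen)
    return perms

# A service account that looks like "just a deployer" actually holds much more:
for account, roles in {"svc-build-42": ["deployer", "platform-admin"]}.items():
    combined = set().union(*(effective_permissions(r) for r in roles))
    print(account, sorted(combined))
```

Flattening access like this is one simple way to surface the over-provisioning that nested grants otherwise hide from admins.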
AI’s a double-edged sword
But while AI can add to the challenges of managing machine identities, it can also offer possible solutions.
In terms of visibility and control, part of the challenge can be the volume and velocity of NHIs: how many there are, how rapidly they’re spun up and put into operation, and how many actions they can conduct in any given time frame, says Child. “Manual processes are unable to cope with this, but AI can be used to automate processes, accelerate workflows and trigger policy-driven actions every time a new machine identity is created.”
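A minimal sketch of what “policy-driven actions every time a new machine identity is created” could look like in practice follows, assuming a hypothetical discovery event feed and policy set (none of these names refer to a real product API):

```python
# Sketch: applying baseline policy automatically when a new machine identity appears.
# The event shape, policy values and on_identity_created() hook are assumptions.
from datetime import datetime, timedelta, timezone

POLICY = {
    "max_credential_lifetime_days": 90,   # force short-lived credentials
    "require_owner": True,                # every NHI must map to an accountable owner
    "default_privilege_tier": "least",    # start from least privilege, escalate by exception
}

def on_identity_created(event: dict) -> dict:
    """Triggered for each newly discovered NHI; returns the governed identity record."""
    identity = {
        "id": event["identity_id"],
        "owner": event.get("owner"),
        "privilege_tier": POLICY["default_privilege_tier"],
        "credential_expires_at": (
            datetime.now(timezone.utc)
            + timedelta(days=POLICY["max_credential_lifetime_days"])
        ).isoformat(),
    }
    if POLICY["require_owner"] and not identity["owner"]:
        identity["status"] = "quarantined"   # block until a human owner is assigned
    else:
        identity["status"] = "active"
    return identity

# Example event from a hypothetical discovery pipeline: no owner, so it gets quarantined.
print(on_identity_created({"identity_id": "svc://payments-etl", "owner": None}))
```

The value is in the trigger, not the specific rules: every new identity passes through the same policy gate instead of relying on someone noticing it later.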
AI can improve visibility through advanced anomaly detection in machine behavior, identifying deviations that might suggest compromise or misalignment, agrees Watson, adding that it can also power interpretability tools to scrutinize the internal states of other AI systems or act as oversight agents – superego AIs – that monitor and govern swarms. “Automated auditing and agency tracking can also provide much-needed control,” she adds.
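As a simple illustration of anomaly detection over machine-identity behavior, the sketch below flags identities whose latest activity deviates sharply from their own baseline. It uses a plain z-score heuristic on invented request counts; real detection systems use far richer features and models:

```python
# Sketch: flagging machine identities whose behavior deviates from their own baseline.
# The activity data and threshold are illustrative assumptions.
from statistics import mean, pstdev

# Hourly API-call counts per machine identity (hypothetical history, newest value last).
activity = {
    "svc://billing-sync": [110, 98, 105, 101, 99, 104, 102, 100, 103, 620],
    "agent://report-bot": [12, 15, 11, 14, 13, 12, 16, 14, 13, 15],
}

def is_anomalous(history, z_threshold=4.0):
    """Compare the latest observation against the identity's own historical baseline."""
    baseline, latest = history[:-1], history[-1]
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

for identity, history in activity.items():
    if is_anomalous(history):
        print(f"ALERT: {identity} activity far outside its baseline ({history[-1]} calls)")
```

Here only the billing-sync account is flagged, because its latest hour is wildly out of line with its own history, the kind of deviation that might suggest compromise or misalignment.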
The experts agree, however, that humans still play a critical role. As Wessels puts it, someone still needs to define the rules, handle exceptions and ensure systems follow regulatory requirements.
“Especially in sensitive industries like finance or healthcare, AI’s actions will likely need to be reviewed, logged or even cryptographically verified. In the long run, we expect a hybrid model. AI will act more like a copilot, automating the heavy lifting while humans remain in control of the bigger picture. Transparency, explainability and adherence to zero trust principles will be essential as this balance evolves,” he says.
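One way AI actions could be “logged or even cryptographically verified” is sketched below using Python’s standard hmac module and a hash-chained log. The key handling and record shape are deliberately simplified assumptions; a production system would manage keys in an HSM or KMS:

```python
# Sketch: a hash-chained, HMAC-signed log of AI actions so entries can be verified later.
# Key management is simplified for illustration only.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-rotate-me"   # assumption: shared verification key for the sketch

def sign_entry(action: dict, prev_digest: str) -> dict:
    """Bind each action to the previous entry and sign it, making tampering detectable."""
    payload = json.dumps({"action": action, "prev": prev_digest}, sort_keys=True).encode()
    mac = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"action": action, "prev": prev_digest, "mac": mac}

def verify_log(entries) -> bool:
    prev = "genesis"
    for entry in entries:
        payload = json.dumps({"action": entry["action"], "prev": entry["prev"]},
                             sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if entry["prev"] != prev or not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

log = []
log.append(sign_entry({"agent": "agent://claims-review", "decision": "approve", "case": "C-1"},
                      prev_digest="genesis"))
log.append(sign_entry({"agent": "agent://claims-review", "decision": "deny", "case": "C-2"},
                      prev_digest=log[-1]["mac"]))
print("log intact:", verify_log(log))     # True
log[0]["action"]["decision"] = "deny"     # tamper with an earlier entry
print("log intact:", verify_log(log))     # False
```

Because each entry is chained to the previous one, quietly rewriting an AI decision after the fact breaks verification of every later entry.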
Laying the groundwork
Governance is a critical aspect of machine identity management, the experts concur. IT, security and/or IAM leaders should implement strong frameworks within their organizations that put processes and policies in place to manage machine identities before, or as, they’re created, so they’re well governed from the start.
“Organizations should establish a dedicated oversight body to define policies for AI-managed identities, conduct regular audits and ensure alignment with enterprise-wide security and ethical standards,” says Watson. “It’s not enough to simply track how many identities exist; organizations need to assess the capabilities and autonomy levels associated with each identity and apply stricter controls to more powerful or independent agents.
“Stress-testing these systems regularly will maintain resilience and compliance under pressure,” she adds.
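A rough sketch of what assessing capabilities and autonomy levels, and applying stricter controls to more powerful agents, might translate to is shown below. The scoring weights, thresholds and control tiers are made-up assumptions for illustration:

```python
# Sketch: mapping an identity's capabilities and autonomy to a control tier.
# The weights, thresholds and control lists are illustrative assumptions only.

CONTROL_TIERS = {
    "standard":   ["credential rotation", "activity logging"],
    "elevated":   ["credential rotation", "activity logging", "human approval for changes"],
    "restricted": ["credential rotation", "activity logging", "human approval for changes",
                   "sandboxed execution", "continuous behavioral monitoring"],
}

def control_tier(can_write_production: bool, handles_sensitive_data: bool,
                 acts_autonomously: bool, can_spawn_other_agents: bool) -> str:
    """Score capability and autonomy, then pick the strictest applicable tier."""
    score = (2 * can_write_production + 2 * handles_sensitive_data
             + 3 * acts_autonomously + 3 * can_spawn_other_agents)
    if score >= 6:
        return "restricted"
    if score >= 3:
        return "elevated"
    return "standard"

# Example: a read-only reporting bot vs. a fully autonomous orchestration agent.
print(control_tier(False, False, False, False))                              # standard
print(control_tier(True, True, True, True), CONTROL_TIERS["restricted"])     # restricted
```

The specific weights matter less than the principle Watson describes: the more powerful and independent the identity, the tighter the controls wrapped around it.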
To guide your approach in this nascent field, Child advises keeping track of frameworks and best practices put out by cybersecurity authorities. “Look to the likes of NIST, ENISA, and OWASP,” he concludes.