An AI-powered recruitment tool hosted on Amazon infrastructure and used by the UK’s Ministry of Defence (MoD) could put defense workers at risk.
An assessment from the Department for Science, Innovation and Technology (DSIT) has laid out the potentially dangerous impacts of a data breach affecting a new AI tool used by the MoD.
The tool in question, Textio, is described as an AI-powered writing assistant that improves job adverts by optimizing the language for “inclusivity, engagement, and effectiveness”.
Textio provides real-time feedback on the language patterns of job listings and suggests alternative phrases using predictive analysis and AI to eliminate bias and improve readability.
“The user is then given a score out of 100 which determines readability and the level of inclusive language, a score between 80-100 is deemed to be more effective in terms of potential candidate engagement, users have the ability to add or subtract words to improve the scoring,” the DSIT assessment stated.
Personnel will have the ability to overrule the suggestions Textio makes if they do not align with the MoD brand, DSIT noted, with the employee making all final decisions regarding the content of the job advert.
Textio is hosted on AWS infrastructure in the US, which provides the cloud computing resources to run its machine learning (ML) algorithms and perform AI inference on large datasets.
This hosting also includes Amazon GuardDuty, AWS’ threat detection service, which offers security monitoring to protect data and ensure systems running on AWS comply with cyber regulations.
But a breach could have significant negative outcomes for military staff, DSIT warned, noting the types of data that will be ingested by the tool.
“Due to MoD employee personal data being stored in overseas territory (MoD staff names, role and email), a data breach may have concerning consequences, i.e. identification of defence personnel.”
The department added that the limited extent of the data stored by the tool, combined with the safeguards put in place by Amazon such as GuardDuty, means the level of risk posed to military personnel remains low.
“Due to the minimal storage of sensitive data and robust safeguards put in place by the supplier, this was deemed a low level risk according to MoD’s Secure By Design process.”
ITPro has approached Amazon for comment but did not receive a response before publication.
The MoD directed ITPro to the DSIT assessment when approached for comment.
The assessment is part of a series of disclosures made by DSIT to improve transparency around the algorithms used by the UK’s 23 central government agencies.
The UK government has come under fire for its use of other algorithms, particularly in recruitment. In 2019, the government launched an investigation into potential bias in algorithms used in recruitment and the criminal justice system.
More recently, in 2020 the Home Office said it would immediately stop using an algorithm to sort visa applications, after a legal challenge from the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove claimed the system was biased, according to reporting from the Guardian.