Splunk Urges Australian Organisations to Secure LLMs
Splunk’s SURGe team has reassured Australian organisations that securing AI large language models against common threats, such as prompt injection attacks, can be accomplished with existing security tooling. However, vulnerabilities may arise if organisations fail to address foundational security practices. Shannon Davis, a Melbourne-based principal security strategist at Splunk SURGe, told TechRepublic that Australia was showing increasing security awareness…