“By this time next year, Oracle employees won’t be using passwords” — Larry Ellison wants a biometric future in cybersecurity


Oracle CTO Larry Ellison took a strong stand on cybersecurity at Oracle CloudWorld 2024, calling passwords a “terrible idea” in need of replacement with biometric security measures.

“The idea that we use passwords is a ridiculous idea. It’s obsolete. It’s very dangerous,” Ellison said during his keynote at the company’s flagship event in Las Vegas.

“They are 17 characters long – including special characters that no one can ever remember and has to write down – [and] are unbelievably easy to steal,” he added.

He even went as far as to detail Oracle’s commitment to banishing passwords internally in the not-too-distant future – “by this time next year, Oracle employees won’t be using passwords, and a lot of you will have the choice to not use passwords anymore.”

The alternative? Biometric security measures such as facial or fingerprint recognition, which Ellison feels are far more secure and less susceptible to exploitation by malicious actors.

“Biometric logins are much easier to use – and much faster and much more secure,” Ellison claimed.

“You point your smartphone to your face for a couple of seconds, you put your fingerprint on the smartphone, you can record your voice if you want to – and we register it in a biometric database,” he added.

Ellison pointed to other areas in which biometric technology could be used to enhance security, such as with credit cards or in schools to ensure that people cannot gain unauthorized access to physical sites.

“It becomes very difficult to impersonate somebody else if you have proper user authentication,” Ellison said.

Biometric technology isn’t foolproof

Ellison’s enthusiasm aside, there are cybersecurity concerns surrounding biometric technologies, with incidents in recent years highlighting the risks of harvesting vast amounts of personal data.

In 2020, for example, Clearview AI was hacked, raising concerns about malicious actors gaining access to the facial recognition technology provider’s large stores of images. In this case, the hackers only gained access to the firm’s client list, but the damage could have been worse.

Companies gathering facial recognition data should therefore tread carefully, as their systems could become attractive targets for cyber criminals seeking access to personally identifiable information (PII).

The technology isn’t perfect, either. In 2018, civil liberties group Big Brother Watch reported that facial recognition technology used by police in the UK misidentified people as criminals at a rate of 98%.

There is also growing concern around deepfakes, with technology – now increasingly generative AI – being used to create fake video and audio of real people.

Companies looking to adopt voice recognition technology as a replacement for passwords, for example, would need to think long and hard about this, as there is potential for cyber criminals to imitate the voices of employees to gain access.

In February this year, a finance worker at a major firm in Hong Kong fell prey to threat actors using deepfake technology to pose as the company’s chief financial officer in a conference call.

The unsuspecting worker was duped into paying out $25 million.

