NovelVox

Is Your Industry In The Deep of Deepfakes? : Analyzing the Extent of Deepfake AI and Authentication Measures


Deepfakes remind me of the Mission: Impossible movies, in which the protagonist wears the mask of a supposed villain, gains access to restricted places, does what his mission requires, and exits the scene dramatically, leaving people scratching their heads, without ever getting caught.

But not all things in life are as dramatic as the “masks coming off” in movies. It’s all fun and games until such representations start projecting themselves in real life. And worse still is the fact that the masks rarely ever come off in reality.

We are the technology generation: our masks are not made of synthetic material, and the networks running their own “should you choose to accept” missions are far more sophisticated, targeting the world at large. With the rise of AI adoption, it has become far easier for these faceless, nameless entities to penetrate our online defenses, leaving people and businesses vulnerable.

So, What Exactly Is a Deepfake?

A deepfake is AI-generated media that depicts a person saying something they never said, or appearing in a way they never did. It is an impersonation created using technologies such as AI and machine learning (ML), and it can take the form of a fake image, an audio clip, or a video that closely resembles the real person.

Deepfakes are everywhere. They have become the modern-day meme because they are so simple to create, from video impersonations to celebrity face swaps. And while you might think you’ve never been fooled by a deepfake, none of us can be a hundred percent sure.

If the head of a UK-based energy company can be tricked into transferring $243,000 because he believed he was on the phone with his boss, the chances of any of us detecting such fraud when we are the target are slim.

The Evolving & Fraud-Prone Contact Center Landscape

An overwhelming majority of fraud cases today are backed by sophisticated technologies. The fraudster will either get hold of your information, say card details, or trick you into making large transfers yourself. But where does the contact center feature in all this?

According to the Aite Group, 61% of fraud cases pass through a contact center at some stage.

Attackers often use the telephony and IVR channels to perform reconnaissance and gain the information they need to carry out the fraud effectively. Let’s explore some of these frauds.

💡 Also Read | Correlation between Visual IVR and Improved Customer Experience

a) Synthetic Identities

Using AI, fraudsters can create synthetic identities by combining real and fabricated information to open financial accounts, obtain loans, or apply for credit cards. These synthetic identities often go undetected for a long time, accumulating fraudulent activity that harms both individuals and businesses.

b) Voice Deepfakes

Deepfake technology can clone a human voice. All a fraudster needs is access to audio recordings of the person whose voice is to be imitated; deepfake algorithms then learn from this data to replicate that person’s voice.


c) Video Deepfakes

Our generation lives in an omnipresent social media world, where videos and photos tell stories better than the written word, making video the most widely used type of deepfake. The technology creates fake photos and videos by having another individual reenact the target’s facial gestures, which are then mapped onto the target’s likeness.

d) Real-time or Live Deepfakes

Deepfake technology is now advanced enough to let fraudsters generate advertising clones, imitate celebrities or political adversaries, and recreate user voices in real time to bypass voice-based authentication.

💡 Explore | How to detect loss and fraud in the retail sector?

How Can Contact Centers Sniff Out Deepfakes?

When faced with the risk of fraud, businesses have traditionally responded by adding more and more security questions to customers’ calls, either through the IVR system or directly via contact center agents. This approach, however, has its drawbacks:

  • Customers have to spend more time on the call answering tedious security questions, and
  • It is costly for contact centers, since asking more questions means longer call times.

Hence, this arrangement does not work for either the customer or the contact center.

In the face of growing threats, a layered approach to contact center security needs to be adopted that focuses on a robust implementation of a range of authentication measures. These include:

Voice-Based Authentication

A voice deepfake is a convincing reproduction of a person’s voice, made far easier by the recent rise of generative AI. Voice-based authentication mechanisms rely on the fact that each person’s voice is unique and full of variation: an entire orchestra of vocal tract structure, age, emotion, and speech patterns, whereas a digitally manufactured voice is like an instrument playing a single note. Voice biometrics uses the customer’s voice print much like fingerprint or facial recognition software: the customer’s unique vocal features are recorded and used to identify them.
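To make the matching step concrete, here is a minimal, illustrative sketch: an enrolled voice print and a live sample are treated as feature vectors and compared with cosine similarity, and the caller is accepted only above a threshold. The four-dimensional vectors and the 0.85 threshold are invented for illustration; real systems extract far richer features (e.g., spectral embeddings) and calibrate thresholds against deepfake samples.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_voice(enrolled_print, live_features, threshold=0.85):
    """Accept the caller only if the live sample is close enough
    to the stored voice print. Threshold is illustrative."""
    return cosine_similarity(enrolled_print, live_features) >= threshold

# Hypothetical 4-dimensional voice prints, for illustration only.
enrolled = [0.9, 0.1, 0.4, 0.7]
same_speaker = [0.88, 0.12, 0.41, 0.69]
impostor = [0.1, 0.9, 0.7, 0.2]

print(verify_voice(enrolled, same_speaker))  # True
print(verify_voice(enrolled, impostor))      # False
```

The design point is that verification is a similarity score against an enrolled template, not an exact match, which is also why a sufficiently good clone can threaten it.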

💡 Explore | Generative AI is here, what next for contact centers?

Three Pillars of Authentication

IVR-based Authentication

This method associates the telephone number of an individual customer with their customer record (such as their bank account number), enabling an automated check to be performed. Each time a customer calls the contact center, their number is matched against their record to verify that the call originates from the device they have registered. This method relies on CLI (calling line identification), which can be masked or spoofed to disguise the inbound number or even mimic that of a legitimate caller.
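A minimal sketch of that automated check might look like the following. The phone numbers and records are hypothetical, and, as noted above, a CLI match should only ever be treated as one weak signal, since the inbound number can be spoofed.

```python
# Hypothetical customer records keyed by registered phone number (CLI).
CUSTOMER_RECORDS = {
    "+441632960001": {"account": "GB-1029", "name": "A. Customer"},
    "+441632960002": {"account": "GB-2047", "name": "B. Customer"},
}

def ivr_precheck(inbound_cli):
    """Automated IVR pre-check: match the inbound CLI against the
    registered record. A match is a weak signal, not proof of
    identity, because CLI can be masked or spoofed."""
    record = CUSTOMER_RECORDS.get(inbound_cli)
    return {"matched": record is not None, "record": record}

print(ivr_precheck("+441632960001")["matched"])  # True
print(ivr_precheck("+441632999999")["matched"])  # False
```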

Knowledge-based Authentication

This mechanism works by having agents ask customers questions that a scammer would find hard to answer, such as a standard security question (a memorable word or phrase) or a PIN. It limits the risk of a fraudster using social engineering to persuade an operator to bypass certain checks. As with IVR-based authentication, however, KBA alone does not provide a comprehensive defense against attack.
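As an illustration of how a KBA answer can be stored and checked safely, the sketch below keeps only a salted hash of the memorable word and compares answers in constant time. The salt is hard-coded for illustration; a real system would generate one per customer with os.urandom and store it alongside the hash.

```python
import hashlib
import hmac

def hash_answer(answer, salt):
    """Store only a salted hash of the memorable word, never plaintext.
    Answers are normalized so 'Blue Kettle ' matches 'blue kettle'."""
    return hashlib.sha256(salt + answer.strip().lower().encode()).hexdigest()

def verify_answer(stored_hash, salt, supplied_answer):
    """Constant-time comparison, so timing leaks nothing about the answer."""
    return hmac.compare_digest(stored_hash, hash_answer(supplied_answer, salt))

salt = b"per-customer-random-salt"   # illustrative; use os.urandom in practice
stored = hash_answer("blue kettle", salt)

print(verify_answer(stored, salt, "Blue Kettle "))  # True
print(verify_answer(stored, salt, "red kettle"))    # False
```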

💡 Explore | The Ultimate Guide to Customer Authentication

Best Practices to Prevent Deepfake Fraud

Simplifying Contact Center Authentication with Unification

Along with deploying the right authentication mechanisms, contact centers need to unify disparate contact center applications to automate their authentication mechanisms. An integrated view of the customer can help agents identify the caller even before the interaction begins. There can be two approaches to achieve this:

Integration with CRM

Automation-backed solutions seamlessly integrate with backend databases, third-party systems, and CRM applications to identify callers directly from their number.
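A simplified sketch of such a lookup, using a hypothetical in-memory “CRM”, might look like this: the inbound number resolves to a customer profile that can be pushed to the agent’s screen before the call is answered.

```python
# Hypothetical CRM data keyed by phone number, for illustration only.
CRM = {
    "+15550100": {"name": "Jane Doe", "tier": "Gold", "open_tickets": 1},
}

def screen_pop(inbound_number):
    """Build the payload shown to the agent as a screen pop.
    Unknown numbers fall back to a blank profile so the agent
    still gets a consistent view."""
    profile = CRM.get(inbound_number)
    if profile is None:
        return {"known_caller": False, "profile": {}}
    return {"known_caller": True, "profile": profile}

print(screen_pop("+15550100"))
```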

Integration with Multi-factor Authentication Applications

By integrating solutions with third-party voice applications, contact centers can authenticate callers based on voice patterns, behavior, and profile matches.

The third-party business application integration can be achieved with a trusted partner like NovelVox. The integration provides agents with detailed customer information on their screens through screen pops, making it possible to deliver delightful experiences without asking the customer to repeat context.

Has Deepfake shifted the standards of Authentication in Contact Centers?

Authentication mechanisms are the foundation on which any further measures for rooting out such threats can be built. However, deepfake technology has become so advanced, widely available, and democratized that it challenges the ability of even agile institutions to prevent threats.

A recent study at the University of Waterloo showed that voice biometric authentication systems, including those of industry leaders such as Amazon and Microsoft, can be bypassed by deepfake technology in as few as six attempts.

In such a scenario, contact centers need to keep testing the certainty rate of their authentication mechanisms to prevent a breach. For example, businesses should ensure that their voice biometric tools are actively tested against deepfake audio samples.

Further, advancements in countermeasures, including the use of machine learning, are leading to authentication systems that can produce a reliable probability score for whether a voice sample is genuine.

A layered approach to fraud detection, such as multi-factor authentication tuned against customer metadata combined with robust behavioral analytics, provides a way forward that maximizes fraud prevention while also protecting the customer experience.
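One way to sketch that layered decision: each factor (voice match, CLI match, KBA result, behavioral score) contributes a normalized score, and a weighted combination routes the call to accept, step-up authentication, or reject. The weights and thresholds below are illustrative only; a real deployment would tune them against measured fraud and false-rejection rates.

```python
def layered_decision(factor_scores, weights, accept=0.8, review=0.5):
    """Combine independent authentication signals into one weighted
    score in [0, 1], then route the call. Thresholds are illustrative."""
    total = sum(factor_scores[k] * weights[k] for k in weights)
    total /= sum(weights.values())
    if total >= accept:
        return "accept"
    if total >= review:
        return "step-up"   # e.g., ask an additional KBA question
    return "reject"

weights = {"voice": 0.4, "cli": 0.1, "kba": 0.2, "behavior": 0.3}

strong = {"voice": 0.9, "cli": 1.0, "kba": 1.0, "behavior": 0.7}
weak = {"voice": 0.3, "cli": 1.0, "kba": 1.0, "behavior": 0.5}

print(layered_decision(strong, weights))  # accept
print(layered_decision(weak, weights))    # step-up
```

The design choice here is that no single factor is decisive: a spoofed CLI or a cloned voice alone cannot clear the accept threshold, which is the point of layering.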

Wrap up

If you run a contact center, there is every reason to keep adopting new technology to increase operational efficiency and upgrade the customer experience. But be aware that each new technology brings its own vulnerabilities. The best way to stay safe is to layer multiple types of fraud prevention technology together, ensuring the strongest possible protection.

This is the mission that contact centers must choose to accept!
