
MODIC DECODED

Brand Strategy + Digital Defense



 

AI AND DEEPFAKES: THE RISING CONCERN OF VOICE CLONING

 

EBONY S. MUHAMMAD

 

Ever since a 2013 interview I conducted on photo manipulation, I have kept an eye out for similar forms of deceptively altered images. A few years after that interview, I watched the rise and rapid advancement of deepfakes.

According to MIT, a deepfake refers to a specific kind of synthetic media where a person in an image or video is swapped with another person's likeness. 

I'm sure you've seen the image of the Pope wearing a white puffer coat that went viral this week. As real as it appeared, it was completely AI-generated. Although the image of the Pope wasn't malicious in nature, it did spark debate about ethics, the need for deepfake detection tools, and the potential for harm when AI output looks this realistic.

Deepfakes are not limited to images and videos. There is also AI-generated synthetic voice technology, known as voice cloning. Voice cloning isn't new; the early samples I listened to weren't very advanced, and one could easily hear their slightly robotic tone.

However, a few days ago I came across a sample that was quite sophisticated: a clone of the voice of Kanye “Ye” West. Roberto Nickson, founder of the popular tech brand Metav3rse, wrote that he “tracked down a trained AI model of Kanye” and used it to replace his own vocals.

The sample I originally came across was on his Twitter account, but the covers below are just as mind-blowing.

The result was a near-perfect vocal resemblance to “Ye”. Listening with your eyes closed, it would be nearly impossible to tell the difference. Here is the sample he posted on @metav3rse's Instagram account of “AI Ye” covering popular songs by other artists.

Again, this was not performed by Ye. It was all generated with AI.

IMAGE VIA METAV3RSE INSTAGRAM

 

The advancement of voice cloning makes it very difficult to distinguish the real from the synthetic, and that opens the door to a range of problems, from impersonation scams to disinformation.

As someone who specializes in Cyber Threat Intelligence, I feel compelled to share this with you to raise awareness and to encourage you to exercise caution before reacting to or sharing something posted online.

Synthetic media such as deepfakes are frequently used as social engineering tools. Those that are malicious in nature are designed to provoke confusion, fear, anger, anxiety, or sadness, paired with a sense of urgency and curiosity.

 My recommendations:

  • Be aware that although Artificial Intelligence (AI) can be incredible when used properly and ethically, there is a darker side of AI that can cause real harm when it is misused.

  • Slow down and use a careful eye when you come across any form of media that provokes the emotions described above.

  • Be willing to authenticate media by checking the relevant official social media accounts before reacting to or resharing anything.

 

May Allah (God) continue to bless and strengthen us all in this very critical hour.  


JOIN US AND TAKE BOLD STEPS TOWARDS DIGITAL DEFENSE + TECHNOLOGY