HarmonyCloak

HarmonyCloak: Making Music Unlearnable for Generative AI

Calling all music enthusiasts and professionals!

Please take this quick 2-minute survey to share your insights on AI's impact on the music industry and stay informed about our latest research initiatives.


Introduction

In recent years, rapid advances in generative AI have transformed a wide range of domains, from generating realistic images to producing coherent text. More recently, these technologies have entered the music industry, enabling AI systems to produce sophisticated compositions that rival human creativity. However, this progress has also raised critical concerns about the unauthorized use and exploitation of musicians' original works. As generative AI models learn from vast databases of music, there is an increasing risk of unintentional infringement on copyrighted material, threatening both the livelihoods and the artistic integrity of musicians. In response to this emerging challenge, we propose HarmonyCloak, the first defensive framework designed to protect musicians' intellectual property by making their music unlearnable to AI models. By embedding imperceptible, error-minimizing noise into the music, HarmonyCloak prevents AI systems from extracting meaningful patterns while preserving the perceptual quality of the music for human listeners. Our approach leverages time-dependent optimization constraints and psychoacoustic principles to ensure that the protective noise remains undetectable to the human ear. Extensive experiments on state-of-the-art generative AI models, including MuseGAN, SymphonyNet, and MusicLM, demonstrate the effectiveness of HarmonyCloak in both white-box and black-box scenarios. By extending data protection from the visual domain to audio, this framework opens new possibilities for safeguarding creative works in the music industry without compromising their artistic essence.


Threat of Generative AI Against Musicians

  • As generative AI rapidly evolves, it presents a growing threat to musicians and their intellectual property. AI models, which can now produce coherent and high-quality music, often rely on vast datasets of publicly available music scraped from online platforms and streaming services. While this technology offers exciting new opportunities, it also introduces significant risks, particularly regarding copyright infringement and the unauthorized use of original compositions.
  • The threat escalates as attackers—such as AI model owners or companies—may scrape music data from the web, using it to train generative models that replicate or mimic the protected compositions. As a result, the risk of copyright violations becomes more pronounced, and the need for robust protective measures becomes paramount.
  • The issue is further compounded by these companies profiting from the generated music, while the original creators receive no monetary compensation or societal recognition for their contributions. Armed with powerful AI technologies and unrestricted access to publicly available music, corporations can systematically undermine the rights of musicians.


Figure 1: Illustration of the threat model where the attacker scrapes music posted online by victim musicians to train their music generative models.


What is HarmonyCloak?

HarmonyCloak is designed to protect musicians from the unauthorized exploitation of their work by generative AI models. At its core, HarmonyCloak functions by introducing imperceptible, error-minimizing noise into musical compositions. While the music sounds exactly the same to human listeners, the embedded noise confounds AI models, making the music unlearnable and thus protecting it from being replicated or mimicked. For example, a beautifully composed symphony may remain pristine to the human ear, but to an AI, the "Cloaked" version appears as a disorganized, unlearnable dataset. As a result, when an AI model attempts to generate music in the style of the original composer, the output will be incoherent, preventing the model from capturing the essence of the protected composition.

But why does HarmonyCloak work so effectively? Why can’t an attacker simply remove these subtle alterations by applying common digital manipulations—such as converting the file, compressing it, or adding noise? The answer lies in the nature of HarmonyCloak: it’s not a simple additive noise or a hidden signal that can be easily filtered out. The imperceptible noise added by HarmonyCloak is carefully crafted to remain below the human hearing threshold, blending seamlessly with the music. This noise is masked by the music itself, making it undetectable to the human ear while still disrupting the AI’s ability to learn from the data. Think of it as a new, invisible layer to the music—one that only AI models can detect but humans cannot. This protective layer is dynamically created, adapting to the characteristics of each individual piece of music. And because the dimensions HarmonyCloak operates on vary with every song, generative AI models cannot easily reverse-engineer or bypass the protective noise without knowing the specific parameters for each track.
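To make the masking idea concrete, here is a minimal sketch of how noise can be kept below a music-dependent ceiling. This is our own illustration, not HarmonyCloak's actual psychoacoustic model: it derives a per-frame spectral threshold from the music itself, then rescales a candidate noise signal so every frequency bin stays a fixed margin below the music's own energy there. The frame length, hop size, dB margin, and function names are all assumptions made for this example.

```python
import numpy as np

def masking_threshold(music, frame=512, hop=256, margin_db=-20.0):
    """Per-frame spectral ceiling for the noise: each frequency bin is
    kept margin_db below the music's own magnitude there, so the
    perturbation hides under louder musical content.  A crude stand-in
    for a real psychoacoustic masking model."""
    win = np.hanning(frame)
    frames = np.array([music[i:i + frame] * win
                       for i in range(0, len(music) - frame + 1, hop)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return mag * 10.0 ** (margin_db / 20.0)

def clip_noise_to_threshold(noise, thresh, frame=512, hop=256):
    """Rescale each noise frame's spectrum bin by bin so it never
    exceeds the masking threshold, then overlap-add the frames back."""
    win = np.hanning(frame)
    out = np.zeros_like(noise)
    norm = np.zeros_like(noise)
    for k, i in enumerate(range(0, len(noise) - frame + 1, hop)):
        spec = np.fft.rfft(noise[i:i + frame] * win)
        scale = np.minimum(1.0, thresh[k] / (np.abs(spec) + 1e-12))
        out[i:i + frame] += np.fft.irfft(spec * scale, n=frame)
        norm[i:i + frame] += win
    return out / np.maximum(norm, 1e-3)
```

Because the ceiling tracks the music's own spectrum, the noise is loudest exactly where the music is loudest, which is where the ear is least able to notice it.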

HarmonyCloak doesn't compromise on sound quality; it preserves the artist’s original intent while preventing AI from learning from the music. Whether it’s classical, jazz, or modern electronic compositions, HarmonyCloak ensures that artists’ intellectual property remains safe in an era of rapid advancements in AI music generation. Keep reading to discover how HarmonyCloak works, and explore the results of our experiments across multiple state-of-the-art AI music models like MuseGAN, SymphonyNet, and MusicLM.


Our Goals

At HarmonyCloak, our goal is to protect musicians and their creative works from unauthorized use by generative AI models. We aim to ensure that artists retain control over their music, preventing AI systems from learning or mimicking their compositions without consent. By providing innovative solutions that safeguard music from exploitation, we empower musicians to confidently share their work while maintaining its artistic integrity. Our mission is to create a future where technology respects and supports creators, ensuring that music remains secure, original, and enjoyed as intended by human audiences.


HarmonyCloak Design

HarmonyCloak is designed to protect musicians' work from being misused by generative AI models in two main scenarios: white-box and black-box settings.

In the white-box setting, we know the details of the AI model that will train on the music. This allows HarmonyCloak to tailor its protection by adding noise that is hidden within the music, making it undetectable to human ears but confusing to AI models. The noise is carefully matched to the tones and rhythm of the music, so it blends naturally with each instrument. Even after common processes like compressing the file to MP3, the noise remains effective, ensuring that the AI cannot learn from the music, even if it tries.
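The white-box idea can be sketched as a bilevel, error-minimizing optimization: the defender alternates between letting a known model fit the data and nudging the noise so that the training loss drops even further, leaving the model nothing informative to learn. The toy below uses a linear regressor in place of a real generative model; the function name, hyperparameters, and setup are our own illustrative assumptions, not the HarmonyCloak optimizer.

```python
import numpy as np

def error_minimizing_noise(x, y, steps=200, lr=0.05, eps=0.1, seed=0):
    """Toy white-box sketch of error-minimizing ("unlearnable") noise
    for a linear regressor f(x) = x @ w.  We alternate:
      1. a model step that reduces training loss, and
      2. a noise step that reduces the loss even further,
    so the perturbed data looks "already learned" and carries little
    useful gradient signal for a model trained on it."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    w = rng.normal(size=d) * 0.01
    delta = np.zeros_like(x)
    for _ in range(steps):
        xp = x + delta
        err = xp @ w - y                     # residual of the inner model
        w -= lr * xp.T @ err / n             # model step (inner minimization)
        grad_delta = np.outer(err, w) / n    # d(loss)/d(delta)
        delta = np.clip(delta - lr * grad_delta, -eps, eps)  # bounded noise step
    return delta
```

Because the noise minimizes rather than maximizes the training error, the perturbed data appears to contain nothing left to learn, which is what makes it "unlearnable" rather than merely adversarial.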

In the black-box setting, where we don’t know which AI model will be used, HarmonyCloak applies a more flexible approach. It generates protective noise that works across different models, even those we haven't seen before. This makes sure the music stays safe from being learned, no matter what type of AI is trying to replicate it.
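One common way to approximate the black-box setting, again sketched under our own assumptions rather than the paper's exact procedure, is to optimize a single perturbation against an ensemble of surrogate models and rely on it transferring to unseen architectures:

```python
import numpy as np

def ensemble_noise(x, y, n_models=3, steps=200, lr=0.05, eps=0.1, seed=0):
    """Toy black-box sketch: craft one perturbation against several
    surrogate models at once, hoping it transfers to models we have
    never seen.  Linear regressors stand in for real generative
    models; all names here are illustrative."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    ws = [rng.normal(size=d) * 0.01 for _ in range(n_models)]
    delta = np.zeros_like(x)
    for _ in range(steps):
        xp = x + delta
        grad = np.zeros_like(x)
        for i, w in enumerate(ws):
            err = xp @ w - y
            ws[i] = w - lr * xp.T @ err / n   # each surrogate keeps learning
            grad += np.outer(err, ws[i]) / n  # accumulate the noise gradient
        delta = np.clip(delta - lr * grad / n_models, -eps, eps)
    return delta
```

Averaging gradients over several surrogates discourages noise that exploits the quirks of any single model, which is what gives the perturbation a chance of transferring to models outside the ensemble.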

Clean Music
Defensive Noise
Cloaked Music


Figure 2: Illustration of the spectrogram of the music and the added defensive noise.



Music Samples

Original and Cloaked Music

Original Music
Original Music
Original Music
Cloaked Music
Cloaked Music
Cloaked Music

AI Generated Music

Generated Music with MuseGAN Model

Trained on Clean Music
Trained on Clean Music
Trained on Cloaked Music
Trained on Cloaked Music

Generated Music with MusicLM Model

Trained on Clean Music
Trained on Clean Music
Trained on Cloaked Music
Trained on Cloaked Music

Generated Music with SymphonyNet Model

Trained on Clean Music
Trained on Clean Music
Trained on Cloaked Music
Trained on Cloaked Music

Media Coverage

UTK News: Liu’s New Tool Makes Songs Unlearnable to Generative AI - October 2024

Microsoft Network: New tool makes songs unlearnable to generative AI - October 2024

FURTURA: Good news for artists' rights: HarmonyCloak poisons AIs to protect music! - October 2024

山下裕毅 (Seamless): "Poisoned music" keeps AI from learning it: a technique that covertly embeds noise inaudible to humans into audio to make it untrainable (Generative AI Close-Up) - October 2024

NDD.news: Protecting your music from AI is possible! - October 2024

追问Nextquestion: New tool HarmonyCloak helps music creators fight AI infringement - October 2024

New Atlas: HarmonyCloak slips silent poison into music to corrupt AI copies - October 2024

TechXplore: New tool makes songs unlearnable to generative AI - October 2024

Alan Cross' A Journal of Musical Things: Someone has come up with a cloaking device to fight bogus AI music. It’s pretty cool - October 2024

Softonic: This app is saving musicians by poisoning the AI so it stops stealing music - October 2024

ProjectREYLO: HarmonyCloak: The Silent Savior Against AI Music Theft - October 2024

WWWHATSNEWS: HarmonyCloak: Musical protection against generative AI - October 2024

Francetvinfo: Music: artists will be able to "poison" their works so that algorithms can no longer draw on them - October 2024

The Outpost: HarmonyCloak: A New Tool to Protect Musicians from AI Copyright Infringement - October 2024

Ainvergo: HarmonyCloak: Innovative AI Solution to Safeguard Music from Unauthorized Scraping by Generative AI Platforms - October 2024

Mischa Dohler: Harmony Unleashed: AI’s Musical Revolution Silenced - October 2024

knowledge: New tech makes songs invisible to AI, protecting artists from copycats - October 2024

Tecnología: Tool makes it harder for AI to train on songs - October 2024


Contact

Team Members

Paper

