Taylor Swift turns her vocal cords into a registered trademark

Faced with a flood of AI deepfakes and unauthorized endorsements, Taylor Swift’s legal team is attempting to weaponise trademark law to protect her biometric identity.

On Friday, April 24, a short .wav file was uploaded to the United States Patent and Trademark Office (USPTO) database. It is not a song or a promotional clip meant for radio play. It is a recording of three words—'Hey, it’s Taylor'—uttered by Taylor Swift. By filing this audio snippet, Swift’s holding company, TAS Rights Management, is attempting something that bridges the gap between traditional brand protection and the frantic new reality of biometric security: it is trying to turn the specific resonance of her vocal cords into a federally protected commercial mark.

This is not a vanity project; it is a defensive fortification against an industrial-scale problem. As generative AI models become increasingly adept at cloning human voices with just a few seconds of training data, the legal framework for protecting an individual's identity is showing its age. Swift’s move to trademark her voice and a specific visual likeness—holding a pink guitar in a multicoloured bodysuit and silver boots—suggests that copyright law is no longer a sufficient shield against the synthetic tide.

Can a voice be a brand?

The technical tension here lies in the distinction between copyright and trademark. Copyright protects a specific 'work'—a song, a book, a photograph. It does not, however, protect the style, the tone, or the identity of the person who created it. If an AI generates a new song that sounds exactly like Taylor Swift but uses a new melody and lyrics, copyright lawyers often find themselves in a cul-de-sac. Trademark law offers a different path: it protects the source of a product. By registering her voice as a trademark, Swift is arguing that her vocal timbre is a 'source identifier' for her brand, much like the MGM lion’s roar or the Intel chimes.

Intellectual property attorneys, including Josh Gerben, have noted that this represents a fundamental shift. It is a move away from litigating the output of AI and toward litigating the identity markers used to sell it. The filings target 'AI-generated clips or unauthorized uses,' aiming to give Swift’s team a clear federal hammer to drop on platforms that host deepfakes. It is an attempt to treat a human voice with the same legal rigidity as a corporate logo.

The failure of voluntary safeguards

The need for such a hammer is not hypothetical. In 2024, Donald Trump shared AI-generated images falsely suggesting that Swift had endorsed his candidacy: the technology was used to manufacture a false endorsement—a direct attack on the commercial and political value of a celebrity's persona. In the United States, 'Right of Publicity' laws are a patchwork of state-level regulations that vary wildly between California, Tennessee, and New York. By moving into the realm of federal trademark law, Swift’s legal team is seeking a uniform, national standard that relies on neither the whims of state legislatures nor the inconsistent Terms of Service of social media giants.

How Europe views the biometric land grab

While Swift fights her battle in the US patent offices, the European perspective offers a starkly different regulatory approach. Under the EU AI Act, whose obligations are currently being phased in across the member states, there are specific transparency requirements for 'high-risk' AI and general-purpose models. Article 50 of the Act mandates that deployers of an AI system that generates or manipulates image, audio, or video content appreciably resembling existing persons—commonly known as deepfakes—must disclose that the content has been artificially generated.

In Germany, the concept of *allgemeines Persönlichkeitsrecht* (general right of personality) is deeply entrenched in the constitution. German courts have historically been more protective of an individual's right to control their own image and voice than their American counterparts. However, the German legal system, much like the rest of the EU, is currently grappling with the jurisdictional nightmare of AI. If a model is trained on a cluster in Dublin using data scraped from a server in Singapore and then deployed by a user in Munich, the 'right of personality' becomes difficult to enforce. Brussels is betting on top-down regulation of the model providers themselves, whereas the American approach—perfected by Swift—is to arm the individual with enough private property rights to sue everyone in the supply chain.

The training-data bottleneck

Beneath the legal filings lies a deeper technical grievance: the provenance of training data. AI models like Suno, Udio, or Voicebox do not create voices out of thin air; they require massive datasets of existing human speech. For an AI to mimic Taylor Swift, it must first 'consume' thousands of hours of Taylor Swift’s recorded history. Engineers in the industry know that the current crop of Large Language Models (LLMs) and audio diffusion models were built on the assumption that anything publicly available on the internet is 'fair use' for training.

Swift’s attempt to trademark her voice is, in a way, a retroactive tax on that training data. If her voice is a registered trademark, then any AI model that can demonstrably reproduce that voice might be infringing on her mark simply by existing as a commercial product. This creates a potential liability for the hardware and software companies that provide the infrastructure for these models. It moves the conflict from the teenager making deepfakes in their bedroom to the venture-backed AI labs in Silicon Valley and the GPU clusters that power them.

The gap between law and latency

Despite the strategic brilliance of the trademark filing, a significant gap remains between legal protection and technical reality. A trademark gives you the right to sue, but it does not give you the ability to stop a viral video before it reaches ten million views. The latency of the legal system is measured in months and years; the latency of a deepfake going viral is measured in seconds. This is the reality that engineers and policymakers are struggling to reconcile.

Even if the USPTO grants these trademarks, the enforcement will likely require a new kind of 'digital fingerprinting' or watermarking—technologies that are still in their infancy and easily bypassed by sophisticated actors. Matthew McConaughey has reportedly adopted a similar strategy, indicating that we are seeing the beginning of a celebrity-led enclosure movement of the digital commons. The goal is to make the unauthorized use of a human likeness so legally expensive that AI developers are forced to build 'opt-in' systems rather than the 'scrape-first, ask-later' models currently in vogue.
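To make the watermarking idea concrete, here is a minimal sketch of the spread-spectrum technique that many audio watermarking schemes build on: a low-amplitude pseudo-random pattern, derived from a secret key, is added to the signal and later detected by correlation. The function names, parameters, and thresholds below are illustrative assumptions, not any deployed system; production watermarks must also survive compression, resampling, and deliberate removal, which this toy version does not.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Add a low-amplitude pseudo-random +/-1 'chip' pattern derived from `key`.

    The pattern sits far below audible levels but remains statistically
    detectable by anyone who knows the key.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * pattern

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 0.005) -> bool:
    """Correlate the signal with the key's pattern.

    For watermarked audio the normalised correlation clusters around
    `strength`; for clean audio (or the wrong key) it clusters near zero.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    score = float(np.dot(audio, pattern) / audio.size)
    return score > threshold

# Demo: a 5-second synthetic tone standing in for a voice recording.
SAMPLE_RATE = 16_000
t = np.linspace(0.0, 5.0, 5 * SAMPLE_RATE, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 220.0 * t)
marked = embed_watermark(clean, key=1234)
```

Only the holder of the key can cheaply verify the mark, which is precisely the property a rights holder would want. The catch, as noted above, is that a simple additive mark like this one is easily stripped by an adversary who knows the scheme, which is why the enforcement technology remains immature.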

Ultimately, Swift is doing what she has always done: treating her art and her identity as an industrial asset to be protected with the highest possible walls. She has the lawyers, Brussels has the directives, and the AI scrapers have the data. It remains to be seen which of these forces will prove more durable in a digital economy that increasingly values the synthetic over the authentic. For now, the .wav file sits in the USPTO database—a tiny, digital claim-stake in the wild west of the generative era.

The USPTO will now decide if a human voice can be a brand. It’s a decision that will likely be made in Virginia, but the ripples will be felt in every boardroom from Cupertino to Berlin.

Mattias Risberg

Cologne-based science & technology reporter tracking semiconductors, space policy and data-driven investigations.

University of Cologne (Universität zu Köln) • Cologne, Germany

Readers Questions Answered

Q Why is Taylor Swift seeking a federal trademark for her voice?
A Taylor Swift is seeking a federal trademark for her voice to protect her biometric identity from unauthorized AI deepfakes and false endorsements. While copyright protects specific songs, trademarking her vocal timbre allows her to treat her voice as a brand identifier. This federal protection provides a consistent legal standard across the United States, overcoming the current patchwork of state-level right of publicity laws that vary significantly by jurisdiction.
Q How does trademark law offer better protection against AI than copyright?
A Copyright law protects a specific work, like a song or book, but it does not protect an artist's unique style or vocal resonance. If AI generates a new song that mimics a singer's voice without using their lyrics, copyright provides little recourse. Trademark law identifies the source of a product. By treating her voice as a brand logo, Swift can litigate against the identity markers used to sell synthetic content.
Q What specific likeness and audio are included in Swift's trademark application?
A The trademark filings by TAS Rights Management include a short audio file of the phrase "Hey, it’s Taylor" to register her specific vocal resonance. Additionally, the application includes a visual description of her likeness, specifically depicting her in a multicoloured bodysuit and silver boots while holding a pink guitar. These specific markers are intended to serve as federally protected source identifiers, making it easier to target unauthorized AI-generated content.
Q How do European regulations on deepfakes differ from the American legal approach?
A The European Union regulates AI through the EU AI Act, which requires clear disclosure when content is artificially generated. While some European nations like Germany have constitutional protections for personality rights, the EU focuses on top-down requirements for AI model providers. In contrast, the strategy used by Taylor Swift in the United States seeks to empower individuals with private property rights, allowing them to sue anyone in the AI supply chain for trademark infringement.
