The music industry is in the grip of an intensifying debate over how AI-generated derivatives of existing music should be created, licensed, and monetized.
Universal Music Group has been at the center of that debate. The company has championed a “walled garden” model for AI derivatives, a model central to its settlement with Udio last October, under which AI-generated music cannot be downloaded or distributed outside the platform on which it was created.
Michael Nash, UMG’s EVP and Chief Digital Officer, has argued that without such restrictions, AI derivatives risk allowing users to “effectively use artists’ content and their brand to create derivatives where you’re going to compete with the artist on other platforms.”
That position has put UMG at odds with AI music platform Suno, whose Chief Music Officer Paul Sinclair has advocated for “open studios, not walled gardens.”
UMG and Sony Music continue to pursue litigation against Suno, while Warner Music Group struck a deal with the platform in November that preserved users’ ability to download their creations.
Meanwhile, the demand for AI derivative tools is building from the platform side. In February 2026, Spotify Co-CEO Gustav Söderström said the streaming giant’s technology to let fans make AI-generated remixes and covers was “ready,” but that “the absence of a rights framework” was holding things up.
Against that backdrop, an entity linked to UMG has been building a patent portfolio around AI-music infrastructure, through a partnership with IP asset management, investment, and advisory firm Liquidax Capital. Depending on how it is commercially deployed, the technology could support a so-called ‘walled garden’ approach to AI-generated music derivatives, among other possible applications that remain unclear for now.
In July 2025, UMG announced a strategic partnership with Liquidax, led by CEO Daniel Drolet, to accelerate the development and licensing of music-related AI patents.
A new entity, Music IP Holdings, Inc. (MIH), was formed to license the technology into the global marketplace. UMG said at the time that it had filed 15 patents with Liquidax across fields including musical collaboration, multimedia content creation, AI threat protection, and rights management.
By November 2025, MIH had opened a headquarters on Nashville’s Music Row, with Drolet serving as Chairman and CEO. In a press release announcing the expansion, the company said it held “more than 60 protected innovations with numerous additional technology families and portfolios under development” — a significant expansion on the 15 filings disclosed four months earlier.
MBW has obtained three of those patent filings and reviewed them in detail. The filings are assigned to MIH, not to UMG directly, and neither MIH nor UMG has publicly disclosed whether or when they intend to use or license the described technology, or to whom.
Here’s what the filings describe…
‘AI-GENERATED MUSIC DERIVATIVE WORKS’
This filing seeks to address the core copyright problem posed by AI trained on and used to remix existing music.
The invention the patent claims is a system built around artist approval. A user asks the AI to transform a copyrighted track, the system checks whether the rightsholder would approve, and if it passes, the output gets a digital watermark and is released under terms the rightsholder has set.
Filed on October 24, 2024 and granted on June 3, 2025, this is the earliest-granted patent in the family. It originally listed Daniel Drolet as the sole inventor.
A subsequent expanded continuation, assigned to MIH and currently pending, was filed in August 2025 and adds Chris Horton, Jeremy Uzan, and Sion Elliott as co-inventors alongside Drolet. The three are identified on LinkedIn as EVP, Strategic Technology at UMG; Director, AI and Advanced Technology at UMG; and Director, Global New Business Strategy at Universal Production Music, respectively.
You can read the continuation here and the original granted patent here.
The abstract in the updated continuation filing describes “a system and method for creating AI-generated derivative works from predetermined content with copyright compliance and content owner control”.
It adds that “in some aspects, the system receives predetermined content and a user-requested transformation theme, then employs generative artificial intelligence to create a derivative work. The system may enable scalable rights management for AI-generated content across music, video, text, and other media formats.”

It describes derivative works as “creations that are based on one or more predetermined works” and warns that AI-generated versions “may infringe on the rights of the creator or copyright holder of the predetermined work, through reproduction and transformation without permission” and “may be adverse to preferences of the creator or copyright holder of the predetermined work.”

In the filing’s own language, the method covers “receiving a request to transform the predetermined content into a derivative work, receiving a requested theme for the derivative work, using generative artificial intelligence to create the derivative work generated as a function of the predetermined content and the requested theme, determining if the generated derivative work is approved based on a machine learning model configured to determine a content approval score as a function of content owner preferences, in response to determining the generated derivative work is approved, applying a digital watermark to the approved derivative work, configuring an authorization server to govern use of the approved derivative work based on the digital watermark and providing user access to the authorized derivative work.”
The user’s instructions to the AI can be gathered conversationally. The filing states that “the requested theme may be determined using a Large Language Model (LLM) and a chatbot interview.”
Watermarks can also expire, giving rights holders a way to manage how long a derivative remains authorized. The filing states that “the authorization server may be configured to revoke, or not renew, approval for a derivative work, based on allowing the watermark embedded in a derivative work to expire.”
‘MULTI-STAGE APPROVAL AND CONTROLLED DISTRIBUTION OF AI-GENERATED DERIVATIVE CONTENT’
Filed on May 2, 2025 and granted on September 23, 2025, this filing, which you can read here, names Chris Horton, Jeremy Uzan, Sion Elliott, and Daniel Drolet as inventors. It builds on the earlier-granted patent, with broader claims.

Where the earlier patent establishes the base system, this one tackles a different problem: how to build approval logic that reflects the artist’s own preferences at both ends of the creative process.
The filing describes a method that combines a machine-learning approach with “predefined rule sets established by content owners that specify permissible and/or impermissible transformations for specific content” — the two can operate “either separately or in combination, to accommodate different content owner preferences and use cases.”
The core addition is a two-stage approval process: one check before the AI generates anything, another check on the finished output.
The filing refers to “pre-generation preference data” and “post-generation preference data” associated with a “content authority” — defined broadly as “any entity having legitimate control, decision-making power, or governance rights over digital or creative content and its permissible transformations,” including “a natural person (such as an artist, creator, or designated individual)” as well as legal entities and rights management organizations.
The filing gives a concrete example of how the approval system reflects artists’ personal values: “If an artist is vegetarian and does not want their voice or style to be used in songs about meat consumption, this preference can be captured by their label ahead of time and provided to the content derivation platform. This allows filtering to occur both at the prompt level and at the output level, ensuring alignment with the content owner’s values throughout the creation process.”
“If an artist is vegetarian and does not want their voice or style to be used in songs about meat consumption, this preference can be captured by their label ahead of time and provided to the content derivation platform.”
excerpt from Music IP Holdings patent filing
Artist consent is also built into voice-substitution claims within the filing. The filing describes a feature that lets users “replace the lead vocals in an existing recording with the voice of another artist who has consented to such transformations” — and notes that “the selection action itself may trigger the attribution and remuneration system, which ensures appropriate compensation to both the original and substitute artists.”
Rejected requests can come with an explanation. The filing describes “a remediation report when a derivative work fails to meet approval criteria”, analyzing what caused the failure so the user can try again.
The patent also sets out how distribution controls can prevent unauthorized use of derivatives on third-party platforms. Partner platforms including streaming services, social networks and distribution aggregators “may be contractually required to scan incoming content for the presence of such markers, and take automated enforcement action based on the encoded rules.”
The filing also describes “context-restricted playback, where transformed works are only accessible within the approved media environment” — one of several distribution configurations the patent covers, but one that sounds a lot like the ‘walled garden’ approach championed by UMG. None of the filings use the term ‘walled garden’.
The filing further describes automated revenue distribution, with “a smart contract that automatically allocates revenue from each authorized distribution of the derivative work to stakeholders identified in the usage registry”, triggered “each time the digital identifier is verified during a streaming session.”
The rights framework the patent describes has elements in common with the kind of infrastructure Söderström identified as currently missing — though the filing makes no reference to Spotify or any other platform, and MBW has no information on commercial discussions between MIH and any streaming service.
‘AI-GENERATED DERIVATIVE CONTENT SCALING FOR MERCHANDISE’
Filed on October 3, 2025, this pending filing names Horton and Uzan as inventors.
This filing, which you can read here, tackles a different commercial problem: how to extend the same approval-and-watermark system beyond audio into physical and virtual merchandise. It applies to “AI-generated album artwork derivatives, apparel designs, posters, and virtual goods”, all under the same governance framework as the audio filings.

The fan-facing side of the system is designed around a five-step journey on a mobile app. The fan first picks a source asset — lyrics, moments from a song, or themes linked to an artist. The AI then generates a set of design variations. The fan picks one, customizes it by adjusting color, size, and position, and sees a real-time preview rendered either as a physical product or in augmented reality. At the final step, the fan chooses to buy or save.
Behind the app sits a backend that pulls from a library of artist reference materials — artist logos, artist fonts, lyrics and text, album art, and tour visuals (see illustration below). This is the curated set of creative assets the rightsholder makes available for fans to remix.

The fan’s request passes through two AI components. A “knowledge agent” checks it against the rightsholder’s brand guidelines. An “action agent” then runs the generative AI and produces the design. The design then goes through an approval check: if it fails, it’s rejected (a T-shirt reading “Music is just noise!” in the filing’s own illustration); if it passes (a T-shirt reading “Stage is my canvas”), it moves on to fulfillment.
From there, the system splits in two. Physical merchandise is routed to a “print agent” that feeds a print-on-demand partner, and a “shop agent” that lists the product on an e-commerce storefront. Virtual merchandise takes a separate path to a 3D rendering engine and metaverse platforms — where, in the filing’s illustration, an avatar is shown wearing the AI-generated design inside a virtual environment.
Both tracks feed into a contract that distributes revenue to stakeholders. One approved design can become a physical T-shirt, an e-commerce listing, and a wearable item inside a metaverse — with automated royalty payments routed at the end.
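The agent pipeline described above can be summarized as: brand-guideline check, design generation, approval gate, then fan-out to physical and virtual fulfillment. This is a hypothetical sketch of that control flow, with every function an invented placeholder; the rejection logic deliberately echoes the filing’s own “Music is just noise!” illustration.

```python
# Invented sketch of the merchandise flow: a knowledge agent screens the
# fan request against brand guidelines, an action agent generates the
# design, an approval check gates fulfillment, and approved designs fan
# out to physical (print/shop) and virtual (3D render) tracks.

def knowledge_agent_ok(request: str, brand_guidelines: list[str]) -> bool:
    return not any(bad in request.lower() for bad in brand_guidelines)

def action_agent(request: str) -> str:
    return f"T-shirt design: '{request}'"  # stands in for generative AI

def approval_check(design: str) -> bool:
    return "noise" not in design.lower()   # echoes the filing's rejected example

def fulfill(request: str, brand_guidelines: list[str]) -> list[str]:
    if not knowledge_agent_ok(request, brand_guidelines):
        return []                          # blocked before generation
    design = action_agent(request)
    if not approval_check(design):
        return []                          # blocked after generation
    return [f"print_agent -> {design}",    # physical: print-on-demand
            f"shop_agent -> {design}",     # physical: e-commerce listing
            f"render_3d -> {design}"]      # virtual: metaverse wearable
```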
The filing also describes the platform operating in real time during live events, with concert-synchronized merchandise generation pulling in data about the song being played, stage visuals, lighting, crowd engagement, and venue location to produce context-specific items. Fan co-creation interfaces check each modification against brand rules in real time. Authentication combines physical elements like holographic tags or QR codes with embedded digital identifiers.
The three publicly available filings reviewed here represent only a fraction of what MIH says it is building.
The company’s claim of more than 60 protected innovations suggests the portfolio extends well beyond the derivative works family into the other fields UMG has referenced, including musical collaboration, music and health, and AI threat protection.
UMG’s Michael Nash publicly referenced the company’s AI patent activity in remarks at the HumanX conference on April 8.
“With respect to leaning into innovation too, to demonstrate to artists that we’re focused on directing our resources to also create solutions, we’ve developed a number of AI patent applications,” Nash said. “One good example would be in terms of health and wellness. We developed a process that’s enabled by AI which supports our exclusive participation in the Sound Therapy category on Apple Music.”
MBW has previously reported on a separate UMG patent filing covering AI-generated binaural beats.
On UMG’s Q2 2025 earnings call last July, Chairman and CEO Sir Lucian Grainge noted that the technology underpinning Sound Therapy is not the only tech built in-house using AI.
He added that the company had been developing AI-powered technology since 2020 “to support artist marketing, analytics and distribution”.
He described the Liquidax partnership as the next step: “To accelerate and scale the development of our patents, we recently partnered with Liquidax Capital, an IP asset management investing advisory firm,” Grainge said, acknowledging that “Liquidax, on our behalf, has already filed 15 patents in the fields of musical collaboration, multimedia content and campaign creation, AI threat protection, music administration and rights management.”
He added that UMG’s “greatly expanded patent portfolio can then become a catalyst to accelerate introduction of products to the marketplace.”
UMG has declined to comment further, and the full plan for the tech described in the filings above remains unclear.

Music Business Worldwide