
When brands hire illustrators, animators, or other artists, they typically know what they’re paying for: a defined set of creative assets, delivered on deadline, with clear usage rights.
But in the age of generative AI, that’s no longer the whole picture.
Commissioned artwork is increasingly being used not just in finished campaigns, but as training data to power AI models—models that, in turn, generate new, derivative outputs. Often, this use isn’t spelled out in contracts. It’s not malicious. It’s just . . . new.
That’s left brands, agencies, and artists in a tricky spot—trying to apply old licensing logic to a new generation of tools. The result is a growing disconnect between how creative work is made, how it’s used, and how it’s paid for.
What’s needed isn’t a philosophical debate about machine creativity. It’s a practical framework—one flexible enough for fast-moving teams, but structured enough to protect the humans still at the heart of the process.
The Creative Loop Has Changed
Traditionally, artists get paid for what they deliver—a character design, a series of storyboards, a set of icons or illustrations. The license defines where, how long, and in what formats those assets can be used.
But as AI workflows become more embedded in creative production, the loop looks different.
A brand commissions original artwork. That artwork is used not only in campaigns but also to fine-tune a generative model that produces content “in the style of” the original work. From there, marketing teams or third-party vendors can generate dozens of variations on demand—without going back to the original artist.
There’s nothing inherently unethical about this. In many cases, it’s efficient and creatively useful. But if the artist who trained the model isn’t compensated for that secondary use, a value gap opens up. And that gap becomes a reputational risk for the brand—especially as creative professionals, advocacy groups, and consumers become more AI-literate.
A Shift from Ownership to Participation
This isn’t a question of whether AI should be used. That debate is over. The question now is how to ensure the humans who shape the aesthetic intelligence of these systems are fairly recognized and fairly paid.
One path forward is to rethink the licensing structure. Instead of defaulting to flat fees for fixed deliverables, brands can structure creative engagements to reflect how derivative value is created over time. That starts by offering two distinct paths: one built around full ownership, and the other designed for ongoing participation.
In the ownership model, brands pay a higher up-front fee that covers the rights to train a model, generate derivative outputs, and use those outputs across campaigns without future royalties. It’s clean, comprehensive, and often a fit for fast-scaling companies or complex campaigns with long content tails.
In the participation model, brands pay a standard commission fee and then compensate the artist over time, based on how their work is used to generate new content. This might look like a royalty per output, a revenue share, or a pooled licensing structure tied to usage volume—akin to how publishers or music rights organizations operate.
Neither option is perfect. But both reflect the realities of modern creative work—where original contributions can fuel a long arc of generative production. More importantly, they offer artists a choice in how their labor and influence are valued.
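The economics of the choice can be made concrete. The sketch below is a hypothetical comparison, not a recommended price list: the fee and royalty figures are placeholders, and real agreements would negotiate them. It shows the break-even point at which a participation deal (standard commission plus a per-output royalty) starts to cost more than a one-time ownership buyout.

```python
def ownership_cost(upfront_fee: float) -> float:
    """Ownership model: one higher up-front fee, no future royalties."""
    return upfront_fee

def participation_cost(commission_fee: float,
                       per_output_royalty: float,
                       outputs_generated: int) -> float:
    """Participation model: standard commission plus a royalty per generated output."""
    return commission_fee + per_output_royalty * outputs_generated

def breakeven_outputs(upfront_fee: float,
                      commission_fee: float,
                      per_output_royalty: float) -> float:
    """Number of generated outputs at which the two models cost the same."""
    return (upfront_fee - commission_fee) / per_output_royalty
```

With an illustrative $30,000 buyout, a $10,000 commission, and a $5 per-output royalty, the models converge at 4,000 generated assets; below that volume the participation path is cheaper for the brand, and above it the ownership path is. The point of the exercise is that the choice is a forecastable business decision, not a leap of faith.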
What a Smarter Licensing Framework Looks Like
For brands and agencies ready to adopt more transparent compensation models, the good news is this doesn’t require a reinvention of the creative contract. A few key mechanisms, easily added to existing agreements, can bring clarity to how AI-derived work is used and monetized.
The first is a Commission-to-Model clause. It makes explicit that commissioned work will be used to train a model, and defines the scope of that use. These clauses can specify what kind of model is being trained, whether third-party partners will have access, and how long the model can be used. Crucially, they establish triggers for expanded use—say, across new business units or global campaigns—that would require a conversation or renewal. Think of it as the AI-era equivalent of a sync license for a song: it clarifies how the “source material” can be extended and scaled.
Next is a Derivative Use Ladder—a pricing framework that reflects how far an AI-generated asset strays from the original commission. Minor edits or resizes might be included in the base fee. AI-generated variants used within the same campaign could carry a modest uplift. Broader reuse across platforms, regions, or product lines would trigger higher fees or require relicensing. The goal isn’t to over-monetize creativity. It’s to avoid ambiguity and allow both sides to plan with confidence.
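A Derivative Use Ladder is, at bottom, a tier-to-fee mapping, which makes it easy to encode directly in a contract schedule or a production tool. The sketch below is a minimal illustration; the tier names mirror the examples above, but the uplift percentages are invented placeholders that a real agreement would negotiate.

```python
from enum import Enum

class UsageTier(Enum):
    """Rungs of a hypothetical Derivative Use Ladder."""
    MINOR_EDIT = "minor_edit"            # resizes, crops: included in base fee
    SAME_CAMPAIGN = "same_campaign"      # AI-generated variants within the campaign
    CROSS_PLATFORM = "cross_platform"    # reuse across platforms or regions
    NEW_PRODUCT_LINE = "new_product_line"  # broad reuse: triggers relicensing

# Illustrative uplift multipliers applied to the base commission fee.
# None signals "stop and renegotiate" rather than an automatic price.
UPLIFT = {
    UsageTier.MINOR_EDIT: 0.0,
    UsageTier.SAME_CAMPAIGN: 0.10,
    UsageTier.CROSS_PLATFORM: 0.25,
    UsageTier.NEW_PRODUCT_LINE: None,
}

def derivative_fee(base_fee: float, tier: UsageTier):
    """Return the additional fee owed for a derivative use,
    or None if the tier requires a new licensing conversation."""
    uplift = UPLIFT[tier]
    if uplift is None:
        return None
    return round(base_fee * uplift, 2)
```

Encoding the ladder this explicitly serves the goal stated above: both sides can see, in advance, exactly which uses are covered, which carry an uplift, and which require going back to the table.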
For brands building longer-term systems, where a model trained on original artwork might generate thousands of outputs, a royalty-bearing model license may be the most aligned. This could take the form of a flat fee per generated asset, a quarterly revenue share, or a pooled royalty structure when multiple artists contribute to a shared model. The mechanics can vary. What matters is the principle: as the system creates more outputs, more value should flow back to the creative source.
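The pooled variant is the least familiar of the three, so a small sketch helps. Assuming a simple weighting scheme (say, the number of training images each artist contributed—a placeholder metric, not a prescribed one), a quarterly pool can be split proportionally:

```python
def pooled_royalties(quarterly_revenue: float,
                     share_rate: float,
                     contributions: dict[str, float]) -> dict[str, float]:
    """Split a royalty pool among artists in proportion to contribution weight.

    contributions maps artist -> weight (e.g., number of training images
    each artist supplied to the shared model).
    """
    pool = quarterly_revenue * share_rate
    total_weight = sum(contributions.values())
    return {artist: round(pool * weight / total_weight, 2)
            for artist, weight in contributions.items()}
```

For example, with $100,000 of attributable quarterly revenue, a 5% pool, and two artists who contributed 300 and 100 images, the first artist would receive $3,750 and the second $1,250. Whatever the weights, the mechanism embodies the principle stated above: as the system creates more outputs, more value flows back to the creative sources.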
Each of these frameworks can integrate into existing production workflows. But together, they offer something more powerful: a shift in mindset from “we own what we paid for” to “we share in what we build together.”
What Artists Want (and Brands Can Offer)
Artists aren’t looking to halt innovation. Most understand the value of generative tools. Many already use them in their own workflows.
What they want is transparency, consent, and a fair share of the value created when their work is used to teach machines.
That doesn’t mean every output requires a payment. But it does mean brands should be prepared to offer clear terms—not just to protect themselves legally, but to build trust with the creative talent they rely on.
A Reputation-Forward Approach to AI
As generative AI becomes normalized in creative production, scrutiny is rising: lawsuits over unlicensed training data, open letters from illustrators, AI-generated brand work that backfires online.
In this environment, it’s no longer enough to stay quiet and hope no one asks. Responsible AI use is becoming part of a brand’s public posture. A clear, fair compensation model for human contributors isn’t just ethically sound—it’s reputationally smart.
Put simply: compensating the people who make your model smarter is good business.
Pay the Source
The creative economy is shifting—from artifact to algorithm, from fixed deliverables to living systems, from single commissions to ongoing creative loops.
In that new reality, we need new rules.
Paying the source isn’t about holding onto the past. It’s about designing a future where artists, technologists, and brands can build together, with clarity and trust.
That future is already arriving. The only question is whether we meet it with contracts that reflect the tools we use—or keep pretending the old ones are enough.