Most conversations about AI ethics in design happen at a level of abstraction that makes them easy to agree with and impossible to act on. "AI should be used responsibly." "Designers must remain in control." "Transparency is important." These statements are true and completely unhelpful for a studio trying to decide, in practical terms, what to do on Tuesday.
This piece is an attempt to bring the conversation down to the ground — to the specific, uncomfortable questions that come up when you're actually using these tools in client work. We don't have clean answers to all of them. But naming the questions precisely is a precondition for answering them honestly.
Do You Tell Clients You're Using AI?
This is the question most studios are actively avoiding, and avoidance is itself an answer — just not a principled one. The honest version of the question is: does a client paying for design work have a right to know that significant parts of that work were produced by AI tools?
There's a reasonable argument that they don't — that clients pay for outcomes, not methods, and no client has ever asked which version of Photoshop you used. But AI is a different kind of tool. It doesn't merely assist your work; in some cases it produces the work, with you in an editing and curatorial role. That distinction matters, even if the line between "assisted" and "produced" is genuinely blurry.
"Clients aren't paying for your hours. But they are paying for your judgment. The question is whether using AI without disclosure is misrepresenting where that judgment ends and the machine's output begins."
Our working position: disclosure should be proportional to degree. Using AI to draft copy that you then substantially rewrite, or to generate image variants that you then select and edit — this is tool use, and no different in principle from using a stock photo library. Using AI to generate a logo concept that you present as your original design work — this is the kind of thing that should be disclosed, because the client is paying for creative origination and is not getting it in the way they expect.
Whose Style Are You Using?
Generative image tools — Midjourney, Stable Diffusion, DALL-E — are trained on enormous datasets of human-made images, including the work of living designers and illustrators who did not consent to their work being used as training data and are not compensated when their style is replicated.
This is a live legal and ethical dispute with no settled answer. But the specific question for design studios is narrower and more practical: when you use a prompt like "in the style of [artist name]" to generate client work, you are deriving economic value from that artist's identifiable creative output without compensation or credit. That feels different from general stylistic influence, in the same way that copying a specific illustration feels different from being influenced by a school of illustration.
A useful test: would you be comfortable telling the artist whose style you're referencing that you used their name in a prompt to generate paid client work? If the answer is no, that's worth examining. If the answer is yes — because the output is genuinely transformative — then you're probably on reasonable ground.
The safer and more defensible approach is to use AI image generation for the parts of your work where style isn't the point — generating reference images, exploring composition options, producing texture or pattern assets — rather than for work where a client is effectively paying for a creative voice that happens to belong to a specific human artist.
Are You Feeding Client Data to Third-Party Models?
This is the question that comes up least often in public conversations about AI ethics in design, and it may be the most practically consequential. When you paste a client's strategy document into ChatGPT to get copy suggestions, or upload their brand assets to an AI image tool, you are sending proprietary client data to a third party whose data handling, training practices, and retention policies you may not have checked.
Most standard web design contracts do not address this explicitly. Most clients have no idea it's happening. In sectors with strict data handling requirements — healthcare, finance, legal, any regulated industry — this is not just an ethical issue but potentially a contractual and legal one.
- Check the data handling policy of every AI tool you use with client material — specifically whether inputs are used for model training
- Use the enterprise or API tier of tools where available, as these typically have stronger data privacy guarantees
- Consider adding a clause to your contracts that addresses AI tool usage and client data, so both parties have explicit clarity
- For sensitive client contexts, keep AI assistance to generic tasks that don't require uploading proprietary content
Are You Pricing Honestly?
If AI tools have reduced the time it takes you to produce certain deliverables by 60%, are you passing any of that efficiency to clients in lower prices, or keeping it entirely as margin? Neither answer is automatically wrong — businesses routinely profit from efficiency gains — but it's worth being conscious about it rather than defaulting to charging the same rates for less work without reflection.
The version that becomes genuinely problematic is charging time-based rates for work that AI largely produced — billing the hours the work would have taken manually rather than the prompting and editing time it actually took. This isn't just an ethics question — it's a trust question. If a client later discovers that a deliverable they paid 20 hours for was substantially AI-generated and took 4 hours, the conversation that follows will be difficult.
The studios navigating this best have moved away from time-based pricing for AI-assisted work and toward value-based or deliverable-based pricing. This sidesteps the time question entirely and prices the outcome rather than the method. It also happens to be better business: value-based pricing typically produces higher margins on AI-assisted work and lower client frustration about billing.
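The trade-off can be made concrete with a toy calculation. Every figure below is invented for illustration — the hourly rate, the internal cost per hour, and the fixed deliverable price are assumptions, not industry data. The point is only the shape of the comparison: value-based pricing keeps most of the efficiency gain as margin without billing hours that were never worked.

```python
# Toy margin comparison for one deliverable. All numbers are hypothetical.

HOURLY_RATE = 100   # what the studio bills per hour
COST_PER_HOUR = 60  # studio's internal cost per hour worked
MANUAL_HOURS = 20   # hours the deliverable took before AI assistance
AI_HOURS = 4        # actual prompting + editing time with AI
VALUE_PRICE = 1600  # fixed, outcome-based price agreed up front

def margin(revenue: float, hours_worked: float) -> float:
    """Revenue minus internal cost for the hours actually worked."""
    return revenue - hours_worked * COST_PER_HOUR

# Honest time-based billing: bill only the 4 hours actually worked.
honest_hourly = margin(AI_HOURS * HOURLY_RATE, AI_HOURS)        # 400 - 240 = 160

# The problematic version: bill 20 hours for 4 hours of work.
inflated_hourly = margin(MANUAL_HOURS * HOURLY_RATE, AI_HOURS)  # 2000 - 240 = 1760

# Value-based pricing: price the outcome, not the method.
value_based = margin(VALUE_PRICE, AI_HOURS)                     # 1600 - 240 = 1360
```

Under these made-up numbers, honest hourly billing collapses the margin, inflated hourly billing maximises it at the cost of trust, and value-based pricing lands close to the inflated figure without recording hours that weren't worked.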
What Are You Doing to Your Own Skills?
This is the least discussed ethical question in the AI design conversation, and it's an ethical question about what you owe yourself and your future clients, not just your current ones. If you use AI to handle the parts of design work that are hardest and most developmentally valuable — generating creative concepts, writing persuasive copy, making difficult layout decisions — you may be trading short-term efficiency for long-term skill atrophy.
Design judgment is built through reps. Every time you skip the hard thinking by prompting your way to an answer, you're saving time and forgoing practice. This matters most for junior designers, but it applies at every level. The senior designer who stops wrestling with layout problems because AI generates acceptable options quickly is, over time, becoming less capable of recognising why those options are merely acceptable rather than genuinely good.
The working principle we've landed on: use AI to accelerate the execution of decisions you've already made, not to avoid making the decisions in the first place. The judgment is yours. The typing can be the machine's.
A Framework for Deciding
Rather than a list of rules — which will be outdated before the year is out — here's a set of questions worth asking whenever you're uncertain about a specific AI use case:
- If the client knew exactly how this deliverable was produced, would they feel they received what they paid for?
- Does this use of AI involve someone else's creative work in a way they haven't consented to?
- Am I sending data that a client would expect to remain private to a third-party system?
- Am I building or eroding my own judgment and craft by working this way?
- If this practice became standard across the industry, would design as a discipline be better or worse for it?
None of these questions has a universal answer. But asking them consistently, and being honest about the answers, is the difference between a studio that uses AI thoughtfully and one that uses it in ways that will eventually catch up with it.
The studios that will be most trusted in five years are the ones that built principled practices around AI now, before the industry converged on norms. That trust — with clients, with the wider creative community, with yourselves — is worth more than any short-term efficiency gain.