Freya Clarke
Strategist
In 1955, American designer George Tscherny broke with convention to forever change the way furniture was advertised. Rather than show a standard photograph of the item, his beautifully simple design for Herman Miller’s opening in Dallas featured the silhouette of a chair saturated in bold red, with a cowboy hat resting on the seat. His method, “the human element implied”, created an absence that was startlingly evocative and allowed for greater focus on the product.1
Tscherny was celebrated for his wit and clarity, and his broader work reflected the power of impactful design: the ability to communicate an idea with purpose, originality, and relevance. These principles hold true today, even if the tools for executing an idea have changed with digital advances. But as generative AI software like Midjourney and DALL-E has proliferated in design work, anxiety has risen over its implications for creativity itself.
Generative AI tools undeniably bring efficiencies to parts of the creative process, but there’s growing unease that they enable passivity by providing the results of creativity without effort or self-examination.2 Challenging this passivity requires not only defining the difference between good and bad design, but also defending the need for good design in the first place.
When prompted, generative AI software can produce a mind-boggling variety of images, from sepia-toned, family-album-style 70s film photography to conceptual dreamscapes of blossoms in bubbles. It’s easy to see why use of the technology has so swiftly moved from hypothetical debate to working reality, as creatives experiment with bringing to life concepts that were previously impossible or, at the very least, highly time-consuming and expensive.3
“Despite having the apparent ability to create entirely unique images at the click of a button, there’s a sense that the use of AI is accelerating a glossy kind of homogeneity. That’s because it’s derivative by design.”
With high-profile lawsuits advancing against generative AI developers (including one making its way through the California courts, which accuses Stability AI, Midjourney and DeviantArt of unlawfully using artists’ work to train Stability’s Stable Diffusion model), it’s now widely known that the software is ‘trained’ on massive amounts of available data. Because of this, it’s more likely to produce images that display stereotypical themes.5
From Frida Kahlo and Vincent van Gogh to Yayoi Kusama and Banksy, generative AI’s ‘knowledge’ is an amalgamation and reconfiguring of thousands upon thousands of data points, without the depth or discernment provided by contextual experience or knowledge. Overrepresentation in the source material also results in an overrepresentation of that style: a self-fulfilling cycle. For critics, that’s why it suffers from an “uncanny averageness”, producing “impossibly shiny” images with a slightly surreal edge to them.6
Countering this requires inputting ever more sophisticated prompts. Analysis of 14 million image–prompt pairs by researchers at Georgia Tech and IBM found that using terms specific to artistic media and techniques, such as “chiaroscuro lighting” and “occlusion shadow”, yielded more targeted results.7 For designers, this means not just knowing what ‘looks good’ when they see it, but also being able to translate those references into text prompts that a generative model can recognise.
But even this approach has its limits. Firstly, artists who object to having their work used to train generative models are turning to tools such as Glaze and Nightshade, both developed at the University of Chicago, to subtly alter their pieces so they cannot be replicated. Known as “data poisoning”, this self-defence against copyright infringement protects the individual artist, but could also plausibly degrade the AI itself by corrupting the outputs it produces in response to prompts.8
Secondly, the skill of effective prompting is not the same as the skill of designing, which is part of the reason McKinsey suggests the need for dedicated “prompt engineers”.9 Without a design education (formal or informal), though, prompters are restricted to a narrower field of references. Moreover, to recognise ‘the best’ of what an AI produces, one has to understand the thing it is trying to reproduce and, more importantly, understand what makes a good reproduction of that thing.
The final limit to using generative AI in creative design work is not about the crafting of it, however. It’s what to do with the art once it’s created. Generating images, regardless of the tool used (whether pencil or Midjourney), is never the first or last step in the design process.
Notwithstanding the shady data used to get there, generative AI does make images easier to produce. It does give visual life to fun and interesting concepts. But what it cannot do is connect that imagery to a message: the part that gives design both meaning and utility. The US editor-at-large of It’s Nice That, Elizabeth Goodspeed, argued that achieving this requires designers to develop their own individual sense of taste, proposing a highly inward-looking approach “not so unlike going to therapy”.10
But there’s something to be said for outwardness too. If using AI to create artwork tempts a user to think within ever narrowing parameters, then expanding curiosity beyond the self (discovering obscure sources, collaborating with fellow designers, noticing the strange things people leave on street pavements) is vital to avoid producing design that is predictable and monotonous.
Drawn by its promise of efficiency, business leaders are frothing at the mouth to implement generative AI across all their processes. In the case of design, though, this should not automatically lead to more production. Instead, it should give creatives time and space elsewhere, to do the work of exploring and experimenting.
Eminent designer Steven Heller put it best when he praised Tscherny for showing corporate America that “design should not be a cosmetic service”: it is not about creating images for images’ sake.11 At its best, design takes a sharp knife through the white noise of ever more ubiquitous visual media, finding the idea that matters and using it to connect with the audience. Embracing the intelligence of design will be far more effective than mass production ever could be.
If anything you've read here piques your interest, we'd love to hear from you at hello@poppins.agency
1 http://www.designculture.it/interview/george-tscherny.html, https://www.printmag.com/daily-heller/the-daily-heller-george-tschern
4 https://www.theartnewspaper.com/2024/01/04/leaked-names-of-16000-artists-used-to-train-midjourney-ai, https://www.reuters.com/legal/litigation/stability-ai-midjourney-should-face-artists-copyright-case-judge-says-2024-05-08/
6 https://www.artnews.com/list/art-news/artists/surrealism-and-artificial-intelligence-art-1234704046/this-is-not-a-pipe-why-do-ai-images-look-surreal/, https://www.wsj.com/articles/how-the-ad-industry-is-making-ai-images-look-less-like-ai-8b4250fd
7 https://www.washingtonpost.com/opinions/interactive/2024/ai-image-generation-art-innovation-issue/, https://poloclub.github.io/diffusiondb/
8 https://www.digit.fyi/artists-use-nightshade-to-protect-their-work-from-ai/, https://www.theverge.com/24063327/ai-art-protect-images-copyright-generators
9 https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-prompt-engineering
10 https://www.itsnicethat.com/articles/elizabeth-goodspeed-column-taste-technology-art-280224
11 https://sva.edu/features/in-remembrance-george-tscherny-1924-2023