Why Some UX Designers Aren’t Using AI — and Why That Matters
Artificial intelligence dominates today’s design conversations. Tools like ChatGPT, Midjourney, and Figma plugins promise to streamline workflows and unlock creativity. For some UX designers, though, those promises fall flat or create new problems. Georgia Tech researcher Inha Cha wants to know why.
Cha is a Ph.D. student based in Tech Square’s TSRB building, where her work spans human-computer interaction (HCI) and science and technology studies. Before starting her doctorate, she worked as an AI product designer in South Korea. That experience of building AI tools while wrestling with their ethical and practical limits sparked her current research. “I’m really interested in how people deliberate on whether or not to use AI,” she told us. “Not just the technology itself, but the decisions around it.”
Her recent paper, presented at the 2025 CHI Conference in Yokohama, Japan, explores an often-ignored question: Why do some UX professionals choose not to use AI?
UX, short for user experience, is the design field focused on how people interact with digital products. UX designers research, test, and build the systems that make websites, apps, and tools intuitive and easy to use.
Not Using AI on Purpose
In tech circles, the assumption is that if a tool exists and is powerful, people will use it. Cha pushes back on that idea. Her research explores “non-use,” a term that covers a range of intentional choices: avoiding AI outright, using it sparingly, or limiting it to specific contexts.
Collaborating with her advisor, Dr. Richmond Wong, Cha interviewed 15 UX professionals from various industries, including healthcare, finance, and streaming. All had tried AI tools in some form, but all had also chosen, at various points, to walk away.
“They weren’t resisting just for the sake of it,” Cha said. “They had thoughtful reasons, rooted in how they work, what they value, and what their organizations allowed.”
Three themes came up again and again.
The Nature of UX Doesn’t Always Fit AI
UX work often follows structured design systems. Designers need consistency across platforms, teams, and devices. Generative AI, which thrives on improvisation and variation, does not easily fit into those systems.
“UX isn’t necessarily a generative process,” one participant told Cha. “You have to work within constraints—what the technology allows, what the platform requires, and how the design system is structured.”
That structure leaves little room for AI-generated content. Several designers reported that refining AI outputs took more time than creating the work from scratch would have, which slowed them down.
Others emphasized that the UX process, especially user research, cannot be easily automated. One designer explained that the value of creating personas lies in the act of building them: conducting interviews, interpreting subtle cues, and developing empathy. “If you skip that and generate a persona with AI, you’re losing what makes it meaningful.”
Human Judgment Still Matters
Cha found that many UX practitioners trust their instincts more than an AI model’s output, especially when it comes to understanding real people. “A lot of what they do is interpret things that don’t make sense on the surface,” she explained. “AI tools aren’t good at dealing with that kind of ambiguity yet.”
Several participants described AI-generated summaries as “flat” or “shallow.” One mentioned that tools like ChatGPT can quickly find patterns in interview transcripts but miss emotional tone or conflicting behaviors that only a human would notice. Another said that AI seemed incapable of grasping how users often act irrationally: “There’s a gap between what people say and what they do. That’s where our judgment comes in.”
This skepticism also came from a desire to protect the core of the craft. “There’s a worry,” Cha said, “that people outside UX will see AI as a full replacement. But most designers I talked to saw it as a tool that might help, not something that can replace their insight.”
Ethics, Policy, and Organizational Constraints
Many designers flagged concerns about privacy and security. If they fed real user data into AI tools, would that data remain secure? Would the model train on it? In regulated industries such as healthcare or finance, even the perception of data misuse can cause significant problems.
One designer told Cha she avoids using generative AI tools entirely when working on confidential projects. Another said she does not upload anything to the cloud, not because she’s anti-AI, but because her company policy prohibits it.
These rules and limitations are not always clear-cut. Policies may be vague in startups, leaving designers to guess what is allowed. In large companies, multiple teams such as legal, security, and leadership may weigh in before any new tool is adopted. One participant described AI use as a “gray zone,” where risk-averse teams tend to default to avoidance.
A Complex System, Not a Simple Choice
Cha’s study frames AI non-use not as stubbornness or ignorance, but as the result of a broader system. In academic terms, this is a “sociotechnical assemblage”: a way of understanding how tools, people, values, policies, and workflows interact.
For example, a designer might avoid an AI plugin because it clashes with the company’s design system, conflicts with their ethical standards, or runs afoul of data protection laws.
This systems view helps explain why non-use is not just the absence of adoption. It is often the outcome of deliberate, rational decision-making.
Rethinking Innovation
Cha and Wong argue that researchers and toolmakers should stop assuming AI is always the answer and instead pay closer attention to when AI should and should not be used.
This means designing tools that accommodate both use and non-use, aligning with privacy standards from the outset, and creating space for professionals to opt out when necessary.
Cha also believes researchers must explore non-use as a form of resistance and care. “Sometimes choosing not to use AI is ethical and still efficient,” she said. “It can be about preserving quality, creativity, or trust.”
Her study does not dismiss the use of AI in design. But it offers a more grounded perspective that recognizes the value of human judgment, the importance of context, and the legitimacy of saying no.
“In the end,” Cha said, “I don’t think it’s about whether AI is good or bad. It’s about who gets to decide when and how to use it, and making sure designers have that choice.”