What AI in Healthcare Is Really Promising
Imagine a doctor’s visit that never ends. An app on your phone and a wearable on your wrist monitor your health 24/7, alerting you to potential issues before you even feel sick. It sounds utopian: no surprise illnesses, faster diagnoses, maybe even AI-guided cures. However, as Georgia Tech researcher Catherine Wieczorek has found, today’s artificial intelligence (AI) health tools carry more than medical promise—they also reflect society’s deeper values, assumptions, and trade-offs.
Wieczorek, a TSRB researcher and Ph.D. student in Human-Centered Computing, brings a critical but constructive lens to this space. With a background in design and public health, she approaches healthcare technology as both a technical and a cultural system. “I’m interested in thinking about time differently in computing,” she says. “We have to look to the past and future, not just the present, when we design tech.”
Her recent study with colleagues Heidi Biggs, Kamala Payyapilly Thiruvenkatanathan, and Shaowen Bardzell asks what kinds of futures AI in healthcare is trying to build, and for whom. The answers are as revealing as they are provocative.
A Framework for Thinking About AI’s Promises
Wieczorek et al.’s analysis draws on sociologist Ruth Levitas’ idea of “utopia-as-method,” which proposes using utopia not just as a blueprint for a perfect society, but as a method of exploring and imagining alternatives to the present. Bringing this to technology, Wieczorek et al. examine AI systems not just for what they do, but for the vision of society they implicitly support. They use utopia-as-method’s three-part approach to ask the following questions:
Archaeology: What hopes or ideals from the past or present does technology reflect?
Ontology: What kinds of people or behaviors does it assume or encourage?
Architecture: What kind of future system does it aim to build, and with what consequences?
Using this lens, Wieczorek and her collaborators analyze 21 real-world AI healthcare products, ranging from fertility tools to wearable monitors. Their findings outline four dominant visions being marketed through health AI.
Always-On, Always-Optimized
Promotional materials for these AI-driven health tools consistently depict four distinct visions, or “worlds,” of the future. The first is a world of omnipresent health, in which AI is always watching: devices like Alerje’s allergy sensor and Marani Health’s maternal wearables collect constant data streams, turning everyday environments into diagnostic spaces. Healthcare becomes proactive and ever-present, catching problems before symptoms appear.
The second vision is one of faster, more competent care: AI-powered ultrasounds and surgical robots promise speed, precision, and fewer mistakes. However, as Wieczorek et al. note, an overemphasis on efficiency can sideline the human connection that defines compassionate care.
The third world is a preventative one, where AI doesn’t just respond to illness but tries to prevent it entirely. Chatbots like Northwell Health’s Pregnancy Chats check in regularly; even selfie apps scan your face for vitals in seconds. Health becomes a continuous process, and everyone is treated as a potential patient.
Finally, there’s the vision of optimized bodies, where AI tools act as personal improvement coaches. Fertility apps like AiVF promise better chances at pregnancy, while behavior-shaping devices like Pavlok nudge users toward healthier habits. Beneath the surface lies an ideal of constant self-optimization: continually improving and upgrading. These visions carry real potential but reflect a narrow definition of what it means to be healthy. “These technologies embed ideas about what kind of people we should be, a singular vision of what it means to be healthy, and how society should work,” Wieczorek says.
Chloe (pictured above) is an AI assistant that opens up previously unseen insights for those navigating IVF, offering real-time support for patients and their care teams.
Shifting Roles: Patients, Providers, and AI
One of the most significant changes brought about by these technologies is the way they redefine roles within healthcare.
Traditionally, patients are considered active decision-makers, and doctors are regarded as authoritative caregivers. However, in the AI-driven model, Wieczorek et al. write, “patients become passive consumers, healthcare providers become co-producers of knowledge alongside AI, and AI itself acts as both agent and gatekeeper.”
In practice, this means patients are often asked to trust the system’s recommendations without fully understanding how they’re made. Doctors, meanwhile, increasingly rely on AI tools to interpret data or suggest treatments, validating or refining the machine’s suggestions rather than generating them independently.
In this context, AI isn’t just a helper; it filters information and sets thresholds. In fertility care, for example, one system promises to “reduce guesswork” and enable “collaborative discussions.” But Wieczorek warns that “many tools present the AI as the smarter ‘partner’ in the room,” with humans relegated to implementation.
This new dynamic has consequences. If a health app says you’re fine, do you accept that passively? If an AI disagrees with your doctor, who do you trust? The answer isn’t always clear.
When Utopia Collides with Reality
Despite their promises, AI healthcare tools raise genuine concerns in the real world that complicate their utopian appeal. Privacy is one of the most pressing issues. Continuous monitoring means continuous data collection, tracking everything from heart rate and sleep to intimate conversations with chatbot therapists. Without explicit protections, the line between seamless care and surveillance blurs.
There’s also the issue of provider burnout. While AI is often touted as a time-saver, it can generate more alerts and data for clinicians to interpret, thereby increasing their workload. Studies have already documented AI-related fatigue, particularly in fields like radiology.
Another challenge is bias and exclusion. Because AI systems learn from data, they often reproduce the blind spots within that data. Many tools assume an ideal user who is non-disabled, neurotypical, and tech-savvy, leaving others behind. Devices that promise to “fix” vision, hearing, or fertility can unintentionally frame disability as something to be corrected, not accommodated. “If you don’t fit that standard,” Wieczorek warns, “you risk being overlooked or stigmatized.”
Finally, there’s the loss of human touch. AI can detect symptoms but can’t offer empathy, cultural context, or emotional support. Healing involves more than data; it requires connection. And that’s something no algorithm can replace.
So, how should we build AI in healthcare? For Wieczorek, the answer lies in collaboration. Tools should be designed not just by engineers, but with input from doctors, patients, caregivers, and ethicists. For example, an app for postpartum care might draw insights from OB-GYNs, doulas, mental health professionals, and new mothers.
Co-design, she argues, surfaces blind spots early. “If an algorithm flags a risk, it should be built with input on how to explain that compassionately,” she says. Systems should also be auditable, inclusive, and transparent, making clear how decisions are made and for whom they’re made.
“People should be the center of conversation in technological design, not just machines,” she argues. “Healthcare must remain as much about empathy as efficiency.”