The Good, the Bad, the Mortal
Video paper · ARIN5301 HCI · Spring 2026 · ~720 words
A critical HCI analysis of Moxie, the AI companion robot for children that went silent overnight when its company shut down.
The idea
My video paper examines Moxie, an AI companion robot built for children, especially neurodivergent kids, by Embodied Inc. between 2020 and 2024. I evaluate Moxie across three classic product lenses repurposed for HCI: User Desirability, Technical Feasibility, and Business Viability. The framing isn't original; it's a standard product-strategy lens. What is original is the conclusion the analysis forces: Moxie is a case study in HCI ethics at product termination, not at product design. The video closes on a provocation I want to keep working on: "Design for goodbye."
Why this topic
I picked Moxie because it failed in the most uncomfortable way a product can fail: it worked. Children formed real attachments; autistic kids reportedly had behavioural breakthroughs; parents wrote about it as if it were a member of the family. Then in December 2024 Embodied's funding collapsed and the robots went silent overnight. No refunds, no transition plan, no offline fallback. Every conversation had been cloud-processed, so when the cloud went away the robot became a plastic shell.
That asymmetry, a product that succeeds emotionally and fails structurally, is the most HCI-shaped problem I have seen all semester.
Sources and inspirations
The footage is fair-use academic criticism: TikTok creators @noahglenncarter and @heatherfraziertiktok documenting their families’ Moxie experiences, CNN’s Tech for Good segment from 2021, and Embodied’s own promotional video from 2020. The contrast between the promotional footage (children laughing) and the user-uploaded TikToks after the shutdown (children asking why Moxie stopped talking) is what carries the argument.
The conceptual frame is built from course readings: Don Norman’s visceral / behavioural / reflective levels for explaining why Moxie’s Pixar-warm face was so effective on the desirability axis; the technical critique that the cloud is not a fallback when discussing always-online products; and the design-ethics question of what designers owe users after launch, which is closest to what Cennydd Bowles calls future ethics.
Before vs. after this course
Before this course I would have evaluated Moxie like an engineer: did the speech recognition work, was the latency acceptable, did the on-device hardware perform reliably? I would have framed the failure as a business failure (the company ran out of money) and concluded the product itself was fine.
After this course that framing feels incomplete. HCI gave me three things the engineering frame did not.
- A vocabulary for emotional bond as a design property, not a side effect. If your product is good enough that vulnerable users form attachments, you have created a relationship, and relationships have continuity expectations.
- A method for treating discontinuation as a design problem, not a corporate problem. The same care we put into onboarding (the first ten minutes) should go into offboarding (the last ten minutes), and most products treat this as someone else’s problem.
- The discipline of writing the design rationale, not just the design, which is what made me realise Moxie's failure was already encoded in its architecture. A purely cloud-dependent product cannot honour an emotional contract that outlives its company. That is a design decision, made in 2020, that predetermined the 2024 outcome.
What I’d do differently
If I were designing Moxie today I would treat the question "What happens to this relationship if our company disappears?" as a P0 design requirement, the same priority as "What does the first interaction feel like?" I'd design a graceful local-only fallback (sketched below), a parent-side disclosure of dependency risk, and a published end-of-life plan from day one. Not because it's good business (it isn't, especially at a startup), but because the alternative is what happened: thousands of children, many of them autistic, lost a friend overnight, and no design rationale was on file to explain why nobody saw that coming.
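To make "graceful local-only fallback" concrete, here is a minimal sketch of the shape such a design could take. Everything in it is hypothetical (the endpoint, the response set, the sunset flag); it is not Embodied's architecture, only an illustration of a device that fails soft, and ends deliberately, instead of going silent.

```ts
// Hypothetical sketch of a cloud-first conversation loop with a guaranteed
// local fallback. All names and the endpoint are invented for illustration.

type Reply = { text: string; source: "cloud" | "local" };

// A small on-device response set that needs no network at all.
const LOCAL_REPLIES = {
  farewell: "I have to say goodbye for a while. Thank you for being my friend.",
  offline: "I can't reach my cloud brain right now, but I'm still here with you.",
} as const;

// End-of-life flag: in a real product this could be a signed, cached
// "sunset notice" the device verifies, so shutdown becomes a designed
// event rather than an unexplained outage.
let serviceSunset = false;

async function cloudReply(utterance: string): Promise<string> {
  // Placeholder for the real cloud call (assumed endpoint, not a real API).
  const res = await fetch("https://api.example.com/converse", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ utterance }),
  });
  if (!res.ok) throw new Error(`cloud unavailable: ${res.status}`);
  const data = (await res.json()) as { text: string };
  return data.text;
}

export async function respond(utterance: string): Promise<Reply> {
  if (serviceSunset) {
    // The relationship gets a scripted ending instead of silence.
    return { text: LOCAL_REPLIES.farewell, source: "local" };
  }
  try {
    return { text: await cloudReply(utterance), source: "cloud" };
  } catch {
    // Network failure, or the company's, degrades to local behaviour.
    return { text: LOCAL_REPLIES.offline, source: "local" };
  }
}
```

The point of the sketch is not the ten lines of fallback logic; it is that the farewell path exists at all, written and tested before launch, so that "goodbye" is a state the product can enter rather than an accident it suffers.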
How I made it
I built the video in Remotion, a React-based framework where every frame is a component and every animation is code. That let me iterate on copy and timing in version control rather than a timeline editor, compose typography the way I compose a web UI, and use an AI-generated voiceover (ElevenLabs) for a consistent narrator. Full tool credits and the AI-use disclosure are in the acknowledgements below.
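For a sense of what "every frame is a component" means in practice, here is a minimal Remotion component of the kind the video is built from. It is a simplified illustration, not the actual project source:

```tsx
import React from "react";
import { AbsoluteFill, interpolate, useCurrentFrame } from "remotion";

// A title card whose fade-in is a pure function of the frame number,
// so timing changes are code diffs, reviewable in version control.
export const TitleCard: React.FC<{ title: string }> = ({ title }) => {
  const frame = useCurrentFrame();
  // Fade in over the first 30 frames (one second at 30 fps), then hold.
  const opacity = interpolate(frame, [0, 30], [0, 1], {
    extrapolateRight: "clamp",
  });
  return (
    <AbsoluteFill
      style={{
        backgroundColor: "black",
        justifyContent: "center",
        alignItems: "center",
      }}
    >
      <h1 style={{ color: "white", opacity }}>{title}</h1>
    </AbsoluteFill>
  );
};
```

Because the animation is just a function of `frame`, moving a beat two seconds later is a one-line change to an input range, which is what made iterating on narration timing tractable.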
Sources & Acknowledgements
Footage (fair use, academic criticism): TikTok @noahglenncarter and @heatherfraziertiktok; CNN Tech for Good (2021); Embodied Inc. promotional video (2020).
AI tools used in production: Anthropic Claude (script and code), ElevenLabs (voiceover), Remotion (video composition).
All AI output was reviewed and directed by the author. Analysis and critical framing are my own.