Maestro GUI

TEAM
Moran (PM), Emilie (Clinical), Jefferey (CSO)
ROLE
Interaction Design · Design Systems · User Research · Trust & Safety
DURATION
3 months — Jul–Sep 2024

I was the only designer at Moon Surgical. The Maestro robot’s touchscreen was causing surgeons to abandon procedures mid-surgery. I redesigned the entire interface. After launch: 0 surgery restarts.


Preview of the robot’s second-generation human-machine interface

CONTEXT

Surgical Teams Control the Robot Through a Touchscreen

Maestro is a compact laparoscopic surgery robot. Before each procedure, the surgical team configures it through a touchscreen: procedure type, arm positions, camera mode, force settings. They do this in gloves, under bright OR lights, with a patient already on the table. There’s no room for confusion.

Close-up of Maestro Robot’s GUI in action during surgery

Goal

Find out why surgeries were being cancelled mid-procedure, fix the root cause, and build a design system that could scale across 40+ U.S. hospitals.

THE PROBLEM

Surgeries Were Being Cancelled Midway

The same report kept coming in from ground support: surgeons were quitting on the robot mid-surgery and switching back to manual. I needed to understand why, so I ran three research tracks:

1:1 Interviews (16 conducted)
Surgeons, surgical assistants, and circulating nurses across multiple hospitals

Diary Studies (28 entries)
Staff documented friction points and workarounds over 4 weeks of daily use

OR Observations (9 sessions)
Live observation of surgical teams interacting with the GUI during real procedures

I grouped everything by severity. Three problems showed up in 85%+ of sessions and were the direct cause of cancellations:

PROBLEM 01

Excessive touch targets in a linear flow

Setup should be three steps. But the screen was packed with buttons, sliders, and toggles that weren’t needed yet. People hesitated. They hit the wrong control. Cognitive fatigue in an OR is dangerous.

Screenshot of the old GUI showing high number of touch targets highlighted in red boxes

PROBLEM 02

Unclear instructions with no visual hierarchy

The instructions were walls of text. Nobody reads paragraphs while setting up a surgical robot. People didn’t know what to do next, backed up, tried again. Setup took twice as long as it should.

Screenshot of the old GUI with unclear instructions highlighted in boxes

PROBLEM 03

Fault modals with no context or recovery

A fault occurs and a modal takes over the entire screen. No explanation. No recovery path. The only option: restart the whole system. With a patient on the table. That’s how you lose a surgeon’s trust for good.

Screenshot of the old GUI showing fault modals blocking the entire interface

These weren’t just UX annoyances. Every cancelled surgery meant a patient under anesthesia while the team switched to manual instruments. The interface was actively making surgeons distrust the robot.

DESIGN JUDGMENT

What We Tried & Killed

We didn’t land on the final direction immediately. Three approaches failed first.

COGNITIVE OVERLOAD
Streamlined V1.0 with clearer instructions
Added step-by-step instructions, informative modals, and a straightforward flow. But the information architecture was still too heavy — surgical staff faced the same cognitive load.

OR NOISE INTERFERENCE
Voice-guided setup mode
Explored hands-free voice guidance. But OR noise — equipment, team communication, alarms — made recognition unreliable. It added anxiety rather than reducing it.

BLOCKED BY RIGID STEPS
Wizard-style step lock
A strict wizard locked users into sequential steps. It worked for novices but frustrated experienced teams who wanted to skip ahead. We needed flexibility within structure, not rigidity.

What these taught us

Adding more information didn’t work. Restricting the flow didn’t work. The answer was subtraction: strip out every non-essential element and make what remains instantly recognizable through icons and animation, not text.

DESIGN PROCESS

Three Principles. One System.

The failed attempts made the principles obvious. Everything came back to three ideas:

💠

Emphasis on Icons

Replace text-heavy controls with icon-based recognition to reduce reading time under pressure

⚠️

Prioritize Fault Handling

Faults in high-stakes systems must be informative, non-blocking, and recoverable

➡️

Movement Guidance

Replace static instructions with immersive animations that guide through adjustments

Key Decision

I built the system using atomic design: tokens, then atoms, then components, then screens. The three principles aren’t guidelines someone can skip. They’re embedded in every component by construction.
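The atomic layering can be sketched as plain composition, tokens feeding atoms, atoms feeding components, components feeding screens. This is an illustrative toy model, not Maestro's actual codebase; every name and value below is mine except the token values documented in this case study:

```python
# Toy sketch of atomic layering: tokens -> atoms -> components -> screens.
# Names are illustrative; token values are the ones documented in this case study.

TOKENS = {
    "color.bg": "#191919",
    "color.text": "#FFFFFF",
    "color.accent": "#FFC374",
    "size.touch_min": 48,  # px, gloved-hand minimum
}

def atom_button(label: str, size: int = TOKENS["size.touch_min"]) -> dict:
    """Atom: a single touch target. The gloved-hand minimum is enforced by
    construction, so no screen built from this atom can violate the rule."""
    if size < TOKENS["size.touch_min"]:
        raise ValueError(f"{size}px is below the {TOKENS['size.touch_min']}px minimum")
    return {"type": "button", "label": label, "size": size}

def component_step(title: str, actions: list) -> dict:
    """Component: one setup step, a title plus its essential actions only."""
    return {"type": "step", "title": title,
            "actions": [atom_button(a) for a in actions]}

# Screen level: the whole linear flow is just three of these components.
setup_screen = [
    component_step("Prerequisites", ["Confirm patient position"]),
    component_step("Setup", ["Position arms", "Lock arms"]),
    component_step("Surgery", ["Camera mode", "Force settings"]),
]
```

The point of the sketch is the "by construction" claim: a principle baked into the atom cannot be skipped by any component or screen built from it.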

Canvas showing the Maestro GUI design system components including buttons, icons, modals, and layout patterns

Design system components built using atomic design methodology

TOKEN SYSTEM

Core tokens optimized for bright OR environments and gloved-hand interaction.

Background: #191919
Primary Text: #FFFFFF
Accent / Active: #FFC374
Fault / Alert: #FF7F7E

48 components · 3 atomic levels · 8 icon sizes · 12 color tokens

OR ACCESSIBILITY CONSTRAINTS

The OR forced every token decision. A misread screen means more time under anesthesia.

Gloved Hands: 48×48px minimum touch targets for latex and nitrile gloves

Bright OR Lighting: 7:1 contrast minimum (4.5:1 fails under OR lighting, which cuts perceived contrast 30–40%)

Color Independence: every state uses icon + text, never color alone. Safe for color-blind staff.

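The 7:1 rule is checkable mechanically with the standard WCAG 2.x relative-luminance formula. A minimal sketch (the helper names are mine; the hex values are the tokens documented above):

```python
# Verify the 7:1 OR contrast rule using the WCAG 2.x contrast-ratio formula.

def _channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG relative-luminance spec."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Primary text on the Maestro background clears the 7:1 threshold comfortably.
print(round(contrast("#FFFFFF", "#191919"), 1))  # well above 7:1
```

A check like this can run in CI against the token file, so a palette change that drops below 7:1 fails the build instead of reaching an OR.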
SOLUTION

Addressing Each Problem Directly

Each fix maps directly to one of the three problems. Because they all came from the same design system, they shipped as one cohesive update, not separate patches.

PROBLEM 01 → SOLUTION

Reduced touch targets to essentials only

I pulled everything off the setup screen except what you actually need: positioning, locking, arm control. The rest went into advanced menus. Fewer choices, faster decisions.

Screenshot of the redesigned GUI showing minimal, clear touch targets

PROBLEM 02 → SOLUTION

Immersive animated instructions

Nobody reads paragraphs while prepping a surgical robot. I replaced the written instructions with short animations that show each adjustment as it should happen. The full details live in a reference manual, but most teams never need it.

Immersive animated instructions for arm adjustment

PROBLEM 03 → SOLUTION

Contextual fault labels instead of blocking modals

No more screen-blocking modals. When something goes wrong now, a label appears at the top telling you exactly what happened. You can still see and use every other control. The system keeps working while you fix the issue.

Screenshot of the redesigned GUI showing non-blocking contextual fault label at the top

Key Decision

Three screens. That’s the whole flow. Prerequisites, Setup, Surgery. We mapped all 14 fault types and designed recovery for each. The animations are 2–3 seconds, timed to match actual arm movement speed. We tested slower and faster — too slow felt patronizing, too fast and the guidance was missed.
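The fault-to-recovery mapping is the heart of the pattern: every fault resolves to a plain-language explanation, a recovery action, and a non-blocking banner. A minimal sketch of that shape, with illustrative fault codes and copy (the real system maps 14 fault types):

```python
# Sketch of the contextual fault-label pattern: each fault maps to an
# explanation plus a recovery path, and none of them block the rest of the UI.
# Fault codes and messages here are illustrative, not Maestro's real set.
from dataclasses import dataclass

@dataclass(frozen=True)
class FaultInfo:
    message: str             # what happened, in plain language
    recovery: str            # what the team can do right now
    blocking: bool = False   # design rule: faults never take over the screen

FAULT_REGISTRY = {
    "arm_collision": FaultInfo("Arm 2 contacted an obstacle.",
                               "Clear the obstruction, then tap Resume."),
    "tracking_lost": FaultInfo("Camera tracking lost.",
                               "Re-seat the scope and re-lock the arm."),
}

def fault_banner(code: str) -> dict:
    """Render a fault as a top-of-screen label; every other control stays live."""
    info = FAULT_REGISTRY[code]
    return {"code": code,
            "text": f"{info.message} {info.recovery}",
            "blocking": info.blocking}

banner = fault_banner("tracking_lost")
assert not banner["blocking"]  # the system keeps working while you fix the issue
```

Making `blocking=False` the dataclass default encodes the design rule at the type level: a fully blocking fault would have to be opted into explicitly, which is exactly the review conversation you want to force.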

Before (branching): multi-path flow, users got lost between screens.
After (linear): Prerequisites → Setup → Surgery. 3 clear screens, always know where you are.
The complete linear user journey: Prerequisites → Setup → Surgery

IMPACT

Launched Dec 2024. Measured Until Mar 2025.

For three months post-launch, we tracked setup duration, system restart frequency, and surgery completion rate with the robot across every deployment.

0 surgery restarts after redesign
7 min average reduction in setup time
75% reduction in system restarts per 5 surgeries
3-step linear flow (from a cluttered multi-screen setup)

“This is the first time I’ve been able to recover from faults without restarting the whole system.”
— Surgical Assistant during validation testing

MY ROLE

Sole Product Designer

I was the only designer on this. Research, design system, screens, testing, handoff. The team brought clinical knowledge and business context.

What I Owned

  • User research — conversations with surgical teams and ground support
  • Problem analysis — screen analysis, logs, user complaints
  • Design principles and design system (atomic methodology)
  • All UI design — from wireframes to high-fidelity screens
  • Usability testing with OR teams
  • Animated instruction design
  • Fault handling redesign
  • Three-step flow information architecture

The Team

  • Moran — Product Manager (requirements, prioritization)
  • Emilie — Clinical Researcher (OR context, user access)
  • Jefferey — Chief Strategy Officer (hospital expansion strategy)
  • Engineering team (implementation)

REFLECTION

What I Learned

This project reset how I think about pressure in design. Not shipping deadlines. The kind where someone is under anesthesia.

Three buttons can feel harder than ten.

It depends on consequences. If tapping the wrong one extends someone’s time under anesthesia, three buttons feel impossible. I used to think cognitive load was about quantity. It’s about anxiety.

The robot worked fine. The interface made people doubt it.

Surgeons didn’t abandon Maestro because the technology failed. They abandoned it because the screen made them feel like it might. I didn’t add features. I removed reasons to hesitate.

You can’t simulate gloved hands and a patient on the table.

We found problems in the OR that never appeared in any design review or simulation. A clean screen in Figma becomes chaotic when you’re wearing nitrile gloves under surgical lights. I don’t trust lab testing anymore for products like this.

Surgical faults and AI errors follow the same rule.

Don’t block the user. Show what went wrong. Let them recover without losing context. I designed this for a surgical robot, then realized it’s the same pattern AI products need: confidence signals, graceful degradation, and always an escape hatch for the human.