Selected Work · 07 projects
01

Contact & App Resolution

Apple · Siri

Say "call Nick" and Siri might find three Nicks across two different apps. Most people use more than one messaging app — and in some regions, the default isn't even a native iOS app. Ambiguity was everywhere.

Led design end-to-end, working with engineering on a behavioral model that used signals like frequency and recency to resolve both the contact and the app. A key part of the design was asking a question when the model was unsure — and always the simplest possible question. If two Nicks differ only by last name, just ask: Which Nick, A or B? I also led junior designers on the correction flows for Contact & App.
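One way to picture the resolution logic: score each (contact, app) pair on behavioral signals, act when the top candidate clearly wins, and otherwise ask about only the dimension that actually differs. This is a minimal illustrative sketch, not the shipped model — the signals, weights, and thresholds here are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    contact: str      # e.g. "Nick A" or "Nick B"
    app: str          # e.g. "Messages" or a third-party app
    frequency: float  # normalized interaction count, 0..1
    recency: float    # normalized recency of last interaction, 0..1

def score(c):
    # Hypothetical weighting; real signals and weights were tuned with engineering.
    return 0.6 * c.frequency + 0.4 * c.recency

def resolve(candidates, margin=0.25):
    """Return (winner, None) when confident, or (None, question) when not."""
    scored = sorted(candidates, key=score, reverse=True)
    if len(scored) == 1 or score(scored[0]) - score(scored[1]) >= margin:
        return scored[0], None  # confident enough to act without asking
    best, runner_up = scored[0], scored[1]
    # Ask the simplest possible question: only about what differs.
    if best.contact != runner_up.contact:
        return None, f"Which Nick? {best.contact} or {runner_up.contact}"
    return None, f"Which app? {best.app} or {runner_up.app}"
```

The margin check is what keeps the experience quiet: a clear behavioral winner never triggers a question at all.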

Ambiguity Handling·User Behavioral Model·Correction UI·Multi-Modal Design
02

Head Gestures & Summarization

Apple · AirPods

Our incubation lab developed the ability to detect head gestures through AirPods. My challenge as lead designer for Announce Notifications was figuring out if, and how, to incorporate them — knowing AirPods are often worn in public, where using your voice to control announcements can feel awkward. I was also well aware of user frustration with long announcements disrupting media playback.

I worked with engineering to summarize long notifications using an LLM, then invite users to "read it" via voice or head gesture, and partnered with Sound Design to subtly signal the follow-up window and gesture detection. Beyond summaries, I also proposed Smart Actions — like reply or "add to calendar" — based on notification content. The key to keeping it simple: always map each question cleanly to a yes/no answer.
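The yes/no constraint can be sketched as a tiny dialogue step: every follow-up is phrased as a binary question, so a nod or shake resolves it just as cleanly as a spoken reply. All names and phrasings below are illustrative assumptions, not the shipped behavior.

```python
# Hypothetical mapping: head gestures resolve to the same yes/no as voice.
GESTURE_TO_ANSWER = {"nod": True, "shake": False}

def follow_up(summary, action="read the full message"):
    # One question, one binary answer — never an open-ended prompt.
    return f"{summary} Want me to {action}?"

def handle_response(gesture=None, utterance=None):
    """Accept either input modality; both collapse to the same yes/no."""
    if gesture in GESTURE_TO_ANSWER:
        return GESTURE_TO_ANSWER[gesture]
    if utterance is not None:
        return utterance.strip().lower() in {"yes", "yeah", "sure", "read it"}
    return False  # the follow-up window closed with no response
```

Framing Smart Actions the same way ("Add it to your calendar?") keeps every new capability answerable with the same two gestures.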

Model Design & Tuning: LLM·Novel Input Modality
03

Sending Messages Without Confirmation

Apple · Siri

Siri always asked for confirmation before sending — even when it clearly heard you right. That friction adds up.

Working closely with engineering and PM, we tuned a behavioral model that predicted whether a user would have confirmed anyway — based on voice input quality and message content. Even with careful tuning, models make mistakes, so user control was essential. The opt-in introduced the feature gradually, a clear toggle let people turn it off, and a small send delay gave just enough time to catch a second thought.
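The shape of that safety net can be sketched in a few lines: a confidence gate decides whether to skip the confirmation, and a short cancellable delay preserves the chance for a second thought. Thresholds, signal names, and the delay length here are illustrative assumptions.

```python
import time

def should_skip_confirmation(asr_confidence, content_risk, enabled,
                             conf_threshold=0.9, risk_threshold=0.2):
    """Hypothetical gate: skip confirmation only when the model predicts
    the user would have confirmed anyway, and the feature is opted in."""
    return enabled and asr_confidence >= conf_threshold and content_risk <= risk_threshold

def send_with_undo_window(send, cancelled, delay_s=2.0, tick=0.05):
    """Wait briefly before sending, so a second thought can still cancel."""
    waited = 0.0
    while waited < delay_s:
        if cancelled():
            return False  # the user caught it in time
        time.sleep(tick)
        waited += tick
    send()
    return True
```

The two pieces are deliberately independent: even when the gate is wrong, the delay and the settings toggle keep the user in control.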

User Behavioral Model·Model Design & Tuning·Multi-Modal Interaction·Settings Design·Opt-in Experience
04

Announce Messages

Apple · CarPlay
CarPlay Announce Messages Settings

Having messages read aloud while driving is genuinely useful — until you've got a car full of people who don't need to hear them. Privacy needs vary wildly depending on who's in the car with you.

The settings design was really the heart of this feature. Working with the CarPlay Design Lead, we mapped out the full spectrum of scenarios — from someone usually driving alone to a parent shuttling kids around all day. The result was a control system flexible enough to handle all of it, without feeling overwhelming. Interruption was made easy — steering wheel, voice, or screen — and announcements were carefully kept out of the way of navigation.

Settings Design·Multi-Modal Interaction·Privacy Model
05

Share This

Apple · Siri

Sharing photos, videos, and rich media through Siri — making complex actions feel fast and totally natural through voice alone.

Voice Sharing·Rich Media·Intent Resolution
06

Editing Message Text

Apple · Siri

Introduced keyboard-based message editing within a voice-first experience. Designed interaction models that seamlessly integrate touch and voice input without disrupting flow.

Touch + Voice·Input Design·Multi-Modal Interaction
07

Intercom

Apple · HomePod

The experience that lets you call out to anyone in the house just by speaking — no tapping, no app, just your voice and a HomePod in the room.

Multi-Device·HomePod·Voice Interaction