Hexabot Manager App

Summary

This year's edition of DesignFlows, sponsored and organized by Bending Spoons, challenged participants to imagine user interfaces in an alternate 2025 in which humans permanently inhabit the Moon and Hexabots assist them in daily life. Despite time constraints, my project explored this parallel scenario through grounded, modular UX.

A Feasible Interface for a Fictional Future

The brief imagined a sci-fi scenario, but I chose to approach it as a real design problem: what does a reliable, voice-accessible, mission-critical UI look like when used by both researchers and civilians in constrained environments?

Overview of current Hexabot status. Clear labels support rapid scanning.
Early state for user role selection. Designed to minimize initial friction while emphasizing mission context.
The app should greet users with visual clarity — a fast splash to anchor the brand without making them wait.
Logins are frequent. The action should feel invisible. Tap. Done. No distractions, no small buttons.
I need to differentiate user flows early. The needs of a researcher aren’t the same as those of a parent or a curious visitor.
The dashboard must show what matters in under two seconds. Status at a glance, with zero interpretation needed.
Voice-guided task assignment interface showing Hexabot details in the background. High contrast for mission-critical clarity.
Dark mode explorer interface. Highlights map-based Hexabot coordination and urgent visual cues for status monitoring.
Mid-task tracking with pause and cancel actions. Emphasis on reversibility and status transparency under urgency.

I focused on feasibility over fantasy. Every screen was designed to be modular, scalable, and understandable, without relying on exaggerated futurism. Inputs are large. Feedback is direct. Tasks are clear. Commands can be issued by touch or voice, which is useful for gloved users or motion-restricted environments.

Once a user opens a specific bot, the UI should flex — more depth, but never more complexity.
Starting a task under pressure? The form should feel like a checklist, not a burden.
I added a review screen because the mental cost of a mistake is higher than the friction of a second look.
Progress isn't just a number. It's peace of mind. So I designed live feedback to feel like support, not noise.

This was not a visual experiment. It was a UX study in abstraction, accessibility, and decision friction. The interface adapts to researchers planning experiments and civilians managing daily logistics. Features were prioritized based on urgency, context, and action reversibility.

Completion state with contextual metrics.
Notifications, settings, and voice setup. Priority was given to accessibility, control toggles, and voice onboarding in compact screens.
I wanted the end-state to feel intentional — like something completed, not just something that ended.
Alerts should guide attention, not demand it. Filters help users act without having to sort things out first.
When someone can rename and color-label their bots, the system becomes theirs. Not just software, but familiar space.
Hands are often busy. Voice is not a feature here — it’s a form of inclusion.
What I Learned

This was a low-fidelity submission, designed during the same period I was preparing for another challenge, where I placed first. With limited time, I prioritized clear flows, interaction intent, and prompt clarity over polished visuals. Designing under constraint helped focus on what matters most: (1) one interaction per screen, (2) one voice command per action, and (3) one UI system for distinct user types.

In retrospect, I think my mistake was being too attached to feasibility given the challenge's unusual scenario. That mindset led me to a simpler, perhaps less imaginative solution than the top 40 entries selected for the finals.

DesignFlows reminded me that speculative design is strongest when it dares to go further — even if we’re on the Moon. 🚀
