DURATION
6 Weeks
Jul 2023 - Aug 2023
MY CONTRIBUTION
Experience Design
Interface Design
Usability Testing
CONTRIBUTORS
Kangning Tang
Coco Chen
Sherry Yang
TOOLS
Figma
Midjourney
After Effects
"Illuminate nature's wonders."

Reimagining nature exploration, “Lumos” is a next-generation AR companion that makes every adventure more immersive, more informative, and safer.
MISSION STATEMENT
To integrate AR and AI technology seamlessly with nature, helping users navigate their surroundings while maintaining an organic experience of the world.
PROBLEMS
Nature explorers often find it challenging to navigate unfamiliar terrains and identify the diverse species they encounter. Traditional methods like physical maps or mobile apps can be cumbersome, distracting, and less immersive.
Distraction from Environment
Safety Concerns
Lack of Real-Time Updates
Limited Information
HOW MIGHT WE
"How might we ensure users’ safety while venturing into the wilderness?"
"How might we foster a deeper connection between user and nature?"
"How might we assist the exploration without overwhelming the user or disrupting the natural immersion?"
WHY AR GLASSES
Recognizing the limitations of traditional outdoor navigation and species identification, we turned to AR for its ability to seamlessly blend the physical and digital worlds. This technology enables an interactive, intuitive experience, providing real-time guidance and information integrated with the natural environment.
HOW IT INTERACTS
Eye Tracking
Advanced sensors capture minute eye movements. The data is analyzed in real time to determine precisely where on the screen the user is focused and how their eyes move across it.
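To make the gaze interaction concrete, here is a minimal TypeScript sketch of how raw eye-tracking samples might be smoothed into a stable fixation point. The `GazeSample` type, smoothing factor, and dwell radius are illustrative assumptions, not the actual hardware pipeline.

```typescript
// Illustrative gaze pipeline: raw eye-tracker samples are noisy, so an
// exponential moving average stabilizes the fixation point before the UI
// decides what the user is "staring" at. Types and thresholds are assumed.
interface GazeSample {
  x: number; // normalized screen coordinate, 0..1
  y: number; // normalized screen coordinate, 0..1
  timestampMs: number;
}

class GazeTracker {
  private smoothed = { x: 0.5, y: 0.5 };
  private readonly alpha = 0.2; // smoothing factor (illustrative)

  // Blend each new sample into the running gaze estimate.
  update(sample: GazeSample): { x: number; y: number } {
    this.smoothed.x += this.alpha * (sample.x - this.smoothed.x);
    this.smoothed.y += this.alpha * (sample.y - this.smoothed.y);
    return { ...this.smoothed };
  }

  // A "stare" can be treated as the gaze staying within a small radius
  // of a target; dwell time would be checked by the caller.
  isNear(target: { x: number; y: number }, radius = 0.05): boolean {
    return Math.hypot(this.smoothed.x - target.x, this.smoothed.y - target.y) < radius;
  }
}
```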
Voice Control
The built-in microphone captures the user’s voice commands, which AI processes to extract key information and execute actions, providing feedback by voice or on-screen display.
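A minimal sketch of how the voice pipeline might route a transcribed command to a feature. A production system would use an NLU model; the keyword patterns below are assumptions drawn from the sample commands in this case study.

```typescript
// Illustrative intent router: maps a transcribed utterance to one of
// Lumos's features via keyword patterns. A real system would use an NLU
// model; these patterns mirror the sample commands shown on this page.
type Intent =
  | "navigate"
  | "identify"
  | "nearby"
  | "health"
  | "emergency"
  | "unknown";

function routeCommand(transcript: string): Intent {
  const t = transcript.toLowerCase();
  if (/\b(go to|way to)\b/.test(t)) return "navigate";
  if (/\b(what is|tell me about)\b/.test(t)) return "identify";
  if (/\bnearby\b/.test(t)) return "nearby";
  if (/\bhealth\b/.test(t)) return "health";
  if (/\b(emergency|what should i do)\b/.test(t)) return "emergency";
  return "unknown";
}

// routeCommand("Show me the way to the waterfall") -> "navigate"
// routeCommand("What is it?")                      -> "identify"
```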
SIX MAIN FEATURES
SMART NAVIGATION


The navigation bar on the ground appears only when the user has just started their journey or has been heading in the wrong direction for a while, minimizing visual obstruction and letting the user explore freely (a sketch of this rule follows the sample commands below).
“I want to go to...”
“Show me the way to...”
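A rough TypeScript sketch of the visibility rule described above. The timing thresholds are illustrative assumptions; real values would come from testing.

```typescript
// Illustrative visibility rule for the on-ground navigation bar: show it
// right after the journey starts, or once the user has been heading the
// wrong way for a while. All timing thresholds are assumptions.
const START_GRACE_MS = 60_000;        // visible during the first minute
const OFF_COURSE_TRIGGER_MS = 30_000; // ...or after 30 s off-course

function shouldShowNavigation(
  elapsedMs: number,   // time since the journey started
  offCourseMs: number, // continuous time spent heading the wrong way
): boolean {
  return elapsedMs < START_GRACE_MS || offCourseMs > OFF_COURSE_TRIGGER_MS;
}
```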
SPECIES IDENTIFICATION


Curious about a plant or animal? Stare at it and ask me! With AI-powered object recognition, I’ll identify it and provide a detailed description with engaging animations (see the sketch after the sample commands below).
“What is it?”
“Tell me about this plant!”
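A hypothetical sketch of the gaze-plus-voice trigger: crop the camera frame around the current fixation point and hand the crop to a vision model. `identifySpecies` is a stand-in callback, not a real API, and the crop size is an assumption.

```typescript
// Hypothetical glue between gaze and voice: when the user asks "What is
// it?", crop the camera frame around the current fixation point and send
// the crop to a recognition model supplied by the caller.
interface Identification {
  name: string;
  description: string;
}

async function identifyAtGaze(
  frame: ImageBitmap,
  gaze: { x: number; y: number }, // normalized, from the gaze tracker
  identifySpecies: (crop: ImageBitmap) => Promise<Identification>,
): Promise<Identification> {
  const size = 256; // crop size in pixels (assumption)
  const cx = gaze.x * frame.width;
  const cy = gaze.y * frame.height;
  const crop = await createImageBitmap(
    frame,
    Math.max(0, cx - size / 2),
    Math.max(0, cy - size / 2),
    size,
    size,
  );
  return identifySpecies(crop);
}
```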
ACTIVITIES DISCOVERY


Curious about nearby spots? Stare and ask me, and I'll tell you all about them!
“Show me nearby spots.”
HEALTH DATA & REVIEW


Curious about your health data? Just ask me, and I'll give you a full review!
“Show me the health data.”
EMERGENCY



Facing an emergency and not sure what to do? Ask me, and I'll walk you through it!
“I am encountering..., what should I do?”
SAFETY WARNING


I will provide you with various warnings, including weather, health, route, and battery alerts.
RESEARCH
CONTEXTUAL INQUIRY








Identification apps give inconsistent answers for some rare species or weeds across repeated tries.
Hikers navigate with their mobile phones but sometimes lose signal.
It's easy to fall into pits on the trail, even repeatedly.
It's hard to see the screen in direct sunlight.
COMPETITIVE MATRIX

Most existing solutions lack built-in safety and health-related functionality.
Each existing app serves a single or limited purpose, such as weather tracking, navigation, or nature identification.
Most apps ignore health aspects, such as fitness tracking, hydration reminders, or altitude adaptation alerts.
INTERVIEWS
6
interviewees
“I want to see the time left, but based on my own pace.”
“I don’t think you can use your cell phone when you’re out in the field—you won’t see the road, and you’ll end up falling.”
“If I face a serious injury, I have no idea what to do because there’s no internet or resources on the mountain.”
“The screen gets too dark during the day, so I can’t see clearly.”
“I wonder how far I’ve already gone and how much time is left. When I ask, the kids always say five minutes, but there’s really an hour left.”
“The view changes with each season. After heavy rain, the waterfall is flowing—I want to know the current status of the view.”
INTERFACE EXPLORATION
INFORMATION & HIERARCHY
4
rounds
|
11
participants
COLOR & HIERARCHY


Reducing the number of colors minimizes confusion while maintaining clear urgency levels.
Reducing transparency in alert boxes ensures better readability against the natural background.
Aligning and emphasizing actionable text makes it more prominent and enables quicker scanning.
Applying stronger color cues makes warnings feel more immediate and easier to interpret.
AESTHETIC & INFORMATION


Increasing text opacity and contrast makes the information more legible against the background.
Adjusting text size and weight differentiates primary and secondary details, improving clarity.
Aligning text and bounding boxes reduces clutter and enhances visual balance.
AESTHETIC & INFORMATION


Enhancing depth and layering makes the interface feel more immersive and intuitive, which enhances storytelling and aids understanding.
INFORMATION & HIERARCHY

Adjusting text size and weight makes critical actions more prominent and easy to scan quickly.
Expanding the speech bubble and separating key actions creates a cleaner, more intuitive structure.
Shifting from question-based prompts to direct voice-command suggestions makes available actions clearer at a glance.
FURTHER TECH OVERVIEW
PROTOTYPING & FINDINGS

Implementing AR glasses for testing
OPTICAL LIMITATIONS


AR glasses, unlike VR headsets, are unable to render true black due to their optical limitations.
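A simplified model of why this happens: waveguide optics are additive, so the eye sees roughly the real-world light plus the emitted pixel light. The per-channel sketch below is an idealization, not display firmware.

```typescript
// Simplified additive-display model: the eye sees roughly the real-world
// light plus the emitted pixel light, per color channel. Emitting black
// adds nothing, so the scene shows through; bright UI saturates quickly.
function perceivedChannel(ambient: number, emitted: number): number {
  // channel values normalized to 0..1; additive combination, clamped
  return Math.min(1, ambient + emitted);
}

// perceivedChannel(0.6, 0.0) === 0.6 -> "black" pixels just show the world
// perceivedChannel(0.6, 0.5) === 1.0 -> bright overlays wash out easily
```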
FOV CHALLENGES


It is highly challenging to balance field of view (FOV) and visual components in AR, ensuring clarity without overwhelming the user. Even when UI elements are placed at the edges of the screen, they still dominate the view.
Conducting user testing for AR glasses presents unique challenges due to the private nature of their displays, making it difficult for observers to see what users are experiencing. Consequently, feedback is obtained verbally, limiting the ability to directly observe user interactions and reactions.