A smart mobile solution for identifying parking spaces in real time

Project Background

Living in a city comes with its own advantages and disadvantages. One of the major disadvantages is the daily traffic density, which leads to parking issues.

There are a number of constraints while looking for a parking spot: the time of day, location, cost, distance to destination, and open versus covered parking. Apart from this, searching for parking is a known cause of stress and distraction while driving.

Interactive Prototype: https://r9trx9.axshare.com/home.html

Role

Client: Academic project
Role: End-to-end product design, from research and concept to visualisation and testing
Tools: Axure, Photoshop, Illustrator, FromTextToSpeech.com
Skills: User Interviews, secondary research, UX & UI design, Prototyping, User testing
Duration: 3 weeks

Project Objective

To solve the challenge of finding an ideal parking spot with a solution that identifies parking spots in real time and guides the user without demanding their attention or action.

Competitive Study

I started my secondary research with two New Zealand-based mobile solutions, PayMyPark and ParkMate. However, both focus on easing the parking payment process and showing the availability of fixed parking spots. Additionally, their UIs are cluttered with actions and information, which makes them difficult to learn and use.

Apps like ParkWhiz and ParkingPanda are localised to a few cities and focus on displaying fixed parking spaces inside buildings. They do not show any real-time available spots.

SpotAngels is another US-based mobile solution. It relies on crowdsourced data, where users mark available parking spots with their smartphone's GPS, which may not always be reliable.

I came across an IoT-based solution called “SENSIT”, a wireless parking sensor mounted on the road that detects the real-time occupancy status and parking duration of individual parking spaces. It is developed by Nedap Identification Systems, a Netherlands-based company.

With SENSIT’s capabilities as inspiration, I planned to visualise a design solution with Artificial Intelligence and Mixed Reality that could not just simplify parking but add delight to the overall experience.

Let’s Talk to Users

I conducted interviews with 5 users to understand how they look for parking spots, the challenges they face, and how they expect technology to help.

Insights

I created a framework to organise user feedback and generate insights. I marked each insight with an alphanumeric code so it could be mapped to the challenges and expectations later.

Next, I set out to understand the process people follow while looking for a parking spot and the kinds of challenges they face. That helped me identify the pain points and opportunities.

User Journey Map — Current State

The user insights are mapped to the opportunities identified. Based on the pain points, I identified a few new opportunities, marked in green.

The journey map clearly shows that finding a parking space, the parking process itself, and returning to the parking spot are all causes of stress and hassle for users.

Let’s define an ideal journey for the users with my proposed solution.

User Journey Map — Future State

The future state takes care of the pain points and challenges users currently face.

Design Drafts

The feature set defines what elements go into a UI and how it behaves and interacts with the user. I started by sketching out my initial ideas.

The sketches led to low-fidelity wireframes. I connected the screens and created wireflows.

I tested my wireframes with 5 users and incorporated their feedback to improve vehicle identification, parking spot selection, and the parking simulation.

Design Guidelines

Design Decisions

1. Input Vehicle’s Registration Number

This is an optional screen for users who would prefer iPARK to recommend parking spots based on their vehicle size.

User Action: Input vehicle’s registration number

Design Strategy:

  • More prominence to voice input, through contrast and size, as users may use the app while driving
  • Manual entry as an alternative, secondary approach
  • Keep it simple, clean, and minimalist; avoid clutter

2. Confirm Vehicle

The app suggests the vehicle’s make and model to the user.

User Action: Tap on the image to confirm the make and model

Design Strategy:

  • Allow user to perform action without looking at the screen
  • Added vehicle image for easier identification
  • Bigger, prominent text for easier readability
  • Clean UI with minimum elements

3. Available parking spots

iPARK recommends real-time available parking spots, represented as “bubbles”, near the user’s destination.

User Action: Selects the parking spot based on proximity, free/paid, open/covered

Design Strategy:

  • Leverage users’ existing mental model by showing a map UI with pointers for the destination and parking spots
  • The bigger the bubble, the higher the probability of acquiring the spot. The probability is calculated by the IoT sensor fixed at the parking spot, which tracks how frequently the spot is occupied. If the spot is on a busy street and frequently occupied, the bubble is smaller, and vice versa.
  • Use conventional wisdom to show paid parking as a “dollar” and covered parking as a “roof”
  • Accept a small learning curve for users to “learn” the free and open parking indicators
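The bubble-size idea above can be sketched in a few lines. This is an illustrative model of my own, not SENSIT’s actual API: the function name, radii, and the shape of the sensor data are all assumptions.

```javascript
// Sketch of the bubble-size heuristic: a spot that is frequently
// occupied (busy street) gets a small bubble; a rarely occupied
// spot gets a large one. Radii are hypothetical UI values.

// occupancyRate: average fraction of time the spot is occupied,
// reported by the on-road sensor over a rolling window (0..1).
function bubbleRadius(occupancyRate, minRadius = 8, maxRadius = 32) {
  // Probability of acquiring the spot falls as occupancy rises.
  const acquireProbability = 1 - occupancyRate;
  return minRadius + (maxRadius - minRadius) * acquireProbability;
}

bubbleRadius(0.9); // busy street -> small bubble
bubbleRadius(0.1); // quiet street -> large bubble
```

Mapping probability directly to size keeps the map glanceable: the user can compare spots without reading any numbers.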

Offering users a choice of real-time parking spaces is the USP of iPARK. This is the UI where the user would make a conscious decision of selecting a parking space. Hence, I analysed this interface with the decision action funnel.

4. Park mode

At the parking spot, the UI turns into a simulated view of the user parking the vehicle.

User Action:

  • None
  • Follow the beep sound to park

Design Strategy:

  • Simulate the movement of the user’s vehicle
  • A flashing sensor animation to indicate it is analysing the surroundings
  • A dotted guide to indicate the optimum parking path
  • The on-road IoT sensor calculates the position of the vehicle and simulates it on the mobile UI in real time. Audio assistance with a beeping sound, which can be turned off, guides the user to park correctly.
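The audio guidance described above can be sketched as a simple mapping from remaining distance to beep interval; this is my own illustrative logic, and the function name, distances, and timings are assumptions.

```javascript
// Sketch of the beep guidance: the pause between beeps shrinks as
// the vehicle nears the optimum position reported by the sensor.

// distanceCm: remaining distance to the optimum position, in cm.
// Returns the pause between beeps in milliseconds, or null when
// the user has turned audio assistance off.
function beepIntervalMs(distanceCm, audioEnabled = true) {
  if (!audioEnabled) return null;
  if (distanceCm <= 10) return 0;      // continuous tone: stop here
  if (distanceCm >= 200) return 1000;  // far away: slow beep
  // Linear ramp between 1000 ms (at 200 cm) and 100 ms (at 10 cm).
  return 100 + (distanceCm - 10) * (900 / 190);
}
```

A shrinking interval mirrors the parking sensors drivers already know, so no learning is needed to interpret the sound.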

5. Parked!

Based on the user’s parking, a message is shown along with a percentage comparing their parking with that of other iPARK users.

User Action: Tap Done to record the parking location and the time left

Design Strategy:

  • Motivate users by showing a relevant message and a comparison with other iPARK users
  • Introduce gamification by rating the parking and letting users redeem it against parking fees
  • Inform the user rather than requiring the user to act
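The comparison message above could be produced by ranking the user’s parking score against other users’ scores. A minimal sketch, assuming a hypothetical 0–100 parking score; the names and message wording are placeholders:

```javascript
// Percentage of other iPARK users whose parking score this user beat.
function parkingPercentile(myScore, otherScores) {
  if (otherScores.length === 0) return 100; // no one to compare with
  const beaten = otherScores.filter(s => s < myScore).length;
  return Math.round((beaten / otherScores.length) * 100);
}

// Builds the motivational message shown on the "Parked!" screen.
function parkedMessage(myScore, otherScores) {
  const pct = parkingPercentile(myScore, otherScores);
  return `Nice parking! You parked better than ${pct}% of iPARK users.`;
}
```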

6. Extend Parking Time

The UI indicates the parked location and the parking time left.

User Action:

  • Extend parking time
  • Return to the parking location with guidance

Design Strategy:

  • Primary focus on the time remaining with animation
  • Show the parked location as a second priority
  • Allow user to extend parking time but as a secondary action

7. Guide Back

UI guides the user back to the parked location.

User Action: Use phone camera for guidance

Design Strategy:

  • Design visual cues with mixed reality to guide the user
  • Use one-third of the UI to show the user’s current location

Future Considerations

Hands-free interaction: Two-way voice interaction between the app and the user, using Natural Language Processing (NLP), can completely avoid any touch interactions with the app

Park with AI: Use AI to pre-book parking spots based on the user’s calendar

Smart Cars + Smart Park: Self-driving cars can “talk” to smart parking sensors and reserve an ideal parking spot without the user’s intervention

Challenges

  • This was a fun project, and the main challenge I faced was deciding where to draw the line in terms of the app’s features and capabilities. I wanted to conceive a product that is scalable and futuristic, yet technologically feasible

  • While adding audio using JavaScript, I intended to play it by default. However, the tool has limitations, so I had to find a workaround with an “onClick” event.
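The workaround can be sketched as below. This is not the actual prototype code; the helper name and audio file are placeholders. The idea is that since the tool (like modern browsers) won’t autoplay audio, playback is deferred to the first user click, and the handler removes itself so later clicks don’t restart the sound.

```javascript
// Defer audio playback to the first click, since autoplay is blocked.
function playOnFirstClick(target, audio) {
  function handler() {
    audio.play();                                 // allowed: user gesture
    target.removeEventListener('click', handler); // run only once
  }
  target.addEventListener('click', handler);
}

// In the prototype this would be wired up roughly as:
// playOnFirstClick(document, new Audio('voice-over.mp3'));
```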

