Google Reviews | UX Design
Introduction
Google Reviews is a popular medium for finding real critiques, experiences, and opinions about places you might visit. Despite its popularity, the current interface is interwoven with Google Maps, which makes its features tricky to discover and limits what can actually be done with the platform. This project explores the opportunity to connect with niche reviewers: the ability to follow reviewers and a social feed where users can view posts from the people they follow. As someone who likes to both read and write reviews, I would enjoy a feature that surfaces credible reviews from real people who visit places of interest to me and makes my own reviews accessible to a larger audience.
Currently, you can follow reviewers on Google Reviews, but there is no way to search for specific reviewers by name or see posts from accounts you follow in any sort of feed. The only way to find reviewers is to stumble upon one of their reviews, or, if you are already following them, to search through your “following” tab. For this project, my goal is to improve the Google Reviews platform by introducing a feature that lets users view reviews from reviewers they follow in a single tab, while also incorporating feedback from survey, interview, and focus group participants.
Current landing page of Google Maps: there is no “feed” button, tab, or other way to easily view content from reviewers that you follow.
Needfinding Plan
Survey
My first needfinding activity was a survey to gather basic information about my topic and respondent demographics. My survey questions were mostly multiple choice, with a few open-ended ones. At the very end, I opened up the discussion for respondents to share feedback about the survey or project, or ideas they would like to see implemented. My idea is to add a new feature to Google Reviews, but I wanted to leave room for discussion at the end of the survey to see whether people wanted anything beyond the feature I had in mind. This helped keep me from becoming closed-minded around one idea or any preconceived notions. A full list of the survey questions can be found in Appendix 15.1: Survey Questions of the document attached at the end of this page.
Heuristic Evaluation
For my heuristic evaluation, I will analyze the Google Maps and Google Reviews interfaces using these three heuristics to assess the overall user experience. This evaluation will help me identify areas for improvement before moving forward with designing my prototypes.
1. Recognition rather than recall: A well-designed interface should minimize the cognitive load on users by allowing them to recognize information rather than forcing them to remember it. How easy is it to remember the names of places with reviews that we want to refer back to? Is there a way to store reviews for later? How well do Google Maps and Google Reviews support users in recalling previously viewed reviews, and do they offer any shortcuts or recent activity logs to assist in this process?
2. Help users recognize, diagnose, and recover from errors: A user-friendly system should provide clear pathways for users to recover from mistakes and continue their tasks with minimal frustration. I will assess how Google Maps and Google Reviews handle errors and unintended actions, such as accidentally closing a tab or losing track of a review a user wanted to revisit. Can Google Maps/Google Reviews help us if we make a mistake? For example, what if we accidentally close a tab with a review that we wanted to look at later? How easy is it to find that review again?
3. Consistency and standards: Interfaces should maintain consistency in design and interactions so users can predict how elements will function. This includes adhering to both internal consistency within Google Maps and external consistency with other widely used review platforms. I will evaluate whether UI elements such as buttons and icons behave predictably. Do features hint at how they are supposed to be used? Is it easy to understand what a specific button will do? For example, I will examine whether the interface uses familiar symbols (e.g., star ratings, comment bubbles, thumbs-up for helpful reviews) and whether these symbols are universally understood.
During my heuristic evaluation, I will take detailed notes on usability strengths and weaknesses, along with any potential pain points that could impact the user experience. Then, I will analyze my survey responses to understand user preferences, behaviors, and frustrations when engaging with online reviews. Based on the findings from both, I will begin brainstorming potential design solutions to improve the review experience. My low-fidelity prototypes will be digital sketches, and I plan to build my high-fidelity prototype in Figma. Before committing to a specific device to design for, I will check my survey results to determine which device is most commonly used for reading reviews, ensuring that my design aligns with user behavior and preferences.
Needfinding Results
Survey Results
I was able to get 42 survey respondents. Of the respondents, 66.7% were within the 18 - 29 age range, making it the most prominent age group in my survey. Additionally, 85.7% of the respondents were from the United States, 4.8% were from Canada, 2.4% from India, 2.4% from Croatia, 2.4% from Japan, and 2.4% did not input their location. I used the data from the other questions to understand how important and impactful reviews are when deciding to visit a new place. From the information I gathered, we can recognize the following:
95.2% (40/42) of survey respondents trust reviews more when they can see that a review was written by a real person.
92.9% (39/42) of survey respondents note that reviews impact their decision on whether or not to visit somewhere.
83.3% (35/42) of survey respondents note that they would consider not going somewhere they originally wanted to if it had bad reviews.
85.7% (36/42) of survey respondents note that they would enjoy having the ability to easily see content from reviewers that review places of interest to them.
73.8% (31/42) of survey respondents note that they would enjoy having a section on Google Reviews that would allow them to scroll through reviews from reviewers they follow.
From the collected insights, we can see significant interest in a feature that would let users read reviews from real, trustworthy reviewers they are interested in, presented in an aggregated format.
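As a quick sanity check, each percentage above follows directly from the response counts divided by the 42 total respondents. Below is a minimal sketch in Python; the statement labels are my own shorthand for the survey items listed above.

import textwrap

# Response counts reported above, out of 42 total respondents.
counts = {
    "trust reviews more when written by a real person": 40,
    "reviews impact decision to visit": 39,
    "would reconsider visiting after bad reviews": 35,
    "want to easily see content from relevant reviewers": 36,
    "want a feed of reviews from followed reviewers": 31,
}
total = 42

for statement, count in counts.items():
    # ":.1%" formats the fraction as a percentage with one decimal place.
    print(f"{textwrap.shorten(statement, 50)}: {count}/{total} = {count / total:.1%}")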
At the very end of my survey, I opened up the discussion for any additional features that users might want to see or for any feedback on my survey and the ideas I had presented. The following interesting features came up, and I will consider them further during my brainstorming process: recommendations based on preferences and history, a “want to go” feature, profile descriptions of the kinds of foods/places a reviewer covers so followers can easily see whether they are interested in the content, and showcasing reviewer credibility. A full list of the survey responses can be found in Appendix 15.2: Survey Results and Appendix 15.3: Survey Open-Ended Results in the document attached at the end of this page.
Heuristic Evaluation Results
When evaluating the three heuristics that I chose for Google Reviews, the following were my findings:
Recognition rather than recall: Google Reviews violates this heuristic with respect to reviews: there is currently no way for a user to save them. It partially obeys the heuristic in that you can look through recent searches and easily navigate back to a location you were previously viewing (within limitations), but there is no way to search for a review you were reading other than scrolling until you find it. This is a violation because the interface barely uses recognition; it relies almost purely on recall.
Help users recognize, diagnose, and recover from errors: Google Reviews violates this heuristic in that if you accidentally lose a review you were viewing, there is no way to get back to it if you have forgotten where it was (location-wise) and who left it. Even if you remember all of these things, you still have to manually scroll to find the review, since there is no search feature. This is a violation because there is no easy way to recover from the mistake, and a user risks never finding that review again.
Consistency and standards: Google Reviews uses fairly consistent design and interactions, so it obeys this heuristic. The use of stars for ratings makes it clear that more stars means better, and it uses a bookmark icon as a save feature (for saving locations, not reviews). For mobile users, everything is tappable and the map is pinchable, which is easy to figure out. However, a potential drawback/violation is that Google Reviews is integrated within Google Maps, which may confuse users and cause them to overlook the features available to them (discoverability is poor).
Initial Brainstorming Plan
From both my survey results and heuristic evaluation, I was able to identify a few core problems with the Google Maps and Google Reviews interface. One major issue is the lack of a feature to save or bookmark reviews, making it difficult for users to reference important reviews later. Additionally, while users have the option to follow reviewers, this feature appears to have little practical value, as it does not actively surface relevant reviews from those they follow. Another notable gap is the absence of a way to see reviews from frequent visitors of specific types of places, despite a clear demand for such a feature. Users who want insights from trusted or like-minded reviewers currently have no streamlined way to access that information.
Keeping these core problems in mind, I plan to conduct an individual brainstorming session in which I will generate at least 20 ideas for a design solution that addresses these issues. Once I have those 20 (or more) ideas, I will sort through them, eliminating concepts that are not feasible while refining and expanding those with potential. This evaluation will assess feasibility in terms of user experience, technical implementation, and overall effectiveness in solving the identified problems. I aim to narrow my ideas down to a focused selection, ideally refining them into a cohesive, well-integrated solution that enhances how users interact with reviews on Google Maps. This structured approach will give me a clear vision before I transition into the prototyping phase. I will write down all of my ideas in a Google Docs file, starting with a focused session and returning to brainstorming as needed until my idea list is sufficient.
Brainstorming Results
List of ideas
The full list of ideas from the brainstorming session can be found in Appendix 15.4: Brainstorming Results in the document attached at the end of this page. I chose the following ideas to move forward to the prototyping stage:
Design for mobile (most people view reviews on phone)
Tab (for quick access) for looking at reviews from people you follow
Reviewers with a star are verified (for credibility)
Recommendations, curated maps
Ability to tag someone
Specific biographies - prompt reviewers to discuss what they review
Bookmarks/bookmarked reviews page
Commenting on reviews/rating reviews
I will be designing low-fidelity prototypes for three different versions: one that includes every feature mentioned, one with fewer features, and one that is very simple while still addressing the core problems.
All-Inclusive Prototype
I want to start off with a design that includes all of the innovative features to see how feasible and user-friendly it can be. I believe that having more features would make the interface more interesting to interact with and keep users engaged. This prototype will be a mobile interface with a new tab for finding posts from reviewers that you follow. You can also see who is verified/credible by the star next to their name. Additionally, you will get recommendations on places to visit and be able to comment on reviews and rate them for relevancy, accuracy, etc. You will be able to tag someone if you were with them, or if you want them to see a place you are interested in, and there will be curated maps made for you based on who you follow. There will be a bookmarks page, and your profile page will encourage you to write about the kinds of places that you review.
Narrowed Prototype
Although having more features can make the interface more exciting, it could also lead to poor discoverability and an overwhelming, crowded environment. For this prototype, I will narrow down the features to make it less intimidating to use. This prototype will also be a mobile interface with a new tab for finding posts from reviewers that you follow. You will still be able to see who is verified/credible by the star next to their name. The recommendation feature will remain, and you will be able to see curated maps based on the people you follow. You will be able to rate reviews (on a star scale) for relevancy, accuracy, etc., but there will not be a comment feature. You will not be able to tag others, but you can still access your bookmarks on a separate page and will be encouraged to describe the kinds of places you review in your profile biography.
Simplistic Prototype
This prototype will be the simplest of the three. I think it is a good option because it sharpens the focus on the core problems, remains simple and clean, and is very easy to use. It does not try to be flashy; instead, it ensures that needs are met without overwhelming the user. In this mobile prototype, users will still be able to find posts from reviewers they follow in a separate tab and see who is verified/credible by the star next to their name. The recommendation feature will be available, and users will be able to view their bookmarks on a dedicated page. On the profile page, reviewers will be encouraged to write about the kinds of places they like to review in their personal biographies.
All Inclusive Low-Fidelity Prototype
This prototype includes a new tab where users can find posts from reviewers they follow, verified reviewer indication (star) for credibility, personalized recommendations, the ability to comment on reviews, rate them based on relevance and accuracy, and tag friends for shared experiences. This design leverages engagement and gamification by introducing social and interactive elements, while affordance and visibility ensure that key functions, like verified reviewers, are easy to recognize. Users can navigate by tapping on the icons, and on the curated maps page, they can click the pins or zoom in to see the recommendations. The bookmark page showcases all of the locations that users bookmark as they scroll through their feed page of reviewers they follow.
Narrowed Low-Fidelity Prototype
The narrowed prototype refines the all-inclusive design by removing certain features to enhance usability without sacrificing essential functionality. While it retains a tab for followed reviewers, verified reviewer stars, recommendations, and curated maps, it eliminates the commenting and tagging features, reducing the complexity of social interactions and the feeling of overcrowding, especially since this is a mobile design.
Users can still rate reviews based on relevance and accuracy, access the bookmarks page, and add details about their interests on their profile. By prioritizing simplicity, this version enhances discoverability and learnability, making it easier for new users to navigate while maintaining an engaging experience. However, while removing comments and tagging simplifies the interface, it may also reduce user engagement and social interaction, making the platform feel less connected compared to the all-inclusive version.
To rate a review, you can click on a review and it will enlarge. When you scroll down, you will find a section to rate the review for three categories: relevance, accuracy, and helpfulness. You can rate each category in the same way you would provide ratings in a normal review to promote consistency. Users can also like reviews by tapping the heart underneath it. The bookmarks and curated maps pages work the same way as in the all-inclusive prototype.
Simplistic Low-Fidelity Prototype
The simplistic prototype strips the design down to core functions, ensuring a clean, focused, and highly usable interface. It includes the most essential features: a tab for following reviewers, verified reviewer indicators, personalized recommendations, a bookmarks page, and a profile section where users can describe their review interests. This version adheres to Jakob Nielsen’s heuristic of aesthetic and minimalist design, eliminating distractions and reducing cognitive load.
With fewer features, the learning curve is shallow, making the platform more accessible to all users. Users can complete actions quickly without unnecessary friction. The commenting and review-rating features were removed on the reasoning that everyone’s experiences are unique, so it might be biased or simply incorrect for people to rate other people’s experiences on accuracy, relevance, etc. Additionally, location owners could abuse a review-rating feature by giving negative reviews bad ratings, which could harm the integrity of reviews as a whole.
Users can scroll through the “Feed” page to like and bookmark reviews they find useful or interesting. You can click on the bookmark icon in the navigation bar to access the “You” page where you can then see personalized recommendations as well as a way to access your bookmarks. When you click on the bookmarks tab, you will be taken to a bookmarks page.
Evaluation Planning
Recruiting Participants
The goal of my evaluation is to test the effectiveness and usability of the three prototypes I designed, to find which one best supports the user experience and tackles the core problems. I will reach out to my personal network on Instagram and randomly select 5-7 participants from everyone who shows interest, to ensure a diverse subset of participants. If needed, I will also reach out to my network on LinkedIn or classmates on Ed. The participants will be reasonably tech-savvy and likely already familiar with Google Reviews.
Survey
I will create a survey on Google Forms to collect more quantitative data; it will be sent out after the interview. Prototype A is the all-inclusive low-fidelity prototype, Prototype B is the narrowed low-fidelity prototype, and Prototype C is the simplistic low-fidelity prototype. The survey will ask the following questions for each of the prototypes:
Effectiveness (1 - Strongly Disagree, 5 - Strongly Agree):
I was able to achieve my goal easily using this prototype.
I would be able to navigate this prototype on my own without difficulty.
I found all of the features in this prototype useful.
Usability (1 - Strongly Disagree, 5 - Strongly Agree):
This prototype was easy to navigate.
The instructions given were easy to follow.
I did not run into any issues.
Think-Aloud Protocol
I will show each participant all three prototypes in a randomized order, starting on the landing page, and instruct them to navigate to the “Feed” page, bookmark a review, and then navigate to the “Bookmarks” page. I will have them talk through how they would use each interface; from this, I can gauge how easy each one is to navigate and how frequently participants run into errors or issues along the way. I will note these errors and include the number of errors per participant per prototype in my quantitative analysis. I will also show them the pages that are unique to each prototype.
If I notice that a participant pauses for an extended period of time, I will prompt them with questions such as “Is anything unclear or confusing within this interface?” or “What are you thinking about?” and write down their responses in my notes. I will also time each participant as they use each of the prototypes.
Quantitative Analysis Plan
I plan to use descriptive statistics to determine the mean and standard deviation of the scaled survey ratings. I can also use the error data to count the number of errors (as well as their types) that participants encountered while going through each prototype. Additionally, I will compare how long it takes to get through each prototype by averaging, for each prototype, the navigation times of all participants (adding up the times and dividing by the number of participants). Finally, I will run paired t-tests to compare the error rates of the prototypes.
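To make this plan concrete, here is a minimal sketch of the analysis in Python with NumPy and SciPy; the error counts below are placeholders for illustration, not the actual study data.

import numpy as np
from scipy import stats

# Hypothetical error counts for 7 participants, one list per prototype.
# Replace these placeholders with the real per-participant counts.
errors_a = [5, 4, 3, 4, 5, 3, 3]
errors_b = [2, 3, 2, 2, 3, 2, 2]
errors_c = [1, 2, 1, 1, 2, 1, 1]

# Descriptive statistics: mean and sample standard deviation per prototype.
for name, errors in [("A", errors_a), ("B", errors_b), ("C", errors_c)]:
    print(f"Prototype {name}: mean={np.mean(errors):.2f}, sd={np.std(errors, ddof=1):.2f}")

# Paired t-test: every participant used every prototype, so the error
# counts are paired by participant.
t_stat, p_value = stats.ttest_rel(errors_a, errors_c)
print(f"A vs. C: t={t_stat:.2f}, p={p_value:.3f}")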
Qualitative Analysis Plan
While users interact with the prototypes during the think-aloud protocol, I will conduct a sentiment analysis to gauge how users feel while using each prototype. This gives me a sense of their feelings toward each one in the moment, which I can compare with the survey data to see whether their emotions and literal responses line up. If there appears to be dissonance between a survey response and my qualitative observation, I can follow up for clarification to ensure my observation was accurate. Using the interview data, I can also conduct a thematic analysis and group common responses into categories (for example, if participants tend to answer one of the questions similarly). This way, I can note potential suggestions and fix any issues regarding clarity or confusion.
Individual Interviews
After the think-aloud session, I will conduct a short interview with each of the participants. Each participant will be asked the same five questions, and I will take notes on their responses:
Do you prefer that this kind of interface has more features?
Which prototype did you enjoy using the most?
Were there any specific challenges you faced when going through each prototype?
Did you feel that any prototype was more confusing than another?
Do you have any suggestions for what could be improved for any of the prototypes?
Evaluation Results
The detailed notes and results from both the qualitative and quantitative analyses can be found in Appendix 15.8: Qualitative and Quantitative Results Notes from Think-Aloud Protocol, Appendix 15.9: Qualitative Results Notes from Interviews, and Appendix 15.10: Quantitative Results from Error Rate and Post-Interview Survey in the document attached at the end of this page.
Quantitative Analysis Results
The following section provides key insights regarding error rates and post-interview survey results from all of the participants, broken down by prototype. From these insights, we can see that Prototype C had the fewest errors on average, as well as the highest usability and effectiveness scores. I also asked one quantitative interview question about which prototype each participant enjoyed using most, and 85.71% (6/7) of participants preferred Prototype C.
Prototype A
Average error count: 3.86
Average usability score: 52.38%
Average effectiveness score: 38.1%
Average time to get through prototype: 1 minute 46 seconds
Prototype B
Average error count: 2.29
Average usability score: 85.71%
Average effectiveness score: 61.9%
Average time to get through prototype: 1 minute 31 seconds
Prototype C
Average error count: 1.29
Average usability score: 95.24%
Average effectiveness score: 100%
Average time to get through prototype: 1 minute 13 seconds
T-Test Results Summary
Prototype A had significantly more errors than Prototype C, with an average difference of 2.57 more errors, indicating that users made noticeably more mistakes with Prototype A. However, there was no significant difference in error counts between Prototypes A and B or between Prototypes B and C; while Prototype B had fewer errors than Prototype A and slightly more than Prototype C, those differences were not statistically significant (likely due to the small sample size). Overall, Prototype C performed the best, with the fewest errors, while Prototype A had the highest error count, making it the least effective at reducing user error.
Participant Recruitment Outcome
From asking on Instagram alone, I was able to recruit 7 participants for the think-aloud protocol, interview, and post-interview survey. All of them were familiar with Google Reviews prior to the study. I conducted all of the sessions virtually, sending the instructions to participants beforehand, but made sure to go over them again during the video call.
Qualitative Analysis Results
Regarding the sentiment analysis, participants seemed most relaxed when using Prototype C (the simplistic low-fidelity prototype), as it was the least overwhelming to use since it had the fewest features.
The thematic analysis revealed key insights about cognitive load, interface clarity, and ease of navigation.
For Prototype A (the all-inclusive low-fidelity prototype), participants shared the notion that the feature to leave reviews on reviews could be a bit much. They commented on the crowded nature of the “Feed” page but said that, overall, the navigation was okay. Participants navigated this prototype the slowest.
For Prototype B (the narrowed low-fidelity prototype), the consensus was that it was mostly easy to navigate and preferable to Prototype A, although the review-rating feature was not very popular.
Prototype C (the simplistic low-fidelity prototype) was the most popular, with the simplest and easiest-to-follow interface. One comment suggested including text next to the icons in the navigation bar, but this thought came up across all of the prototypes depending on which one a participant saw first; after seeing one prototype, participants understood the navigation bar of the next. This is why I stressed the importance of showing the prototypes in randomized order.
Across the interviews, the key insights were that, although users may like more features, they might not use all of them, and they would prefer a cleaner interface over a cramped one. Most participants said the biggest challenge was initially understanding the icons due to the lack of text. Prototype A was noted as the “most confusing” prototype of the three, and participants suggested adding text for clarity and removing the review-rating feature. They also liked the bookmarks page being attached to the “You” page (as presented in Prototype C). Additionally, participants found the “recommendations” and “curated maps” features quite similar, so it might be redundant to include both. The one quantitative question I asked during the interviews is discussed in the quantitative results above.
Second Iteration Planning
Re-Introduction
Based on the findings from the first iteration, I now have a better understanding of the strengths and weaknesses of each prototype, allowing me to refine the design for the second iteration. The qualitative and quantitative analyses provided valuable insights into user preferences, cognitive load, and usability challenges. Moving forward, the second iteration will focus on simplifying navigation, reducing redundancy in features, and improving interface clarity. Some of the key adjustments will include adding text labels to icons for better comprehension and reconsidering the necessity of certain features such as rating reviews and curated maps. By implementing these changes, the goal is to enhance user experience, making the Google Reviews interface more intuitive and efficient while also ensuring that it still has components that make it engaging for users.
Second Iteration Needfinding
For this needfinding iteration, I pulled from the evaluation results of the previous iteration. Since the simplistic low-fidelity prototype (Prototype C) was the clear winner, I decided to move forward with it and refine it further.
Key takeaways from the evaluation highlight that while users appreciate feature-rich interfaces, excessive complexity can lead to confusion and inefficiency (for example, when participants from the study saw a very crowded section of a page, they seemed a bit confused and it made their navigation process slower). Prototype C, with its simplistic and streamlined design, emerged as the most effective and preferred option, demonstrating the highest usability score (95.24%), highest effectiveness score (100%), and the lowest average error count (1.29). In contrast, Prototype A, which incorporated more features, was perceived as overwhelming, resulting in slower navigation and the highest average error count (3.86). Prototype B struck a middle ground but still faced usability concerns, particularly with the rating reviews feature.
I was given feedback about including text descriptions next to icons, so I will consider that when designing my next prototype. Users also mentioned the redundancy of having both a “Recommendations” section and a “Curated Maps” section, so I plan to move forward with only the “Recommendations” section, since it is clear how to use it and participants liked it being integrated into the “You” page in Prototype C. Additionally, the review-rating feature was unpopular due to the complexity and confusion it can add (study participants were happy with its removal when using Prototype C), so I plan to keep it out of the final prototype.
Users seem to prefer a cleaner design over a design that incorporates a lot of different features in a small space, so I will also keep that in mind as I move into the design process once again.
Final Prototype
New Tab on “Landing” Page
From data collected during my needfinding process, I found that most people read reviews on a mobile device (which is why I proceeded with a mobile design), meaning users likely read reviews on the go or want to read them quickly, which calls for quick access.
Adding a tab to the navigation bar on the landing page for “Following” helps with finding reviews quickly. I decided to keep the “You” tab on the navigation bar so users can also find saved reviews and locations quickly as well.
“Following/Feed” Page
The “Following/Feed” page showcases a novel way to read reviews. From the think-aloud protocol, interviews, and post-interview surveys I found that reducing clutter was important, so I made sure the interface looked as minimalistic and clean as possible. Additionally, I removed the ability to rate reviews to reduce complexity and confusion and promote diverse experiences.
Study participants in both the needfinding portion and prototype testing liked the feature that showed verified reviewers (reviewers with a certain number of high-quality reviews), so I kept this feature. I put the star right next to the reviewer’s profile photo so it is obvious who the verification belongs to (grouping by proximity). Currently, the star feature on Google Reviews refers to a reviewer’s “level”, but I have changed this to refer to verification status.
Enhanced “You” Page & “Bookmarked Reviews”
The “You” page was well received when the recommendations were integrated within it. Additionally, it was easy to find bookmarked reviews from this page, as well as saved locations to view later. For these reasons, I kept the design relatively similar to the Prototype C version.
The biggest difference between this prototype and Prototype C is that the text is a little more descriptive to provide more clarity. I kept the “bookmark” icon to maintain consistency, keeping in mind that the icon is widely used and recognized across platforms and online tools. When you tap the “Bookmarked Reviews” section, it takes you to a page with the reviews that you bookmarked from the “Following/Feed” page.
Modified “Edit Profile” Page
During needfinding, I received the recommendation to include a feature that lets users see what kinds of content reviewers write, and it was later received very well in the low-fidelity prototype walk-throughs and interviews.
In the figure below, you will see a prompt that says “Tell your followers what kinds of places you like to review”. This encourages the development of community, and can help users find reviewers that post reviews that are of interest to them. For example, if someone is only interested in coffee shops, they might choose to follow a reviewer that has “Coffee Shop Reviewer” in their profile’s biography.
Video Prototype
The embedded video below is narrated by me and walks through the functional prototype I created in Figma (you can also access this video through the link below):
https://youtu.be/Hrx-cjMPGG0?si=kXyNXe0emc1YR5uF
The core features of this prototype include the "Following Feed" and "Bookmarks" pages, along with several enhancements aimed at improving personalization and navigation. Users now have the ability to bookmark reviews, access a dedicated tab on the landing page for quick viewing of posts from followed reviewers, and are encouraged to share their reviewing preferences through a new biography prompt. A tailored recommendations section surfaces suggestions based on the reviewers you follow and the places you’ve visited. Additionally, the redesigned "You" page consolidates your bookmarks, showcases recommendations based on your viewing/visiting history, and includes a section for adding locations that you want to visit to a list (“Want to go”), making it easy to save and revisit locations of interest.
Final Evaluation Plan
Final Recruitment Plan
To recruit participants for the final evaluation, I plan to reach out to my personal network and have 5-6 people review my prototype. I will utilize Instagram (so those recruited will likely be reasonably tech-savvy/familiar with Google Reviews) to recruit participants and use Ed or LinkedIn to recruit more participants if necessary.
Live Interview
I will have my participants watch my prototype video live and then answer four qualitative questions and three quantitative questions immediately after viewing. I will allow participants to rewatch the video if necessary and point out specific components if they would like. Each interview will be conducted independently to avoid issues such as groupthink and to collect everyone’s individual, original opinions.
Interview questions:
What are your thoughts on the new features introduced in this prototype?
Are these features something that you would use/be interested in?
On a scale of 1 - 5 (5 being the best), how easy do you think it is to navigate this interface?
On a scale of 1 - 5 (5 being the best), how useful do you think these new features are?
On a scale of 1 - 5 (5 being the best), how confident are you that you could navigate this interface on your own?
Is anything confusing about this interface?
What, if anything, would you change about this interface?
Qualitative Final Evaluation Analysis Plan
My qualitative evaluation will be based on the qualitative post-video interview questions. I will use thematic analysis, comparing participants’ responses to one another after the interviews and grouping them into categories.
Quantitative Final Evaluation Analysis Plan
For the quantitative evaluation, I will compute descriptive statistics from the user-generated usability ratings (the scaled questions from the live interview section).
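As a minimal sketch (assuming each question’s 1 - 5 ratings are collected in simple lists; the values below are placeholders, not the actual participant responses), the descriptive statistics could be computed like this in Python:

import statistics

# Hypothetical 1-5 ratings from five participants for each scaled question.
ratings = {
    "ease of navigation": [5, 5, 5, 5, 4],
    "usefulness of new features": [5, 5, 5, 5, 5],
    "confidence navigating alone": [5, 5, 5, 5, 5],
}

for question, scores in ratings.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{question}: mean={mean:.1f}, sd={sd:.2f}")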
Final Evaluation Results
The full responses to the interview questions can be found in Appendix 15.11: Final Evaluation Interview Responses in the document attached at the end of this page.
Final Participant Recruitment Outcome
Using my personal network, I was able to recruit 5 participants. All of them were familiar with Google Reviews and had used it in some capacity before. Each participant watched my prototype video and then responded to seven post-video questions in a live interview format. Three of the interviews were conducted online over Zoom due to location, while two were conducted in person.
Final Qualitative Evaluation Analysis Results
Using a thematic analysis approach to examine the qualitative results, I found that all participants expressed enthusiasm for the new features, particularly highlighting their appreciation for the feed tab, which allows them to seamlessly scroll through reviews from reviewers whose tastes align with their own.
Participants unanimously agreed that they would find the new features valuable and envisioned using them primarily to discover new places of interest. Notably, three out of the five participants specifically mentioned that they would use the platform to find new restaurants local to them, reinforcing the idea that the feed tab and recommendation system cater well to users seeking personalized discovery experiences.
As a point of constructive feedback, two out of the five participants suggested making a visual distinction between the "You" page icon on the navigation bar and the "Bookmarks" icon. They felt that the similarity between these icons could potentially lead to confusion (although once you click on the “You” tab it is very clear what to do) and recommended adjusting the design to improve clarity. This feedback underscores the importance of intuitive navigation and suggests that even small design changes can enhance the overall user experience. Although they provided this feedback, they did not think that this small issue made the interface difficult to use at all.
Final Quantitative Evaluation Analysis Results
To collect quantitative data, I asked three quantitative questions during each of the interviews. Below are the average scores for each of the questions:
Average ease of interface navigation (1 - 5, 5 being the best): 4.9
Average usefulness of new features (1 - 5, 5 being the best): 5
Average confidence in navigating interface alone (1 - 5, 5 being the best): 5
All of the values were at or near the top of the 1 - 5 scale. From these quantitative results, we can conclude that participants found the interface easy to navigate and the new features useful. They also noted that they would feel confident navigating the platform without any assistance.
Final Analysis Takeaways
An in-depth review of both the qualitative and quantitative data from the final evaluation reveals a strong overall sense of user satisfaction. Participants consistently conveyed that the interface felt intuitive, straightforward, and easy to navigate, with many emphasizing its simplicity as a key strength.
Although there is always some room for improvement, such as changing the “You” page icon to something easily differentiated from the “bookmark” icon, this positive feedback, combined with the supporting metrics, suggests that the final prototype successfully balances functionality with a clean, user-friendly design.
Full Project Report
The following file contains my entire in-depth project report, including the appendix, specific survey insights, T-Test/other collected data, and information on iterations. If you are interested in learning more about the specifics of my project, feel free to take a look!
Thanks for reviewing this project!
Want to see more of my work? Click the button below!