Here is a UX breakdown of a project I consulted on for my client EzBuild in 2019. EzBuild was a hybrid ecommerce site that helped users explore and buy compatible hardware components to build a computer.
Below I break down my UX process into the typical steps I take: first, the initial research I conducted and my methodology; then how I tabulate and use this data to build user flows and user personas; and finally, some of my design process and how I use usability testing to iterate.
EzBuild aims to make buying and building a PC easier for users with beginner and intermediate levels of knowledge. Using proprietary algorithms and user needs surveys, EzBuild assesses specific computer parts across 5 unique categories. The algorithm is tailored to the individual's needs, whether that is GPU-intensive gaming, multithread-intensive rendering, or processor-heavy data analysis (plus many more use cases). EzBuild recommends the most efficient and compatible parts to get the most power for your dollar.
In our 2019 research, we found that potential users' number one barrier to purchasing and building a custom PC is their perceived lack of computer hardware knowledge. Newegg (one of the most popular online PC hardware ecommerce stores) has over 3700 results for RAM alone, and the price variation for 16GB of RAM can be over $100. Even for customers with intermediate knowledge, this sheer number of parts and price variations can be overwhelming.
Currently, the majority of PC-building websites and apps on the market are aimed only at gamers, whereas professionals who need to upgrade or build a PC for uses like graphic design, 4K video editing, 3D rendering, heavy data analysis, and GIS applications need more than just graphics power.
EzBuild aims to launch in early 2021 as a white-label interface, operating on a marketing affiliate and advertising network model for revenue. EzBuild gives partners and streamers the ability to confidently recommend full PC builds to their users while opening up a new revenue stream.
I conducted 3 rounds of usability testing through usertesting.com. The 25 respondents per round were tasked with interacting with two computer-building sites (a total of 5 competitor sites were tested). Users were asked open-ended qualitative questions about their opinions and experience, then were required to fill out a 25-question quantitative survey.
Over 3 days, I conducted in-person interviews with 25 customers exiting brick-and-mortar stores (two Memory Express locations and one Best Buy). Interviews were conducted with customers who had either purchased computer parts or expressed interest in building a computer. Interviews consisted of open discussion and open-ended qualitative questions. Respondents were then asked to fill out the online 25-question quantitative survey for a gift card, with a 75% response rate.
Coming from a marketing and consumer behavior background, I'm a big fan of using perceptual maps to help break down and quantify the qualitative responses found in UX usability tests. To do this I record the adjectives respondents use to describe the design and experience, then classify and cluster them: words with similar meanings are combined, and groups with opposite meanings are paired. In the subsequent quantitative surveys, I use synonym and antonym scales as a control to validate the qualitative feedback.
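To make the tabulation step concrete, here is a minimal sketch of how clustered adjectives could be turned into axis scores for a perceptual map. The adjective groupings, axis pairs, and responses below are hypothetical, not EzBuild data.

```python
# A minimal sketch of tabulating descriptive adjectives into clusters for a
# perceptual map. Adjective groupings and axis choices below are hypothetical.
from collections import Counter

# Raw adjectives pulled from open-ended responses (illustrative data)
responses = ["cluttered", "busy", "confusing", "clean", "simple",
             "overwhelming", "simple", "technical", "approachable"]

# Words with similar meanings are combined into one keyword group...
synonym_groups = {
    "cluttered": "overwhelming", "busy": "overwhelming", "overwhelming": "overwhelming",
    "clean": "simple", "simple": "simple",
    "confusing": "technical", "technical": "technical",
    "approachable": "approachable",
}
# ...and groups with opposite meanings are paired to form a map axis.
antonym_axes = [("overwhelming", "simple"), ("technical", "approachable")]

counts = Counter(synonym_groups[word] for word in responses)

# Each axis score is the balance of mentions between the two poles (-1 to +1),
# which places a competitor (or our product) on the perceptual map.
for negative, positive in antonym_axes:
    total = counts[negative] + counts[positive]
    score = (counts[positive] - counts[negative]) / total if total else 0.0
    print(f"{negative} <-> {positive}: {score:+.2f}")
```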
I’ve found it’s helpful to take an initial baseline during the competitive analysis and market research stage to identify market gaps and the keywords we are looking for in subsequent tests.
The top 5 groupings (of 10) are shown below. The red circle indicates the market gap we want our product to fill, and the keyword categories we want to hear in usability tests of our product.
Taking the qualitative feedback on actions and task execution, I categorize it into perceived usability attribute categories (below are 6 examples of 15). In the quantitative survey, I use synonym and antonym scales as a control to validate the qualitative feedback for each attribute.
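As a hypothetical illustration of that control step: for each usability attribute, the score implied by the clustered adjectives can be compared against the mean of the matching synonym/antonym scale from the survey, and a rank correlation is one simple way to check they agree. The attribute names and numbers below are made up for the sketch.

```python
# Hypothetical check that qualitative attribute scores agree with the matching
# synonym/antonym survey scales. Attribute names and values are illustrative.
from scipy.stats import spearmanr

attributes = ["learnability", "efficiency", "clarity", "trust", "aesthetics", "control"]

# Score derived from clustered adjectives in the open-ended feedback (normalized 0-1)
qualitative = [0.80, 0.55, 0.40, 0.70, 0.90, 0.50]
# Mean of the corresponding 1-7 synonym/antonym scale in the quantitative survey
survey_scale = [5.9, 4.3, 3.1, 5.2, 6.4, 4.0]

rho, p = spearmanr(qualitative, survey_scale)
# A high rank correlation suggests the qualitative coding is consistent with the
# quantitative control scales; a low one flags attributes to re-examine.
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```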
Since there are quite a few competitors in the market, it was easy to have an open dialog with users in both the in-person and online surveys about user flows. Users were asked to compare and contrast selected competitors' flows. I found there were 3 main categories:
Initially, we were planning to follow a Survey → Blank slate → Store flow. However, during the initial competitive analysis we decided to focus on the beginner and intermediate segment, which I identified as a market gap offering the biggest user base. Through the interviews and the collected quantitative data, we found the “blank slate” is better suited to an intermediate/expert user who already understands the parts they need. Because of this, we changed our model to something more distinct from our competitors: Survey → Completed build → Component → Recommended parts (4, optional) → Store individual parts. This allows users more customization than a completed build alone while still keeping the required knowledge low. Recommended parts are based on multiple factors, including the software, price range and main use cases selected in the survey. This continues into the store, where the algorithm ranks parts specific to the user's requirements, creating a positive, personalized sales experience tailored to the user's specific needs.
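The ranking logic itself is proprietary, but the general shape of "survey answers drive a per-part ranking" can be sketched. Below is a minimal, hypothetical Python illustration; the field names, weights, and scoring formula are my own placeholders, not EzBuild's algorithm.

```python
# Illustrative sketch only -- the real EzBuild ranking algorithm is proprietary.
# Field names, weights, and the scoring formula below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    price: float
    # hypothetical per-category scores, e.g. {"gpu": 8, "multithread": 6, ...}
    category_scores: dict = field(default_factory=dict)

@dataclass
class SurveyResult:
    # hypothetical weights derived from the user's selected use cases and software
    category_weights: dict
    price_min: float
    price_max: float

def rank_parts(parts: list[Part], survey: SurveyResult) -> list[Part]:
    """Order parts by fit to the user's survey answers (power per dollar, in budget)."""
    def score(part: Part) -> float:
        if not (survey.price_min <= part.price <= survey.price_max):
            return float("-inf")  # outside the selected price range, sorted last
        weighted = sum(survey.category_weights.get(cat, 0) * val
                       for cat, val in part.category_scores.items())
        return weighted / part.price  # "most power for your dollar"
    return sorted(parts, key=score, reverse=True)
```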
Based on the competitive analysis, market research and usability testing, I create user personas. I typically try to keep the persona distribution close to the distribution of use cases found in the research.
Sex: Male
Age: 16
Occupation: High school student
Income: Paid research
Knowledge level: Beginner
Use cases: Gaming, video streaming
Types of Games: FPS, MMORPG, strategy, simulation, action adventure
Price Range: $800 - $1000
Income: Proprietary
Algorithm Change: Proprietary
Main Pain Points: Proprietary
Sex: Male
Age: 19
Occupation: College student
Income: Paid research
Knowledge level: Beginner
Use cases: Gaming, office suite, streaming.
Types of Games: FPS, MMORPG, strategy, simulation, action adventure
Price Range: $1200 - $2250
Income: Proprietary
Algorithm Change: Proprietary
Main Pain Points: Proprietary
Sex: Female
Age: 27
Occupation: Video editor
Income: Paid research
Knowledge level: Intermediate
Use cases: 4k rendering, VR stitching, sound design, 2d/3d animation, color correction
Software: After Effects, Premiere Pro, Audition, Avid Media suite
Price Range: $2500 - $4500
Income: Proprietary
Algorithm Change: Proprietary
Main Pain Points: Proprietary
Sex: Male
Age: 33
Occupation: 3D animator
Income: Paid research
Knowledge level: Intermediate
Use cases: 2d/3d animation, 3d modeling, rendering.
Software: Maya, 3ds Max, Mudbox, Cinema 4D, ZBrush, After Effects.
Price Range: $4500 - $8900
Income: Proprietary
Algorithm Change: Proprietary
Main Pain Points: Proprietary
Some areas have been removed at the request of EzBuild. In the competitive analysis and market research section above you can see some examples of general pain points, and here are some examples of the additional data fields:
After compiling the user personas, I like to go back through the competitive analysis and market research surveys to find a focus group of 5 - 10 respondents who represent the user personas and ideal target market. I find that having a consistent rapport with the target market/user personas (who see each iteration) leads to more consistent feedback and better insights into subsequent changes. During the first usability test, it also allows me to better understand and document the user personas' pain points.
Focus group interviews were conducted over Zoom due to COVID. Each session took the respondent 30 - 40 minutes, and users were given gift cards for each usability test they took part in. The interviews were conducted in two parts:
Part 1
A guided discussion for feedback on the specific features being tested in that round of usability testing and on updates to the product from past iterations. Open-ended questions ranged from UI design, functionality and usability to task execution. Time to complete tasks and the open-ended discussion results are tabulated into usability attributes and perceptual maps, then compared to the general population group's results.
Part 2
A 25-30 question quantitative questionnaire. Task examples include ranking 10 features from most to least favorite, synonym and antonym rating scales, ranking of future features, and more. The quantitative results from both the focus group and the general usability group (see below) were used as a control to validate results between the focus group and the general population.
To help further rule out cognitive bias from both the focus group and myself, I like to have a second usability group made up of random participants recruited through usertesting.com to represent the general segment of our ideal user. In this case, participants were interested in building a customized PC and listed their knowledge as beginner to intermediate. Each round consisted of 8-10 respondents and was conducted in two parts:
Part 1
Users are asked to perform 5 set tasks (for example, changing the build's RAM) based on the features and updates we are testing. Time to complete tasks and the open-ended discussion results are tabulated into usability attributes and perceptual maps, then compared to the focus group results.
Part 2
Users are given the same 25-30 question quantitative survey the focus group received.
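To make the control comparison between the two groups concrete, here is a small sketch of how their scale responses could be checked against each other. The scores are made up, and the choice of a Mann-Whitney U test is my own assumption; it is simply a reasonable non-parametric option for small samples of ordinal, Likert-style data.

```python
# Hypothetical sketch of the focus-group vs. general-group control check.
# Scores are illustrative, not EzBuild data.
from scipy.stats import mannwhitneyu

# Synonym/antonym scale responses (1-7) for one attribute, per respondent
focus_group   = [6, 5, 6, 7, 5, 6, 6, 5]
general_group = [5, 6, 4, 6, 5, 7, 5, 6, 6, 4]

# Two-sided Mann-Whitney U: suited to small, ordinal, non-normal samples
stat, p_value = mannwhitneyu(focus_group, general_group, alternative="two-sided")

# A large p-value suggests the focus group's feedback tracks the general
# population for this attribute; a small one flags a possible bias to investigate.
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```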
To give insight into my design iteration process, here is a breakdown of the results page, which the user reaches after finishing the survey steps.
Working from the initial concept from the client, I created an interactive layout of the inside of a computer (see figure 1.1), using icons to highlight the specific components. In user tests 1 and 2, respondents found the infographic confusing to interact with. Beginner-level users rated it as their least favorite feature and were uncertain about its context. After more open discussion with the focus group in usability test 2, I realized that this graphic is more directed at intermediate and above users, who, unlike beginners, already understand which components they need. We opted to move away from it in later design iterations to better target the beginner segment. User reactions to this section in tests 3 and 4 were much more positive (especially among the beginner and intermediate focus group).
Users liked the highlighted and obvious price range widget but wanted an easier way to adjust and reselect their price range with fewer clicks. In later versions, I added a slider that lets users adjust their selected price range while the components update live (see figure 1.2).
Users liked the idea of the ratings widget but were confused about how it worked (especially the beginner and intermediate segments). Initially, we used a weighted average out of 5 that combined all the categories. In later iterations, I changed this to a rating out of 10 and broke the 5 categories down visually in this section. To help users understand how the rating system works, I also added a screen to the survey that breaks down what the ratings mean (this also pops up from the (?) button). In usability tests 3 and 4, users were far more positive about the rating system, ranking it as their favorite feature (a change from 8th favorite to 1st out of 10).
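For a sense of what changed in the presentation, here is a small sketch contrasting the old single blended number with the new per-category view. The category names, scores, and weights are hypothetical; the real categories and weighting are proprietary.

```python
# Illustrative sketch of the rating change -- actual categories and weights are proprietary.
# Before: one weighted average out of 5 combining all categories.
# After: each category shown separately on a 0-10 scale.

categories = {  # hypothetical per-category scores for a build, on a 0-10 scale
    "Gaming": 8.4,
    "Rendering": 6.1,
    "Data analysis": 5.0,
    "Streaming": 7.2,
    "General use": 9.0,
}
weights = {"Gaming": 0.4, "Rendering": 0.2, "Data analysis": 0.1,
           "Streaming": 0.2, "General use": 0.1}  # hypothetical survey-derived weights

# Old presentation: a single blended number, which testers found opaque.
old_rating_out_of_5 = sum(weights[c] * s for c, s in categories.items()) / 2  # 0-10 -> 0-5

# New presentation: keep the 0-10 scale and show every category on its own.
for name, score in categories.items():
    print(f"{name:>13}: {score:>4.1f} / 10")
print(f"Old combined rating: {old_rating_out_of_5:.1f} / 5")
```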
After testing showed the hero section infographic was not working with our target audience, the icons were removed. Instead, I simplified this area with an avatar and description of the segment category based on the survey results.
In usability test 1, I observed that it took users too long to realize (or in some cases they never realized) that they could click on the components to get to the part page. For the second test, I changed these to a card layout instead (see fig 1.4).
In usability test 3 multiple respondents wanted the rating system to be more obvious. Internal stakeholders wanted the “buy now” section to be better emphasized. To achieve this I broke the card into more specific sections with color highlights (see fig 1.5).
In usability test 3, the general population group was uncertain what it meant when something had a low rating, which happens when a user manually selects a low-rated component from the store. To make this clearer and provide an upsell, I incorporated a “recommended” widget that is displayed on the main results landing page (see fig 1.5).
Here are two of the A/B tests I recommended.
Please contact me if you have any questions, are considering a future product or just want to chat!
I like to give a live project 2 - 3 weeks to establish a baseline before doing any A/B testing. Depending on site traffic, I usually run 1 A/B test at a time for 2 - 3 weeks.
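The 2 - 3 week window is ultimately a function of traffic and the smallest effect worth detecting. As a rough, hypothetical illustration (the conversion rate, lift, and traffic figures below are made up for the sketch), a standard two-proportion sample-size estimate shows how daily traffic translates into test duration:

```python
# Hypothetical back-of-envelope check of how long an A/B test needs to run.
# Baseline conversion, lift, and traffic numbers are illustrative only.
from math import ceil
from statistics import NormalDist

def required_days(baseline_rate, min_lift, daily_visitors_per_variant,
                  alpha=0.05, power=0.80):
    """Days needed per variant for a two-proportion test at the given power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
         / (p2 - p1) ** 2)
    return ceil(n / daily_visitors_per_variant)

# e.g. 5% baseline conversion, detecting a 25% relative lift,
# with 300 visitors per day reaching each variant -> roughly 18 days
print(required_days(0.05, 0.25, 300), "days")
```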