Here is a UX breakdown of a project I consulted on in 2019 for my client EzBuild. EzBuild was a hybrid ecommerce site that helped users explore and buy compatible hardware components to build a computer.

Below I break down my UX process into the typical steps I take: first the initial research I conducted and my methodology, then how I tabulate and use that data to build user flows and user personas. From there I show some of my design process and how I use usability testing to iterate.

Company Overview and UX Problem

EzBuild aims to make buying and building a PC easier for users with beginner and intermediate levels of knowledge. Using proprietary algorithms and user needs surveys, EzBuild assesses specific computer parts on the basis of 5 unique categories. The algorithm is tailored to a formula specific to the individual's needs, whether that is GPU-intensive gaming, multithread-intensive rendering, or processor-heavy data analysis (among many other use cases). EzBuild recommends the most efficient and compatible parts to get the most power for your dollar.
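EzBuild's actual algorithm is proprietary, but as a rough illustration of the use-case-weighted scoring idea, here is a minimal TypeScript sketch. The category names, weight values, and price normalization below are assumptions for illustration, not the real formula.

```typescript
// Hypothetical sketch of use-case-weighted part scoring; the real
// EzBuild algorithm and its category weights are proprietary.
type Category = "gpu" | "multiThread" | "singleThread" | "ram" | "storage";

type UseCase = "gaming" | "rendering" | "dataAnalysis";

// Per-category ratings for a part, each on an assumed 0-10 scale.
type PartScores = Record<Category, number>;

// Assumed weights: each use case emphasizes different categories.
const WEIGHTS: Record<UseCase, PartScores> = {
  gaming:       { gpu: 0.5, multiThread: 0.1, singleThread: 0.2, ram: 0.1, storage: 0.1 },
  rendering:    { gpu: 0.2, multiThread: 0.4, singleThread: 0.1, ram: 0.2, storage: 0.1 },
  dataAnalysis: { gpu: 0.1, multiThread: 0.3, singleThread: 0.3, ram: 0.2, storage: 0.1 },
};

// Score = weighted sum of category ratings, divided by price to
// capture "the most power for your dollar".
function scorePart(scores: PartScores, useCase: UseCase, priceUsd: number): number {
  const w = WEIGHTS[useCase];
  const raw = (Object.keys(w) as Category[])
    .reduce((sum, c) => sum + w[c] * scores[c], 0);
  return raw / priceUsd;
}

// Example: score a hypothetical CPU for a rendering build.
const cpuA: PartScores = { gpu: 0, multiThread: 9, singleThread: 7, ram: 0, storage: 0 };
console.log(scorePart(cpuA, "rendering", 329).toFixed(4));
```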

In our 2019 research, we found that potential users' number one barrier to purchasing and building a custom PC is their perceived low knowledge of computer hardware. Newegg (one of the most popular online PC hardware ecommerce stores) has over 3,700 results for RAM alone, and the price variation for 16GB of RAM can be over $100. Even for customers with intermediate knowledge, this sheer number of parts and price variations can be overwhelming.

Currently, the majority of PC build websites and apps on the market are aimed only at gamers, whereas professionals who need to upgrade or build a PC for uses like graphic design, 4K video editing, 3D rendering, heavy data analysis, and GIS applications need more than just graphics power.

EzBuild aims to launch in early 2021 as a white-label interface, operating on a marketing affiliate and advertising network model for revenue. EzBuild gives partners and streamers the ability to confidently recommend full PC builds to their users while opening up a new revenue stream.

UX Problem

How can EzBuild help individuals with low to intermediate knowledge of computer hardware confidently find and purchase efficient, compatible parts that meet their specific software and/or gaming needs?

Competitive Analysis & Market Research

Summary

  • The majority of respondents had never built a PC before and listed their knowledge as beginner (75% of online respondents, 55% of in-person interviews). The main computer requirement use cases for respondents were: 55% gaming, 28% media (i.e., graphic, video or sound design), and 17% work-related software.
  • The number one worry regardless of knowledge is the compatibility of parts. Beginners are apprehensive about the compatibility of any part. Intermediate users are worried about Intel versus AMD and Nvidia versus Radeon. Experts are worried about RAM timings and multi-threaded versus single-threaded processors.

Key Pain Points

  • Beginner and intermediate users see their low knowledge as a major barrier to purchase and are apprehensive about how to research what they need. The majority of beginners (72%) stated a PC building site with recommendations made them more confident to purchase.
  • 65% found the PC building sites tested confusing. In most cases only one part was recommended, with no explanation of why it was picked over other options.
  • 77% of respondents found the sites’ recommendations limiting; users would rather see multiple options.
  • 42% found the design and layout of established sites cluttered and confusing.
  • 23% of users felt that sites with a survey did not fully understand their specific needs, especially among respondents who were not building for gaming (83%). These users felt these sites only catered to gamers and ignored other software-related builds like 3D rendering and 4K video editing, applications that rely more heavily on the processor and RAM than the graphics card.

Research Methodology

Usability testing

I conducted 3 rounds of usability testing, utilizing usertesting.com. 25 respondents per round were tasked with interacting with two computer building sites (a total of 5 competitor sites were tested). Users were asked open-ended qualitative questions about their opinions and experience, then were required to fill out a 25-question quantitative survey.

Guerrilla testing

Over 3 days, I conducted in-person interviews with 25 customers exiting brick and mortar stores (2 Memory Express locations and 1 Best Buy). Interviews were conducted with customers who had either purchased computer parts or expressed interest in building a computer. Interviews consisted of open discussion and open-ended qualitative questions. Respondents were then asked to fill out the online 25-question quantitative survey in exchange for a gift card, with a 75% response rate.

Perceptual Maps

Coming from a marketing and consumer behavior background, I'm a big fan of using perceptual maps to break down and quantify the qualitative responses found in UX usability tests. To do this I record adjectives that respondents use to describe the design and experience, then classify and cluster them: words with similar meanings are combined and ones with opposite meanings are paired. In the subsequent quantitative surveys, I use synonym and antonym scales as a control to validate the qualitative feedback.

I’ve found it’s helpful to take an initial baseline in the competitive analysis and market research stage, both to identify market gaps and to establish the keywords we are looking for in subsequent tests.
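To make the tabulation step concrete, here is a minimal sketch of how free-form adjectives might be clustered and counted before plotting a perceptual map. The cluster names and synonym lists are invented for illustration, not the actual vocabulary from this project.

```typescript
// Hypothetical tabulation of free-form adjectives into clusters
// for a perceptual map baseline; the cluster vocabulary is invented.
const CLUSTERS: Record<string, string[]> = {
  simple:    ["simple", "clean", "minimal", "easy"],
  cluttered: ["cluttered", "busy", "overwhelming", "confusing"],
  helpful:   ["helpful", "guided", "informative"],
  limiting:  ["limiting", "restrictive", "rigid"],
};

// Count how often each cluster's synonyms appear in responses.
function clusterCounts(adjectives: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const cluster of Object.keys(CLUSTERS)) counts[cluster] = 0;
  for (const word of adjectives.map((w) => w.toLowerCase())) {
    for (const [cluster, synonyms] of Object.entries(CLUSTERS)) {
      if (synonyms.includes(word)) counts[cluster] += 1;
    }
  }
  return counts;
}

console.log(clusterCounts(["clean", "busy", "Confusing", "guided"]));
// -> { simple: 1, cluttered: 2, helpful: 1, limiting: 0 }
```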

The top 5 groupings (of 10) are shown below. The red circle indicates a market gap we want our product to fill, and keyword categories we want to hear in the usability tests of our product.


Perceived Usability Attributes

Taking the qualitative feedback on actions and task execution, I categorize it into perceived usability attribute categories (below are 6 examples of 15). In the quantitative survey, I use synonym and antonym scales as a control to validate the qualitative feedback for each attribute; a small scoring sketch follows the list below.

Ease of navigation
Is there a need for more (or less) navigation or clearer instructions, or are users having issues going back to previous steps?
Self-explanatory
Is the product intuitive, or did the user find themselves wondering what the next step was or what one of the instructions meant?
Task completion
How long did it take users to perform an action?
Ease of use
Is the interface easy to use?
Sensory load
Does the user's eye follow a normal and consistent visual hierarchy? What is the first thing they notice on the page? What headings do they notice, and what instructions do they skip over or miss? Is there too much content on one step/page for the user to digest?
Consistency
Does the next step/page seem to fit with the last (UI and UX)?
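As a concrete (and hypothetical) illustration of the synonym/antonym control mentioned above: each attribute can be scored from a paired scale, where the antonym item is reverse-coded and averaged with the synonym item, and a large gap between the pair flags an inconsistent response. The 7-point scale and item names below are assumptions, not EzBuild's actual survey design.

```typescript
// Hypothetical scoring of one synonym/antonym scale pair for a
// usability attribute. Scale range (1-7) and items are assumed.
interface ScalePair {
  synonymScore: number; // e.g. "intuitive": 1 (disagree) .. 7 (agree)
  antonymScore: number; // e.g. "confusing": 1 (disagree) .. 7 (agree)
}

const SCALE_MAX = 7;

// Reverse-code the antonym item, then average the pair.
function attributeScore({ synonymScore, antonymScore }: ScalePair): number {
  const reversed = SCALE_MAX + 1 - antonymScore;
  return (synonymScore + reversed) / 2;
}

// A large gap between the two items flags an inconsistent response,
// which is the "control" function of the paired scales.
function isConsistent(pair: ScalePair, tolerance = 2): boolean {
  const reversed = SCALE_MAX + 1 - pair.antonymScore;
  return Math.abs(pair.synonymScore - reversed) <= tolerance;
}

const response: ScalePair = { synonymScore: 6, antonymScore: 2 };
console.log(attributeScore(response), isConsistent(response)); // 6, true
```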

User Flows

Since there are quite a few competitors in the market, it was easy to have an open dialog with users in both the in-person and online surveys about user flows. Users were asked to compare and contrast selected competitors’ flows. I found there were 3 main categories:

  1. The user is asked a survey first (examples: pcczone.com, choosemypc.net). User flow is typically (Survey → Completed build → Part) or (Survey → Blank slate → Component → Part).
  2. The user starts out with a blank slate, where needed components are highlighted and the user is then responsible for selecting individual parts (examples: newegg.ca, buildmypc.net). User flow is typically (Blank slate → Component → Part).
  3. The user can select from already built systems (example: pcpartpicker.com). User flow is typically (List of complete builds → Breakdown of completed build → Part).
Initial Sketch - User Flow

EzBuild - Final Wireframe

Initially, we were planning to follow a (Survey → Blank slate → Store) flow. However, in the initial competitive analysis we decided to focus on the beginner and intermediate segment, which I identified as a market gap and which offers the biggest user base. Through the interviews and collected quantitative data, we found the “blank slate” is better suited to an intermediate/expert user who already understands the parts they need. Because of this, we changed our model to something more distinct from our competitors: (Survey → Completed build → Component → Recommended parts (4) → (optional) Store individual parts). This allows users more customization than just a completed build while still keeping the required knowledge low. Recommended parts are based on multiple factors, including the software, price range and main use cases selected in the survey. This also continues into the store, where the algorithm ranks parts specific to the user’s requirements, creating a positive, personalized sales experience tailored to the user’s specific needs.
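To make the comparison concrete, here is a minimal sketch that simply encodes the original and final flows as ordered step sequences. The step names mirror the flows described above, but this is illustrative data modeling, not EzBuild's actual routing code.

```typescript
// Hypothetical encoding of the two user flows as step sequences;
// not EzBuild's actual routing code.
type Step =
  | "survey"
  | "blankSlate"
  | "completedBuild"
  | "component"
  | "recommendedParts"
  | "store";

// Original plan: Survey -> Blank slate -> Store.
const originalFlow: Step[] = ["survey", "blankSlate", "store"];

// Final flow: Survey -> Completed build -> Component ->
// Recommended parts (4) -> (optional) Store individual parts.
const finalFlow: Step[] = [
  "survey",
  "completedBuild",
  "component",
  "recommendedParts",
  "store", // optional: only entered if the user swaps an individual part
];

console.log(finalFlow.join(" -> "));
```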

User Personas

Based on the competitive analysis, market research and usability testing, I create user personas. I typically try to keep the personas' represented distribution close to the use-case distribution found in the research.

Some areas have been removed at the request of EzBuild. The competitive analysis and market research section above shows some examples of general pain points; here are examples of the data in some of the additional fields:

Key hardware needs
Program A benefits more from multi-threaded processing due to multitasking.
Algorithm Consideration
The AMD processor has a higher rating for multi-threaded processing.

Usability Testing

Methodology

In-person focus group, one-on-one interviews

After compiling the user personas, I like to go through the surveys from the competitive and market research analysis to find a focus group of 5 - 10 respondents who represent the user personas and ideal target market. I find that maintaining a consistent rapport with the target market/user personas (who see each iteration) leads to more consistent feedback and allows for better insights into subsequent changes. During the first usability test, it also allows me to better understand and document the user personas' pain points.

Due to COVID, the in-person interviews were conducted over Zoom. Each session took the respondent 30 - 40 mins, and users were given gift cards for each usability test they took part in. The interviews consisted of two parts:

Part 1

A guided discussion for feedback on the specific features being tested in that round of usability testing, and on updates to the product from past iterations. Open-ended questions ranged from UI design and functionality to task execution. Time to complete tasks and open-ended discussion results are tabulated into usability attributes and perceptual maps, then compared to the general population group results.

Part 2

A 25-30 question quantitative questionnaire. Task examples include ranking 10 features from most to least favorite, synonym and antonym ranking scales, ranking of future features and more. The quantitative results from both the focus group and the general usability group (see below) were used as a control to validate results between the focus group and the general population.

General population usability testing

To help further rule out cognitive bias from both the focus group and myself, I like to have a 2nd usability group, made up of random participants recruited through usertesting.com, to represent the general segment of our ideal user. In this case, users were interested in building a customized PC and listed their knowledge as beginner to intermediate. Each round consisted of 8-10 respondents and was conducted in two parts:

Part 1

Users are asked to perform 5 set tasks (for example, changing the build’s RAM) based on the features and updates we are testing. Time to complete tasks and open-ended discussion results are tabulated into usability attributes and perceptual maps, then compared to the focus group results.

Part 2

Users are given the same 25-30 question quantitative survey the focus group received.
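As an illustration of how the two groups' quantitative results can cross-validate each other, per-question means can be compared; the sketch below uses a Welch's t statistic as a rough divergence flag. The data and threshold are hypothetical, not the actual analysis from this project.

```typescript
// Illustrative comparison of focus-group vs general-population
// answers on one survey question. |t| > ~2 is used as a rough
// divergence flag here, not a formal p-value.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function variance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
}

// Welch's t statistic: robust to unequal group sizes and variances.
function welchT(a: number[], b: number[]): number {
  const se = Math.sqrt(variance(a) / a.length + variance(b) / b.length);
  return (mean(a) - mean(b)) / se;
}

const focusGroup = [6, 7, 5, 6, 7, 6];          // hypothetical ratings
const generalPop = [5, 6, 4, 5, 6, 5, 4, 6, 5]; // hypothetical ratings
const t = welchT(focusGroup, generalPop);
console.log(t.toFixed(2), Math.abs(t) > 2 ? "diverges" : "consistent");
```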

Test Breakdown and Goals


UX Design Breakdown

To give insight into my design iteration process, here is a breakdown of the results page, which is reached after the user finishes the survey steps.

Hero Section - Results Landing Page

Working from the initial concept from the client, I created an interactive layout of the inside of a computer (see figure 1.1), using icons to highlight the specific components. In usability tests 1 and 2, respondents found the infographic confusing to interact with. Beginner-knowledge users rated it as their least favorite feature and were uncertain what its context was. After more open discussion with the focus group in usability test 2, I realized that this graphic is better directed at intermediate and above users, segments that, unlike beginners, already understand which components they need. We opted to move away from it in later design iterations to better target the beginner segment. User reactions in tests 3 and 4 were far more positive about this section (especially among the beginner and intermediate focus group).

Users liked the highlighted and obvious price range widget, but wanted an easier way to adjust and reselect their price range with fewer clicks. In later versions, I added a slider which allows users to adjust their selected price range while components update live (see figure 1.2).
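A minimal sketch of the live-updating slider behavior, assuming a debounced filter over a parts list; the data, IDs, and render function are placeholders, not EzBuild's production code.

```typescript
// Sketch of a live-updating price slider; data and render function
// are placeholders, not EzBuild's actual implementation.
interface Part { name: string; priceUsd: number; }

const parts: Part[] = [
  { name: "RAM 16GB", priceUsd: 75 },
  { name: "RAM 32GB", priceUsd: 140 },
];

function renderParts(visible: Part[]): void {
  console.log(visible.map((p) => p.name).join(", "));
}

// Debounce so components re-render only after the user pauses,
// keeping the "live" update smooth while the slider is dragged.
function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

const onSlide = debounce((maxPriceUsd: number) => {
  renderParts(parts.filter((p) => p.priceUsd <= maxPriceUsd));
}, 150);

onSlide(100); // -> "RAM 16GB"
```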

Users liked the idea of the ratings widget; however, they felt very confused about how it worked (especially the beginner and intermediate segments). Initially, we were using a weighted average out of 5 that combined all the categories. In later iterations, I changed this to a rating out of 10 and broke down the 5 categories visually in this section. To help users better understand how the rating system works, I also added an additional screen to the survey that breaks down what the ratings mean (this also pops up on the (?) button). In usability tests 3 and 4, users were far more positive about the rating system, ranking it as their favorite feature (a change from 8th favorite to 1st out of 10).
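The rescaled rating might be computed along these lines; the category names and weights below are assumptions for illustration, since EzBuild's actual weighting is proprietary.

```typescript
// Hypothetical conversion from the old single weighted average
// (out of 5) to the new out-of-10 rating with a category breakdown.
type Ratings = Record<string, number>; // each category rated 0-5

function overallOutOf10(ratings: Ratings, weights: Ratings): number {
  const weighted = Object.keys(weights)
    .reduce((sum, c) => sum + ratings[c] * weights[c], 0);
  return 2 * weighted; // rescale the 0-5 weighted average to 0-10
}

// Assumed weights (sum to 1.0) and sample part ratings.
const weights: Ratings = { gpu: 0.3, cpu: 0.3, ram: 0.2, storage: 0.1, value: 0.1 };
const ratings: Ratings = { gpu: 4.5, cpu: 4.0, ram: 3.5, storage: 4.0, value: 5.0 };

// Show the per-category breakdown alongside the overall rating.
for (const [category, score] of Object.entries(ratings)) {
  console.log(`${category}: ${score * 2}/10`);
}
console.log(`overall: ${overallOutOf10(ratings, weights).toFixed(1)}/10`); // 8.3/10
```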

Results Landing Page

Results Landing Page - Final

Part Component - Results Landing Page

After testing showed the hero section infographic was not working with our target audience, the icons were removed. Instead, I simplified this area with an avatar and description of the segment category based on the survey results.

In usability test 1, I observed that it took users too long to realize they could click on the components to reach the part page (in some cases they never did). In the second test, I changed these to a card layout instead (see fig 1.4).

In usability test 3, multiple respondents wanted the rating system to be more obvious, and internal stakeholders wanted the “buy now” section to be better emphasized. To achieve this, I broke the card into more specific sections with color highlights (see fig 1.5).

In usability test 3, the general population was uncertain what it meant when something had a low rating, which surfaces when a user manually selects a component part from the store that has a low rating. To make this clearer and provide an up-sell, I incorporated a “recommended” widget that is displayed on the main results landing page (see fig 1.5).

Conclusion

Below you can see the final perceptual maps and perceived usability attributes for EzBuild after usability test 4. There is still some tweaking and A/B testing to do once the site goes live, to try to better hit some of the market gaps.


Future A/B Testing

Here are two of the A/B tests I recommended.

Blank Slate Test
Our main user flow is (Survey → Completed Build → Component → Recommended parts (4) → (optional) Store individual parts). We want to A/B test it against a (Survey → Blank slate → Component → Recommended parts (4) → Store individual parts) flow, which goes back to our original user flow plan. We want to ensure the current user flow is optimal, focusing on the beginner and intermediate segment.
Button Color Highlights
Testing the store link buttons on the part pages (yellow versus lower contrast) to see the difference in recommended part (marketing affiliate) sales. The theory is that the fewer clicks it takes users to reach the store, the better for revenue. However, we want to observe the ease of navigation and sensory load attributes associated with this change. A minimal variant-assignment sketch follows below.
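One generic way to run tests like these is deterministic hash-based bucketing, so a returning user always sees the same variant. This sketch is illustrative and not tied to any particular testing tool or to EzBuild's actual setup.

```typescript
// Generic deterministic A/B bucketing sketch: hash the user ID so a
// returning visitor always lands in the same variant.
function hashString(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
  }
  return h;
}

type Variant = "A" | "B";

function assignVariant(userId: string, testName: string): Variant {
  // Salting with the test name decorrelates assignments across tests,
  // so one user isn't always "A" in every experiment.
  return hashString(`${testName}:${userId}`) % 2 === 0 ? "A" : "B";
}

console.log(assignVariant("user-123", "blank-slate-flow")); // stable per user
console.log(assignVariant("user-123", "button-color"));
```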

Methodology

I like to give a live project 2 - 3 weeks to establish a baseline before doing any A/B testing. Depending on site traffic, I usually run 1 A/B test at a time for 2 - 3 weeks.

Please contact me if you have any questions, are considering a future product or just want to chat!
