SUS: Product Usability
Updated: Dec 16, 2025 Reading time ≈ 6 min
The SUS (System Usability Scale) is a short, standardized questionnaire used to assess how usable a product, service, or system feels to users.
It consists of 10 statements rated on a five-point Likert scale (from "strongly disagree" to "strongly agree"). The items cover core aspects of usability:
- ease of use and learnability,
- perceived complexity,
- consistency and integration of functions,
- overall confidence and comfort.
After users complete the SUS questionnaire, their responses are converted into a single score from 0 to 100. This score is not a percentage but an index:
- higher SUS scores = better perceived usability,
- lower SUS scores = more usability issues.
SUS is popular because it's:
- fast to administer,
- simple to interpret,
- robust across a wide variety of products and contexts.
You can use SUS for websites, mobile apps, enterprise software, hardware interfaces, internal tools, and even offline services with a digital component. It often appears alongside other UX and customer experience metrics such as SUPR-Q, SEQ, UMUX, UEQ, VAS, CSAT and CES.
If you're just starting, a ready-made SUS survey template in a tool like SurveyNinja is usually the quickest way to get going.
Procedure for Conducting SUS
A SUS evaluation is usually part of a broader usability test, survey or experimental research (for example, A/B tests with random assignment). A typical process:
1. Define your goal and context. Decide what you want to learn with SUS:
- overall product usability,
- comparison of two design versions,
- impact of a redesign over time (using time series analysis).
2. Recruit participants. Select respondents who represent your target audience: existing users, new users, or a mix. For more robust results, think about how you'll handle representativeness and whether you'll later apply a Weighted Survey approach.
3. Set up tasks and instructions. Give participants clear, realistic tasks:
- e.g., "Find and purchase a specific product," "Create a new project," etc. The goal is to simulate real-world usage before they answer SUS.
4. Let participants interact with the product. This can happen:
- in moderated usability sessions,
- in unmoderated remote tests,
- or as part of a larger cross-sectional survey or panel study.
5. Administer the SUS questionnaire. Immediately after usage, ask participants to complete the SUS form. That way their impressions are fresh and more accurate.
6. Convert responses into SUS scores. Use the standard scoring procedure (explained below) to transform raw answers into 0–100 scores.
7. Analyze the results:
- calculate averages, medians, and standard deviations,
- compare SUS across versions, segments, or time (a version-comparison sketch in code follows this list),
- optionally run factor analysis or qualitative analysis on open comments to understand "why" scores look the way they do.
8. Interpret and recommend actions. Relate SUS results to other metrics (e.g., NPS, CES, CSAT, SUPR-Q) and behavior (task success, errors, completion time) to form design recommendations.
9. Report to stakeholders. Summarize the findings in a clear report or dashboard: SUS score, comparison with benchmarks, key UX issues, and proposed fixes.
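If you are comparing two design versions (steps 1 and 7), a minimal Python sketch of that comparison might look like the following. The per-participant scores and variable names are made-up placeholders, and the procedure that turns raw answers into these 0–100 scores is explained in the scoring section below.

```python
# Hypothetical comparison of two design versions using per-participant SUS scores.
# The numbers below are placeholders, not real study data.
from scipy import stats

version_a = [72.5, 80.0, 65.0, 77.5, 70.0, 82.5, 67.5]
version_b = [85.0, 90.0, 77.5, 82.5, 87.5, 80.0, 92.5]

# Welch's t-test does not assume equal variances between the two groups.
result = stats.ttest_ind(version_b, version_a, equal_var=False)
print(f"mean A = {sum(version_a) / len(version_a):.1f}")
print(f"mean B = {sum(version_b) / len(version_b):.1f}")
print(f"Welch t-test: t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

With small samples like these, treat the p-value as a rough signal rather than proof; SUS differences become much more trustworthy once each group includes a few dozen respondents.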
SUS Questionnaire and Instructions
The standard SUS questionnaire includes 10 statements. Respondents rate each statement from 1 (strongly disagree) to 5 (strongly agree). Odd-numbered items are positive; even-numbered items are negative:
- I think that I would like to use this product/function/service regularly.
- I found the product/function/service unnecessarily complex.
- I thought the product/function/service was easy to use.
- I think that I would need the support of a technical person to be able to use this product/function/service.
- I found the various functions in this product/function/service were well integrated.
- I thought there was too much inconsistency in this product/function/service.
- I would imagine that most people would learn to use this product/function/service very quickly.
- I found the product/function/service very cumbersome to use.
- I felt very confident using the product/function/service.
- I needed to learn a lot of things before I could get going with this product/function/service.
General instructions for participants:
- Use your overall impression of the product, not just one task.
- Try to answer every item, even if you're unsure.
- Focus on your honest experience; there are no right or wrong answers.
SUS is often used together with a few open-ended questions (e.g., "What was the most frustrating thing?"), which can later be analyzed with sentiment analysis or thematic coding.
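If you are building your own survey tooling rather than using a ready-made template, the 10 items above can live in a simple data structure next to the scoring logic. The sketch below is one possible representation (item wording shortened to "product"; the boolean marks the positively worded, odd-numbered items); it is an illustration, not a standard format.

```python
# One possible in-code representation of the SUS items.
# The second element marks whether the item is positively worded (odd-numbered).
SUS_ITEMS = [
    ("I think that I would like to use this product regularly.", True),
    ("I found the product unnecessarily complex.", False),
    ("I thought the product was easy to use.", True),
    ("I think that I would need the support of a technical person to be able to use this product.", False),
    ("I found the various functions in this product were well integrated.", True),
    ("I thought there was too much inconsistency in this product.", False),
    ("I would imagine that most people would learn to use this product very quickly.", True),
    ("I found the product very cumbersome to use.", False),
    ("I felt very confident using the product.", True),
    ("I needed to learn a lot of things before I could get going with this product.", False),
]
```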
Answer Analysis and SUS Scoring
SUS scoring uses a simple transformation that converts the 10 responses into a 0–100 scale.
1. Transform each item response
- For odd-numbered items (1, 3, 5, 7, 9): transformed score = response − 1
- For even-numbered items (2, 4, 6, 8, 10): transformed score = 5 − response
2. After this step, each item contributes a value from 0 to 4.
3. Sum all transformed scores. Add the transformed scores for all 10 items. The total will be between 0 and 40.
4. Multiply by 2.5. SUS score = (sum of transformed scores) × 2.5
This yields a final SUS score from 0 to 100 (a worked scoring sketch in code follows this list).
5. Aggregate across respondents. To get your overall SUS score, compute:
- the mean SUS score across all participants,
- optionally, confidence intervals using appropriate methods for weighted or unweighted data,
- comparisons between groups using Z-tests or other statistical tests where appropriate.
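Putting steps 1–4 together with the aggregation in step 5, a minimal Python sketch could look like this. The function names (sus_score, summarize) and the sample answers are assumptions for illustration, and the confidence interval uses a rough normal approximation rather than a small-sample t-multiplier.

```python
# Minimal SUS scoring and aggregation sketch (illustrative, not a library API).
import math
import statistics

def sus_score(responses):
    """Convert one participant's 10 raw answers (each 1-5) into a 0-100 SUS score."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects 10 answers, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd (positive) items contribute response - 1; even (negative) items contribute 5 - response.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # sum of 0-4 contributions (max 40) scaled to 0-100

def summarize(all_responses, z=1.96):
    """Mean SUS across participants plus an approximate 95% confidence interval."""
    scores = [sus_score(r) for r in all_responses]
    mean = statistics.mean(scores)
    sem = statistics.stdev(scores) / math.sqrt(len(scores))  # needs at least 2 respondents
    return mean, (mean - z * sem, mean + z * sem)

# Example with two made-up respondents (individual scores 85.0 and 62.5):
print(summarize([
    [4, 2, 5, 1, 4, 2, 5, 1, 4, 2],
    [3, 3, 4, 2, 4, 3, 4, 2, 3, 3],
]))
```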
You can also analyze SUS by segment: new vs experienced users, mobile vs desktop, different countries, etc. When samples are unbalanced, using a Weighted Survey approach becomes especially helpful.
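As a sketch of that idea, the snippet below re-weights per-segment means toward a target mix. The segment names, scores, and population shares are made-up assumptions for illustration and are not tied to any particular tool's weighting feature.

```python
# Re-weighting SUS means when the sample mix does not match the population mix.
import statistics

segment_scores = {"mobile": [62.5, 70.0, 57.5], "desktop": [80.0, 85.0, 77.5, 82.5]}
population_share = {"mobile": 0.6, "desktop": 0.4}  # the mix the result should reflect

per_segment_mean = {seg: statistics.mean(s) for seg, s in segment_scores.items()}
weighted_overall = sum(per_segment_mean[seg] * population_share[seg] for seg in per_segment_mean)

print(per_segment_mean)             # unweighted mean per segment
print(round(weighted_overall, 1))   # overall SUS re-weighted to the target mix
```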
What is a Normal SUS Score?
In many studies, the average SUS score across products tends to cluster around 68. This value is usually treated as the "okay/acceptable" benchmark:
- Below ~68. Suggests usability problems. The interface may feel confusing, inconsistent, or hard to learn. Prioritize UX improvements, then rerun SUS to measure progress.
- Around 68. Indicates acceptable usability. The product is usable, but not delightfully so. There is usually room to streamline flows, clarify labels, reduce friction, and improve onboarding.
- Above 68. Means users generally perceive the product as good and easy to use.
- 80+. Often interpreted as excellent usability: users find the system intuitive, efficient, and pleasant to use. At this level, you're typically polishing and fine-tuning rather than fixing core UX issues.
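If you flag scores automatically in a report or dashboard, the bands above can be folded into a small helper like the one below. The cut-offs (68 and 80) mirror this article's guidance and should be treated as heuristics rather than hard rules.

```python
# Map a SUS score to the rough interpretation bands described above.
def interpret_sus(score: float) -> str:
    if score >= 80:
        return "excellent: users find it intuitive; focus on polish and fine-tuning"
    if score >= 68:
        return "at or above the benchmark: usable, keep reducing friction"
    return "below the benchmark: usability problems likely; prioritize UX fixes and re-test"

for s in (55, 68, 74, 86):
    print(s, "->", interpret_sus(s))
```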
Because SUS is a relative measure, the most useful comparisons are:
- against your previous waves (has usability improved after a redesign?),
- between variants in controlled tests (version A vs B),
- against other internal products, or against external benchmarks when available.
For a more complete UX picture, combine SUS with:
- task success and error rates,
- SUPR-Q or UEQ for broader UX quality,
- CES and CSAT for effort and satisfaction,
- NPS or mNPS for recommendation intent and loyalty,
- behavioral data and time series analysis to see how improvements affect long-term engagement.
SUS is a small, simple questionnaire, but when used consistently and combined with other research methods, it becomes a powerful, standardized way to track and improve product usability over time.
Updated: Dec 16, 2025 Published: Jun 3, 2025
Mike Taylor