
Nov 15


R7: Discount Techniques

Which of the “discount” techniques do you like best? Why? In what way could you use it in your own research, work, projects, etc.?

Permanent link to this article: https://www.gillianhayes.com/Inf231F12/r7-discount-techniqueswh/

31 comments


  1. Armando P.

    I liked the cooperative evaluation described in the book the best. Basically, cooperative evaluation is performed by having the participant do some task on the system while describing their thinking aloud. Additionally, the participant and the observer can ask each other clarifying questions. I liked this technique for a couple of reasons. First, information from the study comes both from watching the user and from writing down their thoughts, which generates data that is less likely to be misinterpreted. Second, having the observer and the participant ask questions can promote an environment where the system is evaluated at a deeper level. I think this would avoid finding only superficial usability problems and can yield useful leads on how to fix the more important problems found.

    The cooperative evaluation technique seems ideal for testing the usability of software systems (as opposed to physical artifacts, although it could work for them too). I have done something somewhat similar at work before. I made a website that would be used by other employees. I sat down with a co-worker to teach them how to use the site, and during the process they pointed out some things that did not make sense or that they wished worked another way. The suggestions were very valuable and helped create a better site. Cooperative evaluation seems very similar, but instead of discussing while I teach them how to use the site, I would ask them to try to do something on the site and discuss as they go along.

  2. Anshu Singh

    When it comes to evaluating a product that is built with human users in mind, I think there could be no better discount technique than evaluating it with the users themselves. One dynamic way of evaluating would be testing the product ‘in the field’, in the work environment of the user. Studying how users use a product in their ‘natural environment’ can give designers invaluable feedback that cannot be gained in controlled user experiments or laboratory studies. However, as we know, this method of observing users at work can have undesirable outcomes of its own: users may unconsciously alter their behavior in the presence of an observer, or in some cases the observer (designer) may get distracted by real-world interruptions. Nevertheless, if such user experiments go well, they indeed bring significant results to the table.

    I would use a blend of ‘observing users’ and ‘user participation’ in my own project study, primarily for two reasons. First, my project requires both active and passive evaluation, so for passive evaluation I will observe users and witness what they actually do in a situation, and for active evaluation I will carry out experiments with user participation. Second, I have come to understand that the evaluation of my design wouldn’t be complete if I chose to work with only one technique; the two techniques complement each other as far as my design evaluation is concerned. To summarize, the discount techniques I will use for this project will be both experimental and observational.

  3. Parul Seth

    “Zero users give zero insights.” I agree that the best way to evaluate a design is to involve actual users (>1) who match the targeted user population as evaluators. At the same time, it is important to limit the users to a number at which the results remain unique, which saves time and effort. Early and rapid iteration of design evaluation during system development is the core theme of the discount (low-cost) techniques. Although heuristic evaluation may not be the best method for design evaluation, I feel it is definitely the most usable method for working within constraints of time and resources; as Nielsen appropriately says, it helps in achieving “the good”. Additionally, the ten heuristic rules broadly cover the essentials of designing a user interface by providing a metric for directed assessment.

    In my future work and for our project, I would prefer using more than one method for evaluating the design, to cover both the breadth and depth of a product design. Primarily, we can select three evaluators who can independently unearth the problems in the design; these evaluators can be participants from our interviews who are already using an existing personal finance management tool. Simultaneously, we can use a simplified version of co-operative evaluation to promote active conversation between the user and the evaluator, offering clearer insights into the usability problems in the design. Furthermore, we can send selected participants an annotated worksheet for evaluation, built upon Nielsen’s heuristic principles and containing severity ratings, to study the frequency, impact, persistence and market impact of the usability problems encountered. This worksheet will serve as a ready reference for quick debugging of the design. Moreover, it will be helpful to re-conduct the extreme-user interviews, this time in the presence of a prototype, to understand the flaws and accomplishments of our design via the canvassed interpretations of an expert and a novice.

  4. Surendra Bisht

    I find heuristic evaluation the best discount technique because it is not only a simple and cheap technique but also a flexible one; hence it can be applied at various stages of the design and development process. The 10 basic usability principles enumerated by Nielsen help evaluators evaluate a design with ease. Evaluation based on these principles is quite exhaustive and can uncover most of the usability-related issues in a design or prototype. One of the argued drawbacks of this technique is that it requires a certain level of expertise to be effective. However, I feel that the evaluation technique is easy to learn and that the required expertise varies with the complexity of the product being evaluated. Although users’ input is also important, the discount usability techniques that involve users can complement the heuristic technique. Another important aspect of the heuristic technique is the severity ratings associated with the usability problems found. The severity ratings help in managing the effort required to address these problems by giving priority to major issues.

    Since the heuristic technique is effective with just 3-5 evaluators, I find it easy to use for the class project. I could find evaluators from the current class and test the usability issues of the prototype that we are going to develop for the project. In future projects, I would apply this technique at the initial design stage to reap maximum benefit.

  5. Xinlu Tong

    Among the discount techniques mentioned by Nielsen, I like the heuristic evaluation technique best, for a few reasons. First, the technique is not costly: we only need a few users to participate in the evaluation, and involving more users would even be a waste. Sometimes we can also have the same users come back and evaluate again after we improve the design. Second, we don’t need to decide what and how to improve our design directly after the evaluation, since doing so may lead to serious problems afterwards; we cannot rely on user evaluation alone, and there are too many uncertain factors in it. What we can do in heuristic evaluation is compare the feedback from users against the heuristics, brainstorm again with other designers and users, and elicit reliable results.

    In my own projects, I can follow the same process as in Nielsen’s passage. But a more convenient way to use the method is to refer to the heuristics regularly while we design the system; we can avoid possible bad design in the first place and save the time and effort of correcting it later. Also, heuristics are not golden rules. They just provide us some ideas about good design, so we don’t need to stick to them rigidly, which would also hinder our progress. We can also try to reduce the number of subjects for the evaluation, since we cannot pay for usability testing in some course projects. Finally, we can combine heuristic evaluation with other techniques so they complement each other.

  6. Martin S.

    Heuristic evaluation is advantageous because it allows a quick, straightforward approach to design recommendations, and has a broad range of contexts for application. It need not take many users to do a proper evaluation. The method, in conjunction with an iterative design process, points to improved design suggestions based on clearly outlined usability principles.

    I have used heuristic evaluation in previous work, and I found that it is particularly useful when only a small window of time is available for dynamic design sessions, as well as for evaluating iterative designs. It does, however, require some experience to understand the contextual appropriateness of any given technique. Complementary methods ought to augment “discount” techniques: in-depth procedures that help to identify specific user interactions and behaviors (e.g. contextual inquiry) are powerful when time is less constrained.

  7. Chuxiong Wu

    I think the best way to evaluate a product must involve some users. I was surprised by the fact that a product only needs to be tested with 5 users (and never with a single user). The article suggests that we run as many small tests as we can afford, and I believe the data behind the formula. Usability testing, based on user participation, is the approach I like for assessing a product: users test the prototype or product while we observe, record and analyze related data to find the underlying problems. But still, the methods that involve users come toward the end of the process, so I also like heuristic evaluation and cognitive walkthrough for early evaluation. Heuristic evaluation is an expert-analysis approach; it is flexible and cheap. Even if heuristic evaluation is not accurate, future work can offer complementary assessments that involve users.

    Personally, I would like to use various evaluations to cater to different needs in my future work. For the final project, I would like to use heuristic evaluation and field studies for usability testing. For heuristic evaluation, the group members can critique the project independently to find potential usability problems. In the book, the author mentions that with five evaluators, about 75% of the potential usability problems can be discovered. It is a good discount technique for us to evaluate problems before involving more participants. As for field studies for usability testing, we can see exactly how the project works: I want to observe it in action in the users’ work environment to gather complementary insights.

  8. Karen

    In terms of the evaluation techniques through expert analysis, I would choose heuristic evaluation like others, as it is very straightforward and easy to execute, yet quickly provides valuable information regarding potential usability problems as experienced by users. While Nielsen mentions that 5 evaluators are sufficient to uncover 75% of problems, we could easily add evaluators if we felt the current ones were not converging in their experiences. We could also adapt or add to the 10 heuristics as desired. I do not feel the other methods mentioned are as good. For instance, using previous studies could be troublesome, as those studies would not really answer questions about OUR particular design, and they could also be outdated or riddled with methodological problems. I think the use of previous studies is more important earlier in the design phase.

    In terms of the user-participation techniques, I would choose a technique based upon what sort of design I was testing. Generally I would want to achieve both breadth and depth, so I would probably combine either a lab-based or field-based study with interviews regarding the participants’ experiences. I think that a field-based study is usually more valid, particularly if the system was designed to be used in a particular context. For instance, a system designed to help waiters and waitresses wait tables in a busy environment would be better evaluated in a busy restaurant than in a lab. However, we would really need to consider our practical constraints as well.

  9. Dakuo

    I think it’s quite interesting to learn the usability-test model in which only 15 users are enough to discover all the usability problems. In the article, Jakob Nielsen suggests using the budget more efficiently: he would prefer to recruit 15 representative customers and split them across three iterative tests with 5 users each rather than spend them all on one test. He explains that usability testing aims to find problems and to fix them. Using 15 users in one test would only document 100% of the problems, not solve them, while 5 users can help us find 85% of the problems, saving the other ten users for the next tests to evaluate our fixes. Given a limited budget, this iterative design and evaluation process better serves the goals of design and usability testing. Having each individual user test the interface alone is characteristic of heuristic evaluation: only after all evaluations have been completed are the evaluators allowed to communicate and have their findings aggregated. This procedure is important to ensure independent and unbiased evaluations from each evaluator.

    In my experience working as an information systems engineer at France Telecom, I think we unintentionally adopted the 3-5 user testing model in order to save budget. We had no theoretical support back then, but we did understand that as the number of test users increases, the budget increases while the rate of newly found problems decreases. Also, at that time we didn’t isolate the test subjects, and I think that introduced a lot of confounds: most of the problems they found were duplicates.
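The percentages quoted above come from the Nielsen–Landauer problems-found curve, found(n) = 1 − (1 − λ)^n, where λ is the average fraction of problems a single user uncovers (Nielsen reports λ ≈ 0.31 across his studies). A minimal sketch of the arithmetic — the function and parameter names here are mine, not Nielsen's:

```python
def problems_found(n_users, discovery_rate=0.31):
    """Expected fraction of usability problems found by n_users,
    per the Nielsen-Landauer model: 1 - (1 - L)^n, L = per-user rate."""
    return 1 - (1 - discovery_rate) ** n_users

# With Nielsen's average per-user discovery rate of 31%:
print(round(problems_found(5), 2))   # 5 users find ~84% of problems
print(round(problems_found(15), 3))  # 15 users find ~99.6%
```

This is why three rounds of 5 users beat one round of 15: each round nearly saturates the curve, and the remaining budget goes to verifying the fixes instead of re-finding the same problems.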

  10. Timothy Young

    I’ve found that involving users in the early stages of design has always yielded surprising insights into the system. The way I’ve approached past projects, whether or not they used any discount technique, was to try to account for as many edge cases or errors in the design as we may have overlooked when designing with a big-picture mentality. Having too many users in a focus group can lead to a lot of redundancy and noise, and some users may not speak up against the crowd. Sticking to the 5-user principle has been personally beneficial, as it is easier to get a more in-depth look from user evaluation and also to follow up with the users after certain design changes are made.

    As previous projects scaled, we also brought in more users to test and evaluate using heuristics. The technique we used was to scale the amount of design feedback to the total number of users.

  11. Jie Z.

    Heuristic evaluation, developed by Nielsen, is the evaluation technique that I like best. Compared with other discount techniques, heuristic evaluation has several advantages. First, it is a relatively cheap approach: according to Nielsen’s study, three to five evaluators are enough to get the best result, covering 85% of the usability problems. Second, it is flexible. Because we can observe what the user does during the test, we can find potential usability problems, and we can decide the number of iterative usability tests based on our budget. Moreover, with iterative design, we can investigate usability problems in the fundamental structure, not only surface-level problems. Third, it has 10 general heuristic principles as guidelines that capture the nature of interface design, which helps to discover and explain the problems that are found. Lastly, this approach is easy to use, and is useful in the early design stage to find most interface problems at relatively low cost.

    In our class project, this approach is a feasible way to conduct a usability evaluation. It is easy to learn and use by following the rules given by Nielsen. We can have three to five users test the interface alone; discuss and analyze all the usability problems that are found; and compare them with the 10 principles. Based on the results, we could probably discover and address most of the interface problems in the early design stage.

  12. Sreevatsa Krishnapur Sreeraman

    Of the discount usability techniques described by Nielsen, heuristic evaluation is the best technique that can be used on its own. Scenarios/paper prototyping can help in identifying specific features and bringing out usability issues, but many scenarios have to be built for different features, and there is a possibility that some features slip through evaluation. Thinking aloud is similar to heuristic evaluation, but it lacks a standard against which observations can be measured. Heuristic evaluation provides these rules of thumb against which evaluators can test the system and find issues with it; this sense of direction brings more issues to the fore.

    A project with a diverse set of users needs to perform this evaluation against each set to find issues. This is the case in our project, where there are users who use the system for leisure and for business. As suggested by Nielsen, performing heuristic evaluation with multiple groups will reveal the overlap of issues, which ensures better testing. The issues that overlap the most would also show similarities in usage patterns, which can be accounted for when the issues found during evaluation are rectified.
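The overlap check described above can be operationalized by comparing the sets of issues each evaluator group reports: issues found by both groups likely reflect shared usage patterns, while group-specific issues point at role-specific needs. A quick sketch, with invented issue names for illustration:

```python
# Hypothetical issue lists from two evaluator groups (leisure vs. business users).
leisure = {"unclear checkout flow", "cluttered home page", "no undo"}
business = {"unclear checkout flow", "no bulk export", "no undo"}

shared = leisure & business        # likely common usage patterns: fix first
leisure_only = leisure - business  # role-specific problems
business_only = business - leisure

print(sorted(shared))        # ['no undo', 'unclear checkout flow']
print(sorted(leisure_only))  # ['cluttered home page']
```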

  13. Xinning Gui

    I like heuristic evaluation best. It can cover most usability problems by involving several evaluators, which assures the breadth of the evaluation; its severity rating system can help us decide priorities and find both major and minor usability problems; and it can be used at an early stage, so it helps us find and fix problems cheaply and in a timely manner. Also, it is not intrusive and does not require high-level expertise. In addition, Nielsen’s ten heuristics give us a clear checklist, so it is relatively easy to use.

    I would like to use heuristic evaluation together with other evaluation methods so they complement each other. For the heuristic evaluation itself, I will work with multiple evaluators to cover usability problems comprehensively. I will use the list of recognized usability principles to help me find problems, and try to draw on other possible usability principles to evaluate the system. Severity ratings will also be used, so that all the evaluators can assess the problems they have found independently and we can give suitable priorities to the problems.

  14. XIaoyue Xiao

    Heuristic evaluation impressed me most and is the technique I like best. It is a method for checking whether there are any usability problems when evaluating a user interface design, so that engineers and designers can improve their work and make the system, especially the interface, better. Nielsen described the mechanism of heuristic evaluation and used a graph as well as a mathematical expression to illustrate how to determine the number of evaluators.

    I previously participated in a project designing a human resource management system that used heuristic evaluation, which proved that this method works well. But it should be pointed out that the number of evaluators should be determined separately for the different roles in the system, rather than from the total number of users.

  15. Yao

    I like heuristic evaluation best. I think it is really fast and easy for us to evaluate a design using Nielsen’s ten principles. Among the principles, I really love “Visibility of system status”, which gave me a lot to reflect on. One thing I immediately thought of was the design of the console. I have many chances to use the console when I am programming, and I often find it really annoying because I cannot tell or be sure what’s going on in my computer; therefore, it is sometimes really hard for me to evaluate my program in the console. I think that’s a point we have to consider in our design. The design might have a lot of great features, but we have to go through the evaluation to make sure it is really useful. By following Nielsen’s principles, we not only provide a simpler way for users to evaluate, but also help ourselves rethink the features and structure of our design.

    As a result, in our project or in my future designs, I might focus on making users participate more in the design—they should not only learn how to use it, but also understand what’s going on after every interaction. Some good examples are the status bars used in every operating system, the progress percentage when uploading/downloading, the “loading…” notifications used in web browsers or games, and the spinning color ball on the Mac. In programming, we often want the program to be as fast as possible in order to give users a nicer experience. However, I realized that the design should be truly “responsive” to users: it’s great if some operations can be done immediately, but if an operation takes time to finish, we have to let the users know. From the heuristic principles, I got the idea that the performance of a design depends not on its features, speed, or layout, but on how the users “feel”, which can easily be discovered through heuristic evaluation.

  16. Anirudh

    Among the different discount techniques, I am inclined towards “Heuristic Evaluation”. Heuristic evaluation has a group of evaluators independently evaluate the interface against the heuristics (which are design principles). These design principles are given as rules to the evaluators, who verify whether the interface being tested conforms to the rules/principles and state reasons in cases of non-conformance.

    I think this technique is a wonderful idea, as it solves budget problems related to conducting tests and also satisfies resource constraints by allowing the test to be conducted with just 5 evaluators, since 5 of them can discover more than 75% of the design flaws. Conducted iteratively, this test ensures that almost all issues are ironed out. This technique allows designers to concentrate on the evolution of their design instead of on how the test needs to be conducted.

    In my opinion, the principles or heuristics which will play a huge part in heuristic evaluation of our project are:
    1. Aesthetic and minimalist design – A user of our portal (our project idea) needs to able to instantly recognise the layout of the portal. This will help the user in understanding that our portal is there to make his/her task easier rather than making it complex, which is the case if the portal is cluttered with lot of information.
    2. Consistency and standards – Since the user will be navigating between different pages, we need to ensure that the user is not surprised when performing this action. The consistency of layout, tabs, navigation hierarchy, and ranges of numerals (either ascending or descending) needs to be checked across every page.
    3. Visibility of system status – This is very important in an online portal as the user needs to know if he/she is logged into the system or not, and if their current orders/items are being stored/noted or not (shopping cart). Also they need to be informed continuously about their current state in ordering process.
    4. Error prevention – This principle will allow us to track and prevent a user accidentally deleting/losing his/her post when on the BUY page, or trying to check out an empty shopping cart.

  17. Pushkar

    I feel that the heuristic evaluation technique is the best discount technique. It can be used to evaluate the design specification at an early stage, which will help in avoiding a lot of potential usability problems. It can also be used to evaluate prototypes, storyboards and even fully functional products, so it is flexible enough to be used at various stages of the design and development of a product. In heuristic evaluation a small group of evaluators examines the interface and checks whether it complies with usability principles. If only a single evaluator examines the interface, he might miss a lot of usability problems; but in a group, each person may think differently and identify a variety of issues. That is why it is always better to have a group of people evaluate the product. There are also 10 heuristics provided in the book, which help the evaluators figure out issues. Another reason I prefer this method is that it is cheaper compared to other techniques.

    It is recommended that about 5 evaluators evaluate a product. This makes the technique ideal for evaluating the projects that we work on in class, especially group projects: 2-3 groups can combine to mutually help each other test the products. This method is proving very useful for our current project, an online portal for buying/selling items; we were able to find some issues related to navigability and consistency in our design.

  18. Chunzizheng

    Where is my comment? I submitted it before 9:30pm, but when I returned to this page I could not find it… Maybe it was a mistake on my part? I’ll just submit it again.

    The idea that we just need to test with 5 users shocked me at first sight. However, after reading the article I found it quite reasonable. When the users are in the same age group or share many characteristics, it can be a waste to run the test on a large number of users, for they may act in the same way. Moreover, the limited budget of a project requires us to use it efficiently. I like the idea that we should spread user testing across many small tests, because the results of using just one test could be one-sided.
    In my future work, I intend to use different test methods on different small groups of users. I will analyze users’ characteristics first and divide users into small groups according to that analysis. Then I will choose three methods that fit most of the groups’ characteristics and collect the results for further user-requirement analysis.

    1. gillian

      If you use a different email address or spell your name differently, it thinks you are a new person, and I have to approve the comment manually.

  19. Chandra Bhavanasi

    I really liked the iterative design discussion, where Nielsen explains that 15 users are actually enough to uncover most of the usability problems. I think this is the typical process followed in most companies, as there is a limited budget and it is really important to use it wisely while solving as many usability problems as possible. He explains how 5 is an adequate number to actually start building something, so we can retest iteratively on another set of 5, and so on. But an iterative process takes longer than the heuristic approach, which is less time-consuming and generally quick. I have never really experienced any of these techniques, but given a chance, I would choose the iterative approach over heuristic evaluation.

  20. Matthew Chan

    I used to be a huge fan of user evaluation, from heuristic evaluation to field studies and so forth. I’m also a huge Apple fan because of their amazing design. And one day, I had a phone interview with an Apple engineer. He asked what I wanted to do, and I talked about user studies and evaluation, etc. His reply was “well, we don’t do that.”

    *Mind blown*

    I asked him how it was possible that they could make elegant designs without any user studies or evaluation, especially since they’re notorious for secrecy. The man said the designers were just “that good,” which makes me reconsider the limitations of everything from heuristic evaluation to user studies. They do it amongst themselves, but one key thing we learn early in all UI/UX classes is to put yourself in your users’ shoes, to see things from their points of view. In short, these guys are great at finding the unarticulated needs of users. I want to steer in that direction to develop and hone my intuition for good design and to anticipate how users will behave.

    Going back to the topic, I am still a fan of heuristic evaluation because I’m amazed by the diverging and overlapping findings when more than 5 people submit their feedback on a prototype. Amazing. How do we find the global optimum and not settle for a local one?

    1. Jianlin

      Is it why apple fails on batteries?

      1. gillian

        what do you mean by “fails”?

  21. Jianlin

    I appreciate the thorough thinking about heuristic evaluation. I have considered similar ideas before, such as heuristics 4, 7, and so on, but not so systematically. It does provide a guideline to be used in evaluation, as the author claims, especially for beginners. Walking through the list can help us generate evaluation ideas from different angles.

    In our own project, I think we can first use it through a cognitive walkthrough, focusing on specific heuristic criteria during the walkthrough process. Then we can also design an experiment based on the ideas of error prevention and flexibility and efficiency of use. For example, we can record and compare the number of errors that users make to complete certain tasks (e.g. pulling out a summary) in order to compare two different designs.

  22. Dongzi Chen

    After reading about the evaluation techniques, I find that no matter what kind of evaluation technique we want to use, we must first have something that can be evaluated: at the least, the designers have begun to build it and really have a prototype in hand. It is very hard to do an evaluation based on a concept or something that exists only on paper. However, very hard does not mean impossible; in fact we can take advantage of the paper prototype phase, because it is much easier to change a design in the early stages.
    The evaluation of the interface is not just about the interface. The interface is meaningless without the functions behind it, so when we evaluate the interface, we evaluate the whole system related to it. Being easy to read and easy to operate is very important, but that is just the surface of the interface; we also need to notice how the interface reflects the whole system to users and gives them correct feedback immediately. For this reason, it is hard to evaluate our paper prototype project.
    Nevertheless, what we can do is focus on the interface itself and do an experimental evaluation in a simple way. For example, we can set a series of variables at first and follow this series to see how our interface works, trying to make the interface neat, clean and consistent so users can get the useful information easily.

  23. Jinelle D'souza

    I once did a project using traditional user testing, where we were forbidden from answering any questions the user had while testing. If the user stumbled on some aspect of the design and could not proceed, we were asked to modify the system to accommodate that need. This process was then repeated and retested. It was time-consuming and exhausting for all parties involved. One of the reasons we were asked to follow this method specifically was that the end users were factory workers who would not understand a complex design and needed to use it on a daily basis.

    I would consider heuristic evaluation the best discount technique. It does not require subject-matter expertise, it is cheap, and you can use as many evaluators as you deem necessary. The evaluator pool is large, so finding people on a low budget will not be an issue, which makes it a perfect option for an individual project. According to Nielsen, since the evaluators are not allowed to compare results, their output is much more valuable; we know evaluators sometimes change their results based on their peers. Also, if evaluators struggle with the mechanics of the design (which happens quite often), the testing process moves along faster if we help them get past it, noting the problem down for later reference. This method is beneficial because the evaluators specify the reasons they do not like something with reference to the heuristics, which enables the designers to understand the reasoning behind the faulty design.

  24. Jeffrey

    Heuristic evaluation is a quick and easy approach that can uncover the majority of usability issues; moreover, it does not require many evaluators to carry out a proper evaluation. This is advantageous because companies are often limited in time and money. Of course, if the few participating evaluators provide insufficient information about potential usability problems, additional evaluators can be brought in. Another useful aspect is that severity ratings can be used during a heuristic evaluation session. However, this proves more beneficial when evaluators are not focused on finding new usability problems, so a questionnaire or survey can be sent to the evaluators after the actual evaluation session. Such questionnaires can list the sets of usability problems discovered and ask the evaluators to rate the severity of each. These ratings are useful because they can be used to allocate more resources to fixing the most serious problems and to provide rough estimates of the need for additional usability efforts. Moreover, evaluators can provide additional insight by revisiting the interfaces/prototypes or even by relying on their memory and written problem descriptions. Of course, severity ratings from several evaluators will be needed for greater reliability. Heuristic evaluations have been useful in my past experience when I was short on time and it was difficult to get evaluators (more specifically, doctors) to give some of their time for design sessions.
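
    The aggregation described above, collecting severity ratings from several evaluators after the session and averaging them to decide where to spend resources, can be sketched in a few lines. This is a hypothetical illustration; the problem names and ratings are invented:

```python
# Severity ratings (0-4, per Nielsen's severity scale) collected from
# three evaluators via a post-session questionnaire. The problems and
# numbers here are made up for illustration.
ratings = {
    "ambiguous icon labels":     [3, 4, 3],
    "no undo for delete":        [4, 4, 3],
    "inconsistent date formats": [2, 1, 2],
}

# Average across evaluators for reliability, then sort descending so
# the most severe problems are addressed first.
averaged = {problem: sum(r) / len(r) for problem, r in ratings.items()}
for problem in sorted(averaged, key=lambda p: -averaged[p]):
    print(f"{averaged[problem]:.2f}  {problem}")
```

    A single evaluator's rating is noisy; averaging even three ratings per problem gives a much steadier ordering, which is exactly why the comment stresses collecting ratings from several evaluators.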

  25. Jared Young

    A cognitive walkthrough is a great way to thoroughly test a design, especially if you perform multiple runs with different people. This step-by-step analysis approach is used in software testing at my work. When software transitions into the testing phase, it is given to testers who run through a list of tasks to be performed on the software. The tasks are documented in a list and contain a sequence of steps so that every task can be thoroughly tested and analyzed. Many software issues and bugs are found this way. Testers often give feedback on their firsthand experience with the software, which can help improve it from a usability standpoint.

    For a user interface design, I would presume that a cognitive walkthrough could not be performed exactly as in a software-testing setting; we cannot assume that every user is an experienced tester. As the book states, it is very important to note who the users are and to gauge their level of expertise or prior knowledge. This helps designers provide better test cases and user tasks that cater to the desired user base, and may also help users give more accurate feedback without becoming confused. For a design project, I would be hesitant to write many test cases because this tends to be tedious; ultimately, though, it is a good method for getting a thorough analysis.

  26. Ishita Shah

    Of the discount techniques, I think heuristic evaluation works best because it is quick and flexible enough to suit a broad variety of design questions. The ten usability heuristics cover most of the design principles, but using competitive analysis and user testing, one can build a supplementary list of category-specific heuristics as well. If too many evaluators are involved, their findings become redundant, so an appropriate number of individuals should be selected.

    We plan on using heuristic evaluation for our project, an event discovery application. We are building our prototype based on the design requirements we came up with; we will then show the prototype to individuals and get their feedback. Heuristic analysis will help determine what design changes need to be made.

  27. Shih Chieh Lee

    Heuristic evaluation is my favorite approach, because it reminds me of the standard industrial analytical approach that I learned in college. The product or design is entirely explored by experts and scored on a scale. The evaluation form is clear and straightforward, and the descriptions provide the details that should be attended to. The form works well if the system has a multi-layer interface or a larger software architecture, because the evaluation is performed in an organized table. Besides, the fixability score, which can easily be determined by developers, can indicate the order in which to fix the prototype, and the sum of fixability and severity can represent the weight of the specific issue.
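
    The weighting scheme described above, summing a severity score and a fixability score to rank issues, might look like the following sketch. The issue names and scores are invented for illustration:

```python
# Each issue gets a severity score (how badly it hurts users) and a
# fixability score (how cheaply it can be fixed), both on a 0-4 scale.
# Summing the two, as described above, gives a rough priority weight.
# All names and numbers below are made up.
issues = [
    {"issue": "modal dialog traps keyboard focus", "severity": 4, "fixability": 3},
    {"issue": "low-contrast button text",          "severity": 2, "fixability": 4},
    {"issue": "confusing three-level menu",        "severity": 3, "fixability": 1},
]

for item in issues:
    item["weight"] = item["severity"] + item["fixability"]

# Highest weight first: problems that are both severe and easy to fix
# rise to the top of the work queue.
issues.sort(key=lambda item: -item["weight"])
for item in issues:
    print(item["weight"], item["issue"])
```

    One design choice worth noting: a plain sum treats severity and fixability as equally important, whereas a team pressed for time might weight fixability more heavily to harvest the cheap wins first.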

    Although there are still some drawbacks, I love the concept of evaluation by experts in specific domains. Compared to user testing, it is quick and effective, and it is a very good approach for analyzing and presenting qualitative facts and feelings in a quantitative way.

  28. Jacob

    "You only need five users": Brilliant. I can't tell you how many times I've used a new interface (anyone tried Windows 8 on a desktop?) and said to myself, "if only they brought in five users to try it out, they certainly would have found this to be a disaster and they would have fixed it." But every time I said this, I also thought, "maybe you need a lot more than five users, so I should give them a break, because extensive user testing is expensive and time consuming." But I guess not: there is no excuse not to bring in five of your friends, ask them to play with your software, and then buy them In-N-Out as a thank you. I am so happy to have read that article!
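
    The five-user claim rests on Nielsen and Landauer's model: if each test user finds, on average, a fraction L of the usability problems (Nielsen's reported average is about 0.31), then n users are expected to find 1 - (1 - L)^n of them, which is already around 85% at five users. A quick sketch; note that 0.31 is an average across past studies, not a guarantee for any particular project:

```python
# Expected fraction of usability problems found by n test users,
# assuming each user independently finds a fraction L of them
# (Nielsen and Landauer's model; L is about 0.31 on average).
def fraction_found(n, L=0.31):
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {fraction_found(n):.0%}")
```

    The curve flattens quickly, which is the practical point: the sixth through fifteenth users mostly re-find problems the first five already hit, so several small rounds of testing beat one large one.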

Comments have been disabled.