Oct 02

R1: Interactions and Paradigms

Given what you read in DFAB, what do you think of the evolution of computing interaction models? What do you think is the next wave? (Note: these questions took 25 words…. so that means, your response to my post should be around 10 to 20 times as long at the most… which is not much text. Don’t overdo it :) )

Permanent link to this article: https://www.gillianhayes.com/Inf231F12/r1-interactions-and-paradigms/

  1. Martin

    The chapter began by unpacking Norman’s execution-evaluation model of interaction, which characterizes actions strung together as an interaction with a tool, such as a light. In contrast, Abowd and Beale describe the interaction framework, which places the system and user in a cycle mediated by inputs and outputs. That is, users act as part of a cycle of feedback from a system, sharing input and receiving corresponding output.

    I believe the models forwarded by Norman, as well as Abowd and Beale, describe interactions within a closed task circuit performed by individuals. While Abowd and Beale are more explicit, both models consider the user as interacting with a dynamic system. However, we increasingly work with networked systems and users: global social and organizational systems, social networking sites, and international collaborations that reify themselves in distributed user interactions.

    I think future models would likely account for dynamic group behavior across dyads, small groups, organizations, and societies. The user in Abowd and Beale's model might be one person, but systems increasingly support many people. Supporting their collective awareness (of system changes, and of one another) becomes a challenge above and beyond individual feedback, particularly in asynchronous and distributed interactions that lack helpful social cues. An analogy would be a switch that lights up a building instead of a room, or alternatively a room with many people; in either case, we often don't have information about who wants the light on. We can begin to imagine a system with inputs and outputs branching from several users, each creating a circuit that intersects with one system. Even this attempt reduces a whole constellation of systems whose availability affects users, the ways they perform individual activities, and the ways they interact with each other independently of those systems. I suspect the potential complexities of interactions between people and their tools aren't clearly bounded in the singular user-system circuit.

    1. Xinlu (Bill)

      Norman's interaction model, as presented in DFAB, is fairly intuitive. It summarizes the process of interaction between human and machine in an ordered and clear way. However, this model treats the machine, the computer, as a whole, neglecting the communication between the interface and the system.
      The behaviors of input and output are clearly different, and we still need translations between the input and the system. For example, a mouse has only four pieces of state: whether the left button is pressed, whether the right button is pressed, and the x and y coordinates of the cursor. The system interprets these states as data moving between the CPU and memory (a rough sketch of this translation appears at the end of this comment).
      Hence the interaction framework, which captures this differentiation. It includes a complete loop from user to input, from input to system, from system to output, and from output back to the user. I think we still need to separate physical interfaces, like the mouse and keyboard, from program interfaces, like buttons and text boxes, at least on the PC.
      In the future, I think there will be no single system. The system part of the interaction framework should be replaced by a cloud, or by many systems interconnected with each other, and multiple users will interact with the system at the same time. Moreover, the boundary between input and output will become vague, since we already use touch-screen devices that combine the input and output parts. We may also come to use devices that let us experience 3D virtual environments directly. In that case, the translation between input and system will need to be redefined.
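
      A minimal sketch of this input-to-system translation (for illustration only, not from DFAB; the event names and translation rules are assumptions):

      ```python
      from dataclasses import dataclass

      @dataclass
      class MouseState:
          # the four pieces of mouse state: two buttons plus the cursor position
          left_pressed: bool
          right_pressed: bool
          x: int
          y: int

      def translate(prev: MouseState, curr: MouseState) -> list[str]:
          """Translate raw device state into input-language events the system core can act on."""
          events = []
          if curr.left_pressed and not prev.left_pressed:
              events.append(f"LEFT_CLICK at ({curr.x}, {curr.y})")
          if curr.right_pressed and not prev.right_pressed:
              events.append(f"RIGHT_CLICK at ({curr.x}, {curr.y})")
          if (curr.x, curr.y) != (prev.x, prev.y):
              events.append(f"MOVE to ({curr.x}, {curr.y})")
          return events

      # example: the user presses the left button without moving the cursor
      print(translate(MouseState(False, False, 10, 10), MouseState(True, False, 10, 10)))
      ```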

  2. Armando Pensado

    I think computing interaction models have evolved to better ground HCI analysis in reality. The execution-evaluation cycle correctly captures the fact that interaction is a back-and-forth dialog between the system and the user. Next, the interaction framework identifies that not only do these entities interact, but they do so using very different methods (i.e. languages). Then ergonomics focuses on the user and the fact that the user must interact with the system in a physical world inside a society. Then people like Csikszentmihalyi understood that interactions are not absolute and cannot be defined merely in terms of efficiency. All of this is a path toward understanding that computers and humans interact in a very complex setting, and that each of them is itself very complex.

    As the chapter on paradigms says, computing devices are increasingly ubiquitous in our world. This means computing interaction models must take into account not only the user (or multiple users) but also other systems (i.e. computing devices). A single user now interacts with a large number of systems in a day, and these devices are increasingly expected to work in harmony to be effectively utilized by the user. Thus, understanding exactly how users work with multiple systems, and how those systems work with or against each other, is essential for modeling and analyzing these interactions.

  3. Xiaoyue ( Lily )

    Computing interaction, in brief and as far as I am concerned, is a connection or translation between user and computer. A famous model of interaction in Human-Computer Interaction is Norman's, which is an execution/evaluation loop with seven stages, concentrating on the user's view of the interface. This model illustrates what makes some systems seem harder to use than others, namely the gulfs of execution and evaluation. The framework built by Abowd and Beale can be regarded as an extension of Norman's; it breaks the cycle into four components, each with its own language that must be translated for the other parts. Then the ACM SIGCHI Curriculum Development Group presented a similar model, which also involves ergonomics issues.

    When it comes to the future, a possible interaction model may be an enlargement of the current ones. That is, one part, several parts, or even every part of the cycle could be multiplied. For example, imagine many people in a meeting hall, each holding a controller; they can press a button to signal a "hungry" status, a sensor receives each status, and only if the total number of "hungry" signals passes 50% of the number of people do the ovens in the kitchen, each holding different food, heat up to prepare supper (see the sketch below). That is a rough case of multiple components in each part of the cycle: more than one user, input, output, and system. With the development of ubiquitous computing, it seems more and more possible to realize almost any kind of interaction model in the near future, which is why I am not sure what it will look like or how complex it will be.
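
    A toy sketch of the meeting-hall scenario above (for illustration only; the 50% threshold comes from the example, while the function and variable names are assumptions):

    ```python
    def should_start_cooking(votes: dict[str, bool], threshold: float = 0.5) -> bool:
        """The kitchen system aggregates each attendee's 'hungry' status and
        heats the ovens once more than half of the attendees are hungry."""
        hungry = sum(1 for is_hungry in votes.values() if is_hungry)
        return hungry / len(votes) > threshold

    # three of four attendees have pressed the 'hungry' button on their controllers
    votes = {"alice": True, "bob": True, "carol": True, "dan": False}
    if should_start_cooking(votes):
        print("Heating the ovens for supper")
    ```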

  4. chunzi

    Chapter 3 introduces models of interaction, analyzes how these models can help our understanding of interaction, and considers interaction difficulties. It also talks about the changes ergonomics brings to the design and evaluation of interaction, especially its physical characteristics. It then discusses interface styles, their characteristics, and the user experiences they bring. In chapter 4, I read about paradigms that promote the usability of interactive systems.
    Both the execution-evaluation cycle and the interaction framework introduced in chapter 3 are very explicit, though the interaction framework is more realistic because it includes the system as a factor to consider. However, neither model considers the environment or occasion the user is in, which I think is also an important influence on the user experience. For example, the user's requirements in the daytime could differ from those at night. In the same way, the interface may need to be modified to fit different environments, such as home versus office, or a sunny versus a rainy day.
    I think future models of interaction should add more factors rather than remaining at four components (user, system, output, input), especially factors from society, because people can never be isolated from society.

  5. Charlie Wu

    In Norman's model of interaction, the two core elements are the user and the system, and execution and evaluation are the main phases. The interaction between human and computer is thus about letting the user control the system and get information back; that is the concept of the interaction. I think the key point of interaction is to let the user know what the computer can do, so a successful interactive system lets the human send orders to the computer. However, Norman's model does not consider that the system interacts through an interface.
    Abowd and Beale then improve the model by adding input and output. The cycle now has user, system, and I/O. In this cycle, we must pay attention to feedback and adjust the interface appropriately.

    I want to use a slogan from Nokia to start my next part: "Connecting People."

    We used to interact with the computer using a keyboard, a mouse, or maybe a writing pad, so the I/O part was easy to see and, as mentioned in the first part, the cycle is easy to identify in the model. What I think the next wave brings is that the I/O parts are going to blur, which means virtual interaction interfaces will take the main chair. Norman's model, and even the extended model, seems limited to small groups. When the Kinect and Wii came out, the system began to interact with the user through a new visual system instead of the traditional keyboard and monitor. The gap between human and computer will get smaller and smaller. The system will deal with large numbers of actions from a user, or from many users. The information cycle will get bigger and the system will become more human-like (AI). In the movie Iron Man, the system talks with Tony Stark; now we talk with Siri!

  6. Jie

    The chapter first introduces the most influential model – Norman’s execution-evaluation model. This model focuses entirely on the user part of the interaction, treating the system as a black box. The user, who is the trigger of the whole interaction, needs to execute actions and then evaluate the results.

    The model proposed by Abowd and Beale extends Norman's model by introducing the input and output components between the user and the system. A translation from one component to another forms a cycle that represents an interaction; the user is still the trigger that starts the interaction.

    I think the evolution of interaction models parallels the evolution of technologies. As computers become more powerful and smaller, interactions will become more complicated and ubiquitous than ever before. That means an inevitable increase in the variety of inputs and outputs, and thus more sophisticated translations between the components involved in interactions.

    In my opinion, there will be interaction models in which the user doesn't have to execute actions; instead, the input component senses the user's actions or the environment around the user, thus automatically acquiring the input language. After the system's response shows up on the output component, the user doesn't need to evaluate the feedback. That means computing devices become so ubiquitous that users are not explicitly aware of the interactions. However, by reducing the user's effort in the interaction, the system plays a much more significant part in input acquisition, processing, and information presentation.

  7. Xinning Gui

    Norman's execution-evaluation cycle depicts the interactions between users and systems and considers the system as a whole: the system is the interface. Abowd and Beale's interaction framework, by contrast, divides the system into two parts: the interface (Input and Output) and the core functionality. Norman's model emphasizes the user's psychological process, highlights the actions of the user, and neglects the dynamics and potential of the system. This model is actually system-centered in the sense that the user has to adapt himself or herself to the system. Abowd and Beale's model breaks the system down into several specific steps, which tends to uncover the deep, detailed things behind the interface, so it provides more clues for us to evaluate the interface.

    In retrospect, the trend across the three waves of computing is that computing devices have become more and more embedded, portable, easy to use, popular, and decentralized. Based on this, I think the next wave will be characterized by integrating biology and computer science. For instance, chips can be implanted into the human body to help people. The boundary between people and computer systems will blur. Researchers are already working on brain-computer interfaces, and I think this is a good starting point for the next wave.

  8. Sreevatsa Krishnapur Sreeraman

    The two interaction models discussed in DFAB, the execution-evaluation cycle and the interaction framework, are based on the premise that the user is consciously engaged in using the system. With the coming of the third wave of computing, where the user does not realize that an interaction with a computer system is going on, or where it becomes trivial that such an interaction is indeed happening, neither model fits.

    Future interaction models will be advancements of agent-based and context-aware interfaces. This means that the interaction between the user and the system will become passive. In the interaction model, the user is replaced by an intelligent agent, which learns the mannerisms of the user and translates them into stimuli for the system. The system thereby performs the operations specified by the agent and changes its state. The agent then translates the state change into a physical or sensory phenomenon understandable to the user.

    The agent itself becomes complex enough that it has to be modeled. The agent monitors the actions (or maybe even the thoughts?) of the user, recording each action and the context in which it is performed. The goal of the action is also deduced, based on various factors and on advances in machine learning and artificial intelligence. The agent then uses this data and these inferences to translate the input to, and output from, the system.
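
    A very rough sketch of the agent-mediated loop described above (purely illustrative; the class and method names are hypothetical, and the inference step is a placeholder):

    ```python
    class System:
        """Stand-in for the core system the agent drives."""
        def perform(self, stimulus: str) -> str:
            return f"state changed according to ({stimulus})"

    class Agent:
        """Mediates between user and system: observes the user, infers a goal,
        issues a stimulus to the system, and renders the new state back to the user."""
        def __init__(self):
            self.history = []  # (action, context) pairs the agent has recorded

        def observe(self, action: str, context: str) -> None:
            self.history.append((action, context))

        def infer_goal(self) -> str:
            # placeholder for the machine-learning / inference step
            action, context = self.history[-1]
            return f"goal inferred from '{action}' in context '{context}'"

        def act_on(self, system: System) -> str:
            stimulus = self.infer_goal()
            new_state = system.perform(stimulus)
            return f"presented to user: {new_state}"

    agent = Agent()
    agent.observe("opens calendar", context="Monday 8am")
    print(agent.act_on(System()))
    ```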

  9. Jianlin

    Good points. I would like to consider the models at different scopes. For HCI, the scope focuses on the system-user pair, while for CSCW the scope is extended. However, even if we consider the issues of group activities and systems, some of the problems can still be broken down into system-user interactions. The difference is that the latter has to consider that the system's output is not based on a single user's input.

  10. Parul Seth

    The evolution of computer interaction models is like that of a growing child (though not in terms of size and scale). The rudimentary beginning has now transformed into sophisticated paradigms of existence where a person can be in touch with technology anywhere and everywhere, in one form or another. But the question remains: where does this person come from? Is she from a small village in India, unable to read and write? Is he a child in Africa, for whom having a disability is the end of life?

    The next generation of interaction models, as ubiquitous as they are now, should become much more "penetrative" in the world as a whole. I see "affordable" integration of biological sciences and computing models; I see "things" (appliances, automobiles, devices) interacting with each other to offer sustainability and a greener world. I see technology becoming easily "adaptable" by those from older generations by becoming a buddy in their lives. All this, and definitely a lot more, will constitute a paradigm shift and a new wave to change the world for the better and the best! I call this "Penetrative Computing."

    For the developed world, advanced interfaces and auto-everything will become the norm; interactions with technology will happen like day-to-day interactions; we will move toward a life completely safeguarded by technology, where health is continuously monitored and embedded computing enters our homes and workplaces. Technology will become less intrusive and more inclusive.

  11. Surendra Bisht

    Since its inception, human-computer interaction has been an iterative process. In the past, however, interaction with mainframe machines used to take hours, and hence the iterative cycle of interaction was very slow. Due to continuous advancements in computer technology, such as time sharing, processing speed, and windowing environments, the turnaround time of this iteration has improved manifold. We have moved from batch processing to the direct-manipulation paradigm.

    As human-computer interaction has become richer, there was a need to model the interaction to help us understand it better and find the root causes of issues in the process. As per the book, Norman's "execution and evaluation cycle" is the most influential model of interaction. Norman's model consists of two phases, namely execution and evaluation, which are further subdivided into seven stages. Although Norman's model is easy to understand, it is user-centric, and computer systems are not modeled as a separate entity. Abowd and Beale extended Norman's model to include four components: System, User, Input, and Output. Among these four components, I think the three components 'System', 'Input', and 'Output' have evolved significantly to help the 'User' with articulation and observation.

    I think context-aware implicit interaction could be the next wave. We already have context-aware interaction, such as restaurant suggestions based on an individual's location or online recommendations based on purchase history, but these interactions are still explicit. As an example, a next-generation context-aware system could monitor your food habits implicitly and advise you based on the available information.

  12. Jacob Heller

    So far in this odyssey of the evolution of computer interfaces, we have seen almost no examples of artificial intelligence. With the exception of very primitive voice recognition (like saying 'yes' or 'no' into the telephone), the layout of buttons or the tree of menu options has remained entirely static. If you use particular features or access certain files more often, you have to rearrange things yourself to make them more convenient.

    I just got a new Android phone with Google's new feature called "Google Now". In addition to advanced voice recognition, GN keeps track of my latest searches and tries to guess what information I'll be looking for next. For example, I frequently check the weather on my phone, so when I access GN, the weather always pops up first. It knows what time I typically drive from home to school, and right around that time each day, it will display how long my commute will take in current traffic. I never told GN when I drive home, or that I like to check the weather; it figured these things out by watching my patterns and drawing conclusions.

    I think the next big thing will be predictive interfaces like these that take guesses at what you want and present those buttons, that information, or those options ahead of other functions. Perhaps Microsoft Word was the first to do this: when you clicked on a chart in your document, the chart settings toolbar suddenly appeared. But it doesn't take much intelligence to assume that when you click on a chart, you'll want to edit it. The next generation will be much smarter.
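
    To make the idea concrete, here is a minimal, entirely hypothetical sketch of a frequency-based predictor of the kind described above, which surfaces the features a user opens most often at a given hour:

    ```python
    from collections import Counter

    class PredictiveMenu:
        """Counts which feature the user opens at each hour and proposes
        the most frequent ones first, instead of keeping a static layout."""
        def __init__(self):
            self.usage = Counter()

        def record(self, feature: str, hour: int) -> None:
            self.usage[(feature, hour)] += 1

        def suggest(self, hour: int, top_n: int = 3) -> list[str]:
            ranked = [f for (f, h), _ in self.usage.most_common() if h == hour]
            return ranked[:top_n]

    menu = PredictiveMenu()
    for _ in range(5):
        menu.record("weather", hour=8)   # checked the weather five mornings in a row
    menu.record("commute", hour=17)      # checked traffic once, in the evening
    print(menu.suggest(hour=8))          # ['weather'] is offered first at 8 am
    ```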

  13. Anshu

    I appreciate how far the human-computer interaction discipline has come. It evolved from the novel idea of a user expressing an intention, i.e. a 'task', to a computer reacting to it and giving the desired result, i.e. the 'goal'. From there the idea developed into Abowd and Beale's interaction framework involving User, System, Input, and Output, which represents the basic interaction between human and computer in the form of articulation, performance, presentation, and observation.
    However, HCI seems limited to a single user and a single input-output at a time. Perhaps this is the essence of HCI. However we define it, I think HCI can be taken to the next level by adding more abstract features like usability and interactivity. We may incorporate these features to some extent at present, but can we quantify how much usability, interactivity, or user-friendliness is enough?
    Secondly, I think we can focus on creating formal models for the design and implementation of HCI principles; this can help new and existing HCI designers design faster and better. They can put their time into actually creating something new, or being more creative about existing models, rather than repeatedly doing the groundwork for all similar interactions.
    We encounter many bad designs and interfaces in the digital world; the quality of interfaces and designs can be improved by actual testing and validation, where real manual testers do the testing without the HCI designer's intervention. In this way the interface is tested purely on the basis of correctness, usability, and interactivity.
    Thanks to all the research done in the past, we have our basic tools, like frameworks and interaction styles, in place; now we can experiment with putting HCI to use in the broader scheme of things and go beyond the limits set at present.

  14. Dakuo

    The author introduces two models of interaction: the execution-evaluation cycle model from Norman and the interaction framework proposed by Abowd and Beale.

    The execution-evaluation cycle grows out of our intuitive understanding of the interaction between human and computer. The user forms a plan of actions and executes them at the computer interface; the user then observes the result, evaluates the feedback, and determines the further plan. The model has two main stages, execution and evaluation, and both stages are the user's activity. Using this model, Norman explains some problems the user may face. He defines the difference between the user's formulation of the actions and the actions allowed by the system as the gulf of execution, and the difference between the user's expectation and the interface's outcome as the gulf of evaluation. Norman's model and these terms help us understand the interaction; however, it is insufficient in describing the system, since it concentrates only on the user's view and ignores the system aspect.

    To better accommodate the reality of interaction, Abowd and Beale extend Norman's model and propose the interaction framework. It includes the system explicitly and breaks the whole interaction into four main components: the system, the user, the input, and the output. There are four steps in the interactive cycle, each corresponding to a translation from one component to another.
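
    One way to picture the four translations is the rough sketch below; the phase names (articulation, performance, presentation, observation) follow the framework, while the function bodies are made-up placeholders:

    ```python
    def articulation(user_intention: str) -> str:    # user -> input
        return f"input language for: {user_intention}"

    def performance(input_expr: str) -> str:          # input -> system
        return f"core operations for: {input_expr}"

    def presentation(system_state: str) -> str:       # system -> output
        return f"output rendering of: {system_state}"

    def observation(output_expr: str) -> str:         # output -> user
        return f"user's reading of: {output_expr}"

    # one pass around the interactive cycle
    print(observation(presentation(performance(articulation("turn on the light")))))
    ```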

    I would imagine the next-generation model might pay more attention to information. Nowadays we are experiencing an era of information explosion, and the massive amount of information is a challenge for both the system side and the user side of an HCI model. From the system's perspective, the model would focus more on how to store information appropriately; from the user's perspective, it should solve the problem of how to represent information effectively. Data mining, information retrieval, and information visualization technologies come along with this trend, and I think the interaction model should also evolve to reflect the change.

  15. Karen W

    I apologize in advance for my lack of technical knowledge/language.

    I find that current models are already accounting for dynamic group behavior, though I am sure these systems will continue to improve. The text mentions email as an example of Computer-Supported Cooperative Work (CSCW). Google Docs is another example, enabling multiple users to interact with each other and the one system simultaneously. I recall one class that I TAed earlier this year in which the 250+ clever students collaborated on a class-wide midterm guide in this manner.

    I agree with the text authors that the next wave involves embedding computers in our lives, such that we are barely aware of them, and that we exert minimal effort to interact with computers. I imagine the blending of us and the computers into one entity. Augmented reality, in which computers “enhance” the environment around us, is a step in this direction (See this cool short: http://vimeo.com/46304267). Technologies that transform thoughts into actions will also serve to blur this distinction between humans and computers. I believe there is already preliminary research on technologies that “read” certain words from brain activity, and thought-controlled prosthetics, etc.

    1. gillian

      your knowledge seems just fine to me :) technical jargon is not really all that helpful in understanding things.

  16. Yao

    In Norman’s model, a user executes his actions and evaluates the system’s performance through the interface, but he doesn’t know what’s going on inside the system. That’s why Abowd and Beale introduced the concept of communication between the interface and the system. I think that Norman’s model gives a more straightforward and simpler way for the user to evaluate a system, as the model is based on the user’s point of view. On the other hand, the model introduced by Abowd and Beale should give the designer or some experienced users a better method to fully understand and analyze an interaction.

    For future models, I would extend the model by Abowd and Beale. There might be hundreds of millions of systems around the world, and they all link to a central server via the internet. The server, or the cloud, could have the ability to improve the different aspects of an interaction (articulation, performance, presentation, and observation) by learning from the evaluations and experiences gathered from different devices. Besides, the articulation part for the user might become easier and more intuitive, like Google's voice search, which lets us give query inputs to the system effortlessly.

  17. Matthew Chan

    First, Norman's execution-evaluation model makes a few assumptions, notably that we have one user and one system. Given the plethora of tablets, phones, and laptops, the ratio of users to devices has changed drastically. By one (uncited) estimate, by 2014 there will be 6 devices per user: personal phone, tablet, personal computer, work computer, etc.

    With new assumptions, computing interaction models are changing fast and becoming more dynamic. Users are no longer just connected to systems, but connected to other users via systems. Regarding the modeling of these interactions, new technologies are enabling us to do more, such as the Microsoft Kinect and its potential to revolutionize the desktop (see this start-up's mission: http://threegearsystems.blogspot.com/2012/09/hello-from-3gear-systems.html) and the now-ubiquitous touch screens. What's next? Probably Skinput: using the skin as a surface to display things.

    Finally, my last thought turns to the iterative cycle of prototypes and how they are used to constantly refine interaction models. Both low- and high-fidelity prototypes simulate the behavior of systems to expose flaws and misunderstandings of a system without fully implementing it and wasting unnecessary effort. Design and prototyping are taking a front seat in Silicon Valley, as evident in Square's acquisition of a design firm in New York. In another instance, Apple's lawsuit with Samsung revealed the 40+ different prototypes of the iPhone and how they influenced the final design.

  18. Jianlin

    In Chapter 4, the author discusses many related topics, some of them more than I'd expected. My first thought is that interaction is not limited to the interface, although all interactions go through the interface, e.g. language vs. action, or hypertext. These topics more likely belong to the mechanisms of system operation, but they still change the way interactions happen. This may indicate that there is no clear boundary to human-computer interaction; it more or less penetrates every aspect of system design.
    My second thought is that the list of topics is not especially well organized. It is fine to organize them chronologically, but I feel that if they were organized in another meaningful manner, for example according to different levels of interaction or different distances to the user, the ideas would probably make more sense.

  19. Ramraj

    Chapter three starts with Norman's execution-evaluation cycle, the most influential model of user interaction. Norman explains the model with the example of switching on a light, using it to illustrate forming the goal and intention, specifying the action, executing the action, and perceiving, interpreting, and evaluating the system state. Because Norman's model does not deal with the system's communication through the interface, Abowd and Beale proposed a new framework, an extension of Norman's model, that deals with this problem. Abowd and Beale's framework has Input, System, Output, and User, and their interactions. The chapter also covers the role of ergonomics in interaction and discusses interaction styles; of all the interface styles (command line, menus, natural language, question/answer, spreadsheets, WIMP, point and click), the WIMP interface is the most popular. The role of other factors such as user experience, user engagement, and value is also discussed.

    1. Ramraj

      The 15 paradigms of user interaction were discussed, along with, to some extent, the historical advances behind them. Of all the paradigms, CSCW's asynchronous communication was a revolutionary invention in HCI, through which users could communicate via email. The World Wide Web is another significant development, providing a usable interface by hiding internal complexities. Mark Weiser's vision of ubiquitous computing is already being realized. Looking at the pace of technical advances in user interfaces (pressure mats, ultrasonic movement detectors, weight sensors, and video cameras), in no time we will come up with new sensor-based innovations, and this is the time to create good user interfaces for these new products.

  20. Timothy Young

    Motion-controlled games reflect an interesting shift toward new forms of interaction with virtual spaces. Game consoles, which used to rely only on controller pads for the player to interact with the game, now support motion controllers, either through wand devices (Wii controller, PlayStation Move) or motion capture (Kinect). The goal of these new devices is to shrink the gulf of execution and the gulf of evaluation when a player is attempting to immerse themselves in a virtual world.

    The next wave of interaction may involve a similar leap toward interacting with virtual worlds, through upcoming augmented reality devices such as Google Glass, or complete virtual reality gaming headsets (prototypes by Valve Software and Oculus: http://www.pcgamer.com/2012/09/10/valves-michael-abrash-examines-the-future-of-virtual-and-augmented-reality/). These interfaces appear to further close the gap between the player and virtual worlds. In the case of Google Glass, visual augmentation of the 'real world', if adopted by the general public, may push the data now streamed to the screen of a mobile device onto an augmented heads-up display. These new possibilities reflect new frontiers in usable and practical three-dimensional interfaces.

    Looking farther into the future, as noted in the PC Gamer article above, brain-controlled interfaces may become a much more practical possibility. With augmented or virtual reality glasses, the present-day user may still need to use body motion and voice commands to dictate their intent and input. With brain-controlled interfaces, a user could overcome some constraints of the physical world (limited space to move) or of speech recognition (ambiguity).

  21. Dongzi Chen

    I can say that I am shocked by what I read in these two chapters. I never realized that HCI (Human-Computer Interaction; in my opinion it should be HMI, Human-Machine Interaction) could be so systematic and diverse. It is much deeper and harder than I used to think. For example, when the author talks about interaction, he mentions ergonomics and the context of the interaction, especially when I think about disability; it makes me wonder whether the computer and the chair should be bundled together for sale, because we always spend so much time at the computer and need a comfortable chair.

    Paradigms seem much harder than interaction. Even at the end of chapter 4, the author says, "This shift is so radical that one could even say it does not belong in this chapter about paradigms for interaction!" When I finished chapter 4, I had the same question: is all the information mentioned in this chapter really part of a paradigm? To me, sensor-based and context-aware interaction looks like it belongs to compilers and artificial intelligence (whenever the author talks about language, it makes me think of compiler fundamentals and AI). Maybe it is inherently hard to divide this material between those courses.

    When I look back at the "old" interaction methods with the authors' guidance, I see that what feels quite normal today has such a history of change. I remember the author mentions that "user experience", the most important concept in today's software engineering, had a difficult and exciting development process. As for myself, I am like a spoon-fed child, enjoying the best interaction in history without knowing where it came from. Now, however, when I use my cellphone and operating system, I think about why it looks the way it does and how it could be better. Maybe this is the best reward.

  22. Jeffrey Tse

    It is important to understand how users communicate their requirements to a system. Such comprehension allows improvements to be made to the translations between users and systems, making systems more usable and responsive to users' needs. Norman's execution-evaluation model, which concentrates on the user's view of the interface, illustrates that interaction is a two-way process between the system and the user, in which the user continuously provides instructions and receives feedback from the system. Ergonomics, which involves the physical characteristics of interaction, provides additional guidelines constraining how specific aspects of a system are designed.

    Over the years, computing interaction models have evolved; new descriptive and predictive models have surfaced as systems have become more dynamic. Users today interact with multiple systems at a time, and those systems must additionally account for and interact with other systems. Computing devices are being woven into everyday life in such a way that technology is quietly disappearing into the background of users' lives as users become more effective at using it. Eventually, the barrier between the user's thoughts about accomplishing various goals and the system's comprehension of the user's task will diminish.

  23. Chandra Bhavanasi

    Chapter 3 introduces us to Norman's model of execution and evaluation, in which the user interacts with the system. The user interacts using a "task" language, and the system understands this and performs using a "core" language. At the end of the day, it is all about getting the user's task done, and the main problem is the translation of the task language into the core language; this matters particularly for ubiquitous systems, because they are context-aware and want to understand how the user thinks. A simple example would be automatically switching the ringer to silent when you are in the library, or auto-answering an emergency call when you are driving (see the sketch below). I think future interaction models will have the user put less effort into the interaction, while the systems will have to be smart enough to figure out the user's goals as well as the inputs, and carry them out automatically and seamlessly.
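
    A hedged sketch of the kind of context-to-action rules just mentioned (the context fields and action strings are assumptions for illustration):

    ```python
    def decide(context: dict) -> list[str]:
        """Map sensed context to actions the phone takes without explicit user input."""
        actions = []
        if context.get("location") == "library":
            actions.append("set ringer to silent")
        if context.get("driving") and context.get("incoming_call") == "emergency":
            actions.append("auto-answer call on speaker")
        return actions

    print(decide({"location": "library"}))                          # ['set ringer to silent']
    print(decide({"driving": True, "incoming_call": "emergency"}))  # ['auto-answer call on speaker']
    ```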

  24. Jared Young

    The way people interact with systems is changing every day with the influx of new technologies. Touch screens, embedded sensors, 3D displays, mobile devices, and advancements in GPU/CPU power are just a few of the things changing human-computer interaction. I think the interaction frameworks provided in the text are a good basis for modeling interaction; certainly many of today's interactive systems can be represented in this way. The framework presented by the ACM that includes the social and organizational context makes a good point: there are things not directly involved with the system or user that still have an effect on interaction.

    My point is that there are also software and hardware components that are not directly part of the interactive system yet have a profound impact on interaction. For example, everyone praises the iPhone for its intuitive user interface, sleek design, and clean graphics. If the hardware components were lacking in processing power, or if the database software or graphics engine were lacking in efficiency, surely this would have a negative impact on the interactive system. Supporting hardware and software components should have a place in the interaction model alongside cultural and organizational context. Future models should also take into account multiple inputs and outputs corresponding to the technologies listed above. With the advancement of virtual systems, it is not uncommon these days to have systems within systems; certainly this is another thing to consider in human-computer interaction design.
