Digital Environments: CW1 - Andru Dunn

Exercise Answers


1.1 – Memory Experiments
(i) Short Term Memory
The results from our short-term memory test matched what the chapter described. Those tested found it difficult to remember any numbers in our list that exceeded seven or eight digits in length. We also found that the people tested remembered the first and last digits of a number more easily, showing evidence of the 'primacy' and 'recency' effects.

(ii) Long Term Memory
The results from our long-term memory test also matched what the chapter described. Our simple test had twenty words, including some with similar themes, that were to be memorised and recalled after 10 minutes. The people tested showed some evidence of a 'semantic' method of recall: they recalled words linked by a shared theme (bus, plane, road) together.


1.2 – Observe skilled and novice operators in a familiar domain. What differences can you discern between their behaviours?
After observing a skilled web-designer and a novice computer user, there were clear differences in the way they interacted with the computer. The task was to find information on the Internet about McDonald's, and then present this information as best they could within Microsoft Word.

The skilled web-designer naturally found this task very simple and was at ease using the computer. They touch-typed very quickly, found the information the quickest, used shortcut keys frequently, made few mistakes, formatted their Word document in a stylish manner, and overall completed the task in a short amount of time.

The novice computer user was quite the opposite. They found the task simple enough process-wise, but found executing the process far more challenging and took longer. They were slowed down by their inability to type quickly and by their frequent errors. They were slower navigating both the web and the Word document. They had no experience of shortcut keys, so every command the skilled web-designer could simplify became a longer process for them. The novice also had a very basic design to their Word document at the end of the task.

1.3 – Guidelines for Interface Design
When designing an interface you need to consider that not all users will see it the way you originally designed it. To begin with, some users may access your interface from a different device, or in a different fashion, to the one you designed on. For example, it could be viewed on an iPhone rather than a PC screen, so functionality across platforms needs to be taken into account. The screen size the user views the interface on may also differ considerably from yours, so you can't place any vital information in positions on screen that could potentially be cut off.

The first rule of interface design is to place the core focus of the interface in the middle of the screen, as this is where users look first. If the focus of the interface cannot be positioned there, make sure it is highlighted accordingly.

Make sure your font is easily legible and of a good point size. Text that is too large or too small is off-putting for a user and will deter them from your interface.

You need to make sure your colour choices will not confuse users with accessibility needs. Don't mix blues, browns, reds and greens too heavily, and don't rely on the user recognising a colour ('click the red button') – some users will simply not be able to distinguish it.

In terms of functionality, make sure the layout is clear and links and information are concise. The user should not need to scour the interface for specific information. Keep it short and don't let it run past more than seven key points: as found in the memory testing (1.1), humans cannot adequately hold more than about seven items in short-term memory.


1.4 – What are 'mental models', and why are they important in interface design?
People individually create 'mental models' to understand the systems they come into contact with. These models are very important in interface design because the designer needs to consider the differences between their own mental model and the potential user's. What the designer deems logical in navigation and layout, the user may find confusing and off-putting. One way the designer can take other mental models into account is by testing their site or program with users, which offers them insights into different perceptions of it. This is obviously very useful as it gives the designer feedback on what works, what doesn't, and what could be included to make it better.



2.1 – Find as many different examples as you can of physical controls and displays

(a) List them

Light Switch
Computer Screen
Telephone
Keyboard
Television
iPhone

(b) Try to group them, or classify them 

Physical Controls: Light Switch, Telephone, Keyboard
Displays: Computer Screen, iPhone, Television

(c) Discuss whether you believe the control or display is suitable for its purpose.
The light switch works as it is meant to as a physical control: it is either on or off. The keyboard works well in the same fashion, in that the keys let you enter the information you want. The television serves its purpose as a display; the iPhone's screen, however, works both as a display and as a control. For someone who is blind the iPhone is unsuitable, as they need to see the screen to operate the control and there are no buttons for them to feel.


What sort of input does a keyboard support?

The input a keyboard provides depends on the key pressed. Each key sends a specific value to the computer, which reads it as a character code such as ASCII or Unicode. The program running at the time of the key-press then maps that value to whatever command it associates with that key. Not all keys are simple data keys: some modify how other keys are interpreted (capitals/non-capitals), and some allow specific commands when held together, e.g. Cmd + S = Save.
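This mapping from key presses to character codes and commands can be sketched as follows. This is an illustrative model only, not a real operating-system key-event API; the shortcut table and function names are invented for the example.

```python
# Each plain key press delivers a character code (here ASCII/Unicode):
def key_code(char: str) -> int:
    return ord(char)  # e.g. 's' -> 115, 'S' -> 83

# Modifier keys don't produce characters themselves; they change how the
# next key press is interpreted, e.g. Shift for capitals, Cmd for shortcuts.
SHORTCUTS = {("cmd", "s"): "Save", ("cmd", "c"): "Copy"}  # illustrative table

def interpret(modifiers: frozenset, key: str) -> str:
    """Turn a key press plus held modifiers into an action (simplified)."""
    if "cmd" in modifiers:
        return SHORTCUTS.get(("cmd", key), "unknown shortcut")
    if "shift" in modifiers:
        return f"type '{key.upper()}'"  # Shift modifies the character typed
    return f"type '{key}'"

print(interpret(frozenset(), "s"))           # type 's'
print(interpret(frozenset({"shift"}), "s"))  # type 'S'
print(interpret(frozenset({"cmd"}), "s"))    # Save
```

The point is that the same physical key produces different results depending on the program's mapping and the modifiers held at the time.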

What sort of input does a mouse support?
A mouse gives two forms of input: the location and movement of the cursor, via the trackball or optical sensor, and the two buttons on top of the mouse. The input can vary, as presses and releases of the buttons can differ in duration, or be combined with movement of the mouse to create a 'drag' effect on-screen.
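The way button state and movement combine into a 'drag' can be sketched with a small state machine. The event names and class here are invented for illustration; real GUI toolkits deliver equivalent press/move/release events.

```python
class MouseTracker:
    """Minimal sketch: combine button state with movement to detect a drag."""

    def __init__(self):
        self.button_down = False
        self.dragging = False

    def handle(self, event: str, pos=None) -> str:
        if event == "press":
            self.button_down = True
            return "click started"
        if event == "move" and self.button_down:
            self.dragging = True            # movement while held = drag
            return f"dragging to {pos}"
        if event == "release":
            was_drag = self.dragging
            self.button_down = self.dragging = False
            return "drag finished" if was_drag else "click"
        return "move"                       # movement alone just moves cursor
```

A press followed immediately by a release reads as a click; a press, movement, then release reads as a drag.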

Are they adequate for all possible applications?
Together they are suitable for most applications, but on their own they are not. There are application-specific keyboards that show the hot keys in different colours, alongside logos, to help specialists who only use that software – Final Cut Pro, for example. Other peripherals suit specific tasks or applications better: a drawing pad, for instance, is far more interactive and functional for a designer than simply drawing with a mouse.

If you were designing a keyboard for a modern computer, and you wanted to produce a faster, easier-to-use layout, what information would you need to know and how would that influence the design?
Before I started I would find out why the QWERTY layout dominates the market, and why competitors like the Dvorak keyboard didn't surpass QWERTY on release, even though they boasted faster typing. Ultimately, though, I would not try to invent something new; I would aim to adapt the QWERTY keyboard, as it is the layout most familiar to the market, and the result would then be a better adaptation of the keyboard used universally across the world.


2.3 – Note different attributes that support certain forms of interaction

Touch Screen
The touch screen is a tactile form of interaction. It is very simple in a primal sense, as it involves using your hands and fingers to command the device directly. It also offers simplicity in physical design, being one screen rather than a range of buttons. The touch screen is made with interactivity at its forefront. The iPhone and iPad are great examples of touch-screen devices and of how the touch element can be used to create a very interactive experience for the user. The touch screen has allowed a number of drawing tasks to be done by hand that would normally have been done with a mouse, and it has revolutionised the way users expect to interact with the Internet and applications. As an iPhone user I have become used to the capabilities and functionality of touch: I now browse the Internet with ease with a simple scroll of a finger, and a simple tap on the screen when I want to select something.

Track-Ball Mouse
The track-ball mouse is a much older piece of computer equipment, as optical mice now dominate the market. The track ball nevertheless revolutionised the user's interactivity with the computer, as they were able to move the cursor much more freely than before, through 360 degrees. Its simplicity of design and extensive interactivity made the track-ball mouse an essential element of the user's computing experience. It allows the user to move freely across the page and make selections by simply rolling the track-ball and clicking one of the two buttons, depending on the action desired.


2.4 – What is the myth of the infinitely fast machine?
The myth of the infinitely fast machine is one most system designers fall into believing: the designer builds the system expecting an immediate reaction from every process, without considering users with very slow machines. The desired behaviour of the system can then be hindered, as some users may not receive a response within the time the designer, working on a fast computer, expected. One way around this is to give the user a visual or audio notice so they know the system is working on (or has completed) the process, rather than just sitting idle. One example is the hourglass cursor commonly shown when a PC is working on a process; another is the tone on a phone that lets the user know their key press was registered.
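The feedback principle described above can be sketched in a few lines: instead of running a slow task silently, the program reports progress as it goes. The function and step count are invented for illustration; the `report` parameter stands in for whatever the interface uses to show feedback (a cursor change, a progress bar, a sound).

```python
import time

def slow_task(steps: int = 5, report=print):
    """Run a slow operation while keeping the user informed of progress."""
    for i in range(steps):
        time.sleep(0.01)                      # stand-in for real work
        report(f"Working... {i + 1}/{steps}") # feedback instead of idling
    report("Done")                            # confirm completion
```

Without the `report` calls the interface would appear frozen on a slow machine, which is exactly the situation the hourglass cursor was introduced to avoid.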



3.1 – Choose two interface styles that you have experience of using. Use the interaction framework to analyse the interaction involved in using these interface styles for a database selection task.

Form-Fill
The user will most probably be used to the form-fill style of format, as it is mostly used when filling out information online while shopping or completing questionnaires. For the database selection task the input should be quite straightforward for the user, as they can enter information and expect the system to understand it, because the range of possible inputs is limited. The output should be relatively precise, since the form format constrains the search. The output may, however, reflect this limitation by showing only a small number of results when the user was after a more general search.

Natural Language
The natural language format can be very simple for the user in a database selection task, because it allows the user to type in whatever they want, without the form-fill's limitations. The input may not always work, however, as the system may not be able to translate what is typed into something it understands. The output may therefore ask the user to refine their search and/or check their spelling. The output of this format may well be more fruitful than the form-fill's, as it is less restricted and could return more results.
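The contrast between the two styles can be sketched as two query functions over the same data. The book records, field names and matching logic are all invented for illustration; a real natural-language interface would do far more than keyword matching.

```python
BOOKS = [
    {"title": "HCI Basics", "author": "Dix", "year": 2004},
    {"title": "Design of Everyday Things", "author": "Norman", "year": 1988},
]

def form_fill_query(author=None, year=None):
    """Form-fill: each field constrains input, so results are precise but narrow."""
    return [b for b in BOOKS
            if (author is None or b["author"] == author)
            and (year is None or b["year"] == year)]

def natural_language_query(text: str):
    """Free text: broader, but depends on the system matching the user's words."""
    words = text.lower().split()
    return [b for b in BOOKS
            if any(w in (b["title"] + " " + b["author"]).lower() for w in words)]

print(form_fill_query(author="Dix"))        # exactly one precise match
print(natural_language_query("design"))     # matched by keyword, less controlled
```

The form-fill version cannot misunderstand its input, but it can only ask what its fields allow; the free-text version accepts anything, at the cost of sometimes matching nothing or the wrong thing.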

Which of the distances is greatest in each case?
The form-fill style of input is designed better for the database to understand, so its performance distance is the greatest. However as the natural language style has no real limitation to the search element of the task, its articulation distance is the greatest.


3.2 – Are there any successful natural language interface systems?
Yes, there is an application called "Siri" for the iPhone that answers questions from the user and can make suggestions through its natural language processing software. Another example is "Ubiquity", a plug-in available for Firefox. This plug-in draws together information from web pages related to the content of the page the user is viewing at that moment. It is a fantastic tool concept, especially for research, as it runs on natural language commands relating to the web page currently being browsed.

For what applications are these most appropriate?
These sorts of applications are most appropriate for search engines that try to understand phrases rather than just keywords, and for dictation software that can fully understand the user's speech and translate it into text.

3.3 – What influence does the social environment in which you work have on your interaction with the computer?
The social environment influences my interaction with the computer a great deal. In a learning environment, where there is minimal noise and a calm atmosphere, I find focussing and working a lot easier. In contrast, if I am working in a busy, noisy environment, I find it very hard to concentrate and produce good work. I find myself easily distracted and it's generally a bad environment for working in. However it is an adequate environment for simply searching the web. If I am being overlooked whilst working by classmates or by lecturers I feel that there is a sense of pressure, and I aim to achieve and impress by working to my full potential, yet the surrounding environment can hinder this.

What effect does the organisation to which you belong have on the interaction?
The organisation I belong to when I'm working does affect my interaction slightly. There is motivation to achieve if I am working while being overseen by a teacher or lecturer, though this adds pressure. If I'm doing work for my music projects I don't feel as much pressure, as it's only me checking that I achieve my goal – although that does not mean I am any less motivated. The goal objectives are similar in both my academic work and my music projects: I want them to look as good as possible. It is the pressure levels that I find vary between tasks.





[Grouping exercise: the word-processor functions below were sorted under headings in the original page layout, which has not survived extraction. Functions sorted: Find Page, Character Style, Plain text, Check Spelling, Save as, Find Word, Format Paragraph, Bold text, Word Count, Change Word, Document Layout, Italic text, Renumber Pages, Show Alternative Document, Open File, Page Break, Position On Page, Open Mail, Close File, Go Back, Increase Point Size, Send Mail, Open a Copy, View Index, Index Entry, Decrease Point Size, Repeat Edit, See Table of Contents, Change Font, Print Preview, Page Setup.]

Why do some functions always seem to be grouped together? / Why do some groups of functions always get categorised correctly? 
This is because the functions are similarly grouped together in the majority of applications used on computers. This familiarity is established and therefore seems a logical order.

Why are some less easy to place under the ‘correct’ heading?
Some functions are less easy to place under a 'correct' heading because they are more technical than the general user can confidently categorise. Users may also place functions under categories that are application-specific, which could be deemed wrong by another user.


4.1 – Ted Nelson Biography [Link]


4.2 – Choose one paradigm of interaction, find 3 examples of it that aren’t in the chapter & identify any general principles of interaction that are embodied in each of your examples
I have chosen the ‘Personal Computer’ paradigm. These are the 3 specific examples of it:

  • Desktop iMac
  • MacBook Laptop
  • iPhone

Comparing the three:
The desktop iMac is a large personal computer; it has to sit on the desktop, as it needs to be connected to mains power. The MacBook laptop runs on a battery, meaning it is portable, but only for a limited time before the battery needs charging. The MacBook works and functions in exactly the same way as the iMac, but is portable. The iPhone is far more portable again, and performs many of the same functions as the desktop and laptop. However, as the iPhone is around an eighth of the size of the smallest MacBook, it can't be expected to be as functional as the laptop. The iPhone is essentially a miniature computer: it runs a similar operating system and performs similar tasks, such as word processing and running applications.


4.3 – What new paradigms do you think may be significant in the future of interactive computing?
I think that the World Wide Web will be the most significant paradigm in the future of interactive computing, as it works as a network, library and pool of information that can be extended and manipulated to become more interactive. This is already becoming a reality: the introduction of HTML5 is allowing users to interact with content within the browser itself, without needing additional plug-ins such as Flash or Java. The interactive future will be online, and may be developed further on devices such as the iPhone and iPad.


4.4 – How do you think the first-person emphasis of wearable computing compares with the third-person, or environmental emphasis of ubiquitous computing?
There is a clear difference between what the first-person emphasis of wearable computing can do for the user compared with third-person interaction. The user would feel more involved using a first-person wearable computer, particularly in gaming: in a first-person shooter, the user could control the character through their own movements, rather than through a controller and the computer, which is effectively third-person interaction. The environmental emphasis of ubiquitous computing would be revolutionised by first-person wearable computers, as computers really would be everywhere, and the environment we live in would become highly dependent on the digital world. This, however, would not necessarily be a good thing.

What impact would there be on context-aware computing if all of the sensors were attached to the individual instead of embedded in the environment?
Context-aware computing would need to develop at a much higher rate if the sensors were attached to the individual instead of being embedded in the environment. It would make the person wearing the devices more digitally interactive with others wearing them too, as their computers would update whenever their sensors came into contact. It would effectively make humans robot-like: half digital sensing machine, half human. Where would the reliance on these sensors begin and end?