MA DIM Dissertation (1995): Interface Metaphor

I recently retrieved my old dissertations from the MA Design for Interactive Media course back in 1995… This one discusses the use of metaphors in the user interface. As I recall, I was going to title it “Trouble With Liken” but evidently for some reason decided not to. Can’t think why… ;-)

When Worlds Collide – Metaphor and the Interface

Jonathan Hirsch, June 1995

Abstract

This essay explores some of the issues arising from the use of metaphor in the computer interface, drawing in particular on cognitive psychology, Heideggerian philosophy and critical theory. It is suggested that by attaching metaphor to the interface, the user is separated from the computer as a tool, and is prevented from developing an understanding of the system which is sufficiently deep as to make the tool transparent. It is proposed that we should be thinking beyond the metaphor, and should not be afraid to let go of the metaphor when it has outlived its usefulness, or when new functionality is required which cannot be squeezed into it. It is concluded that the use of a good metaphor should not be the goal of interface design but the starting point.

Acknowledgements

I would like to thank Mike Ward and Nicola Millard of BT Research Laboratories, Martlesham Heath for their assistance during the BT Future Customer Handling Project out of which this essay arises.

Contents

Introduction

Why Use Metaphor?

The phenomenology of the computer

Mental models and metaphorical reasoning

Disadvantages of Metaphor in the Interface

Beyond the Metaphor

Conclusion

Bibliography

Introduction

In `Being and Time’, Martin Heidegger (1927) attempts to describe the nature of phenomena in the world. Any phenomenon which we encounter in the world has the characteristic of being present-at-hand – it is nearby. In addition, tools have the characteristic of readiness-to-hand, which they acquire through usage; a good tool, one which is appropriate for its task, is not broken, and is understood by its user, becomes transparent – it withdraws from our awareness, leaving us aware only of the task for which it is being used, the towards-which of the tool. A tool which is not appropriate for its task, or is broken, or is not understood, or which is missing (i.e. not present-at-hand), is not transparent; it loses its readiness-to-hand – it is, rather, un-ready-to-hand.

Good tools, then, are transparent to their task. In using them we are not aware of them as tools, but only of the task for which they are employed. To be transparent, a tool must be both capable of carrying out its task, and be understood by its user. Only when something goes wrong, when the tool breaks or is not appropriate for the task or the user does not understand it, does the tool itself become apparent to us.

If computers are to be considered as tools, or as a medium for tools, then they, or the software which runs on them, must be transparent to their task. One obstacle to this, however, is presented by the very nature of the computer. Most computers currently exist as nondescript plastic boxes with no indication of what exactly they are for, or how they should be used. When we attempt to use them it becomes apparent that they offer a seemingly infinite variety of possible applications by virtue of being able to run software; to make matters worse, not all pieces of software are intended to be used, or even understood, in the same way.

To many people, computers are baffling, existing in a strange world beyond their comprehension. Indeed, computers are one of the few tools (perhaps the only tool) known to us which not only exist in the `real’ physical world we are all well-practised at dealing with, but also exist in their own strange world of ones and zeros, to which only a privileged few seem to have access. It therefore becomes understandable why to most people the gulf between these worlds seems unbridgeable – to them, the computer is not transparent to its task; it is not even clear what its task is!

Evidently, an important step towards making the computer more transparent is to make it more understandable, to reduce the gap between its world and ours; the use of metaphor in the interface has become a common way of attempting this. In this essay, I shall begin by looking at the advantages of using metaphor. I shall then go on to examine some of the negative aspects of the use of metaphor, in particular the suggestion that, at present, interface metaphors are misplaced – they separate the user from the system and as a result do not reveal the system at a sufficiently deep level for anyone wishing to use the computer as a tool. A tool is not transparent to its task when it is separated from its user – if we cannot get at the tool because the interface is in the way then the tool is as good as missing; it is not present-at-hand, nor ready-to-hand. It is not transparent.

Why Use Metaphor?

Metaphor is used, most commonly in language, as a device for casting novel or abstract ideas in a more familiar and accessible form. We speak of time – an abstract concept – in terms of money – a more concrete one (Preece, 1994); we describe arguments in terms of war (Erickson, 1990), and so on. Metaphors help us to understand the unfamiliar.
Underlying the use of metaphor as an aid to comprehension are two psychological principles: that of mental representation, which holds that we develop mental models of systems (be they pieces of machinery, maps, processes etc.), and that of metaphorical reasoning, whereby we draw on prior knowledge to more easily develop an understanding of new subjects. In addition to this, the very nature of the computer is such as to make the use of metaphor almost inevitable. It is to this that I shall turn first.

The phenomenology of the computer

In Heidegger’s terminology, the nature of the being of tools is defined by their purpose – their being-in-order-to. For example, the hammer is `for the hitting of nails’, the saw is `for the cutting of wood’ and so on. In most cases, both the tool and the task for which it is used exist in the same world, the `real’ physical world, and there is a correlation between task and tool. The computer, however, exists across two worlds. As a physical object it is present-at-hand in our world; as a tool it is ready-to-hand in our world. But its being-for-itself exists in another world; and its being-in-order-to exists differently in both. When we carry out a task on a computer, we think of it (in our terms) as doing a particular thing; what the computer actually does (in its terms) in carrying out that task is very different. The being-for-itself of the computer has no correlation to the being-in-order-to we ascribe to it; we can understand it primarily through metaphor.

In using metaphor to bring our world and that of the computer closer together, we ironically set up a paradox. With most tools, we discover their readiness-to-hand by using them; when we have fully mastered them, when we understand them, their operation becomes transparent to us, we are free to concentrate only on the task in hand rather than the tool. The tool becomes transparent to its task. With the computer, the reverse happens. In mastering the computer, in making it transparent to its task, we do not understand its operation. The real nature of its operation does not become transparent but, if anything, more opaque. But what we can hope to understand, and what has the potential more easily to become transparent, is the metaphor.

Mental Models and Metaphorical Reasoning

The notion of mental models has its roots with the cognitive psychologist E.C. Tolman in the 1930s. He carried out experiments on rats learning to navigate a maze, and concluded that they learn something about `what leads to what’ in the maze (Gross, 1992). In other words, they learn expectations as to which parts of the maze lead to other parts; these expectations represent a primitive perceptual map of the maze and are what Tolman referred to as a `cognitive map’. One of the problems with this theory, however, is that, as with all theories of cognitive processes, it is hard to prove, since a cognitive map cannot itself be observed, but can only be inferred from behaviour. However, it is supported by findings (cited in Gross, 1992) that, having learned to run through a maze, the same rats were then able to navigate it if it was flooded, rotated, or entered from a different starting point.

The subject of mental representation is controversial. Although exactly what constitutes a mental representation has been hotly debated, it would seem that the notion of symbolic representation is common to all accounts (Eysenck and Keane, 1992). Symbolic representations can be divided into two types: analogical representation, for example, visual or auditory mental images, and propositional representation, which are language-like, though not necessarily expressed as words, and which capture underlying abstract concepts.

In the 1980s, P.N. Johnson-Laird suggested a third type of mental representation – the mental model (Eysenck and Keane, 1992). For him, a mental model is a representation which can be either wholly analogical, or partly analogical, partly propositional, and which is distinct from, but related to, a mental image. On this account, the essential properties of mental models are that they are analogical, determinate and concrete – they represent specific entities. In contrast to this, propositions are indeterminate – they can describe different entities in different states. For example, a proposition might be expressed verbally as “the book is on the shelf”; this could refer to any book at any point on the shelf. An equivalent mental model, however, would represent a specific book at a specific place on a specific shelf. An image is a mental model which is specifically represented visually – in the above example, the corresponding image would consist of a mental picture of the book seen from a particular angle or position. Johnson-Laird argued that mental models and images are analogous to high-level programming languages in that they “free human cognition from operating at a machine-code-like propositional level” (Eysenck and Keane, 1992).

While the term `mental model’ has been used differently by different theorists (Eysenck and Keane, 1992) and there has been some debate as to whether the notion is sufficient to account for human cognitive processes, mental representation is undoubtedly important to the way in which we perceive and understand our world. This suggests that a clear mental model is important for understanding computer systems, for bridging the gap between our world and that of the computer.

The cognitive principle of metaphorical reasoning – that we use prior knowledge to understand new situations – suggests that metaphor can help us to create mental models. Preece (1994) cites various studies which show that verbal metaphors can be useful tools in learning to understand new systems. On the computer, this has been taken a step further with the development of interface metaphors, such as Xerox PARC’s `Star User Interface’ which compared the computer to a real, physical office; real objects found in real offices were represented as icons on the screen. Files, folders and so on were no longer abstract entities existing `somewhere’ inside the computer; now they were representational images, or icons, existing on the virtual desktop. This desktop metaphor has undoubtedly made the computer more accessible to a great number of people, and has become perhaps the most commonly used interface metaphor in personal computing. However, just as there are good reasons for using metaphor in the interface, it also presents many problems; it may not be the bridge we need after all.

Disadvantages of Metaphor in the Interface

“The problem is to design the system so that, first, it follows a consistent, coherent conceptualisation – a design model – and, second, so that the user can develop a mental model of that system – a user model – consistent with the design model” (Norman, 1986). This leads us to one of the fundamental problems with the use of metaphor in the interface. While verbal metaphors invite us to compare the familiar with the unfamiliar, to see the similarities and differences, interface metaphors combine the system and the familiar domain. For example, the desktop metaphor not only makes the interface appear like a real office, but the metaphor itself is the interface. What this means is that users will develop an understanding of the metaphor rather than of the true nature – the design model – of the underlying system. They will think that moving a file icon into a different folder causes the system to do just that, whereas what the system actually does is to move only the pointer to that file. “Users will tend to develop functional-based mental models of the system and be largely unaware of the structural aspects of the system.” (Preece, 1994) In other words, they will understand the interface but not the tool – and as has already been mentioned, a requirement for a transparent tool is that it be understood.

This may not seem to be too much of a problem initially, but the danger is that it will lead to user error through a mismatch between the user’s expectations of the system (based on their understanding of it) and the actual capabilities or operation of the system.

There is also the danger that metaphor is restrictive, that it “will control us without our knowledge” (Swigart, 1990). If we are to encourage users to develop a consistent mental model then, at least in theory, we should adhere to the metaphor. But then “the metaphor becomes a dead weight… every related function has to become a part of it” (Nelson, 1990). We risk sacrificing functionality, and a potentially better way of implementing a given function, for the sake of the metaphor. Nelson further argues that such adherence to the metaphor “prevents the emergence of things which are genuinely new” (ibid). To use the desktop metaphor as an example, not only does it misuse the metaphor (how many real desks have a waste bin which sits on them, or folders which `gobble up’ files placed on top of them, or papers which jump to the top when pointed at? (ibid)), but when additional functionality is required, additional metaphors are tagged on (for example, scroll bars, menus and windows). However, contrary to what one might expect, such composite metaphors do not seem to cause us too many conceptual problems (Preece, 1994). What might be interesting to discover, though, is whether this makes any difference to the learning curve for new users. Arguably, a system like the Apple Macintosh is relatively easy to learn to use not so much because of any metaphor, but because its graphical interface allows immediate visual feedback on the user’s actions, permitting them quickly to discover the correct way of using it, even if this is not the way they might have expected previously. Does this compensate for any disadvantages which may be inherent in composite metaphors?

Should metaphor be used at all? Further to Tolman’s theory of cognitive maps, the rats’ ability to navigate the maze under differing conditions (for example, when it was flooded, rotated or entered from a different starting point) suggests that it is not the actual actions of walking, running, swimming etc. which constitute the cognitive map, but rather something about the geographical characteristics of the maze itself. This is what Tolman called `Sign Learning Theory’. When applied to HCI, this provides an interesting insight into the use of metaphor. If we substitute the computer for the maze, and users for the rats, does this imply that the specific actions we use in navigating the computer are not what constitute our mental model of it, but rather it is our knowledge of the computer’s geography? And does `the computer’s geography’ refer to the system’s architecture or to the interface?

On the one hand, if the latter is the case, then our mental model should consist of knowing how to navigate the interface (which then becomes subject to the problems raised earlier, namely that we still do not understand the system itself), and the input method (mouse, keyboard, pen, touch-screen etc.) becomes comparatively unimportant (to the issue of mental models, not HCI in general).

But on the other hand, if `the computer’s geography’ refers to the system itself, then this is bad news for the metaphor-based interface. If we build (or try to build) a mental model based on knowledge of the system itself, then the interface is not helpful in the way it is intended to be (it does not necessarily provide enough information about the system to enable us accurately to build that mental model; further, it may even encourage us to build an inaccurate one, since metaphors for the computer are rarely true representations of what really goes on in the system). Indeed, if we do not fully understand the interface, then it may be said to get in the way far more seriously than we might otherwise think – it becomes analogous to the rats having to learn to walk or swim etc. before they can learn to navigate the maze (whereas in the actual experiment, they already knew how to walk at least, if not how to swim also, and so they were building on prior knowledge).

Based on the above, we may hypothesise that what separates people who are `broadly’ computer literate (who can get to grips with virtually any piece of software without too much hassle) and those who are `narrowly’ computer literate (who have learned one or two specific applications and find it hard to adapt their knowledge of them to new software – this may apply more to novice users than experienced ones) is the type of knowledge they possess of computer systems. The former have an understanding of the computer system in general – they understand (in an abstract way) what sort of information the computer requires to carry out their task, or what sort of logic it operates on, whereas the latter only understand the interfaces of their applications – they know a specific set of commands, but do not know why or how they do what they do (an analogy might be that I know how to drive a car, but should one break down, it would be unlikely that I could fix it – I do not know enough about how it works). This might suggest that the issue of interface metaphor (whether or not to use it, what constitutes a good one and so on) is of vital importance to the latter group of people, but perhaps not quite so significant to the former – if you understand what a computer really does when you move a file between folders, for example, then does it matter that, strictly speaking, the interface representation of it is inaccurate?

For the latter group of people, the trouble with metaphor at the interface is that it tends to allow them to develop a mental model of the interface and its metaphor, but not of the system itself. As a result, they “learn software by approximation rather than understanding” (Nelson, 1990). This brings to my mind parallels with the ages of magic, myth and enlightenment discussed in Adorno and Horkheimer’s Dialectic of Enlightenment (1944). Here they describe the age of magic as one of imitation of that which was not understood – “the magician imitates demons; in order to frighten them or to appease them, he behaves frighteningly or makes gestures of appeasement” (p.9). The mythic world was one of anthropomorphic personification, “the projection onto nature of the subjective…the supernatural, spirits and demons are mirror images of men…”(p.6). The age of enlightenment is one of domination by instrumental reason, “the disenchantment of the world; the dissolution of myths and the substitution of knowledge for fancy,” (p.3) aimed at “liberating men from fear and establishing their sovereignty” (p.3).

Could it be that, since the computer is still a comparatively new phenomenon, we have not yet reached enlightenment with respect to it? Perhaps, although we think we use metaphor to make the computer appear more like something with which we are already familiar, we are instead using metaphor to imitate the computer’s workings, which we do not really understand, to make it serve us, just as the magician tried to command the demons, which he did not understand, by appeasing or frightening them?

Perhaps as computers become more `ubiquitous’ (Weiser, 1991), we will move into the mythic age of computing, we will project ourselves onto the machine. Preece (1994) cites Nicholas Negroponte’s discussion of invisible (ubiquitous) computers acting as “a `society of objects’ in the form of virtual butlers, secretaries and housekeepers.” Of course the problem with such anthropomorphic personification is that, apart from being `fanciful’, to use Adorno and Horkheimer’s term, it can lead to users believing the system (or its agents) is more intelligent than it really is, causing frustration when it does not behave as the user expects (ibid).

Taking this comparison further, it follows that we will not fully master the computer (and hence make it transparent) until we are enlightened to it – until we understand it sufficiently without trying to approximate it to something else, until we substitute knowledge for fancy, for what we merely think the system is doing when we use it. But the use of interface metaphor gives us an understanding of itself and not of the underlying system. The interface is in the way of understanding. So can we develop a mental model of the interface, which is also a model of the system itself? What sort of metaphor might we need to do this, or would we have to remove the metaphor altogether, make the interface the system itself and not the part which is stuck on at the user’s end?

Some answers might be suggested by the Johnson-Laird view of mental representation. If we use mental models in order to avoid having to think in a `machine-code-like way’, then doesn’t the fact that this is precisely how a computer operates cause problems? Surely this makes it inevitable, more so than ever, that our mental model of the system will be nothing whatsoever like the system itself? In the example of the book on a shelf, at least we could be fairly sure that there was some correlation between the state of the actual, physical book and our image or mental model of it. With the computer, there is no correlation between our mental model and its actual state at any level. Even high-level programming languages (including the point-and-click/drag-and-drop environment) do not reveal the actual state of the computer. The only way our mental model would correspond to the computer’s real state would be if it were based on, perhaps, a map, a maze even, with geographical characteristics, in which we go around opening and closing gates to direct the flow of electricity or, one level up, we go around assigning bits of information to memory registers and so on (although there might be many levels at which a computer system can be seen to operate, and for which a metaphor might be developed, the metaphors currently used in interfaces – in particular the desktop metaphor – seem to have no relation to any of these levels). But because we cannot `see’ all this happening inside a computer (except perhaps under a high-powered microscope, which is not something many of us do, or should have to do) we would need a similarly geographical metaphor – perhaps the interface should be represented not as a desktop containing files and folders, but as a piece of silicon containing registers, stores and so on, around which we move bits and bytes. But might this not undermine the whole advantage of using the computer – namely speed? Might it not be rather like going back to using an abacus for mathematical calculations?

If we cannot find a metaphor which not only makes the computer accessible and more understandable to us, but which also allows us to develop a mental model of the underlying system, must we do away with metaphor? What is the alternative? Command-line interfaces? Machine-code programming? The latter might be fine, albeit tedious, for experienced programmers; keen enthusiasts might find the former acceptable; but for `the average user’? Surely not. Is the answer to make all computing ubiquitous? In many cases this would probably be a good thing, but clearly not in all situations – we would lose the flexibility afforded by the desktop computer’s ability to run different applications in a single place. What would happen if we tried to hide the interface of, for example, Adobe Photoshop by building the program into a real, physical painter’s palette, linked to a real, physical easel? Certainly it might become more intuitive to artists, but would we gain anything? Could we swap from one application to another? Would we want to? And, perhaps more importantly, do we need to understand the system behind the interface in order to fully utilise such a program? (Arguably, yes we do, at least at some level deeper than that of the interface itself, since a knowledge of rasters, how memory works, how images are stored and compressed, bit depths, resolutions and so on can be of considerable use, especially when things go wrong).

Perhaps most of us can never hope to understand the computer as it really is – its being-for-itself. Maybe we will only ever understand the metaphor. But is this necessarily a bad thing? After all, metaphor has served us well, not just in the computer, but in life in general. Metaphor is fine up to a point. Surely the problem arises when we want to do more than understand how to use our applications, when we want to understand why the computer keeps crashing, when we do not want to pay a consultant to come and fix it, when we do not want to spend hours reading yet another technical manual? In other words, when we want to do something to which the interface (and hence the metaphor) is not transparent. It would seem that, unless we really can find a metaphor which allows us to develop a mental model analogous to the system and not just to the metaphor, we must truly integrate the interface with the system – they must not provide different models. Of course, the system can be viewed at many different levels, and which one should be referred to by the metaphor ought to be determined by the task and the user’s needs and ability. This might not be easy, but the point is that we must shift the focus of the metaphor from the interface to the system. How? Perhaps we should not use metaphor at all (which might be a step backwards, and at odds with the rest of our lives), perhaps we must make many different metaphors optionally available to the user, or perhaps we must go beyond the metaphor, to stop thinking of interface metaphors and to start thinking of interface language. We must use it not as the goal of interface design, but as the starting point.

Beyond the Metaphor

At present `a good metaphor’ seems to be the central concern of interface design – “Metaphor…seems to be the holy grail at Apple” (Erickson, 1990). However there are a number of ways in which we might go beyond this notion. The desktop interface, for example, does not go beyond its metaphor in order to take advantage of things which can be done on a computer but which cannot be achieved on a real desk. Too much effort seems to have been spent on making the computer a substitute for a real office instead of capitalising on how it can augment one. This is what I mean by going beyond the metaphor. Take windows, for example. They could be improved in a number of ways which would take them beyond the `documents’ analogy. Just two ideas (which were explored during our BT Future Customer Handling project) are to make windows semi-transparent, or to make them intelligently self-reconfiguring.

In the former case, windows would be layered in the usual way, but would be partially visible through each other, and accessed by means of a `depth cursor’. Thus windows do not need to be brought to the fore in order to get an idea of their contents, but can be if closer inspection or editing is required. This might also help to overcome the problem of keeping title-bars visible or else losing track of where a given window is, or what it contains, when multiple windows are open on a small screen.
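The depth-cursor idea can be illustrated with a minimal present-day sketch (in Python; the class and parameter names are my own invention, not taken from the project): every window stays partly visible, fading with its distance in depth from the cursor, and the cursor’s depth, rather than stacking order, selects the window which receives input.

```python
# Hypothetical sketch of semi-transparent layered windows with a
# `depth cursor'. Names and numbers are illustrative assumptions.

class Window:
    def __init__(self, name, depth):
        self.name = name    # identifier for this window
        self.depth = depth  # 0 = frontmost layer

class LayeredDesktop:
    def __init__(self, windows, base_opacity=0.8, falloff=0.25):
        self.windows = sorted(windows, key=lambda w: w.depth)
        self.base_opacity = base_opacity  # opacity at the cursor's depth
        self.falloff = falloff            # opacity lost per layer of distance
        self.cursor_depth = 0             # current depth-cursor position

    def opacity(self, window):
        # Windows fade the further they are (in depth) from the cursor,
        # but never vanish entirely, so their contents stay glimpseable.
        distance = abs(window.depth - self.cursor_depth)
        return max(0.1, self.base_opacity - self.falloff * distance)

    def focused(self):
        # The depth cursor, not a click-to-front action, selects the
        # window that would receive editing input.
        return min(self.windows, key=lambda w: abs(w.depth - self.cursor_depth))

desk = LayeredDesktop([Window("notes", 0), Window("report", 1), Window("mail", 2)])
desk.cursor_depth = 1  # move the cursor one layer down: `report' gains focus
```

Moving the cursor deeper shifts focus and re-weights the layers’ opacities without any window being brought to the fore.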

In the second case, windows might not be layered at all, but might reconfigure their size according to what else was on the screen at the time. So when one window was open, it might use all of the available screen space, but when a second one was opened, the first might shrink down to a thumbnail, or other user-predetermined size. Windows could shrink or expand by preset amounts based on `chunks’ of information contained in them (for example, a directory might hide one column of information – e.g. file size, date, label etc. – each time it was shrunk), as opposed to the current method whereby the user has almost infinite control over window size. Individual windows could be given characteristics preventing them from shrinking below a certain size, or making them close altogether once a certain number of other windows were on the screen, or they might be made to open simultaneously when a particular other window was opened, and so on.
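Such `chunked’ shrinking might be sketched as follows (again a hypothetical Python illustration; the column names, chunk sizes and minimum are my own assumptions): each newly opened window costs every existing window one chunk of information, down to a per-window minimum.

```python
# Hypothetical sketch of intelligently self-reconfiguring windows:
# opening a new window shrinks the existing ones by one `chunk'
# (here, a directory column), never below a per-window minimum.

COLUMNS = ["name", "size", "date", "label"]  # chunks, dropped right-to-left

class DirectoryWindow:
    def __init__(self, title, min_columns=1):
        self.title = title
        self.visible = len(COLUMNS)      # columns currently shown
        self.min_columns = min_columns   # characteristic: never shrink below this

    def shrink(self):
        self.visible = max(self.min_columns, self.visible - 1)

    def columns(self):
        return COLUMNS[:self.visible]

class Screen:
    def __init__(self):
        self.windows = []

    def open(self, window):
        # Opening a new window costs every existing window one chunk.
        for w in self.windows:
            w.shrink()
        self.windows.append(window)

screen = Screen()
screen.open(DirectoryWindow("projects"))
screen.open(DirectoryWindow("letters"))
screen.open(DirectoryWindow("accounts"))
```

After the third window opens, the first shows only its name and size columns, the second has lost its label column, and the newest is at full size.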

Among other ideas being researched are `history enriched objects’ (Hill and Hollan, 1994), whereby an object’s interaction history is displayed with it – for example, scroll bars might show the most frequently read or edited parts of a document – and the use of colour in icons to indicate the age of files (Salomon, 1990). Alan Kay (1990) has suggested that active containers, which he calls bins, be used instead of folders – these would be constantly trying to capture icons which are relevant to them.
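The history-enriched scroll bar could be sketched like this (a hypothetical Python illustration; the segmentation scheme and normalisation are my own assumptions, not Hill and Hollan’s): edits are logged by line, and each segment of the bar is shaded relative to the busiest segment.

```python
# Hypothetical sketch of a `history enriched' scroll bar: logged edit
# positions are mapped to bar segments, whose intensities show how
# heavily each region of the document has been worked on.

def scrollbar_heat(edit_lines, total_lines, segments=4):
    """Return one 0.0-1.0 intensity per scroll-bar segment."""
    counts = [0] * segments
    for line in edit_lines:
        seg = min(segments - 1, line * segments // total_lines)
        counts[seg] += 1
    busiest = max(counts) or 1  # avoid dividing by zero on an untouched file
    return [c / busiest for c in counts]

# Edits clustered near the top of a 100-line document:
heat = scrollbar_heat([3, 5, 5, 12, 40, 90], total_lines=100)
```

The top quarter of the bar would be drawn at full intensity, the untouched third quarter left blank, giving the reader an at-a-glance history of where the work has happened.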

A further example of how we might go beyond the metaphor is suggested in part of the BT FCH project. Here, we have transparently layered `windows’ which look as though they might be embossed paper, yet have indents containing objects, which would not be possible on real paper. Although when designing this, we started by thinking of metaphors, in particular of an old pane of glass, the final result did not, nor was it intended to, look like any real-world counterpart. There was no intentional metaphor. The window (we probably should not call it a window, but the word has become so much a part of desktop terminology that we no longer associate it with real-world windows – see more on this below) was simply a space on the screen; it was not necessary to present the user with an overt metaphor. Certainly, this may not assist the user in developing a mental model of the underlying system, but neither does it hinder this by presenting the illusion of being like something in the real world with which the user is hopefully already familiar. Perhaps we overestimate the need to make interfaces metaphorical – perhaps it is enough that they are graphical (almost certainly an improvement on command-line interfaces) and directly manipulable.

A strictly adhered-to metaphor might be advantageous to novice users, but once it is learned, it disappears into the background, it becomes transparent (but still subject to all the issues of being analogous to the actual system raised above). “After a while they [interface metaphors] become so much a part of the system that their metaphorical links fade into insignificance.” (Preece, 1994) Now there is room for the user to build on that knowledge – which is precisely what happens when new functionality and new metaphors are added to the interface. But as more and more new metaphors are added, as the original metaphor becomes more composite, we no longer have an overall metaphor, it is “starting to drown in its own enhancements” (Kay, 1990). Yet we still try to adhere to one. Once we reach this stage, can we not let go of the metaphor altogether? This does not have to mean abandoning it and starting over, but rather we should concern ourselves with finding the best way to implement functionality and not with how it fits into the original metaphor or how a new metaphor might successfully be added. We should not be afraid to leave the metaphor behind.

This would be facilitated if we thought not so much of interface metaphor, but of interface language. As the metaphor becomes transparent, as we learn to use it, it develops into a language, rather like a programming language but at an even higher level. It no longer matters that the metaphor is inaccurate, or mixed, or composite, or has little relation to its real-world referents. We use icons, files, folders, buttons, windows and so on in the same way that we use words. And just as any other language evolves, so too does the interface; just as language develops beyond its original words, so we go beyond the original metaphor. We can leave the metaphor behind, stop concerning ourselves with trying to make everything fit into it, and concentrate instead on developing the language. Metaphors may still be used, but they would no longer be the backbone of the interface. It seems unfortunate, then, that we seem so keen to maintain the desktop metaphor in more or less its original form.

But we are still left with the problem that understanding the interface is not the same as understanding the system: the language, arising out of the metaphor, does not provide a mental model which corresponds to the actual system at any level. So how do we overcome this, especially given that there are many levels at which the system can be understood, any one of which is unlikely to be understood by everybody?

The first step might be to accept that the desktop is not the only metaphor worthy of a computer interface, and also that there is no reason why a single computer terminal should not have many separate interface metaphors (or even separate languages, in the same way that a single country may have many languages) from which the user could choose. Perhaps metaphors should be treated like desktop textures – user-definable, all different, and with a different one potentially available at any time. Of course we would still have to ensure that all the metaphors corresponded to the system, or at least some aspect of it, at some level, but this would free us from the restrictiveness of metaphor – we could use whichever metaphor is most appropriate to the user, and best suited to their needs at the time, without having to make them all fit together, or having to fit them around a central metaphor. This would be analogous to Wittgenstein’s notion of language games, where we have slightly different languages for different purposes (for example, formal, scientific language as opposed to emotive, literary language, and so on).

The important point is that if users could develop a model of the system itself, at any level, then it would not matter what sort of interface was presented to them – they would be able to adapt to it. Then, the main issue determining the nature of the interface would be one of speed, efficiency and appropriateness to the task, rather than one of the user’s conceptual understanding of it. Metaphor could, and should, be used purely as a tool to this end, rather than as the main substance of the interface.

Conclusion

A particular interface is not an inherent part of a computer system – there is a potentially huge variety of possible interface designs available to us. But at present we are attaching metaphor to the interface rather than to the system itself – we are misplacing the metaphor. It does not matter that we treat the interface as separate from the system; what is important is that, at present, the metaphor and the interface are not separate. Perhaps what should happen is that the metaphor be attached to the system, both of which may then be separate from the interface. Perhaps we need to use a verbal metaphor to describe the interface and the system, allowing us to see the similarities and differences between the system and its real-world analogue, instead of an interface metaphor which is part of the interface.

As interface designers, we must constantly remind ourselves that metaphor is only a means to an end, not the end itself. When it no longer serves its purpose, it can be dropped; when new functionality is needed, it should not be forced into the existing metaphor, but rather the metaphor should be allowed to evolve, and if it is lost along the way, then so be it. Any metaphor which is used should aim to allow users to develop a mental model of the system itself, at a level sufficient for their needs, rather than merely of the metaphor. Unfortunately, however, “much research effort has been spent on developing and evaluating methods for eliciting mental models. Less attention has been paid to prescribing how to design interfaces that will ensure the appropriate user mental model” (Preece, 1994). Broadly, a good metaphor should not be the goal of interface design but the starting point. But with the exception of a few composite metaphors, none of the above seems to be present in current, commercially available interfaces.

Bibliography

Adorno, T. and Horkheimer, M. (1944) Dialectic of Enlightenment (English trans. J. Cumming, Verso, London, 1979)

Erickson, T.D. (1990) Working with Interface Metaphors, in Laurel (1990) pp.65-73

Eysenck, M.W. and Keane, M.T. (1990) Cognitive Psychology – A Student’s Handbook, Lawrence Erlbaum Associates, Hove

Gross, R. (1992) Psychology – The Science of Mind and Behaviour, Hodder & Stoughton, London

Heidegger, M. (1935) Being and Time (English trans. J. Macquarrie and E. Robinson, Blackwell, Oxford, 1962)

Hill, W.C. and Hollan, J.D. (1994) History-Enriched Digital Objects – Prototype and Policy Issues, The Information Society, Vol. 10, pp.139-145

Kay, A. (1990) User Interface: A Personal View, in Laurel (1990) pp.191-207

Laurel, B., ed. (1990) The Art of Human-Computer Interface Design, Addison-Wesley, Reading MA

Negroponte, N. (1989) An iconoclastic view beyond the desktop metaphor, International Journal of Human-Computer Interaction, 1, pp.109-113, cited in Preece (1994)

Nelson, T.H. (1990) The Right Way To Think About Software Design, in Laurel (1990) pp.235-243

Norman, D. (1986) Cognitive Engineering, in User-Centred System Design (Norman, D. and Draper, S., eds), cited in Preece (1994)

Preece, J. (1994) Human-Computer Interaction, Addison-Wesley, Wokingham

Salomon, G. (1990) New Uses for Color, in Laurel (1990) pp.269-278

Swigart, R. (1990) A Writer’s Desktop, in Laurel (1990) pp.135-141

Weiser, M. (1991) The computer for the 21st Century, Scientific American, September, pp.66-75

Winograd, T. and Flores, F. (1986) Understanding Computers and Cognition, Addison-Wesley, Reading MA