Visual Communication Research (1)

Field of Research/Key Research Questions

It is essential that technical communicators be literate designers of visual information: the more they know, the better the results of their collaboration with graphic artists will be. The principles governing visual communication evolved from those used for printed information. [1] Owing to the ever-increasing preference for online media, information professionals including graphic designers, multimedia authors, and technical communicators are endeavouring to expand and improve on existing, print-oriented design guidelines to accommodate and exploit the idiosyncrasies and benefits of online delivery.

Seminal works on onscreen visual communication date mostly from the late 1980s to the mid-1990s, and these set the precedents endorsed to this day.[2] Much of this literature focuses on the interaction experiences of expert authors with graphical user interfaces (hereafter ‘GUI’) and other accompanying information design elements.[3]

Using case histories, the authors of the article and study analyzed herein[4] attempted to demonstrate how usability research can assist in the evaluation of visual communication quality. They then discussed their usability testing methodologies and findings, followed by the validity and shortcomings of current paradigms in visual communication.

General Methodology and Specific Methods

A case history methodology was employed to address usability testing issues relevant to three areas of visual communication central to UI design: information access, navigation, and icon recognition. Two of the case histories compared UIs; the others concentrated on procedure following, icon recognition, and packaging aesthetics. Both qualitative and quantitative data were gathered, and tapered questioning was applied to obtain information on particulars.

Each case history was followed by in-depth discussion of lessons learned on matters relating both to usability testing methodologies and visual communication guidelines.[5]

In case history #1, systems administrators manipulated a GUI to perform a double installation procedure. The test was designed to identify problems with visual aids for navigation and with the layout of information items. Testers prompted subjects when necessary: of the 18 subjects, 11 required explanations, and only one completed the procedure unprompted.

Case history #2 was a comparative usability test of home page designs for three products. This test examined usage of text coloring for orientation (highlighting indicating present position, etc.). Testers probed subjects with questions such as ‘Where are you now?’ and ‘How can you tell?’, and drew subjects’ attention to specific items using questions such as ‘What does the green background tell you?’ Here too, prompting occurred only when necessary. Of the 12 subjects, five discovered key navigational items unassisted and four distinguished between different forms of button bar.

Case history #3 compared two draft designs for Web pages, each featuring three ‘layers’ of information. In one design, the layers were stacked vertically and tabbed for identification; in the other, they were diagonally offset. Subjects accessed and manipulated the information layers to view their content and then selected other information items from menus. As subjects moved deeper into the site, both designs offered product information via menus and submenus. Selected items changed color, but only three of the eight subjects recognized this. Subjects were asked which UI they preferred and why; their reasons informed the researchers on visual navigation issues.

Case history #4 involved 40 participants viewing and submitting opinions on 81 icon designs (27 icons, each in three forms). Subjects were tested through interactive slideshows over four sessions, ten subjects per session using individual workstations while the usability team walked among them to observe behaviour. Each subject saw a randomly selected icon and keyed in an opinion of its meaning, then selected for each icon a definition from a list of five alternatives (including ‘other {fill in}’). The slideshow had no backtracking capability, so subjects could not modify their initial freeform input. The random selection of icons counterbalanced learning-curve effects and helped sustain subject cooperation. Freeform responses were evaluated as ‘right’, ‘partly right’, or ‘wrong’. The test generated quantitative data for identifying high-recognition icons: for an icon to rank as ‘high recognition’, all answers (freeform and multiple-choice, from novice, intermediate, and advanced subjects) had to attain pre-specified recognition levels. Subjects’ freeform answers were then used to derive definitions for icons that did not rank as high recognition.
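The scoring logic of this icon test can be sketched in code. The following Python sketch is illustrative only: the threshold values, function names, and data shapes are assumptions, since the article does not publish its exact recognition criteria.

```python
import random

# Hypothetical thresholds; the article specifies only that answers had to
# attain "pre-specified recognition levels", not the actual figures.
FREEFORM_THRESHOLD = 0.60
MULTIPLE_CHOICE_THRESHOLD = 0.80

def presentation_order(icon_ids, seed=None):
    """Return a randomized presentation order, counterbalancing learning effects."""
    order = list(icon_ids)
    random.Random(seed).shuffle(order)
    return order

def recognition_rate(responses):
    """Score freeform answers: 'right' = 1, 'partly right' = 0.5, 'wrong' = 0."""
    scores = {'right': 1.0, 'partly right': 0.5, 'wrong': 0.0}
    return sum(scores[r] for r in responses) / len(responses)

def is_high_recognition(freeform_by_level, choice_rate_by_level):
    """An icon ranks as 'high recognition' only if every subject level
    (novice, intermediate, advanced) meets both thresholds."""
    return all(
        recognition_rate(freeform_by_level[level]) >= FREEFORM_THRESHOLD
        and choice_rate_by_level[level] >= MULTIPLE_CHOICE_THRESHOLD
        for level in freeform_by_level
    )
```

For example, an icon whose novice subjects answered (‘right’, ‘partly right’, ‘wrong’, ‘right’) would score 0.625 on the freeform criterion, passing a 0.60 threshold but failing a stricter one.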

For case history #5, the UI was extended to the outer packaging of a software product. Thirteen subjects participated, selected to represent three major characteristics: previous product experience, size of organization, and self-classification as either developers or systems administrators (the latter group representing highly technical users). Subjects were questioned on their opinions of the packaging visuals.

(Anticipating the reluctance of technical users to express opinions on aesthetic matters, the testers’ scripts included probing questions.)

Contribution to Visual Communication Research

Although usability studies of GUIs and Web sites generate findings of value to visual communication, few address issues of specific, critical relevance to the field, leaving an empirical and theoretical void. The authors of this study intended to address the dearth of visual communication-centred usability research and to demonstrate how such research can contribute to effective evaluation of visual communication quality.

The case histories provided subject preference data and information about both usability testing methodologies and the efficacy of existing visual communication guidelines. The authors recommended their pre-task questioning method for its ability to record subjects’ expectations, which can be both empirically informative and subsequently applicable as a reference baseline.

The findings of the case histories stand to assist in the development of guidelines for the creation of icons.[6] Describing a methodology for testing the recognition of icon designs will help researchers and practising technical communicators better align their skills with those of graphic designers in validating and prioritizing specific icon-design techniques. Moreover, adopting effective usability testing as general practice yields the secondary benefit of performance and preference data relating to product use.

Finally, by raising debate on issues beyond visual complexity and figure–meaning association (some simpler visual images were recognized only poorly), these studies make a case for exploring tangential theoretical matters.


[1] And this legacy was very visible in website design when this article was written.

[2] The influence of such works was much stronger when this article was written than it is today, ten years on.

[3] Although following the experience of experts is a valid method of acquiring new techniques, it lacks the support of user data.

[4] ROSENBAUM, S. and BUGENTAL, J. Measuring the Success of Visual Communication in User Interfaces. Journal of the Society for Technical Communication, 45 (4), pp. 517-528.

[5] The authors also discussed associated issues such as visual clutter.

[6] The test results support the findings of researchers such as Byrne (1993) who claimed that simpler icons are the most recognizable.
