Cognitive efficiency: toward a revised theory of media. Those working in educational technology and related fields generally agree with Clark (1983 and 1994b) when he posits that all media are equally capable of transmitting knowledge, and that the choice of one medium over another depends more on cost and delivery efficiency than on the cognitive and learning styles of the system's users. Yet if cognitive efficiency is a factor in the calculation of a system's overall efficiency, then it can be maintained that the choice of a medium does depend on users' cognitive and learning styles, and that this choice is made more judiciously following an analysis of human cognitive processes. Such an analysis would allow studies of media and studies of learning to be brought into a new relationship, without contradicting Clark's argument. It would also allow new research on surface information codes to be integrated into current research on media.
Educational technology as a field now seems in a mood to move beyond this debate: to acknowledge that media are here to stay in any case, and either drop the learning question without resolving it or lose it in a soft-focus vision without separable causes or effects--one in which media and other variables "interact synergistically," or words to that effect. However, the issue can be resolved in a more principled manner with one minor adjustment to Clark's position. If a recurring concept in his discourse, "efficiency," is expanded to include "cognitive efficiency," then media choices become connected with learning, in some circumstances. Such an expansion is motivated by recent developments in cognitive research, as will be shown in the second part of this paper.
When a debate can neither die nor be resolved, there is a case for suspecting the apparent debating point is not the real one. I argue here that the real resistance to Clark's position is to his sub-text, always implied and occasionally stated, that media development is technician-level work, unconnected with the interesting questions about learning. This idea has been intuitively resisted by media developers, who think that at least some of their design decisions, even when only about efficient ways to deliver instruction, can benefit from an understanding of how people think and learn. It will be argued that this intuition is correct.
That sub-text surfaces, for example, in Clark's advice to newcomers to the field:

the most essential challenge to young researchers ... is to go beyond the media enthusiasms that brought many of us to this field. Our ambitions far exceed the narrow efficiency questions that are available in the media area. (1984, p. 241)

The "narrow efficiency questions" are the media choices that remain after accepting that any medium can in principle deliver any instruction, for example between a computer tutorial and a stand-up teacher as the more practical way to implement a branching mode of instruction in a particular setting, once branching has already been chosen on the basis of method.
Out in the field, Clark's work has had two sorts of influence. One has been salutary: with the rapid expansion of communications technology, there is a need for educators to be skeptical of inflated media claims; to notice when expensive media are promoted where cheap would do; to center instructional designs on the learner rather than the medium; to track learning effect to instructional cause at the lowest level of analysis possible (medium attribute rather than medium per se, method rather than medium, message rather than method). On these points, educational technology has grown up under Clark's strong discipline.
But Clark's writings may have had another sort of influence too. By downgrading the importance of instructional media so thoroughly and apparently so irrefutably, and making outcomes-based media research seem impossible to do correctly, they probably helped widen the divide between learning research and media development unnecessarily. There is little doubt that such a divide exists. It is a common observation that the ongoing development of novel media, particularly involving computers, is proceeding on a commercial basis without much input from learning research (e.g., Dick, 1991). The reasons for this are no doubt many, but it is easy to imagine that several rounds of advice in the past decade from a senior figure in educational technology would have discouraged more than a few doctoral candidates from pursuing a career in learning and media. With such a divide in place, the logic of Clark's analysis becomes circular and self-fulfilling: media do not affect learning, so few learning specialists are attracted to media work, and then instructional media are produced commercially without input from learning research, and are indeed largely equivalent to each other and peripheral to learning.
In the particular field of media research and development that I know about, computer-assisted language learning (CALL), practical effects of media-equivalence theory are in evidence. Publications in this area cite Clark's views regularly. For example, the following by Chapelle (1996) summarizes the brief history of CALL research, placing Clark at a major turning point:
Influenced by research in educational technology, early CALL researchers typically attempted to demonstrate CALL's effectiveness by using quasi-experimental research designs; this research typically compared cognitive and affective outcomes of learners who participated in computer-based instruction with those who participated in regular classrooms. [However,] the research design focusing on outcomes [has been] criticized by educational-technology researchers such as Clark (1985; 1994) who for over a decade has argued that the concept of investigating "computer effects" is conceptually flawed (p. 138).

In the wake of Clark's major papers, leading CALL theorists like Jamieson and Chapelle (1988) and Dunkel (1991) counseled researchers to move beyond the quasi-experimental, outcomes-and-comparisons model and seek new paradigms to explore roles for computers in language learning.
Conrad (1996) looks at where the quest for a post-comparative paradigm in CALL has led. Since the 1980s, he observes, there has indeed been a "shift in research focus away from the traditional single-focus media comparison." Away from that, but toward what? In a random sample of six language-acquisition journals between 1992 and 1995, Conrad found that 6.2 percent of the articles had a CALL focus, but less than 20 percent of them (1.2 percent of the total) involved an empirical study. In other words, CALL software was described and discussed in five times more articles than it was tested: "publications focus on the presentation of software or guidelines for implementation without having put any of the implied pedagogical assumptions to an empirical test" (p. 172). For many researchers, giving up on empirical comparisons apparently meant giving up on empirical research altogether. Behind this, I am suggesting, is an uncritical acceptance outside our field that educational technologists have unequivocally established that "there are no learning benefits to be gained from employing different media in instruction," and that outcomes-based media research is full of unresolved conceptual problems and so best avoided.
Whether or not similar patterns can be found in other subject areas involving media, in educational technology itself many seem to have felt some downside to Clark's position, judging from the number who have attempted to pick holes in it since 1983. However, it is not clear that Clark's critics have really come up with the "novel theory of media" he called for.
One early attempt came from Petkovich and Tennyson (1984), who pointed to media attributes, such as a computer display of dynamic skeletal cues for learning to land an airplane, as unique contributors to learning. Clark's (1984) reply:

What I find curious about this criticism is that pilots learned to land planes before there were computer displays of dynamic skeletal cues. In fact, blind pilots have successfully landed planes. My point is that these media attributes must be unique contributors to learning if they are to be considered necessary for learning to take place.

In other words, Petkovich and Tennyson have missed the point. With the "unique" stipulation, clearly no particular medium is or ever will be "necessary" for any particular learning to take place, and Clark's version of media is unassailable.
Another line of rebuttal has been to look at Clark's methodology, or rather the methodology of the original comparison studies on which Clark's meta-criticisms are based. Romiszowski (1988) attributes the no-difference findings in the 1970s studies to an external-internal validity problem at the heart of the comparisons paradigm. In a comparison study it is necessary to compare comparables, while any normal media selection process would focus on differences:
Naturally you compare two media on a topic where both have a reasonable chance of success. No one would set up a comparison of a printed book and a tape recording for a course on bird-song recognition: one medium is obviously inappropriate. So you choose an experimental topic which does not seem to favour either medium particularly, and are then surprised when no significant differences are found in the experimental results (p. 60).

This is clearly correct; a comparison between a printed book and a tape recording for learning bird-song recognition would probably yield a significant difference.
Even so, Clark's main point is not threatened. Some amount of learning can take place with the printed book, maybe more in the case of a bird song expert, so the tape recording could not be described as the unique medium for the job. The main choices are ultimately about method, for example whether to impart the information through definition and description (in a book) or exemplification (in a recording). Following that, the media choices are only about efficiency and cost of delivery.
Other methodology rebuttals come from Kozma (1991; 1994) and Ullmer (1994). Kozma argues that in the years since Clark's work, both media and theories of learning and instruction have changed so much that new research methodologies are required to describe and guide them. Clark's analysis depends on a non-interactive model of learning (a load of instruction is packaged for delivery to a learner, who passively receives it, etc.), while in the present era of constructivism and distributed cognition, learning has been redefined as a highly interactive set of events shared between a learner and various human/non-human agents, tools, and media in differing proportions, dynamics, and synergies through time. In this scenario, isolated variables like method, medium, or even learner make little sense.
The appropriate research model for this new learning, Kozma argues, is Salomon's (1991) "systemic" model, in which quantitative and qualitative approaches, both incomplete in themselves, are integrated. However, Kozma's main interest is clearly in the qualitative part of the integrated model. The quantitative part receives little mention, apart from a vague plan to enlist "smallest space analysis" (1994, p. 15) to chart the course of highly interactive learning episodes, but with no details about the exact hypotheses to be tested. In the meantime, Kozma's examples of outstanding new media benefits are just the familiar old benefits that confuse medium and method, as Clark (1994a, 1994b) tirelessly points out.
Ullmer (1994) joins Kozma in proposing that media researchers adopt Salomon's model, but unlike Kozma makes it clear he sees little likelihood that the quantitative and qualitative halves can ever be integrated. So media researchers should resign themselves to looking for "two kinds of truth."
First, the logic: Clark argues that in any media choice, it is instructional method that is the active causal variable. Methods are necessary and unique, but media are not, because any number of media can realize an instructional method. But as Shrock (1994) has noticed, this argument can be turned around. What instructional method is unique or necessary? All teachers know that any content can be delivered by a variety of instructional methods, and indeed perform informal method-comparison research on an hourly basis. For instance, new concepts can be learned through definitions or examples, explicitly or incidentally, and so on--learning theory has provided no final settlement on this or any other method question. The choice depends on what is known about past methods in relation to what is known about present learners. Methods may be more unique, so to speak, than media, in the sense of less numerous, but they are far from absolutely unique. And where there is no clear difference between methods, or it is impractical to determine what the difference is, then the basis of method decisions is cost and efficiency, just as it is for media decisions.
It is not clear that Clark would object to the analysis so far, because it still gives priority to methods over media. Or he might, since it reduces method decisions to cost and efficiency, the yardstick reserved for media. Actually, the latter is more likely, because throughout Clark's writings on media, cost and efficiency are presented as lower-order concerns with little bearing on learning. For example, here is more advice for newcomers to educational technology (1984, p. 240):
graduate students who are enthusiastic about media should limit their research questions to delivery issues (e.g. cost, efficiency, equity, and access). While I personally think that issues in that area are less engaging than those connected with learning, delivery is a crucial aspect of any instructional technology. [Emphasis added.]

By implication, media efficiency research, though officially "crucial," is not "connected with learning." Nor has Clark weakened this position in the meantime, but rather the opposite: "there is no cognitive learning theory that I have encountered where media, media attributes or any symbol system are included as variables that are related to learning" (1994b, p. 7).
If efficiency is not related to learning, does this mean that learning bird songs with a tape recording might be easier and faster than learning them with a book, but this would have "nothing to do with learning"? Even if the cognitive effort were 50 or 100 times greater with the book? The idea flies in the face of common sense. However, as already discussed, book vs recording is really a method difference, description vs exemplification, and the superiority of the recording is really the superiority of exemplification in this particular case.
But other method-medium configurations for the scenario can be imagined. Suppose the book contained not text, but the song encoded as written music. Then, the choice between the book and the recording is not between two methods but between two media, since in either case the method is exemplification. Further, the choice between them can clearly be made only with reference to variables "connected with learning," specifically with learners' prior skills and knowledge--ability to read music, and prior knowledge of bird songs. For most learners, the recording would be the obvious choice; for learners who could read music and knew much about bird songs, the book might be more efficient. Such indeed is little more than common sense--the common sense we have somehow lost sight of by confining ourselves to the terms of Clark's argument.
Undeniably, learning from either medium is logically possible--the beginner could learn to read music, etc--so it is simply efficiency rather than cognition per se that makes up the difference. But surely "efficiency" has been framed more narrowly than it needs to be, and could be usefully broadened to include a space for "cognitive efficiency," as distinct from the economic or logistic kind. Such a conceptual broadening would re-admit to the discussion many important learning features of otherwise equivalent media, such as one medium being more or less effortful than another, more or less likely to succeed with a particular learner, or interacting more or less usefully with a particular prior-knowledge set.
Cognitive efficiency would have varying degrees of relevance to media decisions. Take the role of color (which can be considered a medium when representing arbitrarily related information) in learning to distinguish two objects: if the task is learning to recognize your new car in a crowded parking lot, then the importance of color is not very great, since the learning will depend not on color alone but also on shape and size, and will normally take place under conditions of high error tolerance. But if one were charged with designing the world's first traffic lights, the role of color in learning might be more important.
Logically, there is no doubt that drivers could learn to associate stop-wait-go with any three colors, or even shades of colors--say, three shades of blue. But for the first few months with three shades of blue, the accident rate would be steeper than the learning curve. Given the rods and cones of the human visual system, color associations are learned faster initially and accessed faster ever after if the colors are distinct (red and green are "processed independently" in the neural system, according to Marr, 1982, p. 258). Compared to blue in three shades, red, yellow, and green are a cognitively efficient learning medium, leading to faster learning, fewer errors, and in this case fewer injuries and fatalities.
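The point about distinctness can be made concrete with a toy calculation. In the sketch below (Python throughout these sketches, chosen purely for exposition), RGB triples and Euclidean distance stand in crudely for perceptual difference; a perceptual space such as CIELAB would be a better proxy, but the contrast survives the simplification:

```python
import math

def min_pairwise(colors):
    """Smallest pairwise Euclidean gap among a set of RGB triples."""
    vals = list(colors.values())
    return min(math.dist(a, b) for i, a in enumerate(vals) for b in vals[i + 1:])

# Hypothetical RGB stand-ins for the two candidate signal sets.
traffic = {"red": (255, 0, 0), "yellow": (255, 255, 0), "green": (0, 128, 0)}
blues = {"dark": (0, 0, 139), "medium": (0, 0, 205), "light": (0, 0, 255)}

print(round(min_pairwise(traffic)))  # ~255: the signals sit far apart
print(round(min_pairwise(blues)))    # ~50: the signals crowd together
```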
Not that the historical choice of red, yellow, and green was necessarily the uniquely best choice (color-blind people confuse red and green), just a better choice than some others. After all, the "unique" stipulation, so long a red flag to media specialists, may actually have been more of a red herring. Uniqueness and necessity play relatively minor roles in social science research, especially education. Take instructional method: instructional designers choose between methods all the time without being sure they have found the uniquely best one for a given task, just the best from among the alternatives they can think of in the time available. So if method takes precedence over medium, then why should medium be held to a stricter standard than method? There may be no unique medium for any job, but this does not mean that one medium is not better than another, or that determining which is better is not an empirical question.
Efficiency-based empirical media research would tackle questions of this kind: which of two otherwise equivalent media leads to faster, easier, or more successful learning, and for learners with which prior skills and knowledge--the recording or the musical score for learning bird songs, for example.
Clark's recent writings suggest that he himself sees the need for some changes in emphasis. Consider the evolution between 1983 and 1994 in the analogies (media) he uses to deliver the idea of media equivalence. In 1983, the idea was represented by the equivalent function of the trucks that deliver food to a market; in 1994 by the equivalent forms that a medicine might take--"tablets, liquid suspension, suppositories, or injections" (1994a, p. 26)--and yet remain the same medicine. The ingestion theme has been maintained, but medicine replaces food as the ingested substance. The pharmaceutical analogy seems intended to communicate the notion of equivalence more clearly, or perhaps more freshly (given the mileage on the truck), but in fact it introduces several uncontrolled novelties.
The new analogy includes the consumer-patient-learner, who was formerly outside the picture. It focuses on the point of consumption, rather than a remote point in the chain of delivery. It raises the issue that wrong medicine is normally more serious than wrong food. Most interestingly, it elevates the media specialist from truck driver to physician.
A truck driver might accept that it hardly matters which truck a food is delivered in (to a market), but no physician will accept that it hardly matters which form a medicine is delivered in (to a body). Tablet and suspension, suppository and injection--each has a different way of getting into a body, and interacts differently with different types and conditions of bodies. Knowing about this is a large part of a physician's expertise. True, the medicine is the same whatever the delivery, and efficiency is the only consideration in choosing--but an efficiency that can mean wellness or illness, life or death. In other words, Clark's images of equivalence are far from equivalent.
There is no need to decide whether the new analogy or the old one is correct, or even which is better. Each captures a possible relationship between learning and medium: sometimes media choices are as remote from learning as trucks are from consumers, sometimes as intimate with it as forms of medicine are to the ill and wounded. A comprehensive approach to media will acknowledge the conditional appropriateness of both images. What is needed is an expansion rather than a replacement.
"Message" in education is roughly "medicine" in the health sciences. While we educators debilitate ourselves worrying about how to separate method and message from medium, medical theorists accept that a medicine must enter a body through some means of delivery and that there is no neutral delivery that does not interact with the body to some degree. Medical research proceeds in the face of this problem, mainly by building up a taxonomy of interaction effects, because its brief is to cure the ill, not to close the hospital until clean variables are available. This is in the nature of an applied science.
Nor does medical research retreat from empiricism as a response to indeterminacy, as is typically proposed in education. In medicine, the efficiencies of candidate delivery systems are compared empirically with regard to outcome, in full knowledge that unique or final causes may not be forthcoming, immediately or ever. All social science research proceeds in the face of an ultimate unknowable, the relative contributions of nature and nurture in human affairs, but still finds ways to proceed on a mainly empirical basis. It is hard now to imagine why education should have been modeled on a philosophical program (positivism?) or a basic science (physics?) rather than an applied science (medicine).
Of course, trucks-to-market theory was not an application of positivist or physics principles, but rather of some research in early cognitive psychology which seemed at the time to provide a firm, scientific, post-behaviorist basis for instructional theory and practice, including a theory of instructional media. For a representative listing of this research, see the reference list to Clark and Salomon (1986). The remainder of this paper will contextualize trucks-to-market theory in this early cognitive research, and then will look at more recent cognitive research to see how the theory could evolve.
To summarize, I am proposing to include cognitive efficiency as a variable in media studies which provisionally links media to learning. The rationale for this inclusion is that while different media may not create different cognitive products, such as concepts, schemas, and mental models (frankly, the jury is still out on this question), they clearly do create different cognitive processes at different levels of efficiency (with regard to speed, ease, and effectiveness). In other words, the form in which information is presented can determine how it is processed in a mind, and hence how it can be learned.
ASSUMPTION 1: SYMBOL SYSTEMS ARE NON-CAUSAL
The importance of the stimulus in behaviorist learning theory is well known. Since an instructional medium can be seen as a collection of stimuli organized for maximal associative learning, it was probably inevitable that the cognitive attack on stimulus-response theory would entail a diminished status for instructional media in educational theory. One of Richard Clark's early projects was to break the link between educational technology and behaviorism-based audiovisualism (see, for example, Clark & Snow, 1975), and much of his position on media and its wide acceptance follows from this.
The behaviorists, however, were not the only students of human affairs to believe in a causal role for various kinds of stimuli in learning. Theorists in literature and art history had long held that information codes such as painting, music, or particular forms of literacy played causal roles in human cognition. For example, McLuhan (1962) believed that the Greeks, by adding vowels to the consonantal script of the Phoenicians, laid the very basis for Western civilization: "it is by the alphabet alone that men have detribalized or individualized themselves into civilization" (p. 63).
The clearest and most influential statement of the causal hypothesis came from Whorf (1956), who argued that different symbol systems (in this case languages) create different concepts, and indeed different mental universes:
We dissect nature in lines laid down by our native languages. The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds--and this means largely by the linguistic systems in our minds (cited in Pinker, 1994, p. 59).

However, no empirical evidence was adduced for these exciting ideas by either Whorf or McLuhan, and when tested empirically by early cognitive researchers they crumbled.
Take literacy effects, such as McLuhan's vowelled and unvowelled scripts: In a large, fine-grained study in West Africa, Scribner and Cole (1981) tested empirically the age-old assumption that certain kinds of literacies either caused or enabled certain kinds of cognition. After comparing illiterates and literates in two types of writing systems on a large number of cognitive measures, these researchers were forced to conclude that there are no cognitive effects of literacy per se--of knowing a particular script, or indeed of knowing any script.
Or take Whorf's linguistic determinism: Rosch (1978) put this idea to the test in New Guinea, using cross-cultural color conceptualization as her laboratory. Color was chosen because the spectrum is a continuum of wavelengths, with no non-arbitrary dividers between red, orange, yellow, green, and so on--roughly the "kaleidoscope" mentioned above--so following Whorf's reasoning, different naming systems should create different color concepts. However, when Rosch tested several speakers of a language in Papua New Guinea who had no words for color except "dark" and "light," she found them nonetheless able to learn color words easily. Furthermore, they were able to learn, name, and remember focal colors (as defined in seven-color languages like English) more easily than off-colors (like puce or chartreuse). Color may be a continuum in physics, but in human physiology the rods and cones of the visual system pick out focal bands for emphasis, whether or not the bands happen to need naming within a particular evolutionary niche. These subjects' cognitive systems, in other words, were not limited to the coding system they happened to be using: surface coding systems are only tangentially related to underlying cognitive systems.
And if symbolic media do not cause thoughts, do they have any role in learning? Clark and Salomon (1986, p. 470) spelled out the instructional meaning of the early cognitive message: "the particular surface-symbolic appearance of a message may be relatively less consequential in learning, as it is going to be handled propositionally anyway during deeper processing". In other words, learning happens at a propositional or abstract level, and it makes little difference by which route the message arrives there, so long as it does somehow (trucks to market).
The prototype of the idea that surface form is irrelevant to learning is Chomsky's (1975) theory of input in language acquisition. Any natural language input, however "degraded," is sufficient to activate a child's internal grammar; and this grammar, when fully formed, will be no different from anyone else's, and "vastly underdetermined" by the input. However, drawing educational implications from this line of research, particularly in areas outside of language acquisition, may have been premature on both theoretical and empirical grounds.
First, on theoretical grounds: maybe in the interests of doing one thing at a time, or in reaction to the behaviorists' emphasis on learning, early cognitive researchers did not normally deal with learning questions. As Glaser (1990) pointed out, the early cognitive research agenda was performance and did not entail a learning theory--even in the expert-novice paradigm where one might have been expected--much less an instructional theory. And if a learning theory, and following that an instructional theory, both logically precede a media theory, then a cognitive media theory in the early 1980s was premature.
Second, on empirical grounds: Although learning was not a priority in early cognitive research, some of it nonetheless had implications for learning. The many studies of problem solving were essentially studies of trial-and-error learning, since the task before the subjects was to solve a novel problem, i.e. learn to solve it. In several of the problem-solving studies, a clear role was indicated for surface information and its form. For example, Rumelhart (1980) had subjects solve a problem represented by one of two surface representations (media). One group was given four cards bearing either a letter or a number, for example F, 8, 7, E, and asked to indicate which cards had to be turned over to verify the truth of the statement, "If there is a vowel on one side of the card there is an odd number on the other." The correct cards were identified by only 13 percent of the subjects. Other subjects were given the same problem represented in more familiar terms: the cards were order forms from a furniture store, with statements on their visible faces to the effect that purchases over a certain amount had to be approved on the reverse side. Asked which forms had to be turned over to verify that the rule had been followed, far more subjects chose correctly.
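Since the abstract version of the problem turns entirely on the logic of the conditional, a minimal sketch can make explicit which cards matter and why; the card set is the one above, and the function name is mine:

```python
# Rumelhart's abstract card problem: "if a vowel on one side, then an odd
# number on the other." A card needs turning only if its visible face could
# combine with a hidden value to falsify the rule.

def must_turn(face: str) -> bool:
    """True if the visible face could expose a violation of vowel -> odd."""
    if face.isalpha():
        return face.upper() in "AEIOU"  # a vowel might hide an even number
    return int(face) % 2 == 0           # an even number might hide a vowel
                                        # (odd numbers can never falsify)

cards = ["F", "8", "7", "E"]
print([c for c in cards if must_turn(c)])  # -> ['8', 'E']
```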
This same point was made in many contexts, for example by psychologists working in literacy acquisition. Gleitman and Rozin (1973) studied American children with reading problems and found that for some reason they could learn to read English if it was first recoded as a logography (like Chinese) or a syllabary (like Korean) rather than a phoneme-based alphabet. Similarly, Tzeng and Hung (1981), after years of work with subjects learning radically novel scripts and displaying widely varying levels of difficulty, concluded that different writing systems impose very different learning demands.
So did scripts, symbol systems, and media play a causal role in cognition after all? Tzeng and Hung (1981) gave a carefully qualified answer:
The proposal that reading in different writing systems may entail different processes, which in turn pose different problems for the beginning reader, in a sense argues for a view of linguistic determinism. However, it differs from the renowned Whorfian hypothesis in its particular emphasis on the formation of written languages (rather than spoken language per se) and on processing differences (rather than production differences) [1981, p. 238; emphasis added.]

Tzeng and Hung were criticized in the early 1980s for being unreconstructed Whorfians, but studies of script effects in text-processing research now take their product-process distinction for granted.
Working within this framework, psycholinguists and applied linguists have studied empirically and in detail the differential processing demands of several scripts. One strand in this research has focused on the processes by which English and Arabic speakers read their native scripts, particularly with regard to lexical access (decoding word meaning). Arabic and Roman script are relatively similar (compared to Chinese), with the main difference that vowels are not normally written in Arabic, a seemingly small coding difference, but one that causes some large processing differences.
Koda (1988) showed that Arabic script facilitates meaning recovery via a mainly phonological route, English via a mainly visual route. Randall and Meara (1988) showed that Arabic readers fixate on the centers of words, English readers on a series of points over the lengths of words. Abu Rabia and Segal (1995) showed that while skilled lexical access in English is context free, in Arabic it is characterized by reliance on context. None of this research, of course, shows that Arabic and English speakers are living in different conceptual universes, just that their writing systems create handling differences, at levels that can be called "cognitive" since they involve the processing and not merely intake of information.
"Neurological" may even be the appropriate word in some cases, as shown in Sasanuma's (1975) studies of Japanese dyslexics. Japanese uses two writing systems, Kanji (Chinese characters) and Kana (sound-based syllables), and Sasanuma showed that Japanese dyslexics could lose or recover these two systems independently of one another, suggesting they are processed at different brain locations. This research led Coulmas (1989) to conclude "it is clear that the differences between writing systems are not just superficial differences of coding, but relate to neuropsychological differences concerning the storage and processing of written language units" (p. 135). Indeed, few if any researchers doing empirical work in this area any longer regard script differences as "superficial differences of coding."
In summary, it was never shown that symbol systems, stimuli, and media played no role in cognition and learning. Some early cognitive research appeared to downgrade the importance of surface information codes, but this research did not distinguish between cognitive products and processes, and in any case did not deal explicitly with learning. However, both inadequacies are now being addressed. Research in specialized areas like psycholinguistics has shown that cognitive processes are strongly affected by surface forms of information such as script configurations. And mainstream cognitive research is now explicitly dealing with learning processes (Anderson, 1995). Following these lines of development, the ground for a post-behaviorist learning theory may soon be cleared, and following that an instructional theory, and eventually even a media theory. Whatever shape these theories take, they are unlikely to cast instructional media in the role of trucks to market.
In the meantime, there is no cognitive theory of media. There are merely guidelines from cognitive research for media design and development, to be discussed below.
Such is the current enthusiasm for distributed cognition that it takes an effort to remember the idea, or at least its articulation, is relatively novel. How did cognitive research ever proceed without an architecture of distribution? According to Zhang and Norman (1994), early cognitive researchers handled distributed cognition in one of two ways; they either ignored activity that was cognitive but not individual, or else miscategorized as individual activities that were actually distributed between individuals or between individuals and symbolic media. Zhang and Norman discuss these mechanisms in the context of a classic program of early research, the Tower of Hanoi studies of problem solving (e.g. Hayes & Simon, 1977). These studies were seen at the time as dealing mainly with feats of individual cognition, but in fact their tasks incorporated uncontrolled proportions of internal and external information storage and processing.
Briefly, the Tower of Hanoi puzzle involves moving three disks of different sizes from one peg to another, from a starting configuration on the first peg to a terminal configuration on the third (say, big-medium-small to small-medium-big). The disks were moved to and fro several times to reach the target configuration. The object was to discover patterns of human problem solving within a limited, well-structured, and totally defined task for which the entire "problem space" of possible moves was known. Rules were imposed to vary task difficulty in a controlled manner, for example stipulating that only one disk could be moved at a time. Following the equivalence-of-media assumption, it was not considered that the way rules were represented would affect performance. In fact, Zhang and Norman point out, the content of these rules could be represented entirely verbally, posing a heavy memory demand, or else also represented in the environment (for example by using disks too large for more than one to be lifted conveniently), reducing the memory demand to an unspecified degree--but with no experimental distinction made.
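For readers who have not met the puzzle, a minimal sketch of its problem space may help. The recursion below is the textbook minimal solution for moving a whole stack, offered as an expository aid rather than anything from the original studies:

```python
# Tower of Hanoi: move a stack of n disks from one peg to another, one disk
# at a time, never placing a larger disk on a smaller one. The recursion
# enumerates the minimal move sequence; the "problem space" the studies
# explored is the full set of legal configurations and moves.

def hanoi(n, source, target, spare, moves):
    """Append the minimal (disk, from, to) move sequence to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the smaller disks away
    moves.append((n, source, target))           # move the largest free disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks

moves = []
hanoi(3, "peg1", "peg3", "peg2", moves)
print(len(moves))  # 7, i.e. 2**3 - 1, the minimal solution length
```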
The reason for this oversight, Zhang and Norman conclude, was that early cognitive theorists had little awareness of the nature of external representations, and indeed "no means of accommodating them" within their assumptions or methodology. External objects, if they had anything to do with cognition at all, were "at most peripheral aids" such as mnemonics (p. 88). Clark's view of instructional media is clearly compatible with this outlook.
A classic example of external representations at work is calculation with Arabic rather than Roman numerals, where much of the cognitive labor is done by the notation itself. However, it is not impossible to multiply with Roman numerals, so no unique or necessary efficiencies are claimed for Arabic. Indeed, efficiency can be measured only against an objective--usually short-term efficiency of learning vs. long-term efficiency of use. For example, simple addition in Roman notation is easy to learn, involving little more than counting natural symbols (I + II = III), while in Arabic, addition cannot even begin until numeric sets (three objects) have been recoded as arbitrary symbols ("3"), in other words until much preparatory learning has taken place.
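The "counting natural symbols" point can be shown in a few lines. The sketch below assumes a purely additive Roman-style notation (no subtractive forms like IV); within that assumption, addition really is little more than pooling symbols and regrouping:

```python
# Additive Roman-style addition: pool the symbols of both numerals, then
# regroup the total into symbols from largest to smallest. No place value
# or digit recoding is required, which is what makes it easy to learn.

VALUES = [("M", 1000), ("D", 500), ("C", 100), ("L", 50), ("X", 10), ("V", 5), ("I", 1)]
SYMBOL = dict(VALUES)

def roman_add(a: str, b: str) -> str:
    """Add two additive-notation numerals by pooling and regrouping."""
    total = sum(SYMBOL[ch] for ch in a + b)  # pool: every symbol just counts
    out = []
    for symbol, value in VALUES:             # regroup, largest value first
        count, total = divmod(total, value)
        out.append(symbol * count)
    return "".join(out)

print(roman_add("I", "II"))    # III
print(roman_add("VII", "VI"))  # XIII
```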
A clear example of a short vs. long term efficiency trade-off is Chinese vs. Roman script. Chinese characters allow faster reading than Roman script at comparable levels of literacy, because the mind processes shapes and pictures faster than it does graphemes. In other words, Chinese is more efficient than Roman because it does more of the cognitive processing. However, Chinese also involves a longer learning process before reading can begin. Learning the characters proceeds by memorization on a largely piecemeal basis over many years (Martin, 1972), while learning Roman script, after the initial difficulty of recoding sounds as letters, proceeds to maturity on a productive (Perfetti, 1985, p. 208) or auto-instructional basis (Adams, 1990, p. 38). Efficiency of eventual performance must be weighed against efficiency of learning. The classic problem in China has been that the learning process was too arduous and lengthy to be completed by more than a scholarly elite (Balmuth, 1982, p. 31), leaving the mass of the population illiterate and the script's potential efficiencies unrealized. China's periodic interest in pinyin, a romanized script, must be seen in this context--a case of "media selection" on a grand scale.
In diverse research areas, from the evolution of human cognition (Donald, 1993) to connectionist models of learning (A. Clark, 1993), a vastly expanded role is now regularly granted to the invention and use of symbolic media, external representations, and cognitive tools--the "things that make us smart" (Norman, 1993). An answer may even be in sight to Miller's (1956) ancient riddle: if working memory is confined to seven bits of information, how is complex cognition possible? The answer may lie less with in-the-head strategies (like chunking, automatization, top-down processing, forward reasoning, and skilled memory) and more in people's ability to offload or "circumvent" cognitive work (Salthouse, 1991) through the skilled invention and employment of symbolic media. Even Einstein said that the concept of relativity would never have occurred to him had he not been working with a particular notation called curved-space geometry (reported in Pagels, 1988).
However, at least one concrete step has been taken toward an empirical methodology of distributed representations. Zhang and Norman (1994) have proposed and demonstrated a methodology of "representational analysis" consisting of the identification, separation, and principled reintegration of all the internal and external representations and computations that are relevant to a particular cognitive task. Up to now, their research has looked only at well-structured toy problems like the Tower of Hanoi, but their findings are suggestive.
Their experimental design used four versions of the Hanoi puzzle, each with a carefully specified proportion of information held inside and outside of memory, in contrast to the uncontrolled proportions of the original experiments. For example, one rule was that "a disk can be placed only on another disk smaller than itself": in one experimental version, the rule was represented verbally, so that it had to be held in memory over the course of the task; in another version the rule was encoded externally, for example as a stack of full coffee cups in three sizes, such that a cup placed on a larger cup would fall in and spill coffee--creating no memory burden for subjects who already believed that spilling coffee was undesirable.
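The bookkeeping behind this design can be caricatured in code. The sketch below is my own toy operationalization, not Zhang and Norman's representational analysis: each version of a rule is tagged with a locus, and checking an "internal" rule costs a recall from working memory, while an "external" rule is enforced by the environment for free:

```python
# Toy bookkeeping for rule locus: internal rules must be recalled and
# checked (a working-memory cost per move); external rules are enforced
# by the physical setup, so no recall is counted. Larger numbers are
# larger disks; pegs are stacks (lists), top of stack last.

def legal_move(src, dst, pegs, rule_locus, recalls):
    """Check "place only on a smaller disk"; log a recall if internal."""
    if not pegs[src]:
        return False
    if rule_locus == "internal":
        recalls.append("recall size rule")  # paid from working memory
    disk = pegs[src][-1]
    return not pegs[dst] or pegs[dst][-1] < disk

# Under this rule each disk rests on a smaller one, so stacks run
# small-to-large from bottom to top.
pegs = {"A": [1, 2, 3], "B": [], "C": []}
recalls = []
print(legal_move("A", "B", pegs, "internal", recalls), len(recalls))  # True 1
print(legal_move("A", "B", pegs, "external", []))                     # True
```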
Zhang and Norman's levels of distributedness were able to predict in detail subjects' problem-solving performance: with more information processed out of working memory, tasks were easier, performance faster, and errors fewer. In other words, cognitive efficiency was greater.
It goes without saying that the most efficient medium would not necessarily be ideal for every stage of learning. The goal is to have a principled and empirical way to calculate optimal information distributions at various points in different types of learning processes, including of course terminal distributions. Airline pilots are destined always to share major parts of their cognitive work with their instruments, trapeze artists to get most of the work packed into their heads. The way forward in media design is to model learner and medium as distributed information systems, with principled, empirically determined distributions of information storage and processing over the course of learning. Zhang and Norman's experiment shows that in principle this is possible. Clearly, ways of calculating efficiencies and distributions will be needed for problem spaces far more complex and ill structured than the Tower of Hanoi puzzle. One can only hope that Zhang and Norman's methodology will be further developed and extended.
However, even a conceptual version of their methodology can shed new light on some old media conundrums. For example, it gives Petkovich and Tennyson an answer to Clark's remark that their computer program was hardly necessary if blind people could learn to land airplanes. Clark intends a comparison between two instructional media, voice vs computer, implying there is no interesting difference between them. But the comparison may be more usefully represented as two information distributions: one involves verbal learning via the medium of voice or braille, with all information held in memory, while the other involves verbal-visual learning via the computer program, with controlled amounts of information processed outside the mind and remembered on a computer screen. As in Zhang and Norman's experiment, the default prediction is that the greater the proportion of work performed in memory, the more arduous and error-prone the learning (a view apparently shared by the airline industry, which invests heavily in flight simulators, less in books or lectures).
Or, take Rumelhart's problem solvers. Subjects could decide which cards to turn over more easily when the problem was phrased in terms of a furniture store than when presented as decontextualized vowels and consonants. Normally this is attributed to the presence of a "store schema" in the former condition. The schema explanation, however, merely begs another question: what does a schema do? A distribution-of-information analysis suggests an answer, that schematized information (including for example the idea that large purchases may be subject to special controls) is to a large extent preprocessed in a consumer culture, and so imposes a low memory demand when called up for problem solving. But unfamiliar relations between decontextualized letters and numbers are fully processed in working memory with predictably poor results.
1. There is no further reason for media researchers to accept that their work has "nothing to do with learning." First, it is now generally recognized that the ability to interface with symbolic media and integrate their outputs is nearer the heart of human cognition than the periphery. Second, different representational forms of the same underlying information clearly affect how the information can be processed and learned. Therefore, the design of such forms is an activity that can be aided by an understanding of cognitive processes.
2. There is no further reason for media discussions to be limited by the idea that only unique or necessary media solutions are worth talking about. There are clearly many media for any instructional job, but this does not mean they all do it at the same level of efficiency--whether economic, logistic, social, or cognitive. It is precisely the job of the media specialist to know the range of media that can realize any instructional methodology, and to find the ones that best match all the resources of their target learners.
3. There is no further reason for media researchers to accept that the only methodology available to them is qualitative. As useful as qualitative studies may be for exploring new technologies and formulating relevant hypotheses, the hypotheses themselves should be tested empirically. At present, there is no reason why the cognitive efficiencies of otherwise equivalent media cannot be compared empirically, for example on uncontroversial measures like ease, speed, and effectiveness of learning. For the future, empirical methodologies are being developed for exploring distributed cognition that may be adaptable to the goal of modeling learners and media as distributed systems, and this is clearly a promising area for further research.
Adams, M.J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.
Anderson, J.R. (1995). Learning and memory. New York: Wiley.
Balmuth, M. (1982). The roots of phonics. New York: Teachers College Press.
Chapelle, C. (1996). CALL--English as a second language. Annual Review of Applied Linguistics, 16, 139-157.
Chi, M.T.H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R.J. Sternberg (Ed.), Advances in the psychology of human intelligence, Vol. 1. Hillsdale, NJ: Erlbaum.
Chomsky, N. (1975). Reflections on language. New York: Pantheon.
Clark, A. (1993). Symbolic invention: The missing (computational) link? Behavioral and Brain Sciences, 16, 753-754.
Clark, R.E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53 (4), 445-459.
Clark, R.E. (1984). A reply to Petkovich and Tennyson. Educational Communication and Technology Journal, 32 (4), 238-241.
Clark, R.E. (1985). Confounding in educational computing research. Journal of Educational Computing Research, 1 (2), 137-148.
Clark, R.E. (1994a). Media will never influence learning. Educational Technology Research and Development, 42 (2), 21-29.
Clark, R.E. (1994b). Media and method. Educational Technology Research and Development, 42 (3), 7-10.
Clark, R.E., & Salomon, G. (1986). Media in teaching. In M. Wittrock (Ed.), Third handbook of research on teaching (pp. 464-478). Chicago: Rand McNally.
Clark, R.E., & Snow, R.E. (1975). Alternative designs for instructional technology research. Audio-Visual Communications Review, 23 (4), 373-394.
Conrad, K.B. (1996). CALL--Non-English L2 instruction. Annual Review of Applied Linguistics, 16, 158-181.
Coulmas, F. (1989). The writing systems of the world. Oxford: Blackwell.
Dick, W. (1991). An instructional designer's view of constructivism. Educational Technology, May, 41-44.
Donald, M. (1993). Precis of Origins of the modern mind: Three stages in the evolution of culture and cognition. Behavioral and Brain Sciences, 16, 737-791.
Dunkel, P. (1991). The effectiveness research on computer-assisted instruction and computer-assisted language learning. In P. Dunkel (Ed.), Computer-assisted language learning and testing: Research issues and practice. New York: Newbury House.
Glaser, R. (1990). The reemergence of learning theory within instructional research. American Psychologist, 45 (1), 29-39.
Gleitman, L.R., & Rozin, P. (1973). Teaching reading by use of a syllabary. Reading Research Quarterly, 8, 447-483.
Hayes, J.R., & Simon, H.A. (1977). Psychological differences in problem isomorphs. In N.J. Castellan, D.B. Pisoni & G.R. Potts (Eds.), Cognitive theory. Hillsdale, NJ: Erlbaum.
Jamieson, J., & Chapelle, C. (1988). Using CALL effectively: What do we need to know about students? System, 16 (2), 151-162.
Koda, K. (1988). Cognitive processes in second-language reading: Transfer of L1 reading skills and strategies. Second Language Research, 4 (2), 133-156.
Kozma, R.B. (1991). Learning with media. Review of Educational Research, 61 (2), 179-211.
Kozma, R.B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42 (2), 7-19.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. New York: Freeman.
Martin, S.E. (1972). Nonalphabetic writing systems: Some observations. In J.F. Kavanagh, & I.G. Mattingly (Eds.), Language by eye and by ear: The relationship between speech and reading (pp. 81-102). Cambridge MA: MIT Press.
McLuhan, M. (1962). The Gutenberg galaxy. Toronto: University of Toronto Press.
Miller, G.A. (1956). The magical number seven, plus-or-minus two: Some limitations on our capacity for information processing. Psychological Review, 63, 81-97.
Norman, D.A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.
Pagels, H.R. (1988). The dreams of reason: The computer and the rise of the sciences of complexity. New York: Bantam Books.
Perfetti, C.A. (1985). Reading ability. New York: Oxford University Press.
Petkovich, M.D., & Tennyson, R.D. (1984). Clark's Learning from Media: A critique. Educational Communication and Technology Journal, 32 (4), 233-241.
Pinker, S. (1994). The language instinct: How the mind creates language. New York: William Morrow and Co.
Randall, M., & Meara, P. (1988). How Arabs read Roman letters. Reading in a Foreign Language, 4 (2), 133-145.
Romiszowski, A. (1988). The selection and use of instructional media: For improved classroom teaching and for interactive, individualized instruction. London: Kogan Page.
Rosch, E. (1978). Principles of categorization. In E. Rosch and B. Lloyd (Eds.), Cognition and categorization. Hillsdale, NJ: Erlbaum.
Rumelhart, D.E. (1980). Schemata: The building blocks of cognition. In R.J. Spiro, B.C. Bruce, & W.F. Brewer (Eds.), Theoretical issues in reading comprehension. Hillsdale, NJ: Erlbaum.
Salomon, G. (1991). Transcending the qualitative-quantitative debate: The analytic and systemic approaches to educational research. Educational Researcher, 20 (6), 10-18.
Salthouse, T.A. (1991). Expertise as the circumvention of human processing limitations. In K.A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits. New York: Cambridge University Press.
Sasanuma, S. (1975). Kana and Kanji processing in Japanese aphasics. Brain and Language, 2, 369-383.
Scribner, S. & Cole, M. (1981). The psychology of literacy. Cambridge MA: Harvard University Press.
Shrock, S. (1994). The media effects question: Read the fine print, but don't lose sight of the big picture. Educational Technology Research and Development, 42 (2), 49-53.
Tzeng, O.J.L., & Hung, D.L. (1981). Linguistic determinism: A written language perspective. In O.J.L. Tzeng & H. Singer (Eds.), Perception of print: Reading research in experimental psychology. Hillsdale, NJ: Erlbaum.
Ullmer, E. J. (1994). Media and learning: Are there two kinds of truth? Educational Technology Research and Development, 42 (1), 21-32.
Whorf, B.L. (1956). Language, thought and reality (J.B. Carroll, Ed.). Cambridge, MA: MIT Press.
Zhang, J., & Norman, D.A. (1994). Representations of distributed cognitive tasks. Cognitive Science, 18, 87-122.