Tom Cobb
Dept. de linguistique et de didactique des langues
Université du Québec à Montréal
Montreal, Canada
Comments to: cobb.tom@uqam.ca
Last revised: 2 April 2004
DRAFT
Bernard and his colleagues’ How Does Distance Education Compare to Classroom Instruction? A Meta-Analysis of the Empirical Literature is long overdue. As various versions of distance or otherwise heavily mediated education have become increasingly prominent under the familiar pressures, we educators have sidelined ourselves from its development. We have busied ourselves with isolated studies that cannot subsequently be added up (e.g., the many that could not be used in the meta-analysis), with interesting but unproductive worries about whether media cause learning (as opposed to how media can best support learning, since they will be used anyway), or with doubts about whether instructional designs involving media ought to be compared at all. This study will make it harder in the future for us to remove ourselves from the rough and tumble of “going online.”
By pulling together those comparative studies since 1985 that bear such refinements as control groups and standard deviations, and bringing them together in a fine-grained interpretive framework that is both theoretically and practically motivated, Bernard et al expose what we have not been doing in our research and by implication propose what we should be doing. Apparently we have not all been doing research that accumulates around common themes and measures, with the result that when our administrators push for distance models we have little convincing evidence to offer at the choice-points that would steer decisions toward maximum learning for end users. In methodological terms, this study is a call for us to do research from which strong and useful findings can be extracted and used to guide policy and spending. And in meta-methodological terms, it is an affirmation that comparison is a fit topic for research even where media are involved. Without comparative information, what leverage do educators have to influence how, when, and why distance education (DE) will be implemented?
There may well be lingering conceptual and definitional problems in classroom vs. distance comparisons, but we no longer have the luxury of waiting until every one of them is resolved before coming up with useful information. Institutions are in intense competition for the worldwide market of online learners, and courses will go online with or without input from those of us who make a living studying the conditions and processes of learning. To eliminate ourselves from the fray through delicacy, because there are problems and potential confounds with comparing media, is no longer a valid option. Either we, or else the business consultants hired by our administrators, will be the main players in a transformation that is already under way. In other words, there would be a case for doing quick-and-dirty comparison studies just to make sure that learning joins cost as a consideration, but fortunately we do not have to, because Bernard et al provide a model for doing proper ones.
It is timely to go beyond the taboo on comparative research. Richard Clark’s (1983; 1994) warnings about the pitfalls of this research into or involving media were appropriate at a time when new (televised, then computational) media were still poorly understood in their own terms and educators were trying to think about them in simplistic comparisons to their predecessors (in what Bernard et al call “iron horse” thinking, p. 33). A humorous instance from Mielke (1968) comes to mind, where researchers tried to find differences between a lecture delivered by a live human standing at a podium vs. sitting inside a television set. The foibles of such comparisons have been well exposed, and the problem now is that we have heard more about how comparisons can go wrong than how to do them right.
Two main problems with media comparison studies were that (1) causality was attributed to media, and (2) media effects were considered apart from the systems in which they were embedded. Our field collectively seems to have thought its way around these problems, with Bernard et al’s work both profiting from and exemplifying that progress. First, causality: in a world now permeated with computer-driven mediated experiences, few researchers are any longer interested in investigating the effects of such media per se, but rather in the attributes of different media and technologies and the potential of these to support, stimulate, encourage, or discourage learning. Bernard et al follow Smith and Dillon (1999) in focusing on attributes in their comparisons, and this seems to reflect a general maturation in our thinking on this question. Surry and Ensminger (2001) develop this line of reasoning (preferring the term intra-media variables to attributes) and stress the need to study these in relation to learner variables, in an update of the ATI (aptitude-treatment interaction) paradigm. Similarly, I have argued (Cobb, 1997) for the validity of comparing the cognitive efficiencies of different media, these being a function of technology attributes such as speed or visualization of processing, again in relation to learner characteristics such as prior knowledge. Clark argued that any differences we thought were caused by a medium would on close inspection turn out to be caused by an instructional method merely carried by the medium; this piece of clear thinking is now generally accepted, and developers typically define instructionally relevant attributes of media as method or design variables.
Second, systems integration: the other problem with media comparisons was that media effects were often investigated in isolation not only from learners but also from the wider instructional, institutional, and social systems they function within (as noted by many proponents of the systems view, e.g., Daniel, 2002). A strength of Bernard et al’s study is its investigation of the media issue within a broader distance-classroom context. Despite the various problems we still have defining DE (Keegan, 1996), there seems broad agreement that all versions of it are more heavily mediated than other instructional formats and that the development of media and of DE will henceforth be tightly interwoven. In Bernard et al’s study we have a framework for conceptualizing this relationship and (I will argue) some early indications of better and worse ways of pursuing its development.
Comparison studies of or involving media have been in official disrepute in the research journals since the 1980s, but in fact they were never off the agenda. Two educational technology graduate students, Surry and Ensminger (2001), conducted a poll in which 143 users of the trade listserv ITFORUM “surprised” them with the extent of their support for the media comparison design. In truth, there is nothing surprising about this. Educational technologists (at least outside universities) work with spenders, spenders make choices, and these choices are based on comparative information. So if learning research is to find its way into spending decisions, it will be mainly via comparative studies. The researchers in the Bernard et al team clearly have practical realities in mind when they state that
well-designed studies can suggest to administrators and policy-makers not only whether DE is a worthwhile alternative, but in which content domains, with which learners, under what pedagogical circumstances and with which mix of media the transformation of courses and programs to DE is justified. (p. 35)
The point, in other words, is to do these studies well and to set them up to collect fine-grained, usable information.
Bernard et al show us some ways of doing comparison well. Their meticulous and no doubt laborious gathering of data, their careful stating of assumptions, their incorporation of the criticisms by Clark and others, their reliance on effect size rather than statistical significance as the principal measure, and their investigation of a broad domain (media within a DE-classroom context) with variables carved into fine units (e.g., synchronous and asynchronous models of DE)—all these should make this a study that practical decisions can be based on.
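For readers less familiar with the measure, the standardized mean difference that meta-analyses of this kind typically aggregate can be sketched as follows. This is the generic textbook formulation (Cohen’s d with a pooled standard deviation), offered for illustration only; Bernard et al may apply refinements such as a small-sample correction, so it should not be read as their exact computation:

d = \frac{\bar{X}_{\mathrm{DE}} - \bar{X}_{\mathrm{class}}}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{DE}}-1)\,s_{\mathrm{DE}}^{2} + (n_{\mathrm{class}}-1)\,s_{\mathrm{class}}^{2}}{n_{\mathrm{DE}} + n_{\mathrm{class}} - 2}}

Unlike a significance test, this quantity expresses how large the DE-classroom difference is in standard-deviation units, which is what allows outcomes from studies with different samples and measures to be pooled and compared.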
Ironically, the result of all this methodological work is a rather modest and possibly predictable finding, that DE is more effective than classroom learning under some circumstances and less under others. The circumstances are, of course, the main interest, and the one that can create a difference is, less predictably, the synchronicity of a DE venture:
Even though the literature is large, it is difficult to draw firm conclusions on what works and doesn’t work in DE, except to say that the distinction between synchronous and asynchronous forms of DE does moderate effect sizes in terms of both achievement and attitudes. (p. 59)
Synchronous models are those where learners must meet at a certain time and place, say to watch a recorded lecture or participate in a video-conference; in other words, they attend a traditional classroom at one remove. Asynchronous models are those where learners really do study any time, any place, and are mainly on their own with whatever resources the DE module has provided them.
And is it synchronous or asynchronous DE that enjoys the effect-size advantage? When DE and classroom are made as similar as possible (e.g., synchronous), then classroom learning and motivation are stronger than DE. This is perhaps not surprising, since synchronous DE is just classroom instruction minus some undetermined number of its features. But when the unique opportunities of the DE model are exploited (e.g., in an asynchronous model), then DE tends to be more successful than classroom. This finding echoes that of Ehrmann (1995), but from a much larger database; Ehrmann found the correlates of successful DE to be an asynchronous dimension with opportunities for both independent and collaborative problem-solving and project work within a tool-rich environment.
This finding appears to refute the famous “NSD [no significant difference] finding” of, for example, Russell (1999). Gross or box-score meta-comparisons may yield NSD results across many studies, but, when distinctions are drawn within the data, then interesting differences are found. These differences do not attach to DE per se, of course, but to one of its attributes, synchronicity. Future studies could explore this variable further, for example testing whether there is an interaction between synchronicity and content-area characteristics or learner characteristics like self-direction. Here, then, is at least a hint of a guide or direction for resource allocators: put your money into research and development that exploits what is unique about DE, such as its potential asynchronicity. This may not seem like much, except that it is probably roughly the opposite of the direction that is being followed in most DE work at present.
Bernard et al are unwilling to speculate about where DE should go from here on the basis of their data: “So what is the best mix of media and methods for asynchronous distance education? The answer to this question has not been revealed in this meta-analysis and remains largely unknown” (p. 65). However, I will permit myself the liberty of seeing in synchronicity a basic distinction between two broad approaches to distance learning: make it as similar as possible to classroom learning, versus exploit its differences from classroom learning. The rest of this chapter will elaborate on what it might mean to exploit classroom-DE differences.
Many, maybe most, current DE ventures are already literally asynchronous, in that learners do not meet at a particular time. But this in itself does not mean that any unique opportunities of DE are being realized. The majority of enrolled distance learners pursue their learning online through one of the growing number of generic platforms like WebCT, Blackboard, eCollege, TopClass, or others, which deliver courses to learners’ computers. Several researchers have studied the nature of the instruction typically provided within these platforms, notably their basic metaphors and the nature of the learning they promote. In terms of metaphors, Jonassen (2002) finds that the majority of them imitate the classroom as closely as possible rather than exploiting their differences from it. These platforms all, in slightly different ways, orchestrate the delivery of mainly text materials to students and mainly text assignments back to professors; they keep records; and they allow certain kinds of communication and certain kinds of quizzes. In other words, they are replicated classrooms (or iron horses) whose main purpose is to let professors get their courses online with a minimum of adaptation.
The nature of the learning within a generic platform has also been investigated from a number of different perspectives by researchers including Jonassen (2002), Marra and Jonassen (2001), and Fierdyiwek (1999). Their studies are in broad agreement that these platforms embody epistemological assumptions that support one learning model and limit the development of others. What is the learning model they support? In their attempt to imitate the classroom of the past as closely as possible—seemingly the rather distant past—these platforms support a knowledge-transmission, discrete-item-learning model with a focus on retention and regurgitation of pre-constructed information and, conversely, do not support integration, construction, transfer, collaborative problem solving, alternative forms of knowledge representation, authentic forms of assessment, or distributed tools to scaffold different forms of reasoning (Jonassen, 2002, p. 75). These researchers argue that for DE to be successful, it must offer its learners problem-solving tasks based on constructivist learning principles, and they provide examples of how what they call CLEs (constructivist learning environments) can be designed and delivered over the Internet for a variety of subject areas.
Are not some DE programs doing some of what Jonassen et al propose already? Apparently, not many. In their massive review of the literature, Bernard et al find few interesting uses of media or computing. They find media of teaching in abundance (presumably referring to WebCT and its cohort) but few media of learning:
One of the most disappointing aspects of this analysis is the difficulty that has been experienced in separating the media of teaching from the media of learning (Keegan’s distinction between “distance teaching” and “distance learning”). It was Kozma and Cobb who surmised that the balance of Clark’s conclusion concerning the effects of media might shift as media more usefully empowered students to engage in deeper learning processes to achieve learning outcomes that go beyond those of retention and comprehension. However, largely because of the lack of sophistication in research design, measurement and reporting, it has been difficult to draw conclusions regarding the possibility of such a shift. (p. 64)
Presumably if few media of learning are reported then few are included in the majority of DE courses, so that the computers employed in the vast majority of them are used for little more than the delivery of text materials.
If learners are to be removed from classrooms and learning is to remain equal or better, then the learners will ideally be compensated for their removal, for example with something resembling Jonassen’s learning environments or other computer-based cognitive tools that allow meaningful engagement with a subject in the absence of, or with reduced exposure to, teachers and classmates. But if this is not often the case, then how do these learners outperform their classroom peers? For now, we can only surmise that these learners are basically left to get on with their correspondence packages when and where they like, as opposed to having to get to an off-campus learning site or similar, and that they do well enough with that (no doubt a reflection of the independent-mindedness that would draw them to DE in the first place). Presumably, though, these learners could do much better with materials that were specifically designed for their situation.
Many in DE research have little confidence that learning tools will come from learning specialists, at least not from those who work in education departments. For example, Vrasidas and Glass (2002, p. 37) go so far as to argue that, “Not since Samuel Pressley in the 1920s or Skinner’s teaching machine of the 1950s … has a technological innovation been designed by educationists or psychologists” (with the exception of Papert’s Logo). If this analysis is even half correct, then the default supplier of learning software is the for-profit software company.
What is wrong with for-profit learning tools? Quality learning software has occasionally been produced by companies, sometimes with academic participation and often for free distribution. For example, Intel offers a collection of ideas for putting their and Microsoft’s products to work as cognitive tools in the classroom (in their Teach to the Future program). However, there is a basic problem with marketing innovative learning products to inexpert consumers, which is that the innovation must be made to resemble its predecessor as much as possible in order to seem familiar and support a confident purchase. In other words, it must be an iron horse innovation. Commercial learning tools are mainly versions of classroom activities but without the benefits of the classroom, its complex brew of sticks and carrots reduced to beeps and pop-ups. It is therefore predictable that, when tested (though this is rare), such tools often show poorer learning results than classrooms, as in the celebrated case of the Reader Rabbit computer-based reading skills tutorial (Oppenheimer, 1997).
So where will a timely supply of interesting learning tools for DE learners come from, tools that do not simply replicate classroom learning badly? Fortunately, Vrasidas and Glass’s comments about not expecting much from educators are somewhat exaggerated. Many researcher-developers working in or on the fringes of education, in the enterprise known as intelligent tutoring, produced a wide range of interesting applications over the 1980s (at a time when mainstream educators were traumatized by the taboo on comparing). To sample some highlights: Barbara White from computer science developed a program called ThinkerTools that allows young learners to explore the laws of physics in a game setting (White, 1984; White & Horwitz, 1987); J.R. Frederiksen from psychology worked on both reading tutors (Frederiksen, 1984) and electronics trouble-shooters (Frederiksen, White, Collins & Eggan, 1988); William Clancey in medicine adapted a meningitis database as a tutorial for teaching diagnosis (Clancey & Letsinger, 1984); and Anderson from psychology developed a number of misconception-based geometry and algebra tutors for school learners (Koedinger, Anderson et al, 1997). At the same time, less technically inclined educators were working on intelligent tutoring’s little brother, ‘cognitive tools’ (Lajoie & Derry, 1993; Lajoie, 2000; Cobb, 1999).
To summarize, there is no shortage of models for learning tools that could give DE learners something seriously interesting to do and which are far better suited to independent or small group learning than to usual forms of classroom learning. So, with no apparent lack of activity, why are so few interesting cognitive tools or “media of learning” showing up in the DE data? A possible reason is simply the time frame for development. Most interesting computer learning tools are developed over years, tested over more years, and then need still more years to be put online and reach the learner audience they deserve. This is a process that is quite likely to run out of energy, funding, or both well before fruition. To illustrate, let me turn to my own work in the field of applied linguistics.
Learning tools in one domain
Language learning is a strong candidate for a DE treatment, especially now that the Internet can deliver whole multimedia foreign-language environments to learners who do not have much access to either target language or culture. There are literally millions of language learners worldwide whose academic or career success depends greatly on learning a foreign language, usually English. Two particularly promising areas for DE are academic reading (Cobb, 2003a) and language teacher training.
The problems of existing DE approaches for language learning echo many of those discussed above. Leading second language acquisition researchers are now turning their attention to the growing phenomenon of DE in their area, and they find very little that is based on the learning research they have painstakingly put together. For example, Doughty and Long (2003) criticize the quality of learning proposed by several online DE language systems as breaking all ten of the methodological principles that they have extracted from their own research for the benefit of teachers. A major problem is once again the iron horse, the generic delivery platform purchased system-wide by an institution and into which language instruction is “shoehorned” (pp. 7-8). Specifically, the interactions proposed by the generic platforms encourage a passive, language-as-object type of learning which has been generally criticized (e.g., by Jonassen, 2002, cited above) and shown to be of particularly little use in the language domain. Within this scenario, my own work involves developing computer-based cognitive tools for trainee language teachers. It may offer a clue to the question raised above: Where are the learning tools?
Over the 1990s, applied linguistics researchers generated an interesting array of computer-based tools. These were used mainly for text analysis (including transcribed spoken text) and led to significant and useful findings. Some examples include Laufer and Nation’s (1995) measure of lexical richness, VocabProfile; de Cock et al’s (1998) measure of phrase recurrence, Tuples; and Granger’s (1998) concordance analyses of learner writing. These have proven extremely useful tools for language researchers and might well prove similarly useful for trainee language teachers. Within the constructivist paradigm, learners will ideally generate knowledge for themselves using the tools and concepts of scientists rather than just read the results of others who have done so (Cobb, 1999; 2003b). These tools, however, were all developed as standalone offline programs for professional users; they are difficult or impossible to access and, even when accessed, not simple to use. In response, over a period of several years, I have been adapting these tools for Internet delivery while increasing their transparency, scaffolding capacity, and general user-friendliness (see the collection at www.lextutor.ca). The audience was waiting: teacher trainees, teachers, and graduate students in several parts of the world, and particularly the developing world, in DE programs and independently, use these tools as a means of participating actively in an exciting branch of the research that informs the profession they are entering. So I suspect that a similar slow adaptation of cognitive tools may be under way in many other areas—an adaptation that takes roughly ten years and may depend on individual opportunities.
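To make concrete the kind of computation one of these tools performs, here is a minimal sketch, in Python, of a lexical frequency profile of the general sort associated with Laufer and Nation’s measure: it reports what percentage of a text’s word tokens fall within each of a set of frequency bands. The function name, the toy word lists, and the sample sentence are illustrative inventions of mine; the published tools work with full 1,000-item word-family lists and considerably more careful tokenization.

import re
from collections import Counter

def lexical_frequency_profile(text, band_lists):
    # Tokenize crudely into lowercase word tokens.
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {}
    counts = Counter()
    for token in tokens:
        # Credit the token to the first (most frequent) band that contains it.
        for band, words in band_lists.items():
            if token in words:
                counts[band] += 1
                break
        else:
            counts["off-list"] += 1
    # Report each band as a percentage of all tokens.
    return {band: 100.0 * n / len(tokens) for band, n in counts.items()}

# Toy illustration only; real profilers use 1,000-item word-family lists.
bands = {
    "first 1000": {"the", "a", "of", "to", "and", "is", "in", "learn"},
    "second 1000": {"distance", "research", "design"},
}
sample = "The design of distance learning research is the learning of the learner."
print(lexical_frequency_profile(sample, bands))

In this toy run, “learning” and “learner” land off-list even though “learn” is in the first band, which is precisely the word-family problem the real tools solve by using family lists rather than simple word sets.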
So, the answer to the question Where are the cognitive tools? may be that they are coming, but that the journey is long. Maybe longer than it needs to be.
Obstacles to developing learning tools
To summarize the argument so far, I am extending Bernard et al’s finding to mean that the majority of DE courses are unmodified classroom courses delivered online, which exploit neither the asynchronicity nor the cognitive-tool-based learning that constitute DE’s main potential. I have suggested that while there has been no lack of activity in tool development, the results are slow getting into users’ hands. There are two rather tricky paradoxes to resolve before this can happen on any scale, both to do with the economics of modern universities.
Suppose a professor finds herself wanting or needing to put a course online, and she would like to do more to support her distance learners than just send them materials and due dates. If this “more” involves developing or adapting something along the lines of a set of computer-based or computer-delivered learning tools, then, as already noted, it will require a significant investment of time. Unfortunately, this time may be systematically unavailable. Reeves (2002) convincingly argues that even the simplest DE course is far more time-consuming than a classroom course to run, particularly as students resort to heavy use of email in an attempt to ride the iron horse themselves, i.e., to replicate the conditions of the classroom, since nothing has been supplied to compensate for its absence. Meanwhile, the professor’s publication and committee requirements have not been reduced in compensation for taking on the development of a DE course. In other words, the time has simply not been provided to do more than get the materials into WebCT or Blackboard, and probably even this is exhausting. So the first paradox of DE course development is that more time is needed while less is provided.
The second paradox is tied to the first. The main alternative to the difficulty of having professors develop their own DE courses as anything more than a collection of readings, assignments and due dates is to “unbundle” the professor’s functions. Rather than the professor being in one person the designer, motivator, mediator, controller, and content expert of the course, the unbundled professor instead is just the content expert and the other roles are taken up by specialists (instructional designers, technicians, programmers, media experts). This unbundling can be seen as positive (UNESCO materials on DE describe it as departing from a craft model of instructional design for a systems model), or it can be seen as negative (Reeves, 2002, p. 148, speculates that professors may soon not be needed at all if their functions can be performed by holograms).
Institutions wanting actually to do something interesting with DE will probably be more interested in the specialization model than in budgeting professors’ time for extra DE course development. Indeed, specialization is the classic formula that many of us in educational technology cut our teeth on in instructional design courses. Unfortunately, the specialization model assumes that learning is similar across domains, and hence can be orchestrated by an instructional designer, while the learning theory of the last 20 years suggests instead that learning is rather different in different domains (e.g., Carey & Gelman, 1991). In other words, learning a language or learning math may be somewhat different from learning biology or learning business administration, and this is something the content specialist will usually know more about than the instructional designer. So the second paradox is that efficiency and learning may pull in opposite directions as we attempt to move beyond the iron horse in DE.
On the bright side
Some ways of resolving these paradoxes and building on the potential strengths of DE can be imagined. One would be that professors could be given time to put into their DE courses, for example by reducing their research requirements or by counting DE course development as a sort of publication. Another would be that, given the arduousness of developing distance-deliverable learning resources, particularly if these are to some extent domain-specific, such resources be developed and housed collaboratively for maximum use and minimum duplication of effort. Pipe dreams? Not entirely.
A recent initiative known as the MERLOT project (Multimedia Educational Resource for Learning and Online Teaching; Young, 2000) is promoting the peer review of online learning resources developed by faculty. It has assembled more than 6,000 web-based learning resources and set up teams of reviewers in 13 disciplines, a development that Reeves (2002) sees as a step toward both improving the quality of resources and giving resource development the same status as traditional research and publication. A recent title in the Chronicle of Higher Education even goes so far as to suggest that, “Ever So Slowly, Colleges Start to Count Work With Technology in Tenure Decisions” (Young, 2002). And where would these resources be published so others could see and use them? There is a promising movement to gather, evaluate, archive, and promote the sharing of “reusable learning objects” in projects like the Canadian Core Learning Object Metadata Application Profile (CanCore) or its U.S. equivalent, the IEEE Learning Object Metadata profile.
Which is stronger – the obstacles to DE development or the impetus to do it right? We can only hope the latter, because how we manage this question in North America will have repercussions throughout the developing world, where distance education is not an option but a necessity. It is always important to remember that while DE in North America is mainly about reducing costs or even turning a profit, in much of the developing world it is a crucial component of development that cannot be provided in any other way (UNESCO, 2002). North America is the default leader in an historical educational expansion with truly global reach.
Conclusion
Bernard et al’s study found in favour of asynchronous DE—an advantage that I argue can be built on through the expansion of independent (including collaborative) learning opportunities, particularly in the area of computational learning tools. We should not try to make mediated learning as similar as possible to its predecessor, classroom learning, but should instead highlight the differences and the opportunities therein. In this chapter, I have tried to spell out what I think this means. My argument goes somewhat “beyond the information given.” It can be tested in another meta-comparison a few years from now, when hopefully we will know more about what works best in a classroom or best in DE, and more about how to exploit what is unique in DE for those areas where it is indicated.
A strong era of resource development seems already under way, to some extent capitalizing on the offline work of the 1980s and 1990s. But so far, little of it is ending up on learners’ computer screens. This will happen better and faster if institutions take into account the real costs and potential benefits of distance learning and invest in it for the long term, rather than enjoying short-term cost reduction while also risking a reduction in the quality of learning for thousands if not millions of learners. The extent of the changes needed to make DE work, in professorial reward structure, investment strategy, and level of commitment to instruction, is great. After all, the lack of investment in DE up to now is simply the latest example of a longstanding neglect of teaching in higher education. But the rewards are greater: a world where top-quality learning resources, text and other, are available any time, any place.
References

Bernard, R., Lou, Y., Abrami, P., Wozney, L., Borokhovski, E., Wallet, P., Wade, A., & Fiset, M. (2003). How does distance education compare to classroom instruction? A meta-analysis of the empirical literature. Montreal: CSLP Report.
Carey, S., & Gelman, R. (Eds.). (1991). The epigenesis of mind: Essays on biology and cognition. Hillsdale, NJ: Erlbaum.
Clancey, W. & Letsinger, R. (1984). NEOMYCIN: Reconfiguring a rule-based expert system for application to teaching. In W.J. Clancey and E.H. Shortliffe (Eds.), Medical artificial intelligence: The first decade (pp. 361-381). Reading, MA: Addison-Wesley.
Cobb, T. (2003a). Internet and literacy in the developing world: Delivering the teacher with the text. Proceedings of 3rd Pan-African Congress on Reading for All, Literacy without borders, Kampala, Uganda, August 2003.
Cobb, T. (2003b). Analyzing late interlanguage with learner corpora: Quebec replications of three European studies. Canadian Modern Language Review, 59 (3), 393-423.
Cobb, T. (1999). Applying constructivism: A test for the learner as scientist. Educational Technology Research & Development, 47 (3), 15-33.
Cobb, T. (1997). Cognitive efficiency: Toward a revised theory of media. Educational Technology Research & Development, 45 (4), 21-35.
Daniel, J. (2002). Preface. In C. Vrasidas & G. Glass (Eds.), Distance education and distributed learning (pp. ix-x). Greenwich, CT: Information Age Publishing.
De Cock, S., Granger, S., Leech, G., & McEnery, T. (1998). An automated approach to the phrasicon of EFL learners. In S. Granger (Ed.), Learner English on computer (pp. 67-79). London: Longman.
Doughty, C., & Long, M. (2003). Optimal psycholinguistic environments for distance foreign language learning. Language Learning & Technology, 7 (3), 50-80. Retrieved online in January 2004 at http://llt.msu.edu/vol7num3/doughty/.
Ehrmann, S. (1995). Asking the right questions: What does the research tell us about technology and higher learning? Change, 27 (2), 20-27.
Fierdyiwek, Y. (1999). Web-based courseware tools: Where is the pedagogy? Educational Technology, 39 (1), 29-34.
Frederiksen, J. (1987). Final report on the development of computer-based instructional systems for training essential components of reading. Cambridge, MA: Bolt Beranek Newman, Report 6465.
Frederiksen, J., White, B., Collins, A., & Eggan, G. (1988). Intelligent tutoring systems for electronic troubleshooting. In J. Psotka, L.D. Massey, & S.A. Mutter (Eds.), Intelligent tutoring systems: Lessons learned. Hillsdale, NJ: Lawrence Erlbaum Associates.
Granger, S. (Ed.). (1998). Learner English on computer. London: Longman.
Intel Corp. Teach to the Future program portal. Retrieved online in January 2004 at http://www.intel.com/education.
Jonassen, D. (2002). Learning to solve problems online. In C. Vrasidas & G. Glass (Eds.), Distance education and distributed learning (pp. 75-98). Greenwich, CT: Information Age Publishing.
Keegan, D. (1996). Definition of distance education. In Foundations of distance education (3rd ed.). London: Routledge.
Lajoie, S. & Derry, S. (Eds.). (1993). Computers as cognitive tools. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lajoie, S. P. (Ed.). (2000). Computers as cognitive tools (vol. 2): No more walls. Mahwah, NJ: Lawrence Erlbaum Associates.
Mielke, K. (1968). Questioning the questions of ETV research. Educational Broadcasting Review, 2, 6-15.
Oppenheimer, T. (1997). The computer delusion. Atlantic Monthly, July. Retrieved online in January 2004 at http://www.theatlantic.com/issues/97jul/computer.htm .
Reeves, T. (2002). Distance education and the professorate: The issue of productivity. In C. Vrasidas & G. Glass (Eds.), Distance education and distributed learning (pp. 135-156). Greenwich, CT: Information Age Publishing.
Russell, T.L. (1999). The no significant difference phenomenon. Chapel Hill, NC: Office of Instructional Telecommunications, North Carolina State University.
Smith, P. L., & Dillon, C. L. (1999). Comparing distance learning and classroom learning: Conceptual considerations. American Journal of Distance Education, 13, 107-124.
Surry, D., & Ensminger, D. (2001). What’s wrong with media comparison studies? Educational Technology, 41 (4), 32-35.
UNESCO (2002). Open and distance learning: Trends, policy, & strategy considerations. Paris: UNESCO, Division of Higher Education. Retrieved online in January 2004 at http://unesdoc.unesco.org/images/0012/001284/128463e.pdf.
Vrasidas, C., & Glass, G. (2002). A conceptual framework for studying distance education. In C. Vrasidas & G. Glass (Eds.), Distance education and distributed learning (pp. 31-55). Greenwich, CT: Information Age Publishing.
White, B. (1984). Designing computer games to help students understand Newton's laws of motion. Cognition and Instruction 1, 69-108.
White, B., & Horwitz, P. (1987). ThinkerTools: Enabling children to understand physical laws (BBN Inc. Report No. 6470). Cambridge, MA: BBN Laboratories Inc.
Young, J. (2000). New project brings peer review to web materials for teaching. Chronicle of Higher Education. Retrieved online in January 2004 at http://chronicle.com.
Young, J. (2002). Ever So Slowly, Colleges Start to Count Work With Technology in Tenure Decisions. Chronicle of Higher Education, 48 (24). Retrieved online in January 2004 at http://chronicle.com.