CALL = Computer Assisted Language Learning. Yes, I know: these days ICT is the preferred acronym, but I’m going to call it CALL, because that’s what I’ve always called it and because I can never remember what ICT stands for.
I was a pioneer of CALL when micro-computers came on the market in the early 80s; I wrote some programs, set up a self-access centre, was part of the TESOL International CALL Steering Committee, gave a few talks at conferences, and generally strutted my stuff for a few years. Then, after lots of skulduggery and false starts, Windows (version 3) finally got going, whereupon I lost the plot and was reduced to a curious but bemused onlooker. Last week Nicola shook my hammock by inviting me to review Duolingo for ELT Jam. Writing the review made me think that I should say something about CALL here.
Most MA courses introduce the study of CALL by giving a history in which its primitive beginnings are described and its subsequent development is portrayed as if it were the history of archaeology or medicine. Mark Warschauer’s account, in which the history of CALL is said to consist of three “phases”, is a typical example of such a treatment. The first phase is “Behavioristic CALL” (US spelling!), which Warschauer quickly dispenses with.
Programs of this phase entailed repetitive language drills and can be referred to as “drill and practice” (or, more pejoratively, as “drill and kill”). Drill and practice courseware is based on the model of computer as tutor (Taylor, 1980). […] Behavioristic CALL was undermined by two important factors. First, behavioristic approaches to language learning had been rejected at both the theoretical and the pedagogical level. Secondly, the introduction of the microcomputer allowed a whole new range of possibilities.
Warschauer goes on to describe phases 2 and 3: “Communicative CALL”, then “Integrative CALL”, which is where we are today. While other CALL experts divide the history into more phases and use different labels, there’s the same emphasis on “progress”, where the errors of the past, particularly “behaviouristic CALL”, are left behind. In my opinion, such an easy dismissal of tutorial CALL is unwarranted and unfair.
A better approach is that taken by Grgurović, Chapelle and Shelley (2013), who, in a very thorough and rigorous meta-analysis of studies on CALL over the last 40 years, eschew the “stages of development” approach and look instead at ahistorical categories of CALL. They identify five:
1) CALL program (a computer program originally made for language learning). This obviously includes tutorial programs like Dynamic English, The Rosetta Stone, and Duolingo.
2) Computer application (a computer program not originally made for language learning, e.g., Microsoft Word).
3) CMC (computer mediated communication program that allows synchronous or asynchronous communication).
4) Web (use of authentic materials and resources on the WWW).
5) Course management system (a system for managing course content, e.g., WebCT).
This avoids the implication that more recent is better, and allows us to evaluate for ourselves the strengths and weaknesses of different applications, including tutorial CALL.
Grgurović, Chapelle and Shelley’s (2013) analysis of 37 CALL studies provides an empirically-based answer to the question of whether pedagogy supported by computer technology can be effective in promoting second/foreign language development relative to pedagogy conducted without technology. Just as an aside, it’s interesting that some of the most rigorous and reliable studies are those based on the use of tutorial programs. Anyway, the results show that “across the various conditions of technology use, second/foreign language instruction supported by computer technology was at least as effective as instruction without technology. When comparisons between CALL and non-CALL groups were made in rigorous research designs, the CALL groups performed better than the non-CALL groups”. One notable gloss on this general finding is that advanced and intermediate learners did better in the CALL conditions than beginners did, which belies the commonly-held opinion that CALL is best suited to helping beginners.
SLA Theory and CALL
The meta-study briefly discussed above is part of an approach to studying CALL which is informed by an attempt to address the question posed by Garrett (1991), namely:
What is the relationship between a theoretically and empirically based understanding of the language learning process and the design and implementation of technology-based materials?
It’s a very tough question, but surely it’s a very good one. Both the technologies used in CALL and theories of SLA have become more varied and complex since Garrett asked her question, and even back then she was hard-pushed to give any convincing answer. Chapelle (2009) made a more recent attempt to answer the same question, and I think she managed to convey the complexity of the issues more successfully than she managed to pinpoint any relationship. The decision to examine how 13 different theoretical approaches to SLA (Universal Grammar; Autonomous Induction Theory; Concept-Oriented Approach; Processability Theory; Input Processing Theory; Interactionist Theory; Associative–Cognitive Approach; Skills Acquisition Approach; Sociocultural Theory; Language Socialization Theory; Conversation Analysis; Systemic–Functional Grammar; Complexity Theory) might best be implemented in CALL was perhaps not a wise one, but at least it shows how far we are from any unified theory of SLA, and how many different ways we can go about evaluating CALL software and procedures. Table 2 below shows studies which attempt to map just one SLA theory to CALL practice. While I applaud these principled studies, I think they illustrate how enormous the field of research is.
You can, of course, go about things the other way round. Doughty’s (1991) study on the acquisition of relative clauses in English used CALL as a means of delivering different conditions for learning, namely form-based and meaning-based instruction. She was interested in how learner–computer interactions can be used to infer learners’ processes and strategies that are relevant from the perspective of one particular theory, so she looked at students’ response times and their use of various help functions in order to infer whether the processes they used were controlled or automatic, an issue of central concern in information processing theory.
So you can use theories of SLA to assess the use of CALL, or you can use CALL to help investigate SLA, and it seems obvious to me that we need more of this two-way traffic.
Tutorial CALL Technologies
Given the scope of CALL these days, I’d like to focus on tutorial CALL because, as I’ve said, I think it’s wrong to simply dismiss it as having no place in modern ELT, and also because it gives one the chance to look at some modern CALL kit. Levy (2009) describes a range of technologies which can be used in tutorial CALL packages. These include:
* Error analysis, diagnosis, and feedback based on natural-language processing (NLP), “parser-based CALL,” and intelligent CALL, or ICALL. Parser-based CALL systems keep a detailed record of student performance and develop sophisticated student models that shape subsequent student–computer interactions, especially in terms of feedback, assessment, and remediation. Concordancing tools, and in particular learner corpora in which learner errors are tagged and categorized, are used in NLP systems so that when a learner error is identified, the learner can get not just feedback in the usual sense but also access to the learner corpus, in which errors of a similar kind may be reviewed in their various contexts. Such systems are further enhanced by the possibilities of annotation and error categorization.
* L2 vocabulary learning software packages or components offer systematic recycling of new items at optimal intervals, recontextualization, memory support to promote recall, and production and feedback opportunities. Levy (2009) gives the example of the vocabulary learning site Lexical Tutor (http://www.lextutor.ca/). Technologies used include software developed by Nakata (2006) to provide optimal scheduling of feedback and rehearsal opportunities to improve the effectiveness and efficiency of vocabulary learning. Computer-based lexical activities are being developed using principles drawn from current SLA research in cognitive psychology, psycholinguistics, and sociolinguistics. Levy cites the work of Lafford, Lafford, and Sykes (2007), who have proposed 10 design features to underpin the creation of Spanish CALL materials for lexical acquisition.
* In computer-aided pronunciation training (CAPT), technological advances in acoustic phonetic software help learners improve their pronunciation and speaking competence by providing models, measuring fluency, and displaying pitch curves. One example given by Levy is of a package that provides detailed learner feedback on pronunciation for Japanese learners of English. The software identifies the aspects of English pronunciation with which the learners are experiencing difficulties, specifically searching for 10 areas predicted as being problematic for Japanese learners. After identifying the areas in which the students require more practice, the software then automatically provides feedback and practice in those areas in which errors are detected.
* In listening, Levy argues that learners initially need to distinguish and learn the sounds of the L2 – the prosody of the language, including intonation, rhythm, and stress – and then they need to sample and understand authentic, natural speech in a variety of contexts so that they can identify patterns and predict what comes next. CALL technologies facilitate segmentation, repetition, speed regulation, interactivity, and links to further information. Specific types include advance organizers and prelistening/viewing tools to activate learners’ prior knowledge and learning strategies, annotated information links (text, image, etc.), and captioned video to enhance comprehensible input. Levy gives the example of multimedia CALL software described by Hulstijn (2003). Using connectionist models of language processing, the software helps the learner analyse the continuous speech stream in real time and convert “meaningless tiny bits of acoustic information into meaningful units, such as speech sounds, syllables and words” (Hulstijn, 2003, p. 414). Levy cites Chan, Chen, and Döpel (2008), who describe their use of podcasts in a beginner-level German language program. They created a fully integrated series of podcasts, practice and extension activities, curriculum review, cultural content, and development of learning strategies. At an average length of 13 minutes, the typical structure and content of a podcast included a preview, musical interludes, listening and culture material, learning strategies, and meta-information.
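The “recycling of new items at optimal intervals” mentioned in the vocabulary bullet above is usually implemented with some form of expanding-interval (spaced repetition) scheduling. Here’s a minimal sketch in Python; the interval sequence and the function name are my own illustrative assumptions, not the actual algorithm of Nakata’s software or of any package mentioned above.

```python
# A Leitner-style expanding-interval scheduler for vocabulary items.
# Intervals and data shapes are illustrative assumptions only.

INTERVALS_DAYS = [1, 3, 7, 14, 30]  # review gaps grow after each success

def next_review(level: int, correct: bool) -> tuple[int, int]:
    """Return (new_level, days_until_next_review) for one vocabulary item.

    A correct recall promotes the item to a longer interval; a miss
    sends it back to the shortest interval for immediate recycling.
    """
    if correct:
        new_level = min(level + 1, len(INTERVALS_DAYS) - 1)
    else:
        new_level = 0
    return new_level, INTERVALS_DAYS[new_level]

# An item answered correctly twice, then missed:
level = 0
level, gap = next_review(level, True)   # promoted to level 1, review in 3 days
level, gap = next_review(level, True)   # promoted to level 2, review in 7 days
level, gap = next_review(level, False)  # back to level 0, recycled in 1 day
```

The point of the design is that successful items drop out of the daily rotation while problem items keep coming back, which is exactly the “systematic recycling” Levy describes.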
A CALL Evaluation Framework
If you look at a package like The Rosetta Stone, all these technologies and many more are used. But the big question is HOW the technologies are used. Evaluating CALL demands a pragmatic and holistic view of materials and their use, combined with regard for theoretical principles of language learning. Technology-based materials and tasks should be evaluated in terms of the opportunities they provide learners for second language acquisition, and thus, as Chapelle (2001) argues, frameworks and guidelines are needed. Drawing on the concepts and practices used in evaluating tests, Chapelle (2001) outlines a framework for the evaluation of CALL which provides a means of drawing on SLA theory in the concrete work of evaluation. The framework defines six characteristics of materials, which I paraphrase below.
1. Language learning potential. Examine the quality of the interactions learners engage in, the utility of the selected input for acquisition of particular areas, and the quality of the practice learners receive.
2. Meaning focus. Examine the extent to which learners have rich, interesting input that provides an opportunity to comprehend and/or produce meaning.
3. Learner fit. Examine whether the difficulty level of the language and the tasks is appropriate for the learners.
4. Authenticity. Examine the linguistic match between the language that learners see in the instructional tasks and language that they will engage with beyond the classroom. This aspect can be examined through a systemic linguistic analysis.
5. Positive impact. Examine the benefits — not necessarily linguistic — that learners might derive from working on the tasks.
6. Practicality. Examine the degree to which learners have access to and skills needed for the work on the tasks. This brings in the real-world factors that greatly influence success.
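To make the framework concrete, here is one way the six characteristics might be operationalized as a simple evaluation rubric. This is only a sketch under my own assumptions (a 0–5 scale, equal weighting); Chapelle’s framework itself prescribes no scoring scheme, and the sample ratings are invented.

```python
# Chapelle's six characteristics expressed as a rubric.
# Scale, weighting, and sample ratings are illustrative assumptions.

CRITERIA = [
    "language learning potential",
    "meaning focus",
    "learner fit",
    "authenticity",
    "positive impact",
    "practicality",
]

def evaluate(ratings: dict) -> dict:
    """Summarize 0-5 ratings for each criterion of a CALL package."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    scores = {c: ratings[c] for c in CRITERIA}
    scores["overall"] = sum(ratings[c] for c in CRITERIA) / len(CRITERIA)
    return scores

# Hypothetical ratings for an imaginary tutorial package:
result = evaluate({
    "language learning potential": 3,
    "meaning focus": 2,
    "learner fit": 4,
    "authenticity": 2,
    "positive impact": 3,
    "practicality": 5,
})
```

Even a crude rubric like this forces the evaluator to rate each characteristic separately rather than forming a single global impression, which is much of the value of Chapelle’s framework.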
To end, here’s Chapelle (2009) again:
Current textbooks, workbooks, and classroom practices seem to be driven by the idea that professional judgment and practice are sufficient to produce the desired goals. Working in the area of language materials development, Tomlinson (2003) suggested principles extrapolated from research on SLA. These principles assert that materials should achieve impact, that they should expose learners to language in authentic use, and that learners’ attention should be drawn to linguistic features in the input (Tomlinson, 2003, p. 21). Writing about materials evaluation, Ellis (1998) suggested the need for empirical SLA theory-based evaluation for language learning materials. However, these ideas are innovative relative to the widespread view that textbook writing is a creative art and that evaluation can be accomplished on the basis of teachers’ judgments alone. Technology can bring to language learning materials a novelty and expense, which create an opportunity for multiple forms of rich input and interaction as well as a data collection capacity unknown to authors of paper materials. The result has been an unprecedented attention to SLA theory and push for practice-relevant theory that can inform the design and evaluation of technology-based materials.
Chapelle, C. (2001). Computer applications in second language acquisition: Foundations for teaching, testing, and research. Cambridge: Cambridge University Press.
Chapelle, C. (2009). The relationship between second language acquisition theory and computer-assisted language learning. Modern Language Journal, 93(s1), 741–753.
Doughty, C. (1991). Second language instruction does make a difference: Evidence from an empirical study of SL relativization. Studies in Second Language Acquisition, 13, 431–469.
Garrett, N. (1991). Technology in the service of language learning: Trends and issues. Modern Language Journal, 75, 74–101.
Grgurović, M., Chapelle, C., & Shelley, M. (2013). A meta-analysis of effectiveness studies on computer technology-supported language learning. ReCALL, 25(2), 165–198.
Levy, M. (2009). Technologies in use for second language learning. Modern Language Journal, 93(s1), 769–782.
Warschauer, M. (1996). CALL: An introduction. http://fis.ucalgary.ca/Brian/BibWarschauer.html