Interlanguage Development: Some Evidence

As a follow-up to my two previous posts, here’s some information about interlanguage development.

Doughty and Long (2003) say:

There is strong evidence for various kinds of developmental sequences and stages in interlanguage development, such as the well known four-stage sequence for ESL negation (Pica, 1983; Schumann, 1979), the six-stage sequence for English relative clauses (Doughty, 1991; Eckman, Bell, & Nelson, 1988; Gass, 1982), and sequences in many other grammatical domains in a variety of L2s (Johnston, 1985, 1997). The sequences are impervious to instruction, in the sense that it is impossible to alter stage order or to make learners skip stages altogether (e.g., R. Ellis, 1989; Lightbown, 1983). Acquisition sequences do not reflect instructional sequences, and teachability is constrained by learnability (Pienemann, 1984).

Let’s take a look at the “strong evidence” referred to, beginning with Pit Corder and error analysis.

Pit Corder: Error Analysis

Corder (1967) argued that errors were neither random nor simply the result of L1 transfer; rather, they were indications of learners’ attempts to work out an underlying, rule-governed system. Corder distinguished between errors and mistakes: mistakes are slips of the tongue, whereas errors are indications of an as yet non-native-like, but nevertheless systematic, rule-based grammar. Interesting and provocative as this was, error analysis failed to capture the full picture of a learner’s linguistic behaviour.

Schachter (1974) compared the compositions of Persian, Arabic, Chinese and Japanese learners of English, focusing on their use of relative clauses. She found that the Persian and Arabic speakers made far more errors, but she went on to look at the total production of relative clauses and found that the Chinese and Japanese students produced only half as many relative clauses as the Persian and Arabic students. Schachter then looked at the students’ L1s and found that Persian and Arabic relative clauses are similar to English ones in that the relative clause is placed after the noun it modifies, whereas in Chinese and Japanese the relative clause comes before the noun. She concluded that Chinese and Japanese speakers of English use relative clauses cautiously but accurately because of the distance between the way relative clauses are formed in their L1 and in English. So, it seems, things are not so straightforward: one needs to look at what learners get right as well as what they get wrong.

The Morpheme Studies

Next came the morpheme order studies. Dulay and Burt (1974a, 1974b) claimed that fewer than 5% of errors were due to native language interference and that errors were, as Corder suggested, in some sense systematic, suggesting that something akin to a Language Acquisition Device was at work not just in first language acquisition, but also in SLA.

Brown’s (1973) morpheme studies resulted in his claim that L1 learners acquire the morphemes below in the following order:

1 Present progressive (-ing)

2/3 in, on

4 Plural (-s)

5 Past irregular

6 Possessive (-’s)

7 Uncontractible copula (is, am, are)

8 Articles (a, the)

9 Past regular (-ed)

10 Third person singular (-s)

11 Third person irregular

12 Uncontractible auxiliary (is, am, are)

13 Contractible copula

14 Contractible auxiliary

This led to studies in L2 by Dulay and Burt (1973, 1974a, 1974b, 1975) and Bailey, Madden and Krashen (1974), all of which suggested that there was a natural order in the acquisition of English morphemes, regardless of L1. This became known as the L1 = L2 Hypothesis, and further studies by Ravem (1974), Cazden, Cancino, Rosansky and Schumann (1975), Hakuta (1976), and Wode (1978) all pointed to systematic staged development in SLA.

Some of these studies, particularly those of Dulay and Burt, and of Bailey, Madden and Krashen, were soon challenged, but over fifty L2 morpheme studies have since been carried out using more sophisticated data collection and analysis procedures, and the results of these studies have gone some way to restoring confidence in the earlier findings.

Selinker’s Interlanguage

The third big step was Selinker’s (1972) paper, which argued that L2 learners have their own autonomous mental grammar (which came to be known as interlanguage grammar), a grammatical system with its own internal organising principles. One of the first interlanguage sequences to be identified was that for ESL questions. In a study of six Spanish-speaking students over a 10-month period, Cazden, Cancino, Rosansky and Schumann (1975) found that the subjects produced interrogative forms in a predictable sequence:

  1. Rising intonation (e.g., He works today?),
  2. Uninverted WH (e.g., What he (is) saying?),
  3. “Overinversion” (e.g., Do you know where is it?),
  4. Differentiation (e.g., Does she like where she lives?).

Then there was Pica’s (1983) study, which suggested that learners from a variety of different L1 backgrounds go through the same four stages in acquiring English negation:

  1. External (e.g., No this one./No you playing here),
  2. Internal, pre-verbal (e.g., Juana no/don’t have job),
  3. Auxiliary + negative (e.g., I can’t play the guitar),
  4. Analysed don’t (e.g., She doesn’t drink alcohol.)

Apart from these two examples, we may cite the six-stage sequence for English relative clauses (see Doughty, 1991 for a summary) and sequences in many other grammatical domains in a variety of L2s (see Johnston, 1997).

Pienemann’s 5-stage Sequence

Perhaps the most extensive and best-known work in this area has been done by Pienemann, whose Processability Theory grew out of the Multidimensional Model, formulated in the late seventies by the ZISA group, based mainly at the University of Hamburg. One of the group’s first findings was that all the child and adult learners of German as a second language in the study adhered to the five-stage developmental sequence shown below:

Stage X – Canonical order (SVO)

die kinder spielen mim ball //// the children play with the ball

(Romance learners’ initial SVO hypothesis for German as a second language (GSL) word order is correct in most German sentences with simple verbs.)

Stage X + 1 – Adverb preposing (ADV)

da kinder spielen //// there children play

(Since German has a verb-second rule, requiring subject-verb inversion following a preposed adverb (there play children), all sentences of this form are deviant. The verb-second (or ‘inversion’) rule is only acquired at stage X + 3, however. The adverb-preposing rule itself is optional.)

Stage X + 2 – Verb separation (SEP)

alle kinder muss die pause machen //// all children must the break have

(Verb separation is obligatory in standard German.)

Stage X + 3 – Inversion (INV)

dann hat sie wieder die knoch gebringt //// then has she again the bone brought

(Subject and inflected verb forms must be inverted after preposing of elements.)

Stage X + 4 – Verb-end (V-END)

er sagte, dass er nach hause kommt //// he said that he home comes

(In subordinate clauses, the finite verb moves to final position.)

Learners did not abandon one interlanguage rule for the next as they progressed; they added new ones while retaining the old, and thus the presence of one rule implies the presence of earlier rules.

A few words about the evidence. There is the issue of what it means to say that a structure has been acquired, and I’ll just mention three objections that have been raised. In the L1 morpheme studies, a morpheme was assumed to be acquired when it was supplied in at least 90% of obligatory contexts in three successive samples. The problem with such a measurement is, first, how one defines an “obligatory” context, and second, that by dealing only with obligatory contexts, it fails to look at how the morpheme might occur in incorrect contexts. The second example is that Pienemann takes acquisition of a structure to be the point at which it emerges in the interlanguage, its first “non-imitative use”, which many say is hard to operationalise. A third example is this: in work reported by Johnson, statistical measures have been used with an experimental group of L2 learners and a control group of native speakers; the performance of both groups is measured, and if the L2 group’s performance is not significantly different from the control group’s, the L2 group can be said to have acquired the structure under examination. Again, one might well question this measure.
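
To make the first of these criteria concrete, here’s a minimal sketch, in Python, of how suppliance in obligatory contexts might be counted and the 90% threshold applied across consecutive samples; the function names and the figures are invented for illustration, not taken from any of the studies cited.

```python
# Toy illustration of the "90% suppliance in obligatory contexts" criterion
# used in the L1 morpheme studies. The figures below are invented.

def suppliance_rate(supplied, obligatory):
    """Proportion of obligatory contexts in which the morpheme was supplied."""
    return supplied / obligatory if obligatory else 0.0

def acquired(samples, threshold=0.9, run_needed=3):
    """Treat the morpheme as 'acquired' once the threshold is met in
    the required number of consecutive samples."""
    run = 0
    for supplied, obligatory in samples:
        run = run + 1 if suppliance_rate(supplied, obligatory) >= threshold else 0
        if run >= run_needed:
            return True
    return False

# Hypothetical samples for plural -s: (times supplied, obligatory contexts).
plural_s = [(4, 10), (7, 10), (9, 10), (10, 10), (9, 10)]
print(acquired(plural_s))  # True: the last three samples all reach 90%
```

As the objections above point out, everything here hangs on how an “obligatory context” is defined, and the calculation says nothing about uses of the morpheme in contexts where it doesn’t belong.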

To return to developmental sequences, by the end of the 1990s, there was evidence of stages of development of an interlanguage system from studies in the following areas:

  • morphemes,
  • negation,
  • questions,
  • word order,
  • embedded clauses,
  • pronouns,
  • references to the past.

Discussion

Together these studies lend very persuasive support to the view that L2 learners follow a fairly rigid developmental route. Moreover, it was seen that this developmental route sometimes bore little resemblance to either the L1 of the learner, or the L2 being learnt. For example, Hernández-Chávez (1972) showed that although the plural is realised in almost exactly the same way in Spanish and in English, Spanish children learning English still went through a phase of omitting plural marking. It had been assumed prior to this that second language learners’ productions were a mixture of both L1 and L2, with the L1 either helping or hindering the process depending on whether structures are similar or different in the two languages. This was clearly shown not to be the case. All of which was taken to suggest that SLA involves the development of interlanguages in learners, and that these interlanguages are linguistic systems in their own right, with their own sets of rules.

There are lots of interesting questions and issues that I haven’t even mentioned here about interlanguage development in general and about orders of acquisition in SLA in particular. It’s worth pointing out that Corder’s and Selinker’s initial proposal of interlanguage as a construct was an attempt to explain the phenomenon of fossilisation. As Tarone (2006) says:

Second language learners who begin their study of the second language after puberty do not succeed in developing a linguistic system that approaches that developed by children acquiring that language natively. This observation led Selinker to hypothesize that adults use a latent psychological structure (instead of a LAD) to acquire second languages.  

The five psycholinguistic processes of this latent psychological structure that shape interlanguage  were hypothesized (Selinker, 1972) to be (a) native language transfer, (b) overgeneralization of target language rules, (c) transfer of training, (d) strategies  of communication, and (e) strategies of learning.

It wasn’t long before Krashen’s Monitor Model, claiming that the morpheme studies showed no evidence of L1 transfer, denied the central role that the original Interlanguage Hypothesis gave to transfer, and also denied that there were sensitive (critical) periods in SLA. Generativist studies of SLA also minimised the role of L1 transfer. And there have been some important updates on the interlanguage hypothesis since the 1980s, too (see Tarone (2006) and Han and Tarone (2016), for example).

My main concern in discussing interlanguage development, as you must be all too well aware by now, is to draw attention to the false assumptions on which coursebook-based ELT rests. Coursebooks assume that structures can be learned on demand. If this were the case, then acquisition sequences would reflect the sequences in which coursebooks present them, but they do not. On the contrary, the acquisition order is remarkably resilient to coursebook presentation sequences. Long (2015, p. 21) gives some examples to demonstrate this:

…. Pica (1983) for English morphology by Spanish-speaking adults, by Lightbown (1983) for the present continuous -ing form by French-speaking children in Quebec being taught English as a second language (ESL) using the Lado English series, by Pavesi (1986) for relative clauses by children learning English as a foreign language (EFL) in Italy and Italian adults learning English naturalistically in Scotland, and by R. Ellis (1989) for English college students learning word order in German as a foreign language.

Long goes on to point out that accuracy orders and developmental sequences found in instructed settings match those obtained for the same features in studies of naturalistic acquisition, and that the striking commonalities observed suggest powerful universal learning processes are at work. He concludes (Long, 2015, p.23):

… instruction cannot make learners skip a stage or stages and move straight to the full native version of a construction, even if it is exclusively the full native version that is modelled and practiced. Yet that is what should happen all the time if adult SLA were a process of explicit learning of declarative knowledge of full native models, their comprehension and production first proceduralized and then made fluent, i.e., automatized, through intensive practice. One might predict utterances with occasional missing grammatical features during such a process, but not the same sequences of what are often completely new, never-modelled interlingual constructions, and from all learners.

While practice has a role in automatizing what has been learned, i.e., in improving control of an acquired form or structure, the data show that L2 acquisition is not simply a process of forming new habits to override the effects of L1 transfer; powerful creative processes are at work. In fact, despite the presentation and practice of full native norms in focus-on-forms instruction, interlanguages often stabilize far short of the target variety, with learners persistently communicating with non-target-like forms and structures they were never taught, and target-like forms and structures with non-target-like functions (Sato 1990).

Conclusion

That’s a taste of the evidence. We can’t conclude from it, as a few insist, that there’s no point in any kind of explicit teaching, but it does mean that, in Doughty and Long’s words (2003):

The idea that what you teach is what they learn, and when you teach it is when they learn it, is not just simplistic, but wrong.

The dynamic nature of SLA means that distinguishing between stages of interlanguage development is difficult – the stages overlap, and there are variations within stages – and so the simplistic view of a “Natural Order”, where a learner starts from Structure 1 and eventually reaches, let’s say, Structure 549, is absurd. Imagine trying to organise stages such as those identified by Pienemann into ordered sets! As Gregg (1984) points out:

If the structures of English are divided into varying numbers of ordered sets, the number of sets varying according to the individual, then it makes little sense to talk about a ‘natural order’. If the number of sets varies from individual to individual, then the membership of any given set will also vary, which makes it very difficult to compare individuals, especially since the content of these sets is virtually completely unknown.

So the evidence of interlanguage development doesn’t mean that we can design a syllabus which coincides with any “natural order”, but it does suggest that we should respect the learners’ internal syllabuses and their developmental sequences, which most coursebooks fail to do. Doughty and Long (2003) argue that the only way to respect the learner’s internal syllabus is

by employing an analytic, not synthetic, syllabus, thereby avoiding futile attempts to impose an external linguistic syllabus on learners (e.g., the third conditional because it is the third Wednesday in November), and instead, providing input that is at least roughly tuned to learners’ current processing capacity by virtue of having been negotiated by them during collaborative work on pedagogic tasks.

Long has since (Long, 2015) given a full account of his own version of task-based language teaching, and whether or not we are in a position to implement a similar methodology in our own teaching situations, at least we can agree that we’d be well-advised to concentrate more on facilitating implicit learning than on explicit teaching, to give more carefully-tuned input, and to abandon the type of synthetic syllabus used in coursebooks in favour of an analytic one.

 

Bibliography

Sorry, I can’t give all the references. Here are a few of the “key” texts. Tarone (2006), free to download (see below), is a good place to start.

Adjemian, C. (1976) On the nature of interlanguage systems. Language Learning 26, 297-320.

Bailey, N., Madden, C. and Krashen, S. (1974) Is there a “natural sequence” in adult second language learning? Language Learning 24, 235-243.

Corder, S. P. (1967) The significance of learners’ errors. International Review of Applied Linguistics (IRAL) 5, 161-9.

Corder, S. P. (1981) Error analysis and interlanguage. Oxford: Oxford University Press.

Dulay, H. and Burt, M. (1974a) Errors and strategies in child second language acquisition. TESOL Quarterly 8, 12-36.

Dulay, H. and Burt, M. (1974b) Maturational sequences in child second language acquisition. Language Learning 24, 37-53.

Doughty, C. and Long, M.H. (2003) Optimal Psycholinguistic Environments for Distance Foreign Language Learning. Downloadable here: http://llt.msu.edu/vol7num3/doughty/default.html

Gregg, K. R. (1984) Krashen’s monitor and Occam’s razor. Applied Linguistics 5, 79-100.

Han, Z-H. and Tarone, E. (eds.) (2016) Interlanguage: Forty Years Later. Amsterdam: Benjamins.

Krashen, S. (1981) Second language acquisition and second language learning. Oxford: Pergamon Press.

Long, M. H. (2015) SLA and TBLT. Oxford: Wiley Blackwell.

Nemser, W. (1971) Approximative systems of foreign language learners. IRAL 9, 115-123.

Selinker, L. (1972) Interlanguage. IRAL 10, 209-231.

Selinker, L. (1992) Rediscovering interlanguage. London: Longman.

Schachter, J. (1974) An error in error analysis. Language Learning 24, 3-17.

Tarone, E. (1988) Variation in interlanguage. London: Edward Arnold.

Tarone, E. (2006) Interlanguage. Downloadable here: http://socling.genlingnw.ru/files/ya/interlanguage%20Tarone.PDF

 

What good is relativism?


Scott Thornbury (2008) asks “What good is SLA theory?”. This is a question beloved of populists, all of whom agree that it’s of no use to anyone, except the rarefied crackpots who dream it up. Thornbury sets the tone of his own populist piece by saying that most teachers display a general ignorance of, and indifference to, SLA theory, due to “the visceral distrust that most practitioners feel towards ivory-tower theorising”. If he’d said that most English language teachers have an ingrained distrust of academic research into language learning, we might have asked him for some evidence to support the assertion, but who can question that ivory tower theorists are not to be trusted? Note how Thornbury, who teaches a post-graduate course on theories of SLA at a New York university, and who has published many articles in serious, peer-reviewed journals, smears academics with the “ivory tower” brush, while himself sidling up to the hard-working, down-to-earth sceptics who read the English Teaching Professional magazine.

Thornbury gives a brief sketch of 4 types of SLA theory and then gives 4 reasons why “knowledge of theory” is a good thing for teachers. But you can tell that his heart’s not in it.  He knows perfectly well that “knowledge” of the theories of SLA he mentions is of absolutely no use to anybody unless those theories are properly scrutinised and evaluated, but, rather than attempt any such evaluation, Thornbury prefers to devote the article to reassuring everybody that there’s no need to take SLA theories too seriously.

To help him drive home this anti-intellectual message, Thornbury turns to “SLA heavyweight” John H Schumann. Most SLA scholars regard the extreme relativist position Schumann adopts in his 1983 article as almost comically preposterous, while his acculturation theory is about as “heavyweight” as Dan Brown’s theory of the Holy Grail.  But anyway, judge for yourself.  Schumann (1983) suggests that theory construction in SLA should be regarded not as a scientific task, but as a creative endeavour, like painting. Rather than submitting rival theories of SLA to careful scrutiny, looking for coherence, logical consistency and empirical adequacy, for example, Schumann suggests that competing theories of SLA should be evaluated in the same way that one might evaluate different paintings.

“When SLA is regarded as art not science, Krashen’s and McLaughlin’s views can coexist as two different paintings of the language learning experience… Viewers can choose between the two on an aesthetic basis favouring the painting which they find phenomenologically  true to their experience.”

Thornbury seems to admire this suggestion. He comments:

“This is why metaphors have such power. We tend to be well disposed to a theory if its dominant imagery chimes with our own values and beliefs. If we are inclined to think of learning as the meeting of minds, for example, an image such as the Zone of Proximal Development is more likely to attract us than the image of a black box.”

Schumann’s paper was an early salvo in what, ten years later, turned into a spirited war between academics who adopted a relativist epistemology and those who held to a rationalist epistemology. The war is still raging, and, typically enough, Thornbury stays well clear of the front line, while maintaining friendly relations with both camps. But let’s be clear: relativism, even though not often taken to the extreme to which Schumann takes it, is actually taken seriously by many academics, including Larsen-Freeman and sometimes (depending on how the wind’s blowing) by Thornbury himself. Rational criteria for the evaluation of rival theories of SLA, including logical consistency and the weighing of empirical evidence, are abandoned in favour of the “thick description” of different “stories” or “narratives”, all of them deemed to have as much merit as each other. Relativists suggest that trying to explain SLA in the way that rationalists (or “positivists”, as they like to call them) do is no more than “science envy”, and basically a waste of time. Which is actually the gist of Thornbury’s argument in the 2008 article discussed here.

In response to this relativist position, let me quote Larry Laudan, who says

“The displacement of the idea that facts and evidence matter by the idea that everything boils down to subjective interests and perspectives is—second only to American political campaigns—the most prominent and pernicious manifestation of anti-intellectualism in our time.”


Thornbury asks “What good is SLA theory?” without making any attempt to critically evaluate the rival theories he outlines. But then, why should he? After all, if you adopt a relativist stance, then no theory is right, none is of much importance, so why bother to sort them out? Instead of going to all that unnecessary trouble, all you have to do is take a quick look at Thornbury’s little summary in Table 1 and choose the theory that grabs you, or rather, choose the “dominant metaphor” which best chimes with your own values and beliefs. And if you can’t be bothered to check out which theory goes best with your values and beliefs, then why not use some other, equally arbitrary subjective criterion? You could toss a coin, or stare intently at a piece of toast, or ask Jeremy Harmer.

“What good is SLA theory?” is actually a very stupid question. It’s as if “SLA theory” were some sort of uncountable noun, like toothpaste. What good is toothpaste? It doesn’t actually make much difference to brushing your teeth. But “SLA theory” is not uncountable; some SLA theories are very bad, and some are very good, and consequently we need to agree on criteria for evaluating them, so as to concentrate on what we can learn from the best theories. Instead of pandering to the misinformed view that SLA theories are equally unscientific, equally based on metaphors, equally relative in their appeal, Thornbury could have used the space he had in the journal to examine – however “lightly” – the relative merits of the theories he discusses, and the usefulness to teachers of the best of them. He could have mentioned some of the findings of psycholinguistic research into the influence of the L1; age differences and sensitive periods; error correction; incomplete trajectories; explicit and implicit learning; and much besides. He could have mentioned one or two of the most influential current hypotheses about SLA, for example that instruction can influence the rate but not the route of interlanguage development.

He could have also pointed out that those adopting a relativist epistemology have achieved very little; that Larsen-Freeman’s exploration of complexity theory has achieved precisely nothing; that his own attempts to use emergentism to conjure up “grammar for free” have been equally woeful; and that the relativists he supports are more responsible than anyone else for the popular view that academics sit in an ivory tower writing unintelligible articles packed with obscurantist jargon for publication in journals that only they bother to read.

References

Laudan, L. (1990) Science and Relativism: Dialogues on the Philosophy of Science. Chicago, Chicago University Press.

Schumann, J. H. (1983) Art and Science in SLA research. Language Learning 33, 409-75.

Larsen Freeman’s IATEFL 2016 Plenary


In her plenary talk, Larsen Freeman argued that it’s time to replace “input-output metaphors” with “affordances”. The metaphors of input and output belong to a positivist, reductionist  approach to SLA which needs to be replaced by “a new way of understanding” language learning based on Complexity Theory.

Before we look at Larsen Freeman’s new way of understanding, let’s take a quick look at what she objects to by reviewing one current approach to understanding the process of SLA.

Interlanguage and related constructs 

There’s no single, complete and generally agreed-upon theory of SLA, but there’s a widespread view that second language learning is a process whereby learners gradually develop their own autonomous grammatical system with its own internal organising principles. This system is referred to as “interlanguage”.  Note that “interlanguage” is a theoretical construct (not a fact and not a metaphor) which has proved useful in developing a theory of some of the phenomena associated with SLA; the construct itself needs further study and the theory which it’s part of  is incomplete, and possibly false.

Support for the hypothesis of interlanguages comes from observations of U-shaped behaviour in SLA, which indicate that learners’ interlanguage development is not linear. An example of U-shaped behaviour is this:

[Slide: a U-shaped learning curve illustrating the well-known “feet” example (feet, then foots, then feet again) referred to below.]

The example here is from a study in the 70s. Another example comes from morphological development, specifically, the development of English irregular past forms, such as came, went, broke, which are supplanted by rule-governed, but deviant past forms: comed, goed, breaked. In time, these new forms are themselves replaced by the irregular forms that appeared in the initial stage.

This U-shaped learning curve is observed in learning the lexicon, too, as Long (2011) explains. Learners have to master the idiosyncratic nature of words, not just their canonical meaning. When learners encounter a word in a correct context, the word is not simply added to a static cognitive pile of vocabulary items. Instead, they experiment with the word, sometimes using it incorrectly, thus establishing where it works and where it doesn’t. The suggestion is that only by passing through a period of incorrectness, in which the word is used in a variety of ways, can they climb back up the U-shaped curve. To add to the example of feet above, there’s the example of the noun shop. Learners may first encounter the word in a sentence such as “I bought a pastry at the coffee shop yesterday.” Then they experiment with deviant utterances such as “I am going to the supermarket shop,” correctly associating the word ‘shop’ with a place where they can purchase goods, but getting it wrong. By making these incorrect utterances, the learner distinguishes between what is appropriate and what is not, because “at each stage of the learning process, the learner outputs a corresponding hypothesis based on the evidence available so far” (Carlucci and Case, 2013).
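
As a rough sketch of the mechanism being described (mine, not a model taken from Long or from Carlucci and Case), the toy code below shows how a past-tense “grammar” that first relies on memorised forms, then over-applies a general -ed rule, and finally re-establishes the exceptions produces exactly this U-shaped pattern for irregular verbs; the stages and verbs are chosen purely for illustration.

```python
# Toy sketch of U-shaped development for English irregular past forms.
# Stages and data are invented for illustration only.

IRREGULARS = {"come": "came", "go": "went", "break": "broke"}

def past(verb, stage):
    """Return the learner's past-tense form of `verb` at a given stage."""
    if stage == 1:
        # Stage 1: irregular pasts used correctly as memorised, unanalysed items.
        return IRREGULARS.get(verb, verb + "ed")
    if stage == 2:
        # Stage 2: a general "add -ed" rule is over-applied, displacing the
        # memorised irregulars (rule-governed but deviant: comed, goed, breaked).
        return verb + "ed"
    # Stage 3: the rule is retained, but the exceptions are re-established.
    return IRREGULARS.get(verb, verb + "ed")

for stage in (1, 2, 3):
    print(stage, [past(v, stage) for v in ("come", "go", "break", "play")])
# 1 ['came', 'went', 'broke', 'played']
# 2 ['comed', 'goed', 'breaked', 'played']
# 3 ['came', 'went', 'broke', 'played']
```

Accuracy on the irregular verbs goes from correct to deviant and back to correct, while regular verbs like play are unaffected – the zigzag trajectory described above.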

Automaticity

The re-organisation of new information as learners move along the U-shaped curve is a characteristic of interlanguage development. Associated with this restructuring is the construct of automaticity. Language acquisition can be seen as a complex cognitive skill where, as your skill level in a domain increases, the amount of attention you need to perform generally decreases. The basis of processing approaches to SLA is that we have limited resources when it comes to processing information, and so the more of the process we can make automatic, the more processing capacity we free up for other work. Active attention requires more mental work, and thus developing the skill of fluent language use involves making more and more of it automatic, so that no active attention is required. McLaughlin (1987) compares learning a language to learning to drive a car: through practice, language skills go from a ‘controlled process’, in which great attention and conscious effort are needed, to an ‘automatic process’.

Automaticity can be said to occur when associative connections between a certain kind of input and a certain kind of output pattern become established. For instance, in this exchange:

  • Speaker 1: Morning.
  • Speaker 2: Morning. How are you?
  • Speaker 1: Fine, and you?
  • Speaker 2: Fine.

the speakers, in most situations, don’t actively think about what they’re saying. In the same way, second language learners learn new language through the use of controlled processes, which become automatic and, in turn, free up controlled processes that can then be directed to new forms.

Sequences

There is a further hypothesis that is generally accepted among those working on processing models of SLA, namely that L2 learners pass through developmental sequences on their way to some degree of communicative competence, exhibiting common patterns and features across differences in learners’ age and L1, acquisition context, and instructional approach. Examples of such sequences are found in the well known series of morpheme studies; the four-stage sequence for ESL negation; the six-stage sequence for English relative clauses; and the sequence of question formation in German (see Long, 2015 for a full discussion).

Development of the L2 exhibits plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours. No matter what the learners’ L1 might be, no matter what the order or manner in which target-language structures are presented to them by teachers, learners analyze the input and come up with their own interim grammars, and they master the structures in roughly the same manner and order whether learning in classrooms, on the street, or both. This led Pienemann to formulate his learnability hypothesis and teachability hypothesis: what is processable by students at any time determines what is learnable, and, thereby, what is teachable (Pienemann, 1984, 1989).

All these bits and pieces of an incomplete theory of L2 learning suggest that learners themselves, not their teachers, have most control over their language development. As Long (2011) says:

Students do not – in fact, cannot – learn (as opposed to learn about) target forms and structures on demand, when and how a teacher or a coursebook decree that they should, but only when they are developmentally ready to do so. Instruction can facilitate development, but needs to be provided with respect for, and in harmony with, the learner’s powerful cognitive contribution to the acquisition process.

Let me emphasise that the aim of this psycholinguistic research is to understand how learners deal psychologically with linguistic data from the environment (input), transforming those data into competence in the L2. Constructs such as input, intake, noticing, short and long term memory, implicit and explicit learning, interlanguage, output, and so on are used to facilitate the explanation, which takes the form of a number of hypotheses. No “black box” is used as an ad hoc device to rescue the hypotheses. Those who make use of Chomsky’s theoretical construct of an innate Language Acquisition Device in their theories of SLA do so in such a way that their hypotheses can be tested. In any case, it’s how learners interact psychologically with their linguistic environment that interests those involved in interlanguage studies. Other researchers look at how learners interact socially with their linguistic environment, and many theories contain both sociolinguistic and psycholinguistic components.

So there you are. There’s a quick summary of how some scholars try to explain the process of SLA from a psychological perspective. But before we go on, we have to look at the difference between metaphors and theoretical constructs.

Metaphors and Constructs

A metaphor is a figure of speech in which a word or phrase denoting one kind of object or idea is used in place of another to suggest a likeness or analogy between them. She’s a tiger. He died in a sea of grief. To say that “input” is a metaphor is to say that it represents something else, and so it does. To say that we should be careful not to mistake “input” for the real thing is well advised. But to say that “input”, as I used it above, is a metaphor is quite simply wrong. No scientific theory of anything uses metaphors because, as Gregg (2010) points out:

There is no point in conducting the discussion at the level of metaphor; metaphors simply are not the sort of thing one argues over. Indeed, as Fodor and Pylyshyn (1988: 62, footnote 35) say, ‘metaphors … tend to be a license to take one’s claims as something less than serious hypotheses.’ Larsen-Freeman (2006: 590) reflects the same confusion of metaphor and hypothesis: ‘[M]ost researchers in [SLA] have operated with a “developmental ladder” metaphor (Fischer et al., 2003) and under certain assumptions and postulates that follow from it …’ But of course assumptions and postulates do not follow from metaphors; nothing does.

In contrast, theoretical constructs such as input, intake, noticing, automaticity, and so on, define what they stand for, and each of them is used in the service of exploring a hypothesis or a more general theory. All of the theoretical constructs named above, including “input”, are theory-laden: they’re terms used in a special way in the service of the hypothesis or theory they are part of,  and their validity or truth value can be tested by appeals to logic and empirical evidence. Some constructs, for example those used in Krashen’s theory, are found wanting because they’re so poorly-defined as to be circular. Other constructs, for example noticing, are the subject of both logical and empirical scrutiny. None of these constructs is correctly described as a metaphor, and Larsen Freeman’s inability to distinguish between a theoretical construct and a metaphor plagues her incoherent argument.  In short: metaphors are no grounds on which to build any theory, and dealing in metaphors assures that no good theory will result.

Get it? If you do, you’re a step ahead of Larsen Freeman, who seems to have taken several steps backwards since, in 1991, she co-authored, with Mike Long, the splendid An introduction to second language acquisition research.

Let’s now look at what Larsen Freeman said in her plenary address.

The Plenary

Larsen Freeman read this out:

[slide]

Then, with this slide showing:

[slide]

she said this:

Do we want to see our students as black boxes, as passive recipients of customised input, where they just sit passively and receive? Is that what we want?

Or is it better to see our learners as actively engaged in their own process of learning and discovering the world finding excitement in learning and working in a collaborative fashion with their classmates and teachers?

It’s time to shift metaphors. Let’s sanitise the language. Join with me; make a pledge never to use “input” and “output”.

You’d be hard put to come up with a more absurd straw man argument, or a more trivial treatment of a serious issue. Nevertheless, that’s all Larsen Freeman had to say about it.

Complex

With input and output safely consigned to the dustbin of history, Larsen Freeman moved on to her own new way of understanding. She has a “theoretical commitment” to complexity theory, but, she said:

If you don’t want to take my word for it that ecology is a metaphor for now, .. or complexity theory is a theory in keeping with ecology, I refer you to your own Stephen Hawkins, who calls this century “the century of complexity.”

Well, if the great Stephen Hawkins calls this century “the century of complexity”, then  complexity theory must be right, right?

With Hawkins’ impressive endorsement in the bag, and with a video clip of a flock of birds avoiding a predator displayed on her presentation slide, Larsen Freeman began her account of the theory that she’s now so committed to.


She said:

Instead of thinking about reifying and classifying and reducing, let’s turn to the concept of emergence – a central theme in complexity theory. Emergence is the idea that in a complex system different components interact and give rise to another pattern at another level of complexity.

A flock of birds part when approached by a predator and then they re-group. A new level of complexity arises, emerges, out of the interaction of the parts.

All birds take off and land together. They stay together as a kind of superorganism. They take off, they separate, they land, as if one.

You see how that pattern emerges from the interaction of the parts?

Notice there’s no central authority: no bird says “Follow me I’ll lead you to safety”; they self organise into a new level of complexity.

What are the levels of complexity here? What is the new level of complexity that emerges out of the interaction of the parts? Where does the parting and reformation of the flock fit in to these levels of complexity? How is “all birds take off and land together” evidence of a new level of complexity?

What on earth is she talking about? Larsen Freeman constantly gives the impression that she thinks what she’s saying is really, really important, but what is she saying? It’s not that it’s too complicated, or too complex; it’s that it just doesn’t make much sense. “Beyond our ken”, perhaps.

Open

The next bit of Larsen Freeman’s talk that addressed complexity theory was introduced by her reading this text aloud:

[slide]

After which she said:

Natural themes help to ground these concepts. …………….

I invite you to think with me and make some connections. Think about the connection between an open system and language. Language is changing all the time; it’s flowing, but it’s also changing. ………………

Notice in this eddy, in this stream, that pattern exists in the flux, but all the particles that are passing through it are constantly changing.  It’s not the same water, but it’s the same pattern. ………………………..

So this world (the stream in the picture) exists because last winter there was snow in the mountains. And the snow pattern accumulated such that now when the snow melts, the water feeds into many streams, this one being one of them. And unless the stream is dammed, or the water ceases, the source ceases, the snow melts, this world will continue. English goes on, even though it’s not…. the English of Shakespeare and yet it still has the identity we know and call English. So these systems are interconnected both spatially and temporally, in time. 

Again, what is she talking about? What systems is she talking about? What does it all mean? The key seems to be “patterns in the flux”, but then, what’s so new about that?

At some point Larsen Freeman returned to this “patterns in the flux” issue. She showed a graph of the average performance of a group of students which indicated that the group, when seen as a whole, had made progress. Then she showed the graphs of the individuals who made up the group and it became clear that one or two individuals hadn’t made any progress. What do we learn from this? I thought she was going to say something about a reverse level of complexity, or granularity, or patterns disappearing from the flux from a lack of  interaction of the parts, or something.  But no. The point was:

When you look  at group average and individual performance, they’re different.

Just in case that’s too much for you to take in, Larsen Freeman explained:

Variability is ignored by statistical averages. You can make generalisations about the group but don’t assume they apply to individuals. Individual variability is the essence of adaptive behaviour. We have to look at patterns in the flux. That’s what we know from a complexity theory ecological perspective.
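
The point about averages is simple arithmetic, and a minimal sketch with made-up scores shows it: the group mean rises steadily even though one learner in the group makes no progress at all.

```python
# Invented scores for four learners tested at three points in time.
scores = {
    "A": [40, 55, 70],
    "B": [35, 50, 65],
    "C": [45, 60, 75],
    "D": [50, 50, 50],   # learner D makes no progress at all
}

n_times = len(scores["A"])
means = [sum(s[t] for s in scores.values()) / len(scores) for t in range(n_times)]

print("group means:", means)        # [42.5, 53.75, 65.0] -- steady progress on average
print("learner D:  ", scores["D"])  # [50, 50, 50] -- flat throughout
```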

Adaptation

Returning to the exposition of complexity theory, there’s one more bit to add: adaptiveness. Larsen Freeman read aloud the text from this slide:

[slide]

The example is the adaptive immune system (not the innate immune system). Larsen Freeman invited the audience to watch the video and see how the good microbe got the bad one, but I don’t know why. Anyway, the adaptive immune system is an example of a system that is nimble, dynamic, and has no centralised control – this absence of centralised control being a key part of complexity theory.

And that’s all folks! That’s all Larsen Freeman had to say about complexity theory: it’s complex, open and adaptive. I’ve rarely witnessed such a poor attempt to explain anything.

Affordances

Then Larsen Freeman talked about affordances. This, just to remind you, is her alternative to input.

There are two types of affordances:

  1. Property affordances. These are in the environment. You can design an affordance. New affordances for classroom learning include providing opportunities for engagement; instruction and materials that make sure everybody learns; and using technology.
  2. Second order affordances. These refer to the learner’s perception of, and relation with, property affordances. Students are not passive receivers of input. Second order affordances include the agent, the perceiver, in the system. They are dynamic and adaptive; they emerge when aspects of the environment are in interaction with the agent. The agent’s relational stance to the property affordances is key: a learner’s perception of and interaction with the environment is what creates a second order affordance.

To help clarify things, Larsen Freeman read this to the audience:

[slide]

(Note here that their students “operate between languages”, unlike mine and yours (unless you’ve already taken the pledge and signed up), who learn a second or foreign language. Note also that Thoms calls “affordance” a construct.)

If I’ve got it right, “affordances” refer first to anything in the environment that might help learners learn, and second to the learner’s relational stance to those things. The important bit is the relational stance: the learner’s perception of, and interaction with, the environment. Crucially, the learner’s perception of the affordance opportunities has to be taken into account. “Really?” you might say, “That’s what we do in the old world of input too – we try to take into account the learner’s perception of the input!”

Implications for teaching

Finally Larsen Freeman addresses the implications of her radical new way of understanding for teaching.

Here’s an example. In the old world which Larsen Freeman is so eager to leave behind, where people still understand SLA in terms of input and output, teachers use recasts. In the shiny new world of complexity theory and emergentism, recasts become access-creating affordances.


Larsen Freeman explains that rather than just recast, you can “build on the mistake” and thus “manage the affordance created by it.”

And then there’s adaptation.


Larsen Freeman refers to the “Inert Knowledge Problem”: students can’t use knowledge learned in class when they try to operate in the real world. How, Larsen Freeman asks, can they adapt their language resources to this new environment?  Here’s what she says:

So there’s a sense in which a system like that is not externally controlled through inputs and outputs but creates itself. It holds together in a self-organising manner – like the bird flock –  that makes it have its individuality and directiveness in relation to the environment.  Learning is not the taking in of existing forms but a continuing dynamic adaptation to context which is always changing  In order to use language patterns , beyond a given occasion, students need experience in adapting to multiple and variable contexts.

“A system like that”??  What system is she talking about? Well it doesn’t really matter, does it, because the whole thing is, once again, beyond our ken, well beyond mine, anyway.

Larsen Freeman gives a few practical suggestions to enhance our students’ ability to adapt, “to take their present system and mold (sic) it to a new context for a present purpose.”

You can do the same task in less time.

Don’t just repeat it, change the task a little bit.

Or make it easier.

Or give them a text to read.

Or slow down the recording.

Or use a Think Aloud technique in order to freeze the action, “so that you explain the choices that exist”. For example:

If I say “Can I help you?”, the student says:

“I want a book.”

and that might be an opportunity to pause and say:

“You can say that. That’s OK; I understand your meaning.”

But another way to say it is to say

“I would like a book.”

Right? To give information. Importantly, adaptation does not mean sameness, but we are trying to give information so that students can make informed choices about how they wish to be, um,… seemed.

And that was about it. I don’t think I’ve left any major content out.

Conclusion

This is the brave new world that two of the other plenary speakers – Richardson and Thornbury – want to be part of. Both of them join in Larsen Freeman’s rejection of the explanation of the process of SLA that I sketched at the start of this post, and both of them are enthusiastic supporters of Larsen Freeman’s version of complexity theory and emergentism.

Judge for yourself.      

 

References

Carlucci, L. and Case, J. (2013) On the Necessity of U-Shaped Learning. Topics in Cognitive Science, 5(1), 56-88.

Gregg, K. R. (2010) Shallow draughts: Larsen-Freeman and Cameron on complexity. Second Language Research, 26(4), 549-56.

McLaughlin, B. (1987) Theories of Second Language Learning.  London: Edward Arnold.

Pienemann, M. (1987) Determining the influence of instruction on L2 speech processing. Australian Review of Applied Linguistics 10, 83-113.

Pienemann, M. (1989) Is language teachable? Psycholinguistic experiments and hypotheses. Applied Linguistics 10, 52-79.

A New Term Starts!


Here we go again – a new term is starting at universities offering Masters in TESOL or Applied Linguistics, so once again I’ve moved this post to the front.

Again, let’s run through the biggest problems students face: too much information; choosing appropriate topics; getting the hang of academic writing.

1. Too much Information.

An MA TESOL curriculum looks daunting, the reading lists look daunting, and the books themselves often look daunting. Many students spend far too long reading and taking notes in a non-focused way: they waste time by not thinking right from the start about the topics that they will eventually choose to base their assignments on.  So, here’s the first tip:

The first thing you should do when you start each module is think about what assignments you’ll do.

Having got a quick overview of the content of the module, make a tentative decision about which parts of it to concentrate on and about your assignment topics. This will help you to choose reading material, and will give focus to your studies.

Similarly, you have to learn what to read, and how to read. When you start each module, read the course material and don’t go out and buy a load of books. And here’s the second tip:

Don’t buy any books until you’ve decided on your topic, and don’t read in any depth until then either.

Keep in mind that you can download at least 50% of the material you need from the library and other websites, and that more and more books can now be bought in digital format. To do well in this MA, you have to learn to read selectively. Don’t just read. Read for a purpose: read with a particular topic (better still, with a well-formulated question) in mind. Don’t buy any books before you’re absolutely sure you’ll make good use of them.

2. Choosing an appropriate topic.

The trick here is to narrow down the topic so that it becomes possible to discuss it in detail, while still remaining central to the general area of study. So, for example, if you are asked to do a paper on language learning, “How do people learn a second language?” is not a good topic: it’s far too general. “What role does instrumental motivation play in SLA?” is a much better topic. Which leads me to Tip No. 3:

The best way to find a topic is to frame your topic as a question.

Well-formulated questions are the key to all good research, and they are one of the keys to success in doing an MA. A few examples of well-formulated questions for an MA TESL are these:

• What’s the difference between the present perfect and the simple past tense?

• Why is “stress” so important to English pronunciation?

• How can I motivate my students to do extensive reading?

• When’s the best time to offer correction in class?

• What are the roles of “input” and “output” in SLA?

• How does the feeling of “belonging” influence motivation?

• What are the limitations of a Task-Based Syllabus?

• What is the wash-back effect of the Cambridge FCE exam?

• What is politeness?

• How are blogs being used in EFL teaching?

To sum up: Choose a manageable topic for each written assignment. Narrow down the topic so that it becomes possible to discuss it in detail. Frame your topic as a well-defined question that your paper will address.

3. Academic Writing.

Writing a paper at Masters level demands a good understanding of all the various elements of academic writing. First, there’s the question of genre. In academic writing, you must express yourself as clearly and succinctly as possible, and here comes Tip No. 4:

In academic writing “Less is more”.

Examiners mark down “waffle”, “padding”, and generally loose expression of ideas. I can’t remember who, but somebody famous once said at the end of a letter: “I’m sorry this letter is so long, but I didn’t have time to write a short one”. There is, of course, scope for you to express yourself in your own way (indeed, examiners look for signs of enthusiasm and real engagement with the topic under discussion) and one of the things you have to do, like any writer, is to find your own, distinctive voice. But you have to stay faithful to the academic style.

While the content of your paper is, of course, the most important thing, the way you write it and the way you present it have a big impact on your final grade. Just for example, many examiners, when marking an MA paper, go straight to the Reference section and check whether it’s properly formatted and contains all and only the references mentioned in the text. The way you present your paper (double-spaced, proper indentations, and all that stuff); the way you write it (so as to make it cohesive); the way you organise it (so as to make it coherent); the way you give in-text citations; the way you give references; the way you organise appendices: all of these are crucial.

Making the Course Manageable

1. Essential steps in working through a module.

Focus: that’s the key. Here are the key steps:

Step 1: Ask yourself: What is this module about? Just as important: What is it NOT about? The point is to quickly identify the core content of the module. Read the Course Notes and the Course Handbook, and DON’T READ ANYTHING ELSE, YET.

Step 2: Identify the components of the module. If, for example, the module is concerned with grammar, then clearly identify the various parts that you’re expected to study. Again, don’t get lost in detail: you’re still just trying to get the overall picture. See the chapters on each module below for more help with this.

Step 3: Do the small assignments that are required. Study the requirements of the MA TESL programme closely to identify which of your writing assignments count towards your formal assessment and which do not. Some small assignments are required (you MUST submit them) but do not influence your mark or grade: do them in order to prepare yourself for the assignments that do count, but don’t spend too much time on them.

Step 4: Identify the topic that you will choose for the written assignment that will determine your grade. THIS IS THE CRUCIAL STEP! Reach this point as fast as you can in each module: the sooner you decide what you’re going to focus on, the better your reading, studying, writing and results will be. Once you have identified your topic, then you can start reading for a purpose, and start marshalling your ideas. Again, we will look at each module below, to help you find good, well-defined, manageable topics for your main written assignments.

Step 5: Write an outline of your paper. The outline is for your tutor: it should give a brief overview of the structure and argument of your paper. Make sure that your tutor reviews your outline and approves it.

Step 6: Write the First Draft of the paper. Write this draft as if it were the final version: don’t say “I’ll deal with the details (references, appendices, formatting) later”. Make it as good as you can.

Step 7: If you are allowed to do so, submit the first draft to your tutor. Some universities don’t approve of this, so check with your tutor. If your tutor allows such a step, try to get detailed feedback on it. Don’t be content with any general “Well, that looks OK” stuff. Ask “How can I improve it?” and get the fullest feedback possible. Take note of ALL suggestions, and make sure you incorporate ALL of them in the final version.

Step 8: Write the final version of the paper.

Step 9: Carefully proofread the final version. Use a spell-checker. Check all the details of formatting, citations, the Reference section and Appendices. Ask a friend or colleague to check it. If allowed, ask your tutor to check it.

Step 10: Submit the paper: you’re done!

3. Using Resources

Your first resource is your tutor. You’ve paid lots of money for this MA, so make sure you get all the support you need from him or her! Most importantly: don’t be afraid to ask for help whenever you need it. Ask any question you like (while it’s obviously not quite true that “there’s no such thing as a stupid question”, don’t feel intimidated or afraid to ask very basic questions), and as many as you like. Ask your tutor for suggestions on reading, on suitable topics for the written assignments, on where to find materials, on anything at all that you have doubts about. Never submit any written work for assessment until your tutor has said it’s the best you can do. If you think your tutor is not doing a good job, say so, and if necessary, ask for a change.

Your second resource is your fellow students. When I did my MA, I learned a lot in the students’ bar! Whatever means you have of talking to your fellow-students, use them to the full. Ask them what they’re reading, what they’re having trouble with, and share not only your thoughts but your feelings about the course with them.

Your third resource is the library. It is ABSOLUTELY ESSENTIAL to teach yourself, if you don’t already know, how to use a university library. Again, don’t be afraid to ask for help: most library staff are wonderful – the unsung heroes of the academic world. At Leicester University, where I work as an associate tutor on the Distance Learning MA in Applied Linguistics and TESOL course, the library staff exemplify good library practice. They can be contacted by phone and by email, and they have always, without fail, solved the problems I’ve asked them for help with. Whatever university you are studying at, the library staff are probably your most important resource, so be nice to them, and use them to the max. If you’re doing an on-campus course, the most important thing is to learn how the journals and books that the library holds are organised. Since most of you have already studied at university, I suppose you’ve got a good handle on this, but if you haven’t, well, do something! Just as important as the physical library at your university are the internet resources it offers. This is so important that I have dedicated Chapter 10 to it.

Your fourth resource is the internet. Apart from the resources offered by the university library, there is an enormous amount of valuable material available on the internet. See the “Doing an MA” and “Resources” section of this website for more stuff.

I can’t resist mentioning David Crystal’s Encyclopedia of The English Language as a constant resource. A friend of mine claimed that she got through her MA TESL by using this book most of the time, and, while I only bought it recently, I wish I’d had it to refer to when I was doing my MA.

Please use this website to ask questions and to discuss any issues related to your course.

Theoretical Constructs in SLA

Here is my contribution to Robinson, P. (ed.) (2013) The Encyclopedia of SLA. London: Routledge.

1. Introduction
Theoretical constructs in SLA include such terms as interlanguage, variable competence, motivation, and noticing. These constructs are used in the service of theories which attempt to explain phenomena, and thus, in order to understand how the term “theoretical construct” is used in SLA, we must first understand the terms “theory” and “phenomena”.

A theory is an attempt to answer a question, usually a “Why” or “How” question. The “Critical Period” theory (see Birdsong, 1999) attempts to answer the question “Why do most L2 learners not achieve native-like competence?” The Processability Theory (Pienemann, 1998) attempts to answer the question “How do L2 learners go through stages of development?” In posing the question that a theory seeks to answer, we refer to “phenomena”: the things that we isolate, define, and then attempt to explain in our theory. In the case of theories of SLA, key phenomena are transfer, staged development, systematicity, variability and incompleteness. (See Towell and Hawkins, 1994: 15.)

A clear distinction must be made between phenomena and observational data. Theories attempt to explain phenomena, and observational data are used to support and test those theories. The important difference between data and phenomena is that the phenomena are what we want to explain, and thus, they are seen as the result of the interaction between some manageably small number of causal factors, instances of which can be found in different situations. By contrast, any type of causal factor can play a part in the production of data, and the characteristics of these data depend on the peculiarities of the experimental design, or data-gathering procedures, employed. As Bogen and Woodward put it: “Data are idiosyncratic to particular experimental contexts, and typically cannot occur outside those contexts, whereas phenomena have stable, repeatable characteristics which will be detectable by means of different procedures, which may yield quite different kinds of data” (Bogen and Woodward, 1988: 317). A failure to appreciate this distinction often leads to poorly-defined theoretical constructs, as we shall see below.

While researchers in some fields deal with such observable phenomena as bones, tides, and sun spots, others deal with non-observable phenomena such as love, genes, hallucinations, gravity and language competence. Non-observable phenomena have to be studied indirectly, which is where theoretical constructs come in. First we name the non-observable phenomena, giving them labels, and then we make constructs. With regard to the non-observable phenomena listed above (love, genes, hallucinations, gravity and language competence), examples of constructs are romantic love, hereditary genes, schizophrenia, the bends, and the Language Acquisition Device. Thus, theoretical constructs are one remove from the original labelling, and they are, as their name implies, packed full of theory; they are, that is, proto-typical theories in themselves, a further invention of ours, an invention made in our attempt to pin down the non-observable phenomena that we want to examine so that the theories which they embody can be scrutinised. It should also be noted that there is a certain ambiguity in the terms “theoretical construct” and “phenomenon”. The “two-step” process of naming a phenomenon and then a construct outlined above is not always so clear: for Chomsky (1986), “linguistic competence” is the phenomenon he wants to explain; to many, it has all the hallmarks of a theoretical construct.

Constructs are not the same as definitions; while a definition attempts to clearly distinguish the thing defined from everything else, a construct attempts to lay the ground for an explanation. Thus, for example, while a dictionary defines motivation in such a way that motivation is distinguishable from desire or compulsion, Gardner (1985) attempts to explain why some learners do better than others, and he uses the construct of motivation to do so, in such a way that his construct takes on its own meaning, and allows others in the field to test the claims he makes. A construct defines something in a special way: it is a term used in an attempt to solve a problem, indeed, it is often a term that in itself suggests the answer to the problem. Constructs can be drawn from everyday parlance (like “noticing” and “competence”) and they can also be new coinages (like “interlanguage”), but, in all cases, constructs are “theory-laden” to the maximum: their job is to support a hypothesis, or, better still, a full-blown theory. In short, then, the job of a construct is to help define and then solve a problem.

2. Criteria for assessing theoretical constructs used in theories of SLA

There is a lively debate among scholars about the best way to study and understand the various phenomena associated with SLA. Those in the rationalist camp insist that an external world exists independently of our perceptions of it, and that it is possible to study different phenomena in this world, to make meaningful statements about them, and to improve our knowledge of them by appeal to logic and empirical observation. Those in the relativist camp claim that there is a multiplicity of realities, all of which are social constructs. Science, for the relativists, is just one type of social construction, a particular kind of language game which has no more claim to objective truth than any other. This article rejects the relativist view and, based largely on Popper’s “Critical Rationalist” approach (Popper, 1972), takes the view that the various current theories of SLA, and the theoretical constructs embedded in them, are not all equally valid, but rather, that they can be critically assessed by using the following criteria (adapted from Jordan, 2004):

1. Theories should be coherent, cohesive, expressed in the clearest possible terms, and consistent. There should be no internal contradictions in theories, and no circularity due to badly-defined terms.
2. Theories should have empirical content. Having empirical content means that the propositions and hypotheses proposed in a theory should be expressed in such a way that they are capable of being subjected to tests, based on evidence observable by the senses, which support or refute them. These tests should be capable of replication, as a way of ensuring the empirical nature of the evidence and the validity of the research methods employed. For example, the claim “Students hate maths because maths is difficult” has empirical content only when the terms “students”, “maths”, “hate” and “difficult” are defined in such a way that the claim can be tested by appeal to observable facts. The operational definition of terms, and crucially, of theoretical constructs, is the best way of ensuring that hypotheses and theories have empirical content.
3. Theories should be fruitful. “Fruitful” in Kuhn’s sense (see Kuhn, 1962:148): they should make daring and surprising predictions, and solve persistent problems in their domain.

Note that the theory-laden nature of constructs is no argument for a relativist approach: we invent constructs, as we invent theories, but we invent them, precisely, in a way that allows them to be subjected to empirical tests. The constructs can be anything we like: in order to explain a given problem, we are free to make any claim we like, in any terms we choose, but the litmus test is the clarity and testability of these claims and the terms we use to make them. Given its pivotal status, a theoretical construct should be stated in such a way that we all know unequivocally what is being talked about, and it should be defined in such a way that it lays itself open to principled investigation, empirical and otherwise. In the rest of this article, a number of theoretical constructs will be examined and evaluated in terms of the criteria outlined above.

3. Krashen’s Monitor Model

The Monitor Model (see Krashen, 1985) is described elsewhere, so let us here concentrate on the deficiencies of the theoretical constructs employed. In brief, Krashen’s constructs fail to meet the requirements of the first two criteria listed above: Krashen’s use of key theoretical constructs such as “acquisition and learning” and “subconscious and conscious” is vague, confusing and not always consistent. More fundamentally, we never find out what exactly “comprehensible input”, the key theoretical construct in the model, means. Furthermore, in conflict with the second criterion listed above, there is no way of subjecting the set of hypotheses that Krashen proposes to empirical tests. The Acquisition-Learning hypothesis offers no evidence to support the claim that two distinct systems exist, nor any means of determining whether they are, or are not, separate. Similarly, there is no way of testing the Monitor hypothesis: since the Monitor is nowhere properly defined as an operational construct, there is no way to determine whether the Monitor is in operation or not, and it is thus impossible to determine the validity of the extremely strong claims made for it. The Input Hypothesis is equally mysterious and incapable of being tested: the levels of knowledge are nowhere defined and so it is impossible to know whether i + 1 is present in input, and, if it is, whether or not the learner moves on to the next level as a result. Thus, the first three hypotheses (Acquisition-Learning, the Monitor, and Natural Order) make up a circular and vacuous argument: the Monitor accounts for discrepancies in the natural order, the learning-acquisition distinction justifies the use of the Monitor, and so on.

In summary, Krashen’s key theoretical constructs are ill-defined and circular, so that the set is incoherent. This incoherence means that Krashen’s theory has such serious faults that it is not really a theory at all. While Krashen’s work may be seen as satisfying the third criterion on our list, and while it is extremely popular among EFL/ESL teachers (even among those who, in their daily practice, ignore Krashen’s clear implication that grammar teaching is largely a waste of time), the fact remains that his series of hypotheses is built on sand. A much better example of a theoretical construct put to good use is Schmidt’s Noticing, which we will now examine.

4. Schmidt’s Noticing Hypothesis

Schmidt’s Noticing hypothesis (see Schmidt, 1990) is described elsewhere. Essentially, Schmidt attempts to do away with the “terminological vagueness” of the term “consciousness” by examining three senses of the term: consciousness as awareness, consciousness as intention, and consciousness as knowledge. Consciousness and awareness are often equated, but Schmidt distinguishes between three levels: Perception, Noticing and Understanding. The second level, Noticing, is the key to Schmidt’s eventual hypothesis. The importance of Schmidt’s work is that it clarifies the confusion surrounding the use of many terms used in psycholinguistics (not least Krashen’s “acquisition/ learning” dichotomy) and, furthermore, it develops one crucial part of a general processing theory of the development of interlanguage grammar.

Our second evaluation criterion requires that theoretical constructs are defined in such a way as to ensure that hypotheses have empirical content, and thus we must ask: what exactly does Schmidt’s concept of noticing refer to, and how can we be sure when it is, and is not, being used by L2 learners? In his 1990 paper, Schmidt claims that noticing can be operationally defined as “the availability for verbal report”, “subject to various conditions”. He adds that these conditions are discussed at length in the verbal report literature, but he does not discuss the issue of operationalisation any further. Schmidt’s 2001 paper gives various sources of evidence of noticing, and points out their limitations. These sources include learner production (but how do we identify what has been noticed?), learner reports in diaries (but diaries span months, while cognitive processing of L2 input takes place in seconds, and keeping a diary requires not just noticing but also reflexive self-awareness), and think-aloud protocols (but we cannot assume that the protocols identify all the examples of target features that were noticed).

Schmidt argues that the best test of noticing is that proposed by Cheesman and Merikle (1986), who distinguish between the objective and subjective thresholds of perception. The clearest evidence that something has exceeded the subjective threshold and been noticed is a concurrent verbal report, since nothing can be verbally reported other than the current contents of awareness. Schmidt adds that “after the fact recall” is also good evidence that something was noticed, providing that prior knowledge and guessing can be controlled. For example, if beginner level students of Spanish are presented with a series of Spanish utterances containing unfamiliar verb forms, and are then asked to recall immediately afterwards the forms that occurred in each utterance, and can do so, that is good evidence that they noticed them. On the other hand, it is not safe to assume that failure to do so means that they did not notice. It seems that it is easier to confirm that a particular form has not been noticed than that it has: failure to achieve above-chance performance in a forced-choice recognition test is a much better indication that the subjective threshold has not been exceeded and that noticing did not take place.
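
To make the idea of “above-chance performance” concrete, here is a minimal worked sketch of the arithmetic involved. It is my own illustration, not a procedure proposed by Schmidt or by Cheesman and Merikle; the learner’s score (26 correct out of 40 two-alternative items), the 50% chance level and the 0.05 cut-off are all hypothetical figures chosen for the example (sketched in Python):

from math import comb

def binomial_p_value(correct: int, items: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: the probability of getting `correct`
    or more right out of `items` forced-choice trials by guessing alone."""
    return sum(comb(items, k) * chance ** k * (1 - chance) ** (items - k)
               for k in range(correct, items + 1))

# Hypothetical learner: 26 correct on a 40-item two-alternative recognition test.
p = binomial_p_value(26, 40)        # roughly 0.04
print(f"p = {p:.3f}")
print("above chance" if p < 0.05 else "at chance")

On this arithmetic, the asymmetry noted above still holds: a clearly at-chance score is stronger evidence that the subjective threshold was not exceeded than an above-chance score is evidence of exactly which forms were noticed.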

Schmidt goes on to claim that the noticing hypothesis could be falsified by demonstrating the existence of subliminal learning, either by showing positive priming of unattended and unnoticed novel stimuli, or by showing learning in dual task studies in which central processing capacity is exhausted by the primary task. The problem in this case is that, in positive priming studies, one can never really be sure that subjects did not allocate any attention to what they could not later report, and similarly, in dual task experiments, one cannot be sure that no attention is devoted to the secondary task. In conclusion, it seems that Schmidt’s noticing hypothesis rests on a construct that still has difficulty measuring up to the second criterion on our list; it is by no means easy to identify reliably when noticing has and has not occurred. Despite this limitation, however, Schmidt’s hypothesis is still a good example of the type of approach recommended by the list. Its strongest virtues are its rigour and its fruitfulness. Schmidt argues that attention as a psychological construct refers to a variety of mechanisms or subsystems (including alertness, orientation, detection within selective attention, facilitation, and inhibition) which control information processing and behaviour when existing skills and routines are inadequate. Hence, learning in the sense of establishing new or modified knowledge, memory, skills and routines is “largely, perhaps exclusively a side effect of attended processing” (Schmidt, 2001: 25). This is a daring claim which makes surprising predictions, and it contradicts Krashen’s claim that conscious learning is of extremely limited use.

5. Variationist approaches

An account of these approaches is given elsewhere. In brief, variable competence, or variationist, approaches use the key theoretical construct of “variable competence”, or, as Tarone calls it, “capability”. Tarone (1988) argues that “capability” underlies performance, and that this capability consists of heterogeneous “knowledge” which varies according to various factors. Thus, there is no homogeneous competence underlying performance but a variable “capability” which underlies specific instances of language performance. Ellis (1987) uses the construct of “variable rules” to explain the observed variability of L2 learners’ performance: learners, by successively noticing forms in the input which are in conflict with the original representation of a grammatical rule, acquire more and more versions of the original rule. This leads to either “free variation” (where forms alternate in all environments at random) or “systematic variation” (where one variant appears regularly in one linguistic context, and another variant in another context).

The root of the problem of the variable competence model is the weakness of its theoretical constructs. The underlying “variable competence” construct used by Tarone and Ellis is nowhere clearly defined, and is, in fact, simply asserted to “explain” a certain amount of learner behaviour. As Gregg (1990: 368) argues, Tarone and Ellis offer a description of language use and behaviour, which they confuse with an explanation of the acquisition of grammatical knowledge. By abandoning the idea of a homogeneous underlying competence, Gregg says, we are stuck at the surface level of the performance data, and, consequently, any research project can only deal with the data in terms of the particular situation it encounters, describing the conditions under which the experiment took place. The positing of any variable rule at work would need to be followed up by an endless number of further research projects looking at different situations in which the rule is said to operate, each of which is condemned to uniqueness, no generalisation about some underlying cause being possible.

At the centre of the variable competence model are variable rules. Gregg argues cogently that such variability cannot become a theoretical construct used in attempts to explain how people acquire linguistic knowledge. In order to turn the idea of variable rules from an analytical tool into a theoretical construct, Tarone and Ellis would have to grant psychological reality to the variable rules (which in principle they seem to do, although no example of a variable rule is given) and then explain how these rules are internalised, so as to become part of the L2 learner’s grammatical knowledge of the target language (which they fail to do). The variable competence model, according to Gregg, confuses descriptions of the varying use of forms with an explanation of the acquisition of linguistic knowledge. The forms (and their variations) which L2 learners produce are not, indeed cannot be, direct evidence of any underlying competence – or capacity. By erasing the distinction between competence and performance “the variabilist is committed to the unprincipled collection of an uncontrolled mass of data” (Gregg 1990: 378).

As we have seen, a theory must explain phenomena, not describe data. In contradiction to this, and to criteria 1 and 2 in our list, the arguments of Ellis and Tarone are confused and circular; in the end, what Ellis and Tarone are actually doing is gathering data without having properly formulated the problem they are trying to solve, i.e. without having defined the phenomenon they wish to explain. Ellis claims that his theory constitutes an “ethnographic, descriptive” approach to SLA theory construction, but he does not answer the question: How does one go from studying the everyday rituals and practices of a particular group of second language learners, through descriptions of their behaviour, to a theory that offers a general explanation for some identified phenomenon concerning the behaviour of L2 learners?

Variable Competence theories exemplify what happens when the distinction between phenomena, data and theoretical constructs is confused. In contrast, Chomsky’s UG theory, despite its shifting ground and its contentious connection to SLA, is probably the best example of a theory where these distinctions are crystal clear. For Chomsky, “competence” refers to underlying linguistic (grammatical) knowledge, and “performance” refers to the actual day to day use of language, which is influenced by an enormous variety of factors, including limitations of memory, stress, tiredness, etc. Chomsky argues that while performance data is important, it is not the object of study (it is, precisely, the data): linguistic competence is the phenomenon that he wants to examine. Chomsky’s distinction between performance and competence exactly fits his theory of language and first language acquisition: competence is a well-defined phenomenon which is explained by appeal to the theoretical construct of the Language Acquisition Device. Chomsky describes the rules that make up linguistic competence and then invites other researchers to subject the theory that all languages obey these rules to further empirical tests.

6. Aptitude

Why is anybody good at anything? Well, they have an aptitude for it: they’re “natural” piano players, or carpenters, or whatever. This is obviously no explanation at all, although, of course, it contains a beguiling element of truth. To say that SLA is (partly) explained by an aptitude for learning a second language is to beg the question: what is aptitude for SLA? Attempts to explain the role of aptitude in SLA illustrate the difficulty of “pinning down” the phenomenon that we seek to explain. If aptitude is to be claimed as a causal factor that helps to explain SLA, then aptitude must be defined in such a way that it can be identified in L2 learners and then related to their performance.

Robinson (2007) uses aptitude as a construct that is composed of different cognitive abilities. His “Aptitude Complex Hypothesis” claims that different classroom settings draw on certain combinations of cognitive abilities, and that, depending on the classroom activities, students with certain cognitive abilities will do better than others. Robinson adds the “Ability Differentiation Hypothesis”, which claims that some L2 learners have different abilities from others, and that it is important to match these learners to instructional conditions which favour their strengths in aptitude complexes. In terms of classroom practice, these hypotheses might well be fruitful, but they do not address the question of how aptitude explains SLA.

One example of identifying aptitude in L2 learners is the CANAL-F theory of foreign language aptitude, which grounds aptitude in “the triarchic theory of human intelligence” and argues that “one of the central abilities required in FL acquisition is the ability to cope with novelty and ambiguity” (Grigorenko, Sternberg and Ehrman, 2000: 392). However successfully the test might predict learners’ ability, the theory fails to explain aptitude in any causal way. The theory of human intelligence in which the CANAL-F theory is grounded fails to illuminate the description given of FL ability; we do not get beyond a limiting of the domain in which the general ability to cope with novelty and ambiguity operates. The individual differences between foreign language learners’ abilities are explained by suggesting that some are better at coping with novelty and ambiguity than others. Thus, whatever construct validity might be claimed for CANAL-F, and however well the test might predict ability, it leaves the question of what precisely aptitude at foreign language learning is, and how it contributes to SLA, unanswered.

How, then, can aptitude explain differential success in a causal way? Even if aptitude can be properly defined and measured without falling into the familiar trap of being circular (those who do well at language aptitude tests have an aptitude for language learning), how can we step outside the reference of aptitude and establish more than a simple correlation? What is needed is a theoretical construct.

7. Conclusion

The history of science throws up many examples of theories that began without any adequate description of what was being explained. Darwin’s theory of evolution by natural selection (the young born to any species compete for survival, and those young that survive to reproduce tend to embody favourable natural variations which are passed on by heredity) lacked any formal description of the theoretical construct “variation”, or any explanation of the origin of variations, or how they passed between generations. It was not until Mendel’s theories and the birth of modern genetics in the early 20th century that this deficiency was dealt with. But, and here is the point, dealt with it was: we now have constructs that pin down what “variation” refers to in the Darwinian theory, and the theory is stronger for them (i.e. more testable). Theories progress by defining their terms more clearly and by making their predictions more open to empirical testing.

Theoretical constructs lie at the heart of attempts to explain the phenomena of SLA. Observation must be in the service of theory: we do not start with data, we start with clearly-defined phenomena and theoretical constructs that help us articulate the solution to a problem, and we then use empirical data to test that tentative solution. Those working in the field of psycholinguistics are making progress thanks to their reliance on a rationalist methodology which gives priority to the need for clarity and empirical content. If sociolinguistics is to offer better explanations, the terms used to describe social factors must be defined in such a way that it becomes possible to do empirically-based studies that confirm or challenge those explanations. All those who attempt to explain SLA must make their theoretical constructs clear, and improve their definitions and research methodology in order to better pin down the slippery concepts that they work with.

References

Birdsong, D. (ed.) (1999) Second Language Acquisition and the Critical Period Hypothesis. Mahwah, NJ: Lawrence Erlbaum Associates.
Bogen, J. and Woodward, J. (1988) “Saving the phenomena.” Philosophical Review 97, 303-52.
Cheesman, J. and Merikle, P. M. (1986) “Distinguishing conscious from unconscious perceptual processes.” Canadian Journal of Psychology 40, 343-367.
Chomsky, N. (1986) Knowledge of Language: Its Nature, Origin and Use. New York: Praeger.
Ellis, R. (1987) “Interlanguage variability in narrative discourse: style-shifting in the use of the past tense.” Studies in Second Language Acquisition 9, 1-20.
Gardner, R. C. (1985) Social Psychology and Second Language Learning: The Role of Attitudes and Motivation. London: Edward Arnold.
Gregg, K. R. (1990) “The Variable Competence Model of second language acquisition and why it isn’t.” Applied Linguistics 11, 1, 364-83.
Grigorenko, E., Sternberg, R. and Ehrman, M. (2000) “A Theory-Based Approach to the Measurement of Foreign Language Learning Ability: The CANAL-F Theory and Test.” The Modern Language Journal 84, iii, 390-405.
Jordan, G. (2004) Theory Construction in SLA. Amsterdam: John Benjamins.
Krashen, S. (1985) The Input Hypothesis: Issues and Implications. New York: Longman.
Kuhn, T. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Pienemann, M. (1998) Language Processing and Second Language Development: Processability Theory. Amsterdam: John Benjamins.
Popper, K. R. (1972) Objective Knowledge. Oxford: Oxford University Press.
Schmidt, R. (1990) “The role of consciousness in second language learning.” Applied Linguistics 11, 129-58.
Schmidt, R. (2001) “Attention.” In Robinson, P. (ed.) Cognition and Second Language Instruction. Cambridge: Cambridge University Press, 3-32.
Tarone, E. (1988) Variation in Interlanguage. London: Edward Arnold.
Towell, R. and Hawkins, R. (1994) Approaches to Second Language Acquisition. Clevedon: Multilingual Matters.



Lost and Unfounded

Leo Selivan’s and Hugh Dellar’s recent contributions to EFL Magazine give further evidence that their strident, confidently expressed ideas lack any proper theoretical foundations.

We can compare the cumulative attempts of Selivan and Dellar to articulate their versions of the lexical approach with the more successful attempts made by Richards and Long to articulate their approaches to ELT.  Richards (2006) describes what he calls “the current phase” of communicative language teaching as

a set of principles about the goals of language teaching, how learners learn a language, the kinds of classroom activities that best facilitate learning, and the roles of teachers and learners in the classroom (Richards, 2006: 2)

Note that Richards says this on page 2 of his book: he rightly starts out with the assumption that “a set of principles” is required.

Long (2015) offers his own version of task based language teaching and he goes to great lengths to explain the underpinnings of his approach. His book is, in my opinion, the best example in the literature of a well-founded, well-explained approach to ELT. It’s based on a splendidly lucid account of a cognitive-interactionist theory of instructed SLA, on careful definitions of task and needs analysis, and on 10 crystal clear methodological principles. Long’s book is to be recommended for its scholarship, its thoroughness, and, not least, for its commitment to a progressive approach to ELT.

So what do Selivan and Dellar offer?

In his “Beginners’ Guide To Teaching Lexically”, http://eflmagazine.com/beginners-guide-to-the-lexical-approach/ Selivan makes a number of exaggerated generalisations about English and then outlines “the main principles of the lexical approach”. These turn out to be

  1. Ban Single Words
  2. English word ≠ L1 word
  3. Explain less – explore more
  4. Pay attention to what students (think they) know.

To explain how such “principles” adequately capture the essence of the lexical approach, Selivan offers “A bit of theory” for each one. For example, Selivan says “A new theory of language, known as Lexical Priming, lends further support to the Lexical Approach. … By drawing students’ attention to collocations and common word patterns we can accelerate their priming”. Says he. But what reasons does he have for such confident assertions? Selivan fails to give his reasons, and fails to give any proper rationale for the claims he makes about language and teaching.

In his podcast, http://eflmagazine.com/hugh-dellar-discusses-the-lexical-approach/, Dellar agrees that collocation is the driving force of English. He claims that the best way to conduct ELT is to concentrate on presenting and practising the lexical chunks needed for different communicative events. Teachers should get students to do things with these chunks, such as “fill in gaps, discuss them, order them, say them, write them out themselves, etc.”, with the goal of getting students to memorise them. Again, Dellar doesn’t explain why we should concentrate on these chunks, or why teachers should get students to memorise them. Maybe he thinks “It stands to reason, yeah?”

At one point in his podcast Dellar says that, while those just starting to learn English will go into a shop and say “I want, um, coffee, um sandwich”,

… as your language becomes more sophisticated, more developed, you learn to kind of grammar the basic content words that you’re adding there. So you learn “Hi. Can I get a cup of coffee and a sandwich, please.” So you add the grammar to the words that drive the communication, yeah? Or you just learn that as a whole chunk. You just learn “Hi. Can I get a cup of coffee? Can I get a sandwich, please?” Or you learn “Can I get…” and you drop in a variety of different things.

This is classic “Dellarspeak”: a badly-expressed misrepresentation of someone else’s erroneous theory. Dellar doesn’t tell us how we teach learners “to grammar” content words, or when it’s better to teach “the whole chunk” – or what informs his use of nouns as verbs, for that matter. As for the “Can I get…?” example, what’s wrong with just politely naming what we want: “Good morning. A coffee and a sandwich, please.”? What is gained by teaching learners to use the redundant “Can I get…” phrase?

But enough of Dellar’s hapless attempts to express other people’s ideas; let’s cut to the chase, if you get my drift. The question I want to briefly discuss is this:

Are Selivan’s and Dellar’s claims based on coherent theories of language and language learning, or are they mere opinions?


Models of English 

Crystal (2003) says: “an essential step in the study of a language is to model it”. Here are two models:

  1. A classic grammar model of the English language attempts to capture its structure, described in terms of grammar, the lexicon and phonology (see Quirk et al., 1985, and Swan, 2001, for examples of descriptive and pedagogical grammars). This grammar model, widely used in ELT today, is rejected by Hoey.
  2. Hoey (2005) says that the best model of language structure is the word, along with its collocational and colligational properties. Collocation and “nesting” (words join with other primed words to form sequences) are linked to contexts and co-texts. So grammar is replaced by a network of chunks of words. There are no rules of grammar; there’s no English outside a description of the patterns we observe among those who use it. There is no right or wrong in language. It makes little sense to talk of something being ungrammatical (Hoey, 2005).

Selivan and Dellar uncritically accept Hoey’s radical new theory of language, but is it really better than the model suggested by grammarians?

Surely we need to describe language not just in terms of the performed but also in terms of the possible. Hoey’s argument that we should look only at attested behaviour and abandon descriptions of syntax strikes most of us as a step too far. And I think Selivan and Dellar agree, since they both routinely refer to the grammatical aspects of language. The problem is that Selivan and Dellar fail to give their own model of language, they fail to clearly indicate the limits of their adherence to Hoey’s model, they fail to say what place syntax has in their view of language. In brief, they have no coherent theory of language.

Hoey’s Lexical Priming Theory

Hoey (2005) claims that we learn languages by subconsciously noticing everything (sic) that we have ever heard or read about words, and storing it all in a massively repetitious way.

The process of subconsciously noticing is referred to as lexical priming. … Without realizing what we are doing, we all reproduce in our own speech and writing the language we have heard or read before. We use the words and phrases in the contexts in which we have heard them used, with the meanings we have subconsciously identified as belonging to them and employing the same grammar. The things we say are subconsciously influenced by what everyone has previously said to us.

This theory hinges on the construct of “subconscious noticing”, but instead of explaining it, Hoey simply asserts that language learning is the result of repeated exposure to patterns of text (the more the repetition, the better the knowledge), thus adopting a crude version of behaviourism. Actually, several ongoing quasi-behaviourist theories of SLA try to explain the SLA process (see, for example, MacWhinney, 2002; O’Grady, 2005; Ellis, 2006; Larsen-Freeman and Cameron, 2008), but Hoey pays them little heed, and neither do Selivan and Dellar, who swallow Hoey’s fishy tale hook, line and sinker, take the problematic construct of priming at face value, and happily use “L1 primings” to explain L1 transfer as if L1 primings were as real as the nose on Hoey’s face.

Hoey rejects cognitive theories of SLA which see second language learning as a process of interlanguage development, involving the successive restructuring of learners’ mental representation of the L2, because syntax plays an important role in them. He also rejects them because, contrary to his own theory, they assume that there are limitations in our ability to store and process information. In cognitive theories of SLA, a lot of research is dedicated to understanding how relatively scarce resources are used. Basically, linguistic skills are posited to slowly become automatic through participation in meaningful communication. While initial learning involves controlled processes requiring a lot of attention and time, with practice the linguistic skill requires less attention and less time, thus freeing up the controlled processes for application to new linguistic skills. To explain this process, the theory uses constructs such as comprehensible input, working and long term memory, implicit and explicit learning, noticing, intake and output.

In contrast, Hoey’s theory concentrates almost exclusively on input, passing quickly over the rest of the issues, and simply asserts that we remember the stuff that we’ve most frequently encountered. So we must ask Selivan and Dellar: What theory of SLA informs your claims? As an example, we may note that Long (2015) explains how his particular task-based approach to ELT is based on a cognitive theory of SLA and on the results of more than 100 studies.

Hoey’s theory doesn’t explain how L2 learners process and retrieve their knowledge of L2 words, or how paying attention to lexical chunks or “L1 primings” affects the SLA process. So what makes Selivan and Dellar think that getting students to consciously notice both lexical chunks and “L1 primings” will speed up primings in the L2? Priming, after all, is a subconscious affair. And what makes Dellar think that memorising lexical chunks is a good way to learn a second language? Common sense? A surface reading of cherry-picked bits of contradictory theories of SLA? Personal experience? Anecdotal evidence? What? There’s no proper theoretical base for any of Dellar’s claims; there’s scarce evidence to support them; and there’s a powerful theory supported by lots of evidence which suggests that they’re mistaken.


 All Chunks and no Pineapple 

Skehan (1998) says:

Phrasebook-type learning without the acquisition of syntax is ultimately impoverished: all chunks but no pineapple. It makes sense, then, for learners to keep their options open and to move between the two systems and not to develop one at the expense of the other. The need is to create a balance between rule-based performance and memory-based performance, in such a way that the latter does not predominate over the former and cause fossilization.

If Selivan and Dellar agree that there’s a need for a balance between rule-based performance and memory-based performance, then they have to accept that Hoey is wrong, and confront the contradictions that plague their present position on the lexical approach, especially their reliance on Hoey’s description of language and on the construct of priming. Until Selivan and Dellar sort themselves out, until they tackle basic questions about a model of English and a theory of second language learning so as to offer some principled foundation for their lexical approach, it amounts to little more than an opinion; more precisely, the unappetising opinion that ELT should give priority to helping learners memorise pre-selected lists of lexical chunks.

References

Crystal, D. (2003) The English Language. Cambridge: Cambridge University Press.

Ellis, N. C. (2006) Language acquisition and rational contingency learning. Applied Linguistics, 27 (1), 1-24.

Hoey, M. (2005) Lexical Priming: A New Theory of Words and Language. Psychology Press.

Krashen, S. (1985) The Input Hypothesis: Issues and Implications. Longman.

Larsen-Freeman, D and Cameron, L. (2008) Complex Systems and Applied Linguistics. Oxford, Oxford University Press.

Lewis, M. (1993) The Lexical Approach. Language Teaching Publications.

Lewis, M. (1996) “Implications of a lexical view of language.” In Willis, J. and Willis, D. (eds.) Challenge and Change in Language Teaching, pp. 4-9. Heinemann.

Lewis, M. (1997) Implementing the Lexical Approach. Language Teaching Publications.

Long, M. (2015) Second Language Acquisition and Task-Based Language Teaching. Wiley.

MacWhinney, B. (2002) The Competition Model: the Input, the Context, and the Brain. Carnegie Mellon University.

O’Grady, W. (2005) How Children Learn Language. Cambridge: Cambridge University Press.

Richards, J. (2006) Communicative Language Teaching Today. Cambridge University Press.

Quirk, R., Greenbaum, S., Leech, G. and Svartvik, J. (1985) A Comprehensive Grammar of the English Language, London: Longman.

Skehan, P. (1998) A Cognitive Approach to Language Learning. Oxford: Oxford University Press.

Swan, M. (2001) Practical English Usage. Oxford: Oxford University Press.

Do It Like Dellar, Or Use Your Brain?


In his talk Teaching Grammar Lexically, Dellar tells us about the life-changing effects that reading Michael Lewis’ The Lexical Approach had on him. What it did was to jolt him out of his comfortable life of grammar-based PPP teaching, and make him realise that language was not lexicalised grammar, but rather grammaticalised lexis. This “profound shift in perspective” took its toll; Dellar confesses that struggling with the challenging implications of Lewis’ text threw his teaching into a state of chaos for two years, and that he and his co-author Andrew Walkley have spent more than twenty years “unpicking” its “dense” content. About ten years later, Dellar had sufficiently recovered from his intellectual odyssey to read another book, Hoey’s Lexical Priming, and this led to an even deeper understanding of, and commitment to, the lexical approach. The extent of Hoey’s influence on Dellar can be appreciated by noting that, after 2005, Dellar’s stock of constructs doubled – from one to two, so that it now consists of “lexical chunks” and “priming”. Undaunted by widespread criticism of Lewis’ and Hoey’s arguments (neither offers a developed theory of SLA or a principled methodology for ELT), and unencumbered by complicated theorising or thinking critically, Dellar sees ELT with uncluttered clarity. Alas, he also sees the need to share his vision with others, and so he travels around the world exhorting teachers everywhere to profoundly shift their perspective, throw off the “tyranny of PPP” and embrace the promise of properly primed lexical chunks.

In contrast to this simplistic, evangelical proselytising, real educators take the view that the primary goal of education is to encourage people to question everything, to think critically for themselves. This view emphasises the dance of ideas, the delight in thinking about things in such a way that one’s intellect is engaged and one’s appreciation of the complexity of things is improved, while the accumulation of information is downplayed. In primary and secondary schools, the good teachers are those who encourage students to question conventional wisdom, and at university the same ethos of critical thinking is what informs the best academic staff; they’re less concerned with facts than with what their students make of them. My concern is that in the world of ELT training, this type of approach is not much in evidence.

What is critical thinking?

The ability to think critically involves three things:

  1. An attitude: don’t believe what you’re told.
  2. Knowledge of the methods of logical inquiry and reasoning
  3. Skill in applying the methods referred to in 2 above.

Critical thinking demands a persistent effort to examine anything you’re told in the light of the evidence that supports it and the logic of its conclusions. It demands that you’re not impressed by who said it, that you remain open-minded, and, above all, that you think rationally for yourself. It refers to your ability to interpret data, to appraise evidence and evaluate arguments. It refers, that is, to your ability to critically examine the so-called facts, to assess the existence (or non-existence) of logical relationships between propositions and to assess whether conclusions are warranted.

Critical thinking is needed when you do your own work and when you assess the work of others. When you do your own work in an MA paper, critical thinking demands that you

  • articulate the problems addressed
  • look for means for solving those problems
  • gather and marshal pertinent information
  • evaluate the information gathered.

When you evaluate your own work and the work of others you must identify defects such as

  • poor articulation of the problem
  • appeals to authority
  • unstated assumptions and values
  • partial data
  • unwarranted conclusions

Thinking critically is, in my opinion, most of all an attitude.  It’s the attitude of  a sceptic, of one who can sniff a rat. In many areas of life you’ll be well-advised to deliberately ignore the scent, but when it comes to matters academic, sniffing a rat, sensing that there’s something wrong in the argumentation and evidence given in a text, is a skill that needs nurturing and honing. Once you adopt the right attitude, then you have to improve your ability to not just feel there’s something wrong, but to identify exactly what that something is.


Back to the Baloney: Dellar’s Lexical Approach  

From the evidence available on his websites, blogs, and recorded interviews, webinars and presentations, Dellar’s approach to ELT training is severely at odds with the critical approach I’ve sketched above. Despite his confusion about both theories of language (UG = Structuralism??) and theories of SLA, Dellar presumes to tell teachers what’s wrong and what’s right. PPP grammar teaching is wrong, teaching lexical chunks is right. Rather than attempt to evaluate different approaches to ELT and tentatively recommend this or that alternative, Dellar banishes doubt and gives the strong impression that he’s cracked it: he’s worked out the definitive blueprint of ELT, and all he has to do is to overcome the entrenched resistance of those still chained to Headway, or English File, or whatever coursebook it is that keeps them languishing under the oppression of PPP grammar teaching.

Any initial excitement you might feel about someone proposing a move away from coursebook-led teaching soon evaporates when you realise that Dellar is against grammar-based coursebooks, but not coursebooks per se; indeed, the definitive blueprint of ELT turns out to be nothing other than his own coursebook series Outcomes! Delivery from the tyranny of grammar teaching and emergence into the brave new world of the lexical approach is a simple affair: you just throw away Headway and pick up Outcomes, and then lead your students, unit by unit, through its mind-numbingly boring pages, just as with any other coursebook. The only difference between the tyrannical past and the liberated future is that grammar boxes are replaced with long lists of leaden lexical chunks, repeated exposure to which is somehow supposed to lead to communicative competence.

If we examine Dellar’s published work – his blogs, his video presentations, his webinars, his conference presentations, and his coursebooks – we find very little evidence of critical thinking about language, language learning or language teaching.

  • Does his work invite teachers to critically consider different views of language? Does it consider the arguments for a generative grammar as argued by Chomsky, versus the arguments for a structuralist approach as argued by Bloomfield, or a functional approach, as argued by Halliday, or a functional-notional approach as argued by Wilkins, or a lexical approach as argued by Pawley, or Nattinger, or Biber? Or does it tell them that English is best seen as lexically-driven, and that’s that?
  • Does his work invite teachers to consider the pros and cons of different accounts of SLA, of different weights given to input and output, of explicit and implicit learning, of different accounts of interlanguage development? Or does it tell them that priming is the key to SLA?  Does Dellar’s oeuvre encourage teachers to critically assess the construct of priming? Or is priming taken as a given, and is it simply asserted that lexical chunks are the secret of language learning?

Conclusion

ELT training is too often characterised by an “I know best” assumption, and by its general rejection of a critical approach to education. Instead of approaching teacher training sessions with prepared answers already in hand, teacher trainers should adopt a critical thinking approach to their job. They should ask teachers open questions, toss some provisional answers out for discussion, invite teachers to critically evaluate them, and work with teachers to help them come up with their own tentative solutions. I need hardly add that these tentative solutions should then be critically discussed.

Criticising Harmer


My criticisms of Jeremy Harmer’s latest published work have caused some dismay, which was only to be expected. Equally predictable was that so few of those who objected to what I said, or how I said it, voiced their concerns; silence, as usual, was the preferred response. To those who did speak up, either in emails to me, or in other forums, here’s my reply.

First, a summary of my criticisms:

  • Harmer’s latest edition of The Practice of English Language Teaching is badly written, badly informed, and displays a lack of critical acumen.
  •  Harmer’s pronouncements on testing in 2015 were appalling.
  • Harmer is an obstacle to progress in ELT.

Well, that’s my view, and I’ve given some evidence to support it in various posts. Further evidence can be got by simply reading his book and watching his presentations. I’ll be glad to talk to Harmer face to face in any forum that he or anyone else wants to organise. Anytime, anywhere.

I take criticism here to be the act of analysing and evaluating the quality of a given text. This involves deconstructing it. I use “deconstruct” as Gill in the quote that heads this blog uses it (not in the special sense that Derrida uses it), to refer to a process that’s been used down through the ages: to deconstruct a text is to critically take it apart. What we examine is the coherence and cohesion of the text, its expression and its content.

At the most superficial (I mean “surface”, not unimportant) level, the quality of a text can be judged by its coherence and cohesion. Coherence refers to clarity, while cohesion refers to organisation and flow. Harmer’s texts lack both. Pick up Harmer’s magnum opus, the truly appalling Practice of English Language Teaching, start reading, and ask yourself: Is this clear? Is this well-expressed? Does the text flow?

  • How many sentences are ungrammatical?
  • How often could things have been more succinctly expressed?
  • How often do you struggle to get to the end of a sentence?
  • How often does the text meander?
  • How often do you feel that the writing is tedious?
  • How often are you referred elsewhere?

The coherence of the text is severely weakened by its author’s inability to stick to the point and to express himself clearly: so often a simple point is dragged out for pages. As for cohesion, the text looks well-organised, but it fails to properly sequence its arguments. It’s full of references to other places in the text where what’s being dealt with is dealt with differently, so you never quite get a handle on anything. And, crucially for cohesion, there’s no over-arching argument running through the text: it’s a motley collection of bits and pieces.

At a deeper level of criticism, we should ask questions about content.

  • Does the text show a good command of things discussed?
  • Does it present an up to date summary of ELT?
  • Does it give a fair and accurate description of current views of the English language, of L2 language learning, of teaching, and of assessment?
  • Is there the slightest hint of originality?
  • Does it give a good critical evaluation of matters discussed?
  • Is it enjoyable to read?

A critical view of the text demands that we don’t take anything for granted. No assertions should be taken at face value; we should carefully scrutinise any opinions, and we should give some attention to the kind of critical discourse analysis (CDA) proposed by Fairclough and others, where political issues are weighed. If we critically examine Harmer’s Practice of English Language Teaching in this way, I suggest that we’ll conclude that the answer to the 6 questions above is a resounding “No!”, and that a CDA of the book reveals a deeply conservative commitment to the status quo.

Thoughts of The Master


Those who are about to embark on the discourse analysis bit of their MA course might like to examine the quotes below. They’re all taken from the published work of Jeremy Harmer. As you know, the purpose of discourse analysis is to examine texts “beyond the sentence boundary”. Various frameworks can be used, but I recommend a literary approach here, where you concentrate on the verbosity, bathos, and general pumped-up, faux academic prose of the writer, blissfully unaware of his limitless limitations. Note the cascade of clichés, the resort to tired truisms, the bumbling use of brackets, and the general tedium of the text, not alleviated by random bits of bullshit. The final example in the list below refers you to a video recording on Harmer’s blog where you’ll find the master examining the finer points of testing in his own unique manner.

So take a look below. As they say in MacDonald’s when they bring you your tasteless, lack-lustre, nutrition-free meal: Enjoy!

The constant interplay of applied linguistic theory and observed classroom practice attempts to draw us ever closer to a real understanding of exactly how languages are learnt and acquired, so that the work of writers such as Ellis (1994) and Thornbury (1999)—to mix levels of theory and practice—are written to influence the methodology we bring to language learning. We ignore their challenges and suggestions at our peril, even if due consideration leads us to reject some of what they tell us.

Teaching may be a visceral art, but unless it is informed by ideas it is considerably less than it might be.

Without beliefs and enthusiasms, teachers become client-satisfiers only—and that is a model which comes out of a different tradition from that of education, and one that we follow at our peril.

A problem with the idea that methodology should be put back into second place (at the very most) is that it threatens to damage an essential element of a teacher’s make-up—namely what they believe in, and what they think they are doing as teachers.

A belief in the essentially humanistic and communicative nature of  language  may well  pre-dispose certain teachers towards a belief in group participation and learner input rather than relying only on the straightforward transmission of knowledge from instructor to passive instructee.

One school of thought which is widely accepted by many language teachers is that the development of our conceptual understanding and cognitave skills is a main objective of all education. Indeed, this is more important than the acquisition of factual information (Williams and Burden 1997:165).

Any teacher with experience knows that it is one thing to put educational temptation in a child’s way (or an adult’s); quite another for that student to actually be tempted.

There is nothing wrong (and everything right) with discovery-based experiential learning. It just doesn’t work some of the time.

What precisely is the role of a cloud granny and how can she (or perhaps he) make the whole experience more productive.

Yet without our accumulated knowledge and memories what are we? Our knowledge is, on the contrary, the seat of our intuition and our creativity. Furthermore, the gathering of that knowledge from our peers and, crucially, our elders and more experienced mentors is part of the process of socialization. Humanity has thought this to be self-evident for at least 2000 years.

On testing https://jeremyharmer.wordpress.com/2013/12/16/testophile-or-testophobe/

As you watch the master deliver his polished address:

  • Note the setting: the well-appointed sitting room, the unused, high quality microphone, the classical music in the background.
  • Note the speaker: the pose, the homely mug of tea, the air of quiet confidence, the carefully-practiced delivery.
  • Note, too, the complete lack of content in what he says, the utter disregard for any serious engagement with an important issue, the assumption that this indulgent, look-at-me-farting-around-saying-absolutely-nothing display will be well met.
  • Such, you might think, is the arrogance of power.


Harmer: The Practice of English Language Teaching 5th Edition


The new edition of Harmer’s Practice of English Language Teaching is over 500 pages and includes chapters on:

  • English as a world language
  • Theories of language and language learning
  • Learner characteristics which influence teacher decisions
  • Guidance on managing learning
  • Teaching language systems (grammar, vocabulary and pronunciation)
  • Teaching language skills (speaking, writing, listening and reading)
  • Practical teaching ideas
  • The role of technology (old and new) in the classroom
  • Assessment for language learning in the digital age

If you’re doing a course in ELT, then reading the new edition of Harmer’s massive tome might well have the salutary effect of making you reconsider your career choice. Nobody could blame you if, having read this mind-numbingly tedious book, you decided to quit ELT and apply for a job in the Damascus Tourist Agency. In the unlikely event that you reach the end of its 550 pages, you’ll probably have lost the will to live, let alone teach. Each page is weighed down by badly crafted, appallingly dull writing; each chapter says nothing new or succinct about its subject; each section covers nothing that isn’t treated far better in other, well-focused books.

The section on English as a world language is absurdly long, badly considered and leans heavily on Crystal, who does a much better job of it, far more concisely and completely, in his book The English Language. The section on theories of language learning is disgraceful; not one of the theories mentioned is properly stated or discussed. I really can’t bring myself to go through the rest of the book; it’s consistently badly informed, badly considered, wordy and unhelpful.

It’s the style that offends me most in this horrendously long doorstop of a book; despite the efforts of all his editors, the suffocating effect of Harmer’s faux-academic, charmlessly chummy, verbose and ineffectual prose is to turn everything to sludge. The reader wades endlessly through the sludge, unaided even by decent signposts, towards another badly defined horizon, there to meet more of the same: another, different hill to climb.

Even if you can get over the soporific effects of Harmer’s writing, the content is not likely to satisfy you, whatever TESOL qualification you’re aiming at. The audacious sweep of the book is almost ironic: here’s a book where everything is mentioned and nothing is adequately dealt with. Magpies skillfully take what they need from other nests; Harmer haplessly crashes into the work of scholars, conveying almost nothing of their contribution. Anything, but anything, mentioned here needs further reading. Needless to say, the bibliography is hopeless.

Just to round it off, the seemingly endless trudge through Harmer’s wasteland gets the reader precisely nowhere. No final vision awaits; all you get at the end of this pathetic pilgrimage are poorly considered, unoriginal platitudes.

This dreadful book serves as a mirror for everybody involved in ELT. How can we in the ELT world be taken seriously by other areas of education when such a book is recommended reading in so many teacher training courses, and even in post-graduate courses?

Harmer, J. (2015) The Practice of English Language Teaching. 5th edition. London: Pearson.