Thornbury: Fence-Sitting & Slime-Sprawling

Bad Mediators 

In Part 1 I suggested that those who write books and give teacher training courses in ELT have a duty to act as mediators between  researchers and teachers, and that most of them make a mess of the job. This opinion was supported by the mini study Thornbury carried out and then reported on at the 2017 IATEFL conference. The study looked at four top-selling “How to Teach English” books which are recommended reading for hundreds of thousands of people studying to get a qualification in ELT, and it found that all four books are based more on the authors’ biases, intuitions, feelings, and what somebody else told them, than on any serious attempt to critically assess what research findings tell us about how people learn languages. In a post on these mediators  I suggested that Thornbury took a disappointingly uncritical look at the data that his study had produced.

Staying on the Fence 

Unlike the four writers he reviewed, Thornbury himself has discussed research findings that challenge ELT orthodoxy more than once, so if he thinks it’s important for him to keep in touch with research and to use research findings to inform his views on methodology, why doesn’t he expect the same of others? And since he’s been so outspoken in his criticism of coursebooks, why didn’t he mention this when discussing his findings? The answer seems to be that Thornbury has developed the unique knack of not just sitting on the fence, but actually living perfectly perched on it. He’s become so adroit at deftly ducking controversy, so practiced at never getting drawn on the political issues raised by the matters he discusses, that he makes the UK Liberal Democrats look radical. He knows perfectly well that the bosses of the British Council, the publishing houses, the exam bodies, the training outfits and so on will simply not allow any serious attacks on current ELT practice to be made – witness his own publishers’ making it clear to him that they’re “not interested” in his McNuggets views or in what he really thinks of the CELTA course. He knows that the ELT educational system is set up in such a way that teachers are unlikely to hear about “inconvenient” research findings which challenge coursebook-driven ELT, or which show that the Pearson Test of English is built on sand, or which describe the Common European Framework of Reference as “a prime example of the way political and social agendas can impact on language testing, and how language testing can be made to serve those agendas” (Fulcher, 2005). I suppose Thornbury thinks, like many reformers, that he can be more of a force for change by staying inside the tent than pissing on it from outside. I think that this argument is demonstrably wrong, but never mind; even if that is Thornbury’s view, it doesn’t explain why he doesn’t adopt a more critical stance.
In the end, maybe it’s just that he’s a really nice guy and he doesn’t like upsetting people. Well, I can certainly relate to that. 🙂

Misrepresenting Chomsky 

Still, there’s another bone I have to pick with the loveable Thornbury, and that is his continued misrepresentation of Chomsky’s work. If you look at the “daft things the experts said” at the end of my last post, it was Thornbury who said “The NS-NNS distinction is absolutely central to the Chomskyan project”. It isn’t, of course, and, pace Thornbury, the onus isn’t on Chomsky to perform the logically impossible task of proving that some aspects of the knowledge of language that children demonstrate couldn’t have been acquired from input, and it isn’t the case that there’s no empirical evidence to support Chomsky’s theory of UG. In my post Treatise on Thornbury’s view of SLA I pointed to some mistakes in Thornbury’s account of what Chomsky says about language and language learning, and also to the faults in his arguments about UG in general and the poverty of the stimulus argument in particular. It’s important to stress that none of the emergentists whom Thornbury now seems to think offer the best explanation of SLA, least of all Larsen-Freeman, has offered an explanation for what young children know about language. As Eubank and Gregg (2002) argue, to suggest that language learning is explained by a general theory of associative learning is to leave unexplained:

  1. The fact that children know which form-function pairings are possible in human language grammars and which are not, regardless of exposure.
  2. The countless cases of instantaneous learning.
  3. The knowledge children have in the absence of exposure (i.e., a frequency of zero), including knowledge of what is not possible.

Furthermore, to quote Eubank and Gregg (2002, p. 237):

Ellis aptly points to infants’ ability to do statistical analyses of syllable frequency (Saffran et al., 1996); but of course those infants haven’t learned that ability. What needs to be shown is how infants uniformly manage this task: why they focus on syllable frequency (instead of some other information available in exposure), and how they know what a syllable is in the first place, given crosslinguistic variation. Much the same is true for other areas of linguistic import, e.g. the demonstration by Marcus et al. (1999) that infants can infer rules. And of course work by Crain, Gordon, and others (Crain, 1991; Gordon, 1985) shows early grammatical knowledge, in cases where input frequency could not possibly be appealed to. Landau & Gleitman (1985) even document lexical acquisition in spite of frequent input, where a blind child acquired (her own interpretation of) verbs like “look” despite frequent training under a different interpretation.

In a comment on the post about Thornbury’s view of SLA, Gregg wrote this:

Hi Geoff,

I think I’d revise one bit of your discussion: Where you say

Thornbury’s unqualified assertion that language learning can be explained as the detection and memorisation of frequently occurring sequences in the sensory data we are exposed to is probably wrong and certainly not the whole story.

I’d change ‘probably’ to ‘definitely’. It’s striking, and depressing, to see how purveyors of ’emergentism’ continue to ignore the mountain of research showing the complexity of language, and the other mountain of research showing the kinds of linguistic (and other) knowledge young children show, knowledge that no one has been able to account for on an empiricist learning theory, and how they continue to blithely assert that it’s all done by generalization across input samples, without showing how. I’m again reminded of the story … of how Rockefeller became rich. One day as a young lad he found himself with a penny in his pocket. He walked down to the farmer’s market and bought an apple, walked to Wall Street and sold it for 2 cents. Then back to the market to buy 2 apples, back to Wall Street, … At the end of a week he’d bought an old wheelbarrow, and after a month he’d earned enough to put down the first month’s rent on a small fruit shop. But then his uncle died and he inherited everything.    


Sprawling in the Primeval Slime 

While Thornbury’s remarks about emergentism are slightly less preposterous this year than they were in 2016 (he’s moved on from Larsen-Freeman and Cameron’s (2008) nonsense about complex systems to slightly better-argued stuff by the likes of Nick Ellis and Tomasello), he continues to incite a younger generation, who, after a quick perusal of Sampson, Everett, Wolfe and other reliable sources, share their ignorance with others in the comments sections of the A to Z of ELT blog. Thornbury being Thornbury, he doesn’t tell the young ’uns that they’re talking baloney; he actually encourages them. In one of his posts this November, Thornbury cheerfully quips that, given the choice between Chomsky’s self-proclaimed triumph of “human reason” on the one hand, and “beastly grovelling in the primeval slime” on the other, he’ll choose the slime every time. The trouble is, he invites the younger generation to join him in the beastly bog; he encourages them to think that their ignorance of Chomsky’s work should be worn like a badge of cool, and he confidently assures them that SLA is best explained as the complex result of a simple process of “reinforcing contingencies set up by the verbal community”. You couldn’t make it up, so it must be true. Well, for the time being we’ll have to leave them to it, happily frolicking in the slime, unconsciously strengthening the associations between who knows what cues, and trust that before they get too much older, the brighter ones will get tired of it, climb out, and leave their genial hero alone with his dirty bucket and spade, there to finally appreciate the power and utility of non-communicative uses of language, or ‘thinking’ as Chomsky refers to them.


Eubank, L. and Gregg, K.R. (2002) News Flash: Hume Still Dead. Studies in Second Language Acquisition, 24, 2, pp. 237–247.

Reflections of 2017, Part 1

Looking back on the posts during 2017, I notice that I started the year (ELT: Art and Rationality) by happily conceding that ELT is

“a creative, imaginative endeavour where a teacher’s ability to bring language to life; to contextualise it; to create situations where students engage with it; to get students to learn some key parts of it by rote or at least through frequent re-cycling; to create group dynamics and nurture group cohesion; to empathise with the doubts and fears of students, to manage conflicting needs, and also to design, organise and carry out a coherent plan of learning, are all far more important than a critical appreciation of theories of SLA and the research they’re based on”.

Chomsky (1995, cited in Gregg, 2006, p. 403) made a similar point when he remarked that we might well learn more about how people think and feel and act by studying history or reading novels than from empirical research, which, “outside of narrow domains has proven shallow or hopeless”.

Teaching English is still, despite all attempts to commodify education, an “arts and craft” activity, a job where experience counts a great deal, and where teachers who combine all manner of skills and knowledge and character traits, and who find themselves in the right place at the right time, can work wonders, making the difference between FonF and FonFs pale into insignificance. And yet, as I said in that post, things have changed from the time when Earl Stevick, John Fanselow, Alan Maley and other master craftsmen (I’m afraid they were mostly men) shared their insights with teachers seeking awareness and inspiration. Since the widespread adoption of coursebooks, our freedom as teachers to express our individuality, inventiveness and creativity has shrunk alarmingly, while at the same time, research into the English language and into how people learn languages has greatly expanded. Despite these two decisive changes, we perversely persist in using syllabuses, methodological principles and pedagogic procedures that rob us of the freedom to pursue our craft, and that, at the same time, fly in the face of robust research findings.

My main argument throughout the year has concerned that enormous elephant in the room: ELT coursebooks. Pace the arguments of those who try to defend their use, coursebooks are not just “a symptom”; it’s not just a question of the way you use them, or that they put too much emphasis on grammar teaching, or that they’re tools of imperialism; or even that they’re stultifyingly boring. No, it’s that they have a huge, generally detrimental effect on the practice of ELT, including syllabus design, methodology, and testing. All the discussion of doing things better, of the role of  extensive reading, of what work to do in and outside classrooms, of how to use this or that bit of kit, of whether to teach vocabulary this way or that, of the best way to recycle work, of the efficacy of pronunciation teaching, of when to use the L1, of how to respond to written and spoken errors, and on and on, all take place against the backdrop of using a coursebook which imposes a restrictive and deforming framework on everything we do. We know that synthetic syllabuses, a PPP methodology and an incremental step by step view of progress are based on false assumptions about how people learn an L2, and yet, using the excuse of convenience and bowing to commercial pressure, we plod on regardless. To make matters worse, like politicians refusing to take climate change seriously, our stubborn refusal to face facts blights the future. The coursebook imposes its mistaken methodological principles and pedagogic procedures on teacher training, particularly the CELTA and Trinity College training courses, where learning to be a teacher of English to speakers of other languages is intricately bound up with learning how to use a coursebook.

In a number of posts this year, I’ve replied to those who have defended coursebook-driven ELT (see the Coursebook section of the menu on the right) and, in my opinion, no serious answers to the case against coursebooks have been offered. Penny Ur’s airy dismissal of any criticism of them; her recent review of SLA research affecting teaching practice (see the Gagged post), where she made no mention of interlanguage research and ignored questions about its implications for coursebook-driven ELT; and her continued reliance on the argument that the convenience of coursebooks outweighs all other considerations strike me as typical of too many of today’s so-called ELT experts. Ur’s replies to Thornbury’s questions about the importance of research (“it’s certainly possible to write helpful and valid professional guidance for teachers with no research references whatsoever”), her misrepresentation of the research on TBLT (“there’s no evidence that it works”), and her extensive use of the well-known fallacy that “inconclusive” evidence in support of a hypothesis is reason to believe it’s false are hallmarks of the unreliable expert.

I suggest that we have a right to expect that those whose job it is to oversee the training and on-going professional development of teachers should take robust research findings about how people learn an L2, particularly those regarding interlanguage development, more seriously and make discussion of them part of their books and training courses. Why does Ur’s book A Course in Language Teaching so confidently promote the coursebook and so completely ignore 40 years of interlanguage research? Why does Harmer’s magnum opus The Practice of ELT (see here for a review) devote more pages to a discussion of classroom seating arrangements than to a discussion of SLA research? Why does nobody in the ELT establishment (except Scott Thornbury) speak out against all the harm being done by the domination of coursebooks today?

The most obvious answer is “Because ELT is a business” and coursebooks are the perfect way to package what could otherwise be a rather messy “product”. But I can’t help feeling that a certain insidious complacency is also to blame, especially when I see Ur, Harmer, Dellar and the rest of them jetting around the planet giving teachers everywhere expert advice on how to teach, without ever initiating a serious discussion of the mounting evidence from SLA research which indicates that current ELT methodology is fundamentally mistaken. Dellar’s*** tweet in September from some exotic corner of the globe illustrates the ease with which doubts about current practice can be shrugged off by those who feel themselves to be really in the know: “You quickly realise how little the heated debates of the euro-centric #EFL blogosphere have to do with most contexts…”, he wrote. Others were quick to “Like”.

*** My sincere apologies to Jim Scrivener, to whom I wrongly attributed the tweet when I published this post.

An examination of conference talks given by the leading lights in ELT in 2017 reveals a general lack of awareness and critical acumen that many of us find shocking; and almost as shocking is that these conference talks go almost entirely unchallenged. The bombast and chutzpah of so many in the ELT establishment gels with the gullibility and docility of their audiences to produce a complacent culture lacking any healthy critical edge. This year, every time the twenty or thirty plenary speakers who presently dominate the global ELT conference circuit finished their presentations, they were met with polite applause. Until this is replaced with a cacophony of affronted catcalls, change won’t come; or at least it won’t come from rank and file action, though it might well come soon enough from technological change which makes both coursebooks and most teachers redundant.

In Part 2 I’ll look at some of the daft things our experts said in 2017, including these:

  •  If you encounter the pattern They man-doubled across the place, you know that man-doubled is some kind of way of moving.
  • In academia the established use of ‘native speaker’ as a sociolinguistic category comes from particular paradigmatic discourses of science and is not fixed beyond critical scrutiny.
  • English sometimes seems as if it is everywhere, but in reality, of course, it is not.
  • The NS-NNS distinction is absolutely central to the Chomskyan project.  
  • English migrated to other countries … such as the USA, Canada, New Zealand, … and many other corners of the globe. And it didn’t stop there. It has morphed and spread to other countries too.
  • The way I see it Scott is that ‘interlanguage’ is one of the uglier of many unnecessary neologisms invented by academics, presumably to give them a sense that they are forging a profession: there are plenty of plain English alternatives.
  • Have you read Evans’ The language myth. Why language is not an instinct ? Very good book. Quite an eye-opener.

After the rain came falling, 

And the truth was washed away, 

I called my brother on the telephone, 

Just to see what he would say

The last one is the first stanza of a song, a lament one could say, in response to Brexit. The song inspired the best comment of 2017 from John Clave:

“I experienced such vergüenza ajena I curled up in a ball and rolled under my bed”.


Gregg, K.R. (2006) Taking a social turn for the worse: The language socialization paradigm for second language acquisition. Second Language Research, 22, 4, pp. 413–442.

What science is not

A few weeks ago, someone in the ELT world tweeted that Salma Patel’s blog, which deals with management of the UK National Health Service, had a post that gave a good, brief summary of research paradigms. I went to the blog and found the post:

The research paradigm – methodology, epistemology & ontology – explained in simple language 

Published in 2015, it’s had 168,622 views so far, and there are dozens of comments at the end thanking Patel for his “clear”, “brilliant”, “superb”, “excellent”, “amazing”, “extremely useful” explanations.

The explanation starts with a summary of the main components of a research paradigm and there is then a video which explains the text. Patel begins by saying that there are two main approaches to research:

  1. Filling knowledge gap: positivist
  2. Problem-solving: interpretive.

He explains:

In the first you read a lot of books … and you find a gap in the research. … It is objective. What is the meaning of objective? Reality is external to us – I don’t know the reality. So, I propose a hypothesis. What is the meaning of a hypothesis? There is a relationship between X and Y, or not. That’s it.

In the second, you identify a problem, you ask “Why?”. There is no single reality, so we have to look at reality from different perspectives, understand different characters, different people… So there’s no reality here. That’s why we have to go ourselves into the organisation and talk to people.

So there you have it: scientific, quantitative research is most suitable for research projects which seek to fill a knowledge gap, while qualitative research (which assumes that there’s no such thing as objective reality) is the best way to go about problem solving.

Scientific research is, of course, nothing like Patel’s description of it. Nor is positivism what Patel says it is, and nor does his chart present a reliable or useful guide to research projects.

The aim of scientific research is, precisely, to solve problems, or, to put it another way, to explain phenomena. The collection of empirical data, the organisation of taxonomies, etc. are carried out not for their own sakes but in the service of an explanatory theory. Hypotheses are the beginning of attempts to solve problems and should lead to theories that explain a certain group of phenomena. The aim is to unify descriptions and low-level theories into a general causal theory.

SLA research carried out under the umbrella of cognitive science adopts these aims and methods, and although far from achieving any general theory, it still has some claim to be part of what Kuhn calls a mature science tradition. In contrast, the sort of work Patel encourages falls, at best, into Kuhn’s “immature science” bag, in the ‘pre-paradigm’ period. It’s clear from the literature that some sociologists and sociolinguists want no part of the scientific enterprise, but Patel’s biased and distorted description of different approaches to research fails to properly explain either the realist or the relativist case. In order to provide newcomers with a clear, balanced, well informed introduction to research methodology, I think Patel needs a better grasp than he shows of the philosophy of science, the history of western philosophy, and how evidence-based research is conceived and conducted.

In response to information given to me by Steve Brown, Carol Goodey and others earlier this year, I wrote a post on Research Paradigms where I commented on the way that various influential sociology departments have developed their own particular post-Kuhnian narrative concerning how research is carried out. I said at the time that I was really surprised to learn how widely these daft notions of ‘positivism’ and ‘research paradigms’ had spread, but I find the fact that Patel’s post has reached over 160,000 grateful postgraduate students quite shocking. Did nobody catch so much as a whiff of baloney? Did nobody take the trouble to, ahem, deconstruct the text?

A more respectable version of Patel’s presentation can be found in Scotland (2012), which Patel cites, but it’s hardly any better. In the end, we can trace most of this “revised”, post-Kuhnian treatment of paradigms back to Lincoln and Guba (1985), who proposed a “Constructivist paradigm” as a replacement for “the conventional, scientific, or positivist paradigm of enquiry”. This view is idealist (“what is real is a construction in the minds of individuals”), pluralist and relativist:

There are multiple, often conflicting, constructions and all (at least potentially) are meaningful.  The question of which or whether constructions are true is sociohistorically relative. (Lincoln and Guba, 1985: 85).

Lincoln and Guba assume that the observer can’t and shouldn’t be neatly disentangled from the observed in the activity of inquiring into constructions.  Constructions in turn are resident in the minds of individuals:

They do not exist outside of the persons who created and hold them; they are not part of some “objective” world that exists apart from their constructors (Lincoln and Guba, 1985: 143).

Thus constructivism is based on the principle of interaction.

The results of an enquiry are always shaped by the interaction of inquirer and inquired into which renders the distinction between ontology and epistemology obsolete: what can be known and the individual who comes to know it are fused into a coherent whole (Guba: 1990: 19).

Note that Patel has either overlooked or ignored the fact that, according to the leading lights in his “constructivist paradigm”, the distinction between ontology and epistemology is obsolete. In any case, if you want to find the roots of the full-blown idealist, relativist, pluralist, your-experience-of-me-experiencing-you-experiencing-the-teapot, topsy-turvy, now-you-see-it-now-you-don’t world of post-modern sociology, you need look no further than Lincoln and Guba, 1985. And if you want a demonstration of why it’s so much baloney, see Gross & Levitt, 1994, and Sokal & Bricmont, 1998.

Not far behind in terms of culpability for all this mess comes Crotty (1998), whose “seminal work” on research in the social sciences is required reading in thousands of undergraduate and postgraduate courses all over the world. Crotty’s work quite wrongly states that positivism started with the work of Francis Bacon, completely misrepresents the work of the positivists themselves, and misrepresents the work of Popper, Kuhn and Feyerabend too. At one point, Crotty says that the real target of Feyerabend’s criticism was “the positivists”, despite the fact that by the time Feyerabend’s Against Method was published, positivists – scientists and philosophers alike – had thankfully disappeared. I challenge Crotty to find a scientific department in any university anywhere on the planet run by self-proclaimed positivists.

C.P. Snow, in his 1959 lecture, first described the ‘two cultures’ of science and the humanities (see Snow, 1993), and the gap has widened considerably since then. Eleven years ago, Gregg (2006) noted that in the field of SLA, a look at the ‘applied linguistics’ literature

turns up doubts about the value of controlling for variables (Block, 1996), reduction of empirical claims to metaphors (Schumann, 1983; Lantolf, 1996), mockery of empirical claims in SLA as ‘physics envy’ and denials of the possibility of achieving objective knowledge (Lantolf, 1996), even wholesale rejection of the values and methods of empirical research (Johnson, 2004). Although the standpoints are various, one common thread unites these critiques: a fundamental misunderstanding of what science, and in particular cognitive science, is about (see, e.g. Gregg et al., 1997; Gregg, 2000; 2002).

Today, blogs and twitter exchanges abound with references to white coats, laboratory conditions and the other trappings of so-called positivists (including Chomsky of course) who, it’s claimed, fail to make any connections with the real world, even though, ironically enough, they’re the only ones who believe in such a thing. In my own case, in exchanges with Marek Kiczkowiak of TEFL Advocates about the existence (or not) of native speakers, I refer to the “sociolinguistic twaddle that obfuscates a simple psychological reality”, while he refers to “the fantastic beast the NS has become in theoretical linguistics and SLA labs”. I’d say that in this case it’s Kiczkowiak who shows a typically deprecating and ignorant attitude towards SLA cognitive research, while I limit myself to the claim that regardless of how difficult it might be for sociolinguists to decide who belongs to what social group, there are such things as native speakers, and it is the case (a case worth researching) that most people who learn an L2 fall short of native competence. But then, I would say that, wouldn’t I.

Patel’s post is more evidence of the need to remain critical in our reading and thinking about our profession. There are so many examples of low standards of scholarship, rational criticism and intellectual honesty in the work of those who do research and teacher training that we need to be constantly on our guard. Down with baloney!



Crotty, M. (1998) The Foundations of Social Research: Meaning and Perspective in the Research Process. London, Routledge.

Gregg, K. R. (2006) Taking a social turn for the worse: the language socialization paradigm for second language acquisition. Second Language Research 22, 4; pp. 413–442.

Gross, P.R. and Levitt, N. (1994) Higher superstition: the academic left and its quarrels with science. Johns Hopkins University Press.

Lincoln, Y. and Guba, E. (1985) Naturalistic Inquiry. Newbury Park, Sage.

Scotland, J. (2012) Exploring the philosophical underpinnings of research: Relating ontology and epistemology to the methodology and methods of the scientific, interpretive, and critical research paradigms. English Language Teaching, 5(9), pp.9–16.

Snow, C.P. (1993) The two cultures. Syndicate of the University of Cambridge.

Sokal, A.D. and Bricmont, J. (1998) Intellectual impostures. London, Verso.

A Reply to A. Holliday’s “Why we should stop using native-non-native speaker labels”


1  In the domain of English language teaching, there is just about universal agreement that discrimination against non-native speaker teachers must stop. Those who fight to end such discrimination have my full support.

2  In the domain of SLA research, native speakers of language X are people for whom language X is the language they learnt through primary socialization in early childhood, as a first language.

3  To paraphrase Long (2007, 2015), the psychological reality of native speakerness is easily demonstrated by the fact that we know who is a native speaker, and who isn’t, when we meet them, often on the basis of just a few utterances. When monolingual speakers are presented with recorded stretches of speech by a large pool of NSs and NNSs and asked to say which are which, the judges are always very good at distinguishing them, with inter-rater reliability typically above .9. How do they do this, and why is there so much agreement, if there is no such thing as a NS?

4  For the last 60 years, the term “native speaker” has been used in the literature concerning studies of language learning, and one of the most studied phenomena of all is the failure of the vast majority of post adolescent L2 learners to achieve what Birdsong (2009) refers to as “native like attainment”.

On the prevailing view of ultimate attainment in second language acquisition, native competence cannot be achieved by post pubertal learners. There are few exceptions to this generalization (Birdsong 1992).

5  Claims concerning the relative abilities of native speakers and learners of the target language are not disconfirmed by individual cases. The claims all accept the psychological reality of native speakerness.

6  The specific claim that very few post adolescent L2 learners attain native like proficiency is supported by a great deal of empirical evidence (see, e.g., reviews by Long, 2007; Harley and Wang, 1995; Hyltenstam and Abrahamsson, 2003).

7  When trying to explain why most L2 learners don’t attain native competence, scholars have investigated various “sensitive periods”. It’s widely accepted that there are multiple sensitive periods for different domains of second language learning – pronunciation, morphology and syntax, lexis and collocation (see Long, 2007, Problems in SLA, Chapter 3 for a review of sensitive periods).

To the issue then

Adrian Holliday, Professor of Applied Linguistics & Intercultural Education at Canterbury Christ Church University, has just published a post on his blog: Why we should stop using native-non-native speaker labels  in response to queries about his claim that the terms native speaker and non-native speaker are neo-racist. He addresses the questions: “What does ‘neo-racist’ mean?” and “Are there no occasion (sic) when these labels can be used?”.

He starts with his own subjective impressions of what ‘native speaker’ means to him and then says:

In academia the established use of ‘native speaker’ as a sociolinguistic category comes from particular paradigmatic discourses of science and is not fixed beyond critical scrutiny.

I’ve no idea what the phrase “particular paradigmatic discourses of science” refers to, but I’m sure we can all agree that the use of ‘native speaker’ as a sociological category is not fixed beyond critical scrutiny. Holliday seems to be saying that quantitative research based on testing hypotheses with empirical evidence, as carried out by many scholars trying to understand the psychological process of SLA, is part of a “mistaken paradigm”. Since in SLA research there isn’t, and never has been, any general theory of SLA with paradigm status, and since I’m sure that in the field of sociolinguistics and cultural education they’re even further away from any such theory, talk of paradigms, like talk of “imagined objective ‘science’”, and problems that reside in differences being evoked “regardless of the words that are being used”, and labels referring to things that “do not actually exist at all”, belongs to the giddy world of post-modern sociology, where words mean what their authors choose them to mean, “neither more nor less”, as Humpty Dumpty triumphantly concludes.

Whatever the term ‘native speaker’ might be used for in sociolinguistics, in psycholinguistics ‘native speaker’ refers to real people, as I’ve explained above, and nothing that Holliday says challenges this fact. So we’re left with the charge that when we refer to people as ‘non-native speakers’, we imply that they are “culturally deficient”, which amounts to “deep and unrecognised racism”.  We “define, confine and reduce” this group of people and refer to their culture in a way that evokes “images of deficiency or superiority – divisive associations with competence, knowledge and race – who can, who can’t, and what sort of people they are”.

In my opinion this is so badly written as to be almost incoherent, but perhaps it expresses exactly what Holliday means to say. Whatever it means, it’s difficult to counter something like neo-racism if it’s “unrecognised”, and if any attempt we make to use other terms just pushes the labelling “even further into a normalised, reified discourse, where we are even less likely to reflect on their meaning, and where a technicalisation of the labels somehow makes them more legitimate”. Still, since Holliday confidently asserts that “the native-non-native speaker labels” refer to something “that does not actually exist”, it should be easy enough for sociolinguists (and those involved in intercultural education too, I suppose) to stop using them. Meanwhile, back in the real world,  it’s a different story.

Long (2007) argues that the issue of age differences is fundamental for SLA theory construction. If the evidence from sensitive periods shows that adults are inferior learners because they are qualitatively different from children, then this could provide an explanation for the failure of the vast majority of post-adolescent L2 learners to achieve Birdsong’s “nativelike attainment”. If we want to propose the same theory for child and adult language acquisition, then we’ll have to account for the differences in outcome some other way; for example, by claiming that the same knowledge and abilities produce inferior results due to different initial states in L1 acquisition and L2 acquisition. Either way, the importance of the existence (or not) of sensitive periods for those scholars trying to explain the psychological process of SLA indicates that native speakerness will continue to be used as a measure of the proficiency of adult L2 learners.


Harley, B. & Wang, W. (1997). The critical period hypothesis: Where are we now? In A. M. B. de Groot & J. F. Kroll (Eds.), Tutorials in Bilingualism: Psycholinguistic Perspectives (pp. 19–51). Mahwah, NJ: Lawrence Erlbaum Associates.

Hyltenstam, K. & Abrahamsson, N. (2003). Maturational constraints in SLA. In C. J. Doughty & M. H. Long (Eds.), The Handbook of Second Language Acquisition. Oxford: Blackwell.

Long, M. (2007). Problems in SLA. London: Erlbaum.

Long, M. (2015). SLA and Task-based Language Teaching. Oxford: Wiley.


I’m in danger of crying wolf here, because the last time I said I’d been censored, it turned out that it was my own clumsy use of the “reply” function that was to blame. But I’ve checked, and I think this time I’m right. In any case, the important thing is to air the matter of an influential ELT author and teacher trainer not being as rigorous as I think she should be in her role as mediator between researchers and teachers.

Penny Ur recently wrote an article for the IATEFL materials writing special interest group called “And what about the research?”  Ur points out that in the last twenty years, research has produced some “convincing evidence” for ideas which challenge popular, widely-held views among teachers. Ur sympathises with the busy teacher, but urges them to read the research and to pay attention to it. In her role as mediator, Ur goes on to give three examples of this kind of research:

  • Use of the L1  Often proscribed by teachers (and/or their bosses), yet research shows that using the L1 is very helpful in some situations;
  • Lexical sets  Teaching lexical sets is popular, and the basis for a lot of ELT material, but research shows that it’s counter-productive: learners actually learn new items much better if they are disconnected, or connected thematically;
  • Guessing from context  Another popular activity in many classrooms and workbooks, yet research shows that it’s “a thoroughly unreliable way of accessing meaning”.

I wrote a comment about the article and Ur replied. Next, I replied to Ur’s reply but Ur didn’t reply. Finally, Catherine Richards commented, I replied to Richards, but my reply wasn’t published.

There are two issues. The first is that Ur claims to act as an honest mediator between those doing research and busy teachers, and yet she ignores important research findings that don’t fit her own view of ELT.

The second is that whoever is responsible for looking after the MaWSIG blog chose to publish a quite personal, ad hominem attack on someone who criticises Ur, and yet refused to publish the reply.

Here is the exchange of comments, beginning with mine:

My first comment 

You have repeatedly given your own views on TBLT (“there’s no evidence that it works”) and the usefulness of teaching grammar proactively through traditional focus on formS (“it’s effective”), without adequately discussing the evidence from research findings that challenge such opinions (see, for example, Long 2015).

In this article, you mention two areas where research can inform ELT while ignoring the elephant in the room, i.e., the 60 years of research findings on interlanguage development. This research (see Han and Tarone, 2017 for a review) poses a serious challenge to the use of materials such as coursebooks, which chop the target language into bits, and then present and practice the bits in a pre-determined sequence on the assumption that learners learn what they’re taught in this way.

Pienemann’s (e.g. 1989) work showed that all the child and adult learners of German as a second language in a very big study adhered to a five-stage developmental sequence. Later work by his group and others in the 1990s established an acquisition order for morphemes, negation, questions, word order, embedded clauses and pronouns (see Han and Tarone, 2017, for a review). The conclusion from the research findings is that there are various kinds of developmental sequences and stages in interlanguage development which are impervious to instruction, in the sense that stage order can’t be altered, or stages skipped: acquisition sequences do not reflect instructional sequences, and thus teachability is constrained by learnability.

The implication is that a lot of the materials you recommend, including coursebooks that implement a grammar-based syllabus through a PPP methodology, fly in the face of robust findings in SLA research.


Han, Z. & Tarone, E. (Eds.) (2017). Interlanguage: Forty Years Later. Amsterdam: Benjamins.

Long, M. (2015). SLA and Task-based Language Teaching. Oxford: Wiley.

Pienemann, M. (1989). Is language teachable? Psycholinguistic experiments and hypotheses. Applied Linguistics, 10, 52-79.

Penny Ur’s reply

Thanks for your challenging response, Geoff! I’ll try to respond!

I don’t think I did, actually, in my piece, advocate coursebooks based on a grammatical syllabus? All I said was that the research on grammar teaching or about TBLT is inconclusive. You produced references against explicit grammar teaching and for TBLT: these could easily be countered with evidence such as that produced by Norris and Ortega (2002) in the first case or arguments put forward by Michael Swan (2006) in the second. And a lot of doubt has been cast on the practical implications for teaching of the Pienemann’s teachability hypothesis: see for example Spada and Lightbown, 1999. But my point in this case was not that materials should or should not be grammar based or that TBLT is or is not a good idea: but simply that we have no conclusive proof either way, and a lot of conflicting evidence. On the other hand where we DO have substantial and reliable evidence to support a conclusion that affects materials writing, and we have access to it, I think we have a moral obligation to take it into account in our own composition.

Norris, J. M. & Ortega, L. (2001). Does type of instruction make a difference? Substantive findings from a meta-analytic review. Language Learning, 51, Supplement 1, 157-213.
Spada, N. & Lightbown, P. M. (1999). Instruction, first language influence, and developmental readiness in second language acquisition. Modern Language Journal, 83(1), 1-22.
Swan, M. (2005). Legislation by hypothesis: the case of task-based instruction. Applied Linguistics, 26(3), 376-401.

My second comment

Dear Penny,

Thanks for your reply. I wasn’t referring only to your piece here, but rather to what you’ve said in recent conference talks and in your book “A Course in Language Teaching”. If we take all these into account, I think it’s fair to say that you have criticised, and indeed, dismissed, TBLT without properly discussing different versions of it, and commended coursebooks which implement a grammar-based syllabus through PPP, without properly discussing the evidence from research findings. My general point is that while you accept the role of mediator between academics who carry out empirical research into (instructed) SLA and teachers, you use this role to argue for a very partisan view of ELT, which is often at odds with research findings.

The works I cited were in support of findings in interlanguage development, and all four of the academics you cite – Spada, Lightbown, Norris and Ortega – support the consensus view among scholars of SLA that instruction can’t affect the route of interlanguage development. They also support the commonly held view that basing ELT on the presentation and practice of pre-selected formal elements of the grammar in a pre-determined order, a methodology which you recommend, flies in the face of robust research findings. It’s surely your duty to discuss these matters with the teachers you counsel and to explain why you disagree with these views.

You cite the work of Norris and Ortega (2002) as evidence of the value of explicit grammar teaching. Nowhere do these scholars recommend the kind of presentation and practice of successive bits of grammar that you recommend in your book “A Course in Language Teaching”.

You cite the work of Swan against TBLT. Nowhere does Swan deal with Long’s particular form of TBLT as described in his 2015 book.

You say “a lot of doubt has been cast on the practical implications for teaching of the Pienemann’s teachability hypothesis: see for example Spada and Lightbown, 1999”. One practical implication of Pienemann’s teachability hypothesis has already been mentioned: teaching should respect the learners’ own internal syllabus, and this is an implication that Spada and Lightbown accept. Pienemann’s hypothesis doesn’t have clear implications for how to teach, but it does have very clear implications for how not to. You choose to ignore these implications when you encourage teachers to carry on using coursebooks.

Of course we don’t have conclusive proof about the efficacy of grammar-based materials or TBLT. But we do have a great deal of evidence to suggest that you misguide teachers when you tell them that using coursebooks and other materials to support a grammar-based PPP methodology is a perfectly fine way to go about ELT. On the one hand you insist on the need for ELT teachers to be more critical and to pay more attention to research findings, while on the other hand, you don’t deal critically with research findings that flag up the false assumptions on which your own approach to ELT is based.

Catherine Richards’ comment 
I am a little bemused by your bad tempered, disrespectful approach to the exchange of ideas, Geoff Jordan. While some of your points may indeed be valid and worthy of debate, I don’t think you’re much interested in commenting on Penny Ur’s piece on the importance of materials writers being research-aware – the topic here.

You seem much more interested in attacking her for her views on Task Based Learning and for her views on the use of coursebooks that appear to follow a grammar-based syllabus. My own experience, Geoff, is that the vast majority of English teachers in the world don’t work in private language schools with small groups of motivated students and enthusiastic colleagues (with CELTAs and DELTA’s.) They are state school teachers, language or philology graduates, speak English as an L2, put up with poor working conditions – big classrooms, full timetables, hours of admin and stress to the eyeballs. For this reason they love coursebooks, love bite-sized grammar chunks – they are under obligation to test 3 times a semester – and they loathe Task Based Learning almost as much as they loathe pompous academics telling them that they should embrace it and that much of what they do is wrong (because it is based on false assumptions?)

We need to understand teachers first, before we beat them around the head with the latest theory, don’t you think?

My unpublished reply to Richards

Hi Catherine,

I don’t tell teachers what to do, and I certainly don’t beat them around the head with the latest – or any – theory. I dedicate just a bit of my time to taking leading members of the ELT establishment to task for writing books on how to teach English and giving PD teacher training courses which ignore research findings and misguide teachers by telling them that using coursebooks and other materials to support a grammar-based PPP methodology is a perfectly fine way to go about ELT.

Your only defence of coursebook-driven ELT is that it’s convenient. It’s based on false assumptions? Pah! It flies in the face of robust research findings? Never mind! The critic is a bad tempered, disrespectful, loathsome, pompous academic, so we can safely ignore his arguments.

The gentrification of inner cities

There’s increasing interest in what’s happening to neighbourhoods in big modern cities which suffered a drastic decline in the 1970s and 1980s and have now been “gentrified”. A 4-stage evolution has been detected: decline -> regeneration -> displacement -> gentrification, and the real problem is how to arrest the last two stages. Bennie Gray has written various investigative journalism bits about this, and I’m working with him now on something related to it all. What follows is based on Bennie’s work so far.

The story of Covent Garden is an example of the decline to gentrification process.

In 1973 the whole of Covent Garden, which, for more than a century had been devoted to selling fruit, vegetables and flowers, moved en masse to a new site in Battersea.  As a result, ten acres of wonderful old buildings fell empty. There were plenty of developers hungry to knock the whole lot down and put up some lucrative new office blocks, but the government decided to intervene, and slapped protective orders on most of the buildings, with the inevitable result that they became neglected and began to deteriorate.

The next stage of the cycle started when various dodgy people (“deviants” they’d probably be called by town planners) began squatting in the buildings, passing virtually unnoticed by those who preferred to look the other way. They included artists and other arty-crafty, alternative life style desperados looking for free space; drug dealers looking for a place to hide; winos looking for somewhere to crash, dossers looking for somewhere to, well, doss; and so on.

Gradually, a kind of demi-monde community grew up, which was perceived as wicked, which is to say, somehow glamorous and authentic. This in turn began to attract small-time hippy entrepreneurs who opened cafes, craft shops, tattoo parlours, art galleries and alternative therapy places. Quite soon, Covent Garden had become very cool indeed, and thus, more respectable activities began to take place. Art galleries equipped with proper lighting appeared and restaurants with proper kitchens and tablecloths soon followed. Even the squatters moved up a notch, taking an interest in plumbing, for example.

By around 1980, Covent Garden had become a popular tourist destination, which was when the big corporations began to move in, taking advantage of all that commercially fertile coolness.  Rents quickly shot through the new atrium-clad roofs; before long the spirit which had characterised the area in the 1970s evaporated; and by the noughties, Covent Garden had become just another cute, crowded, over-priced shopping centre.

So this is the cycle. An original set of buildings with an original purpose loses its purpose and the buildings fall derelict. They get colonised by people who, although generally regarded as disreputable, create a thriving community. The place gets talked about and becomes a tourist destination. This generates investment, and although the area prospers, it loses its original appeal. Spiralling rents mean that the area loses the very people who created that appeal in the first place – they simply can’t afford to stay. In the case of Covent Garden, many emigrated to run-down, cheap parts of Shoreditch and Hackney, and the same old cycle started again.

Another example is Trellick Tower in North Kensington, a huge block of council flats which was once the tallest residential building in Europe. Designed by Erno Goldfinger, it’s a fine example of the “new brutalism”. In its early life, Trellick Tower was a great success – fantastic views, airy and spacious apartments, lots of balconies, etc. – but thanks to appallingly negligent management, it fell into decline to the point that it became a very dangerous place to live, teeming with vandals and drug dealers.

In the 1980s, Trellick Tower started to be re-assessed, as part of the general re-assessment of North Kensington as a place to live. Nearby Notting Hill, which in the early 60s had been the scene of riots associated with the “No Irish, No blacks, No dogs” policies adopted by slum landlords, had become absurdly expensive, and North Kensington, despite the crime and the drugs, was dripping with the “authenticity” and “street credibility” that marks the start of stage 2.

As the early 90s went by, more and more Japanese tourists came to photograph this icon of brutalist architecture. The clicking cameras, together with builders’ skips, mineral water bars and artisan bakers, were reliable markers of incipient gentrification. Conned by Margaret Thatcher’s catastrophic “right to buy” policy, the poor and needy original tenants sold up and moved out, and the hipsters moved in. Trellick Tower became a desirable place to live: the land at the base of the building was turned into a park, the common parts were carefully restored, and the flats themselves were spruced up. Thus, Trellick Tower went from being good social housing, to a near deathtrap, to a spectacular example of gentrification.

That same cycle is happening all over Europe and, perhaps most spectacularly, in the US. One indication of the extent of gentrification is the fact that in recent years, American speculators have taken to buying tracts of rundown inner-city property and land, and then paying groups of the aforementioned “artists and other arty-crafty, alternative life style desperados looking for free space” to live and work in the area free, in the sure knowledge that their mere presence will boost the desirability of the land and the property they occupy.

This process was described by Jane Jacobs in her groundbreaking books on 20th-century urban planning, notably “The Death and Life of Great American Cities”. She had no panacea; she simply pointed out that people are as important as property and that people, in the end, determine what happens with property.

As I’ve said, Bennie Gray has written about this cycle and informed what I’ve written here. Bennie masterminded the Custard Factory project in Birmingham, and he actually managed to prolong the second stage of regeneration for more than 20 years, just because he owned it. In Bennie’s opinion, there are only two ways to blunt the damage caused by gentrification, neither of them, he confesses, being of much use. The first is a form of land value taxation, as proposed by Henry George a hundred years ago. Liberals of one stripe or another have been tinkering with this mechanism ever since, and there’s general agreement that it won’t fly. The second way involves either hugely rich benefactors or the State interfering with the so-called free market. The Duke of Westminster could, if he had a change of heart (don’t hold your breath), freeze development in all the parts of London that he owns in their cozy second stage, and Xi Jinping or Raul Castro could do the same (ditto). Bennie’s pessimistic, of course, and he refuses to even consider the third option – see below.

A third possibility is land trusts, described in some detail on the Context Institute website. These trusts divide land rights between immediate users and their community, and examples of them are springing up all over the world, including in India, Israel, Tanzania, Canada, and the US. We may distinguish between conservation trusts, community trusts, and stewardship trusts.

A conservation trust preserves some part of the natural environment either by taking full ownership of a piece of land that it then holds as wilderness, or by owning “development rights” to an undeveloped piece of land. Once conceded, these rights allow the trust to veto any attempt to develop the land.

A community land trust (CLT) attempts to remove land from the speculative market and to make it available to those who will use it for the long-term benefit of the community. A CLT generally owns full title to its lands and grants long-term renewable leases to those who use the land. Appropriate uses for the land are determined by the CLT in a process comparable to public planning or zoning. The leaseholders own the buildings on the land and can take full benefit from improvements they make to the land. They cannot, however, sell the land, nor can they usually rent or lease it without the consent of the trust. The Institute For Community Economics in the USA is one of the major support groups for the creation of community land trusts in both urban and rural settings.

The stewardship trust combines features of both the conservation trust and the CLT, and is being used now primarily by intentional communities and non-profit groups such as schools. The groups using the land (the stewards) generally pay less than in a normal CLT, but there are more definite expectations about the care and use they give to the land.


In each of these types, the immediate users have clear rights which satisfy all of their legitimate use needs, while the needs of the local and wider community are met through representation on the trust’s board of directors. Thus, by dividing what we normally think of as ownership into “stewardship” (the users) and “trusteeship” (the trust organisation), land trusts are pioneering an approach that better meets all the legitimate interests.

The system is, of course, still limited by the integrity and the attitudes of the people involved. Many anarchists will suspect that the idea is manipulative and are right to be sceptical, particularly when considering the possibility of any kind of land trust arrangement in big cities. So what is it then I wonder: nutritious food for thought, or sickly thin soup?

November Calendar

Learning to Teach (Better) with Penny Ur (OBE)

She’s back! This month, four fun-packed, informative webinars from the foremost purveyor of up-to-date ELT obsolescence. Lots of useful tips on how to carry on teaching in the tried and trusted PPP, grammar-based way that Penny herself remains so fully committed to. You’ll be confidently reassured that all the research is rubbish and that there’s absolutely nothing wrong with a methodology that demands the impossible of students. Carry on slogging through the coursebook; keep those Concept Questions coming; and above all, never forget: the teacher knows best!

Learn how to

  • use the latest, digitalised drill-and-kill exercises
  • create your own baffling phrasal verb tests for no particular reason
  • project pages from Murphy’s Grammar in Use on the night sky
  • time “free conversation practice” for just before the bell goes
  • write challenging dialogues about everyday British life and have them recorded by unemployed actors in regional accents
  • unobtrusively wake up sleeping students

and much, much more.

On completion of the course, for an extra $59 you’ll get a worthless Certificate if you answer 2 easy questions about the present perfect.

Teach abroad as an English language assistant for the British Council  

Applications for the 2018/19 academic year open on 6 November 2017. If you’re one of the lucky successful applicants you’ll pay for your flight, accommodation, travel insurance, visas, and so on and then receive a miserable monthly salary in return for the privilege of being a member of one of the most snobbish, ethically questionable UK organisations of the lot.

You’ll work in one of BC’s lucrative commercial ELT operations which in 2016 earned them a tax-free income of approx. £1 billion. These activities have led to accusations that the BC keeps valuable commercial information to itself; that its one-third share in the IELTS biases its testing and certification policies; that it competes with an unfair advantage to train teachers for overseas governments; and that its not-for-profit status means that the income it gets from English teaching is exempt from corporation tax in many countries, unlike its competitors.

But never mind; it’ll look great on your CV, you’ll get a free British Council RP phonetic wall chart to decorate the hovel where you’ll live, and if posted to Caracas, you’ll have free access to the BC’s Rudyard Kipling Memorial Library, though getting there can be a bit tricky after dark.

TESOL Kuwait  

Another chance to see these twin pillars of the ELT establishment strut their stuff! Just in case you missed their wonderful presentations on Being the Best Teacher You Can Possibly Be in 1977, this is a special “Forty Years On Anniversary Ruby Re-run”, where not a single word has been added or taken away from the original scripts.

And what better venue than Kuwait for two of the richest men in ELT, both multi-millionaires with a string of best-selling coursebooks to their names, to celebrate! When their session draws to a close, dozens of falcons owned by the country’s leading families will swoop over the auditorium, scattering $1,000 bills over the audience, while keys to the limited edition Bugatti Veyron are presented to the speakers.

Meanwhile, outside, life goes on for ordinary teachers, migrant workers who form two-thirds of Kuwait’s population. They work long hours for low salaries, have precarious working conditions, and almost no say in what or how they teach.

On 5th August 2017, a class-action civil lawsuit and a criminal investigation against the State of Kuwait were opened in Kuwait over claims of decades of unpaid wages owed to thousands of foreign teachers, allegedly driven by a policy of discrimination. The court also issued a protective order against public threats by the Kuwaiti Ministry of Education overtly intimidating workers into not seeking access to justice through the international judiciary.

TEFL Equity Advocates: a conflict of interests

Every day on Twitter there are inspirational advertisements for the TEFL Equity Advocates.

They invite everybody to join in the fight against the discrimination against NNESTs.

When you go to the TEFL Equity Advocates website and click on the WEBINARS and TEFL EQUITY ACADEMY options on the home page, you see promotional material for training courses that Kiczkowiak runs or supervises.

The just cause to stamp out discrimination against NNESTs should, in my opinion, be rigidly separated from Kiczkowiak’s attempts to sell his own stuff.

A reply to Andrew Walkley’s post on teaching a unit from the “Outcomes” Coursebook

My attempts to comment on Andrew’s post – Complicating the coursebook debate: Part 4 – were unsuccessful, and I wrongly assumed that I’d been the victim of censorship. Andrew has explained (see the Comments section below) that nobody tried to stop my comment being published on the website, and I conclude that I, not he, did something wrong; so I apologise to him for the false accusation. Here’s what I wanted to put as a comment on Andrew’s post.

Hi Andrew,

Thanks for this interesting account of how you’d teach the sample unit from your coursebook. You give every indication of being an experienced, thoughtful teacher and I’m sure your students appreciate you. When we get down to this level of detailed teaching procedures, all the particularities of context play a part in deciding between the options and the learning outcomes, as you repeatedly recognise.

Our disagreement centres on the key issue of synthetic versus analytical syllabuses. You use a synthetic syllabus, where the teacher or coursebook writer decides what bits of language are to be taught, and where most of the time is spent teaching students explicit knowledge about the language: grammar, lexis (lexico-grammar if you like) and pronunciation. I use an analytical syllabus where the learners’ needs determine what is to be taught, and where most of the time is spent on scaffolding students’ engagement in pedagogic tasks designed to help them to develop the implicit knowledge required to carry out real life tasks in the L2.

Your description of how you’d use your coursebook makes it clear how heavily you rely on explicit teaching.  It fits well with what you say in Teaching Lexically about the “6 principles of how people learn languages”.  I quote:

Essentially, to learn any given item of language, people need to carry out the following stages:

  • Understand the meaning of the item.
  • Hear/see an example of the item in context.
  • Approximate the sounds of the item.
  • Pay attention to the item and notice its features.
  • Do something with the item – use it in some way.
  • Repeat these steps over time, when encountering the item again in other contexts.

Leaving aside any inadequacies of this mechanistic “explanation”, what stands out is the scant importance given to stage 1: Understand the meaning of the item. You seem impatient to get on to the next stages ASAP, recommending translation as the easiest, most efficient way of getting “meaning” out of the way, so as to get to the real heart of the matter, namely teaching words.  You’re thus at odds with those who believe that giving students opportunities for implicit learning by concentrating on meaningful communication should be a guiding principle of ELT. Meaningful communication about things students have indicated that they need to talk about, the negotiation of meaning, finding their voice, expressing themselves, working out the illocutionary force of messages, catching nuances, compensating for inadequate resources, and all the sorts of things involved in implicit language learning should, for us, be what goes on most of the time in class, not something that’s allotted a ten minute slot here and there. Your plan for how to work through the sample unit involves spending a great deal of the time talking about English; there seems to me to be far too little time devoted to letting students talk in the language. Right at the end you say: “Finally, there is a conversation practice”. Finally! But even here, you add “This is an opportunity for students to re-use language that has been ‘taught’ over the previous sequence of tasks. In fact, we ask them to write the conversation, which allows them to do this more consciously”.

Language learning is not, I suggest, what you assume it to be, and ELT teaching is not best carried out by trying to teach thousands of “items”, especially when you can’t explain the criteria for their selection, and especially when Dellar insists on also teaching the curious, bottom-up grammar which attaches itself to so many of them.

Grammar and vocabulary teaching: What a difference a brain makes

In my last post, I argued that Hugh Dellar’s negligent misrepresentation of grammar models of the English language, such as Huddleston’s (2009) or Swan’s (2005), and his inability to provide any clear description of an alternative  “bottom-up approach to grammar” combined to make his advice to teachers useless. In this post, I take a quick look at two texts that discuss aspects of grammar and vocabulary teaching, just to give some indication of how useful an articulate, well-informed discussion of such matters can be. The first is an article that appeared in ELTJ and the second a recent book which you can read a bit more about on Mura’s blog, where he interviews the authors.

Spoken Grammar: What is it and how can we teach it?

McCarthy and Carter (1995) argue that learners need to be given exposure to both spoken and written grammars, and that the interpersonal implications of spoken grammar are important. They draw on a relatively small corpus of spoken English, constructed specially for the study of spoken grammar, in which particular genres of talk are collected. In the article they present two samples of data.

[Transcript extracts reproduced from McCarthy & Carter, 1995, p. 208]

In Sample 1, a couple are making food (a curry) for a party. The authors note that ellipsis is the most salient grammatical feature of the sample.  For example:

  • D: Didn’t know you had to do that.
  • B: Don’t have to…
  • B: Foreign body in there

The authors comment:

[Authors’ commentary reproduced from McCarthy & Carter, 1995, p. 209]

The article goes on to summarise the grammar features which stand out from an examination of the data:

1. Tails: slots at the end of clauses for more information.

  • they tend to go cold…, pasta
  • He’s quite a comic, that fellow
  • It’s very nice, that road up to Shipton.

2. Reporting Verbs: Use of past continuous rather than past simple.

  • He was telling me…
  • They were saying…

3. Frequent use of tend to

  • I tend to put the salt in last
  • It tends to go cold
  • I tend not to use names

4. Question tags: Used most often when meaning is being negotiated.

5. Will / going to: More to do with interactive turn-taking than with the semantics of time.

Finally, McCarthy and Carter propose a “Three Is” methodology (Illustration, Interaction, Induction) in place of the traditional PPP (Presentation, Practice, Production) methodology.

[Extract from McCarthy & Carter, 1995, p. 216]

I have given only the most skeletal outline of the article, which also includes much more information about the data, together with a series of classroom activities designed to draw upper-intermediate students’ attention to some of the most salient aspects of spoken grammar.

Successful Spoken English: Findings from Learner Corpora 

The book defines a successful English speaker in terms of his/her communicative competence at the various levels outlined in the CEFR, from B1 to C1. This in itself is an interesting and welcome innovation, which moves us away both from the Native Speaker norm and from the vague and incremental CEFR scales. The authors explain how they measure successful spoken language and then discuss the data which emerge from their searches of the UCLan Speaking Test Corpus. As the authors explain to Mura:

This contained data only from students from a range of nationalities who had been successful (based on holistic test scoring) at each level, B1-C1. As points of comparison, we also recorded native speakers undertaking each test. We also made some comparisons to the LINDSEI (Louvain International Database of Spoken English Interlanguage) corpus and, to a lesser extent, the spoken section of the BYU-BNC corpus.

They begin with an examination of the data pertinent to linguistic competence, describing frequency profiles, frequency lists, keyword lists and lexical chunks at each level, from B1 to C1. It makes fascinating reading, and particularly interesting (again, I take this from Mura’s interview with them) is the finding that higher levels of linguistic competence are characterised not by the use of a much greater range of vocabulary, but rather by a greater flexibility in the use of the words the learners already knew – most of which remained within the top 2,000 most frequent words in the corpora. As they made progress, students were able to use words with a wider range of collocates for a wider range of functions.
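The two measures at work here – frequency lists and the “range of collocates” behind that flexibility – are straightforward to sketch. The following toy Python example uses an invented mini-corpus (not the UCLan data) purely to show the kind of counting involved:

```python
from collections import Counter

# Toy "learner corpus" of speaker turns -- invented data, standing in
# for transcripts like those in the UCLan Speaking Test Corpus.
corpus = [
    "i tend to put the salt in last",
    "they tend to go cold",
    "it was really nice that road",
    "he was telling me about the road",
]

tokens = [w for turn in corpus for w in turn.split()]

# Frequency list: word types ranked by number of occurrences.
freq = Counter(tokens)

def collocate_range(word):
    """Count the distinct words that immediately follow `word` --
    a crude proxy for the 'flexibility' described in the book."""
    following = set()
    for turn in corpus:
        words = turn.split()
        for a, b in zip(words, words[1:]):
            if a == word:
                following.add(b)
    return len(following)

print(freq["tend"])              # 2 occurrences
print(collocate_range("tend"))   # only ever followed by "to", so 1
print(collocate_range("was"))    # followed by "really" and "telling", so 2
```

On a real corpus one would of course use proper tokenisation and lemmatisation, and count collocates in a window rather than only the next word, but counts of this kind are what underlie the frequency profiles and collocate ranges the authors report.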

In the subsequent chapters on strategic, discourse and pragmatic competences, each of which ends with a lively, well-considered “Discussion” section, more fascinating insights are shared, and the teaching implications are discussed. For example, in tune with McCarthy and Carter (1995) discussed above, the authors stress the need to distinguish between different spoken genres and to recognise the cooperative nature of much spoken discourse: the ability to co-construct conversations, and to develop ideas from and contribute to the turns of others, is one important mark of increasingly successful speakers.

The authors suggest that one practical way for teachers to use the book is to take advantage of the lists of frequent words, keywords and chunks for each level, using, for example, the language of successful B2-level speakers to inform what they teach to B1-level speakers. This is a principled and powerful way of choosing the vocabulary and lexical chunks to concentrate on in any particular course, provided that the lists are taken from relevant corpora (that is, corpora built from learners performing relevant tasks).

Another clear message from the book is that successful speakers need to develop all aspects of communicative competence (linguistic, strategic, discourse and pragmatic competence) and that, therefore, teaching should focus on all of these areas rather than spending too much time on learning an unprincipled list of lexical chunks.


References

Huddleston, R. (2009) Introduction to the Grammar of English. Cambridge: Cambridge University Press.

Jones, C., Byrne, S. and Halenko, N. (2017) Successful Spoken English: Findings from Learner Corpora. London: Routledge.

McCarthy, M. and Carter, R. (1995) Spoken grammar: what is it and how can we teach it? ELT Journal, 49(3), 207–218.

Swan, M. (2005) Practical English Usage. Oxford: Oxford University Press.