There’s a growing opinion among academics and pundits in the ELT industry that exposure to language in the environment is enough to explain how we learn languages. This is a rebuttal of what we might call the cognitive paradigm established in the 60s by Chomsky, according to which our knowledge of language can’t be explained that way. Following Chomsky’s criticisms of Skinner, there’s been a generally accepted view among experts in the field for the last 50 years or so that language learning is not one more example of learning by reinforced behaviour, but rather a special case of learning which draws on a unique property of the mind to interpret linguistic information.
Serious questions of epistemology are at issue here. They revolve around revisiting the question of whether we can speak with any sense about the mind, or whether, as the empiricists insist, we can only talk "sensibly" (geddit) about measurable things presented to our senses. The quest for reliable knowledge led the extreme empiricists, the Logical Positivists, in the 20s and early 30s to insist that all talk of the mind had to be purged, and that only a handful of carefully-vetted sentences could be used if the chaos and confusion of normal discourse were to be overcome. They arrived at an inevitable dead end, climbed up Wittgenstein's ladder, and fell into well-earned oblivion. The work of people like Tarski allowed the few scientists who might have been disconcerted by the positivists' doubts to settle down, adopt a sensible, sorry, I mean common sense, "correspondence" view of truth, and continue their work, which rested on the view that there's an objective world out there which we can dispassionately observe and, basing ourselves on empirical (i.e. factual, non-judgemental) data, theorise about the way it works, using rules of logic to guide us.
But those studying human behaviour have, quite rightly of course, had a hard job gaining admittance to the science club. If they wanted to be scientific they'd have to base themselves on empirical observation, wouldn't they? Hence Skinner and behaviourism, which confused empiricism as a philosophical movement with empirical research. The behaviourists decided that human behaviour is best studied by observing what people actually do. How do people learn, then? They learn by doing things, by reacting to their environment. They form habits based on repeating the same behaviour in response to their environment. They have bigger brains than other creatures, so their learning is more sophisticated. Reasoning is no more than a sophisticated reaction to a stimulus in the environment.
Chomsky questioned Skinner's general learning theory, and you all know how. But Chomsky's view makes use of a raft of non-observable theoretical constructs, which we allow so that he can develop his theory. Pace Larsen-Freeman, they're not metaphors, any more than gravity is a metaphor; they're constructs we invent in order to explain the phenomena under investigation. Post Chomsky, the most widely accepted view of language learning is that it's a process that goes on in the mind, itself a theoretical construct, and that it involves the processing of data. How we process the data is the stuff of lots of different theories which try to explain different bits of the process; none of the theories is complete (none offers a complete explanation of the phenomena under investigation in SLA) and none is firmly established. But most of them rely on Chomsky's theory that we learn our first language thanks to an innate capacity of the mind to make sense of the linguistic data we get from the environment.
So here comes Emergentism, which returns to empiricism and its epistemological roots. It takes many forms; it's been proposed by various academics, including O'Grady, MacWhinney and N. Ellis, with varying results. Gregg has done his usual elegant job of pointing out the weaknesses of N. Ellis' well-considered arguments (see my post on Emergentism), and MacWhinney seems to be making little progress. O'Grady, on the other hand, looks better every time I read his work, which I've only recently started to do, having got the tip-off from Kevin Gregg. I urge you, as Kevin urged me, to read O'Grady (2005), How Children Learn Language. It's like listening to Glenn Gould play Bach: crystal clear, high definition brilliance, one of the best books I've read in years. While it's not based on an empiricist epistemology (far from it!), it totally rejects Chomsky's UG and argues that a general learning device explains language learning. Actually, you need to read more of his stuff than just the book, but anyway…
And then there's Stefano Rastelli's (2014) Discontinuity in Second Language Acquisition: The Switch between Statistical and Grammatical Learning. Mike Long put me on to this, and it's superb. Long has written a review of Rastelli's book which I hope will appear soon. In the review, Long notes that "recent years have seen growing research interest in the potential of statistical learning and usage-based accounts of SLA by adults". What Long finds so interesting is that Rastelli has dedicated a full book to his version of statistical learning, not just an article in a journal or a chapter in an edited collection.
Rastelli's theory is that statistical learning is the initial way learners handle combinatorial grammar, i.e., regular co-occurrence relationships between forms that are overt in the input (not absent, like pro-drop, for example) and the meanings and functions of those forms. Combinatorial grammar comprises recurrent combinations of adjacent and non-adjacent whole words and morphemes. The form-function pairs can be stored and retrieved first as wholes, and then broken down into their component parts in order to be computed by abstract rules.
And get this: combinatorial grammar is learned twice, first by statistical learning and then by grammatical learning. This is the meaning of 'discontinuity' in his hypothesis. Statistical learning prepares the ground for grammar learning:
Statistics provides the L2 grammar the ‘environment’ to grow and develop. (Rastelli, 2014, p.42).
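To make the "statistical" half of that claim a bit more concrete, here's a minimal sketch in Python of the sort of computation statistical-learning accounts typically invoke: counting adjacent word pairs in a toy corpus and estimating how predictable one word is given the previous one. The toy corpus, the function names and the use of simple transitional probabilities are my own illustrative assumptions, not Rastelli's model or data.

```python
from collections import Counter
from itertools import tee

# Toy corpus standing in for L2 input; purely illustrative.
corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the dog chased the mouse",
    "a dog chased a bird",
]

def bigrams(tokens):
    """Yield adjacent word pairs from a list of tokens."""
    first, second = tee(tokens)
    next(second, None)
    return zip(first, second)

unigram_counts = Counter()
bigram_counts = Counter()

for sentence in corpus:
    tokens = sentence.split()
    unigram_counts.update(tokens)
    bigram_counts.update(bigrams(tokens))

def transitional_probability(w1, w2):
    """P(w2 | w1): how predictable the second word is given the first --
    one simple statistic a learner might track over the input."""
    return bigram_counts[(w1, w2)] / unigram_counts[w1]

# Frequent, highly predictable adjacent pairs are candidate "chunks"
# that could be stored and retrieved as wholes before being broken down.
for (w1, w2), count in bigram_counts.most_common():
    prob = transitional_probability(w1, w2)
    print(f"{w1} {w2}: count={count}, P({w2}|{w1})={prob:.2f}")
```

On this toy input, pairs like "dog chased" and "chased the" come out frequent and highly predictable, which is the kind of statistical footing on which chunks might first be stored as wholes; breaking them down and computing them by abstract rules would be the later, grammatical step in Rastelli's two-pass story.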
I hope Thornbury reads this; he might just find in Rastelli some long-sought support for his assertions about learning grammar "for free", although that's not exactly what Rastelli is saying.
So Rastelli rejects the notion that L2 development is continuous, a series of developmental stages as a result of increased exposure to L2 input:
The core idea of discontinuity is that the process of adult acquisition of L2 grammar is not uniform and incremental but differentiated and redundant. To learn a second language, adults apply two different procedures to the same linguistic materials: redundancy means that the same language items may happen to be learned twice (2014: 5).
I’d like to say more, but I don’t want to steal Mike’s thunder, if I haven’t already done so. I hope that’s enough to whet your appetite.
Arguments about SLA rest partly on epistemological underpinnings that need to be declared and understood. Those like Hoey, who says that we acquire all the knowledge we have about words on the basis of frequency of exposure, and Larsen-Freeman, who says complexity theory explains it all, are, whether they appreciate it or not, adopting an empiricist epistemology. Consequently, their theories are doomed to failure unless they create crafty loopholes. But it is, it seems, possible to argue from a different, but still rational, cognitive perspective that the poverty of the stimulus argument is wrong, if you know what you're doing. Good scholars like O'Grady and Rastelli do it, and it's very exciting.