They keep coming, like burps after a poorly digested Christmas lunch: comments on Thornbury’s A to Z blog. I’ve read three in the last few days, so let me add my own final swipe at the edifice before 2015 concludes.
Thornbury’s Sunday posts on his A to Z blog only lasted a few months, but during that short season they became part of my Sunday morning routine: late breakfast, read Thornbury, join in the discussions that always followed. The final Sunday post was The Poverty of the Stimulus, and as usual it had enough good stuff in it to spark off an interesting discussion. On this particular Sunday I made a few contributions and the exchange went something like this:
Initial statement from Scott (I use his first name to emphasise the cosy Sunday morning feel of the discussion, and also as a way of reminding myself to be nice.)
- The quantity and quality of language input that children get is so great as to question Chomsky’s poverty of the stimulus argument.
- An alternative to Chomsky’s view of language and language learning, is that “language is acquired, stored and used as meaningful constructions (or ‘syntax-semantics mappings’).”
- Everett (he of “There is no such thing as UG”: http://www.theguardian.com/technology/2012/mar/25/daniel-everett-human-language-piraha) is right to point out that since no one has proved that the poverty of the stimulus argument is correct, “talk of a universal grammar or language instinct is no more than speculation”.
My first reply is short:
“Everett’s claim is nonsense since it’s logically impossible to prove that a theory is true.”
Scott ignores this comment and prefers to pay attention to a certain Svetlana (I imagine her sitting in a wifi-equipped tent, huddled over an Apple app projecting a 3-D crystal ball) who tells him that he’s right to question the POS claim because tiny babies, only recently emerged (sic) from the womb, form huge numbers, like, well, millions of neural connections per second and, what’s more, they rapidly develop dendritic spines containing “lifelong memories”. A few unsupported pseudo-scientific, quasi-philosophical assertions which sound as if they’ve been picked up from a hazy weekend seminar at the Sorbonne are thrown in for good measure.
Imagine my surprise when Scott thanks the mystic Svetlana for bringing “new evidence to bear”, and says that this evidence serves to confirm his “initial hunch.”
“WHAT??” I typed furiously. “Are you really going to be hoodwinked by such postmodernist, obscurantist mumbo jumbo?” (There’s not much known for sure about the role dendritic spines play in learning and memory; I suspect she thinks that mentioning them here is evidence of deep knowledge of the scientific study of the nervous system; and suggesting that they disprove the POS argument is fanciful nonsense.)
“Give us an example of a lifelong memory stored in a dendritic spine that’s relevant to this discussion then!” I shout uselessly at the monitor.
Well, Scott’s not just hoodwinked, he actually becomes emboldened. Spurred on by the compelling “new evidence”, he’s now ready to dismiss the POS argument completely.
“Actually”, he says, the stimulus is quite enough to explain everything children know about language. Corpus studies “suggest that everything a child needs is in place”.
Asked how these corpus studies explain what children know about language, Scott (apparently still intoxicated by Svetlana’s absurd revelations) says “the child’s brain is mightily disposed to mine the input”, adding, as if this were the clincher, “a little stimulus goes a long way, especially when the child is so feverishly in need of both communicating and becoming socialized.”
“Cripes! His brain’s gone soft!” I thought. “He’s barking mad!”
“Platitudes and unsupported assertions have now completely replaced any attempt at reasoned argument”, I wrote.
“Anyone who claims that children’s knowledge about an aspect of syntax could not have been acquired from language input has to prove that it couldn’t. Otherwise it remains another empirically-empty assertion” says Scott.
Dear oh dear, here we are back at the start. As with the Everett quote, for purely formal reasons, it’s not possible to prove such a thing, and to demand such “proof” demonstrates an ignorance of logic and of how rational argument, science, and theory construction work. Failing to meet the impossible demand of proof doesn’t make the POS argument an empirically-empty assertion.
Then Russ Mayne joins in to have his typically badly-informed little say. Chomsky, he tells us, is “utterly scornful of data.”
“No he’s not”, says I, “Chomsky’s theory of UG has a long and thorough history of empirical research.”
And blow me down if Thornbury doesn’t chime in:
“‘Chomsky’s theory of UG has a long and thorough history of empirical research.’ What!!? Where? When? Who?”
So now he’s not just showing a predilection for explanations involving the lifelong memories stored in dendritic spines, he’s showing even worse signs of ignorance.
That the discussion of the POS argument didn’t get satisfactorily resolved is hardly surprising, but I was more than a bit surprised to hear Scott telling us that language learning can be satisfactorily explained by the general learning processes going on inside feverish young brains that are “mightily disposed to mine the input”. (Just in passing, all these references to the child’s brain seem to contradict the part of the current Thornbury canon which deals with “the language body”.) Asked to say a bit more about how language learning can be done through general learning processes and input alone, Thornbury says
“If we generalize the findings beyond the single word level to constructions…” and then “… generalize from constructions to grammar…”, “hey presto, the grammar emerges on the back of the frequent constructions.”
Hey presto? What grammar? What “findings beyond the single word level”? How do you generalise these findings to “constructions”? And how do you generalise from constructions to “grammar”?
This unwarranted dismissal of the POS argument, coupled with this incoherent account of language learning, is, you might think, excusable in a Sunday morning chat, but we find more evidence of both the ignorance and the incoherence displayed here in more carefully-prepared public pronouncements on the same subjects. Thornbury’s very poor attempts to challenge Chomsky and psychological approaches to SLA by offering a particularly lame and simplistic version of emergentism, mostly based on Larsen-Freeman’s recent work, have already been commented on in this blog (see, for example, Thornbury and the Learning Body and Emergentism 2), but let me say just a bit more.
Thornbury and Emergentism
Thornbury keeps telling people about Larsen-Freeman’s latest project. The best criticism I’ve read of it is the 2010 article by Kevin Gregg in SLR entitled “Shallow draughts: Larsen-Freeman and Cameron on complexity.” There’s no way I can do justice to the article by quickly summarising it, and I urge readers of this post to read Gregg’s article for themselves. As always with Gregg, the argument is not just devastating, but delightfully written. Gregg dismantles the pretences of the Larsen-Freeman and Cameron book and shows that all their appeals to complexity theory are so much hogwash; nothing of substance sustains the fanciful opinions of the authors. And likewise, Thornbury.
Thornbury has said nothing to persuade any intelligent reader that his version of emergentism provides a good explanation of SLA. Just a few points:
- Emergentism rests on empiricism, and empiricism pure and simple is a bankrupt epistemology.
- Emergentism doesn’t get the support Thornbury claims it gets from the study of corpora – how could it? Thornbury’s claims show an ignorance of both theory construction and scientific method.
- As Gregg (2010) points out, the claim that language is a complex dynamical system makes no sense. “Simply put, there is no such entity as language such that it could be a system, dynamical or otherwise. … Terms like ‘language’ and ‘English’ are abstractions; abstract terms, like metaphors, are essential for normal communication and expression of ideas, but that does not mean they refer to actual entities. English speakers exist, and (I think) English grammars come to exist in the minds/brains of those speakers, so it remains within the realm of possibility that a set of speakers is a dynamical system, or that the acquisition process is; but not language, and not a language.”
- Thornbury’s assertion that language learning can be explained as the detection and memorisation of “frequently-occurring sequences in the sensory data we are exposed to” is an opinion masquerading as an explanatory theory. How can general conceptual representations acting on stimuli from the environment explain the representational system of language that children demonstrate? Thornbury’s suggestion that we have an innate capacity to “unpack the regularities within lexical chunks, and to use these patterns as templates for the later development of a more systematic grammar” begs more questions than it answers and, anyway, contradicts the empiricist epistemology adopted by most emergentists who say that there aren’t, indeed can’t be, any such things as innate capacities.
NOTE: I’ve added two appendices to deal with the two questions asked by Patrick Amon.
Appendix 1: Why can’t you prove that a general causal theory is true?
The problem of induction
Hume (1748) started from the premise that only “experience” (by which Hume meant that which we perceive through our senses) can help us to judge the truth or falsity of factual sentences. Thus, if we want to understand something, we must observe the relevant quantitative, measurable data in a dispassionate way. But if knowledge rests entirely on observation, then there is no basis for our belief in natural laws, because such belief is an unwarranted inductive inference. We cannot logically go from the particular to the general: no amount of cumulative instances can justify a generalisation; ergo no general law or generalised causal explanation can be shown to be true. No matter how many times the sun rises in the East, or thunder follows lightning, or swans appear white, we will never know that the sun rises in the East, or that thunder follows lightning, or that all swans are white. This is the famous “logical problem of induction”. Why, nevertheless, do all reasonable people expect and believe that instances of which they have no experience will conform to those of which they have experience? Hume’s answer is: “Because of custom or habit” (Popper, 1979: 4). More devastating still was Hume’s answer to Descartes’ original question “How can I know whether my perceptions of the world accurately reflect reality?” Hume’s answer was “You can’t.”
It is a question of fact whether the perceptions of the senses be produced by external objects resembling them: how shall this question be determined? By experience surely; as all questions of a like nature. But here experience is, and must be, entirely silent. The mind has never anything present to it but the perceptions, and cannot possibly reach any experience of their connection with objects. The supposition of such a connection is, therefore, without any foundation in reasoning. (Hume, 1988 : 253)
Thus, said Hume, Descartes was right to doubt his experiences, but, alas, experiences are all we have.
The asymmetry between truth and falsehood.
Popper (1972) offers a way out of Hume’s dilemma. He concedes that Hume is right: there is no logical way of going from the particular to the general, and that is that: however probable a theory might be said to be, it can never be shown to be true.
Popper (1959, 1963, 1972) argued that the root of the problem of induction was the concern with certainty. In Popper’s opinion Descartes’ quest was misguided and had led to three hundred years of skewed debate. Popper claimed that the debate between the rationalists and the empiricists, with the idealists pitching in on either side, had led everybody on a wild goose chase – the elusive wild goose being “Truth”. From an interest in the status of human knowledge, philosophers and philosophers of science had asked which, if any, of our beliefs can be justified. The quest was for certainty, to vanquish doubt, and to impose reason. Popper suggested that rather than look for certainty, we should look for answers to problems, answers that stand up to rational scrutiny and empirical tests.
Popper insists that in scientific investigation we start with problems, not with empirical observations, and that we then leap to a solution of the problem we have identified – in any way we like. This second anarchic stage is crucial to an understanding of Popper’s epistemology: when we are at the stage of coming up with explanations, with theories or hypotheses, then, in a very real sense, anything goes. Inspiration can come from lowering yourself into a bath of water, being hit on the head by an apple, or by imbibing narcotics. It is at the next stage, the stage of the theory-building process, that empirical observation comes in, and, according to Popper, its role is not to provide data that confirm the theory, but rather to find data that test it.
Empirical observations should be carried out in attempts to falsify the theory: we should search high and low for a non-white swan, for an example of the sun rising in the West, etc. The implication is that, at this crucial stage in theory construction, the theory has to be formulated in such a way as to allow for empirical tests to be carried out: there must be, at least in principle, some empirical observation that could clash with the explanations and predictions that the theory offers. If the theory survives repeated attempts to falsify it, then we can hold on to it tentatively, but we will never know for certain that it is true. The bolder the theory (i.e. the more it exposes itself to testing, the more wide-ranging its consequences, the riskier it is) the better. If the theory does not stand up to the tests, if it is falsified, then we need to re-define the problem, come up with an improved solution, a better theory, and then test it again to see if it stands up to empirical tests more successfully. These successive cycles are an indication of the growth of knowledge.
Popper (1974: 105-106) gives the following diagram to explain his view:
P1 -> TT -> EE -> P2
P = problem; TT = tentative theory; EE = error elimination (empirical experiments to test the theory)
We begin with a problem (P1), which we should articulate as well as possible. We then propose a tentative theory (TT), that tries to explain the problem. We can arrive at this theory in any way we choose, but we must formulate it in such a way that it leaves itself open to empirical tests. The empirical tests and experiments (EE) that we devise for the theory have the aim of trying to falsify it. These experiments usually generate further problems (P2) because they contradict other experimental findings, or they clash with the theory’s predictions, or they cause us to widen our questions. The new problems give rise to a new tentative theory and the need for more empirical testing.
Popper thus gives empirical experiments and observation a completely different role: their job now is to test a theory, not to prove it, and since this is a deductive approach it escapes the problem of induction. Popper takes advantage of the asymmetry between verification and falsification: while no number of empirical observations can ever prove a theory is true, just one such observation can prove that it is false. All you need is to find one black swan and the theory “All swans are white” is disproved.
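The logical asymmetry Popper exploits can be put in standard first-order notation (my formulation, for illustration, not Popper’s own):

```latex
% The universal hypothesis "all swans are white"
H:\quad \forall x\,\bigl(S(x)\rightarrow W(x)\bigr)

% Verification fails: no finite list of white swans entails H
\{\,S(a_1)\wedge W(a_1),\;\dots,\;S(a_n)\wedge W(a_n)\,\} \;\nvdash\; H

% Falsification succeeds: a single black swan refutes H by modus tollens
S(b)\wedge\neg W(b) \;\vdash\; \neg H
```

However many confirming instances we pile up, the inference to H remains invalid; but one counterexample deductively entails not-H, which is why falsification escapes the problem of induction while verification does not.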
Appendix 2: Empiricism and epistemology
Moving to Patrick’s second question, I meant to say that “pure” or “extreme” forms of empiricism are now generally rejected. Those who adopt a relativist epistemology (e.g. most post-modernists) and those who are ignorant of the philosophy of science (e.g. Thornbury) wrongly label their opponents (rationalists who base their arguments on logic and empirical observation) as “positivists”. In fact, nobody in the scientific community is a positivist these days. The last wave of positivists belonged to the famous Vienna Circle. The objective of the members of the Vienna Circle was to continue the work of their predecessors (most importantly Comte and Mach) by giving empiricism a more rigorous formulation through the use of recent developments in mathematics and logic. The Vienna Circle, which comprised Schlick, Carnap, Gödel, and others, and had Russell, Whitehead and Wittgenstein as interested parties (see Hacking, 1983: 42-44), developed a programme labelled Logical Positivism, which consisted first of cleaning up language so as to get rid of paradoxes, and then of limiting science to strictly empirical statements: in the grand tradition of positivism they pledged to get rid of all speculation on “pseudo-problems” and concentrate exclusively on empirical data. Ideas were to be seen as “designations”: terms or concepts formulated in words that needed to be carefully defined in order to be meaningful rather than meaningless. The logical positivists are particularly well known for their attempt to answer Hume’s criticism of induction through probability theory, which, crudely, proposed that while a finite number of confirming instances of a theory could not prove it, the more numerous the confirming instances, the more probable it was that the theory was true. This, like just about all of their work, ended in failure.
Empiricism in Linguistics: Behaviourism
But empiricism lived on, and in linguistics the division between “empiricist” and “rationalist” camps is noteworthy. The empiricists, who held sway, at least in the USA, until the 1950s, and whose most influential member was Bloomfield, saw their job as field work: equipped with tape recorders and notebooks, the researcher recorded thousands of hours of actual speech in a variety of situations and collected samples of written text. The data were then analysed in order to identify the linguistic patterns of a particular speech community. The emphasis was very much on description and classification, and on highlighting the differences between languages. We might call this the botanical approach; its essentially descriptive, static, “naming of parts” methodology depended for its theoretical underpinnings on the “explanation” of how we acquire language provided by the behaviourists.
Behaviourism was first developed in the early twentieth century by the American psychologist John B. Watson, who, influenced by the work of Pavlov and Bekhterev on the conditioning of animals, attempted to make psychological research “scientific” by using only objective procedures, such as laboratory experiments designed to establish statistically significant results. Watson formulated a stimulus-response theory of psychology according to which all complex forms of behaviour are explained in terms of simple muscular and glandular elements that can be observed and measured. No mental “reasoning”, no speculation about the workings of any “mind”, were allowed. Thousands of researchers adopted this methodology, and from the end of the First World War until the 1950s an enormous amount of research on learning in animals and in humans was conducted under this strict empiricist regime. In 1950 behaviourism could justly claim to have achieved paradigm status, and at that moment B.F. Skinner became its new champion. Skinner’s contribution to behaviourism was to challenge the stimulus-response idea at the heart of Watson’s work and replace it with a type of psychological conditioning known as reinforcement. Important as this modification was, it is Skinner’s insistence on a strict empiricist epistemology, and his claim that language is learned in just the same way as any other complex skill is learned, by social interaction, that is important here.
In sharp contrast to the behaviourists and their rejection of “mentalistic” formulations is the rationalist approach to linguistics championed by Chomsky. Chomsky (in 1959 and subsequently) argued that it is the similarities among languages, what they have in common, that is important, not their differences. In order to study these similarities we must allow the existence of unobservable mental structures and propose a theory of the acquisition of a certain type of knowledge.
Well, you know the story: Chomsky’s theory was widely adopted and became the new paradigm. Currently, badly-informed people like Larsen-Freeman and Thornbury (as opposed to serious scholars like O’Grady, MacWhinney and others) are claiming that no appeals to innate, unobservable mental processes or to modules of mind are necessary to explain language learning. What they don’t appreciate is that, unless, like William O’Grady or Brian MacWhinney, they deal properly with epistemological questions about the status of psychological processes, mental states, mind versus brain, and so on, they are either trying to have their cake and eat it or adopting an untenable empiricist epistemology.
Gregg, K. R. (2010) Shallow draughts: Larsen-Freeman and Cameron on complexity. Second Language Research, 26(4), 549-560.
Hacking, I. (1983) Representing and Intervening. Cambridge: Cambridge University Press.
Hume, D. (1988) An Enquiry Concerning Human Understanding. Amherst, NY: Prometheus.
Popper, K. R. (1959) The Logic of Scientific Discovery. London: Hutchinson.
Popper, K. R. (1963) Conjectures and Refutations. London: Hutchinson.
Popper, K. R. (1972) Objective Knowledge. Oxford: Oxford University Press.
Popper, K. R. (1974) Replies to Critics, in P.A. Schilpp (ed.), The Philosophy of Karl Popper. La Salle, IL: Open Court.
Thornbury, S. (2013) ‘The learning body,’ in Arnold, J. & Murphey, T. (eds.) Meaningful Action: Earl Stevick’s influence on language teaching. Cambridge: Cambridge University Press.
Thornbury, S. (2012?) Language as an emergent system. British Council, Portugal: In English. Available here: http://www.scottthornbury.com/articles.html