A Final Tilt at the Windmill of Thornbury’s A to Z

They keep coming, like burps after a poorly-digested Christmas lunch: comments on Thornbury’s A to Z blog. I’ve read three in the last few days, so let me add my own final swipe at the edifice before 2015 concludes.

Thornbury’s Sunday posts on his A to Z blog only lasted a few months, but during that short season they became part of my Sunday morning routine: late breakfast, read Thornbury, join in the discussions that always followed. The final Sunday post was The Poverty of the Stimulus, and as usual it had enough good stuff in it to spark off an interesting discussion. On this particular Sunday I made a few contributions and the exchange went something like this:

Initial statement from Scott (I use his first name to emphasise the cosy Sunday morning feel of the discussion, and also as a way of reminding myself to be nice.)

  1. The quantity and quality of language input that children get is so great as to question Chomsky’s poverty of the stimulus argument.
  2. An alternative to Chomsky’s view of language and language learning, is that “language is acquired, stored and used as meaningful constructions (or ‘syntax-semantics mappings’).”
  3. Everett (he of “There is no such thing as UG”: http://www.theguardian.com/technology/2012/mar/25/daniel-everett-human-language-piraha) is right to point out that since no one has proved that the poverty of the stimulus argument is correct, “talk of a universal grammar or language instinct is no more than speculation”.

Development

My first reply is short:

“Everett’s claim is nonsense since it’s logically impossible to prove that a theory is true.”

Scott ignores this comment and prefers to pay attention to a certain Svetlana (I imagine her sitting in a wifi-equipped tent, huddled over an Apple app projecting a 3-D crystal ball) who tells him that he’s right to question the POS claim because tiny babies, only recently emerged (sic) from the womb, form huge numbers, like, well, millions, of neural connections per second and, what’s more, they rapidly develop dendritic spines containing “lifelong memories”. A few unsupported pseudo-scientific, quasi-philosophical assertions which sound as if they’ve been picked up from a hazy weekend seminar at the Sorbonne are thrown in for good measure.

Imagine my surprise when Scott thanks the mystic Svetlana for bringing “new evidence to bear”, and says that this evidence serves to confirm his “initial hunch.”

“WHAT??” I typed furiously. “Are you really going to be hoodwinked by such postmodernist, obscurantist mumbo jumbo?” (There’s not much known for sure about the role dendritic spines play in learning and memory; I suspect she thinks that mentioning them here is evidence of deep knowledge of the scientific study of the nervous system; and suggesting that they disprove the POS argument is fanciful nonsense.)

“Give us an example of a lifelong memory stored in a dendritic spine that’s relevant to this discussion then!” I shout uselessly at the monitor.

Well, Scott’s not just hoodwinked, he actually becomes emboldened. Spurred on by the compelling “new evidence”, he’s now ready to dismiss the POS argument completely.

“Actually”, he says, the stimulus is quite enough to explain everything children know about language. Corpus studies “suggest that everything a child needs is in place”.

Asked how these corpus studies explain what children know about language, Scott (apparently still intoxicated by Svetlana’s absurd revelations) says “the child’s brain is mightily disposed to mine the input”, adding, as if this were the clincher, “a little stimulus goes a long way, especially when the child is so feverishly in need of both communicating and becoming socialized.”

“Cripes! His brain’s gone soft!” I thought. “He’s barking mad!”

“Platitudes and unsupported assertions have now completely replaced any attempt at reasoned argument”, I wrote.

“Anyone who claims that children’s knowledge about an aspect of syntax could not have been acquired from language input has to prove that it couldn’t. Otherwise it remains another empirically-empty assertion” says Scott.

Dear oh dear, here we are back at the start. As with the Everett quote, for purely formal reasons, it’s not possible to prove such a thing, and to demand such “proof” demonstrates an ignorance of logic and of how rational argument, science, and theory construction work. Failing to meet the impossible demand of proof doesn’t make the POS argument an empirically-empty assertion.

Then Russ Mayne joins in to have his typically badly-informed little say. Chomsky, he tells us, is “utterly scornful of data.”

“No he’s not”, says I. “Chomsky’s theory of UG has a long and thorough history of empirical research.”

And blow me down if Thornbury doesn’t chime in:

“‘Chomsky’s theory of UG has a long and thorough history of empirical research’. What!!? Where? When? Who?”

So now he’s not just showing a predilection for explanations involving the lifelong memories stored in dendritic spines, he’s showing even worse signs of ignorance.

Discussion

That the discussion of the POS argument didn’t get satisfactorily resolved is hardly surprising, but I was more than a bit surprised to hear Scott telling us that language learning can be satisfactorily explained by the general learning processes going on inside feverish young brains that are “mightily disposed to mine the input”. (Just in passing, all these references to the child’s brain seem to contradict the part of the current Thornbury canon which deals with “the language body”.) Asked to say a bit more about how language learning can be done through general learning processes and input alone, Thornbury says

“If we generalize the findings beyond the single word level to constructions…” and then “… generalize from constructions to grammar…”,  “hey presto, the grammar emerges on the back of the frequent constructions.”

Hey presto? What grammar? What “findings beyond the single word level”? How do you generalise these findings to “constructions”? And how do you generalise from constructions to “grammar”?

This unwarranted dismissal of the POS argument, coupled with this incoherent account of language learning, is, you might think, excusable in a Sunday morning chat, but we find more evidence of both the ignorance and the incoherence displayed here in more carefully-prepared public pronouncements on the same subjects. Thornbury’s very poor attempts to challenge Chomsky and psychological approaches to SLA by offering a particularly lame and simplistic version of emergentism, mostly based on Larsen-Freeman’s recent work, have already been commented on in this blog (see for example Thornbury and the Learning Body and Emergentism 2), but let me say just a bit more.

Thornbury and Emergentism

Thornbury keeps telling people about Larsen-Freeman’s latest project. The best criticism I’ve read of it is the 2010 article by Kevin Gregg in SLR entitled “Shallow draughts: Larsen-Freeman and Cameron on complexity.” There’s no way I can do justice to the article by quickly summarising it, and I urge readers of this post to read Gregg’s article for themselves. As always with Gregg, the argument is not just devastating, but delightfully written. Gregg dismantles the pretences of the Larsen-Freeman and Cameron book and shows that all their appeals to complexity theory are so much hogwash; nothing of substance sustains the fanciful opinions of the authors. And likewise, Thornbury.

Thornbury has said nothing to persuade any intelligent reader that his version of emergentism provides a good explanation of SLA. Just a few points:

  • Emergentism rests on empiricism, and empiricism pure and simple is a bankrupt epistemology.
  • Emergentism doesn’t get the support Thornbury claims it gets from the study of corpora – how could it? Thornbury’s claims show an ignorance of both theory construction and scientific method.
  • As Gregg (2010) points out, the claim that language is a complex dynamical system makes no sense. “Simply put, there is no such entity as language such that it could be a system, dynamical or otherwise. … Terms like ‘language’ and ‘English’ are abstractions; abstract terms, like metaphors, are essential for normal communication and expression of ideas, but that does not mean they refer to actual entities. English speakers exist, and (I think) English grammars come to exist in the minds/brains of those speakers, so it remains within the realm of possibility that a set of speakers is a dynamical system, or that the acquisition process is; but not language, and not a language.”
  • Thornbury’s assertion that language learning can be explained as the detection and memorisation of “frequently-occurring sequences in the sensory data we are exposed to” is an opinion masquerading as an explanatory theory. How can general conceptual representations acting on stimuli from the environment explain the representational system of language that children demonstrate?  Thornbury’s  suggestion that we have an innate capacity to “unpack the regularities within lexical chunks, and to use these patterns as templates for the later development of a more systematic grammar” begs more questions than it answers and, anyway, contradicts the empiricist epistemology adopted by most emergentists who say that there aren’t, indeed can’t be, any such things as innate capacities.

NOTE: I’ve added 2 appendices to deal with the 2 questions asked by Patrick Amon.

Appendix 1: Why can’t you prove that a general causal theory is true?

The problem of induction

Hume (1748) started from the premise that only “experience” (by which Hume meant that which we perceive through our senses) can help us to judge the truth or falsity of factual sentences. Thus, if we want to understand something, we must observe the relevant quantitative, measurable data in a dispassionate way. But if knowledge rests entirely on observation, then there is no basis for our belief in natural laws, because any such belief is an unwarranted inductive inference. We cannot logically go from the particular to the general: no amount of cumulative instances can justify a generalisation; ergo no general law or generalised causal explanation can be known to be true. No matter how many times the sun rises in the East, or thunder follows lightning, or swans appear white, we will never know that the sun rises in the East, or that thunder follows lightning, or that all swans are white. This is the famous “logical problem of induction”. Why, nevertheless, do all reasonable people expect, and believe, that instances of which they have no experience will conform to those of which they have experience? Hume’s answer is: “Because of custom or habit” (Popper, 1979: 4).

More devastating still was Hume’s answer to Descartes’ original question “How can I know whether my perceptions of the world accurately reflect reality?” Hume’s answer was “You can’t.”

It is a question of fact whether the perceptions of the senses be produced by external objects resembling them: how shall this question be determined? By experience surely; as all questions of a like nature.  But here experience is, and must be, entirely silent.  The mind has never anything present to it but the perceptions, and cannot possibly reach any experience of their connection with objects.  The supposition of such a connection is, therefore, without any foundation in reasoning. (Hume, 1988 [1748]: 253)

Thus, said Hume, Descartes was right to doubt his experiences, but, alas, experiences are all we have.

The asymmetry between truth and falsehood.

Popper (1972) offers a way out of Hume’s dilemma. He concedes that Hume is right: there is no logical way of going from the particular to the general, and that is that: however probable a theory might be claimed to be, it can never be shown to be true.
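Hume’s point, which Popper accepts, can be put formally. Using my own notation (not Popper’s or Hume’s): let S(x) mean “x is a swan” and W(x) mean “x is white”. Then no finite set of observed instances deductively entails the universal generalisation:

```latex
\underbrace{(S(a_1)\wedge W(a_1))\wedge(S(a_2)\wedge W(a_2))\wedge\cdots\wedge(S(a_n)\wedge W(a_n))}_{n \text{ observed white swans, for any finite } n}
\;\nvDash\; \forall x\,\bigl(S(x)\rightarrow W(x)\bigr)
```

However large n becomes, the inference remains deductively invalid: the conclusion quantifies over all swans, observed and unobserved, so the very next instance may refute it.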

Popper (1959, 1963, 1972) argued that the root of the problem of induction was the concern with certainty. In Popper’s opinion Descartes’ quest was misguided and had led to three hundred years of skewed debate.  Popper claimed that the debate between the rationalists and the empiricists, with the idealists pitching in on either side, had led everybody on a wild goose chase – the elusive wild goose being “Truth”.  From an interest in the status of human knowledge, philosophers and philosophers of science had asked which, if any, of our beliefs can be justified.  The quest was for certainty, to vanquish doubt, and to impose reason.  Popper suggested that rather than look for certainty, we should look for answers to problems, answers that stand up to rational scrutiny and empirical tests.

Popper insists that in scientific investigation we start with problems, not with empirical observations, and that we then leap to a solution of the problem we have identified – in any way we like. This second anarchic stage is crucial to an understanding of Popper’s epistemology: when we are at the stage of coming up with explanations, with theories or hypotheses, then, in a very real sense, anything goes.  Inspiration can come from lowering yourself into a bath of water, being hit on the head by an apple, or by imbibing narcotics.  It is at the next stage, the stage of the theory-building process, that empirical observation comes in, and, according to Popper, its role is not to provide data that confirm the theory, but rather to find data that test it.

Empirical observations should be carried out in attempts to falsify the theory: we should search high and low for a non-white swan, for an example of the sun rising in the West, etc. The implication is that, at this crucial stage in theory construction, the theory has to be formulated in such a way as to allow for empirical tests to be carried out: there must be, at least in principle, some empirical observation that could clash with the explanations and predictions that the theory offers.  If the theory survives repeated attempts to falsify it, then we can hold on to it tentatively, but we will never know for certain that it is true.  The bolder the theory (i.e. the more it exposes itself to testing, the more wide-ranging its consequences, the riskier it is) the better.  If the theory does not stand up to the tests, if it is falsified, then we need to re-define the problem, come up with an improved solution, a better theory, and then test it again to see if it stands up to empirical tests more successfully.  These successive cycles are an indication of the growth of knowledge.

Popper (1974: 105-106) gives the following diagram to explain his view:

P1 -> TT -> EE -> P2

P = problem; TT = tentative theory; EE = error elimination (empirical experiments to test the theory)

We begin with a problem (P1), which we should articulate as well as possible. We then propose a tentative theory (TT), that tries to explain the problem. We can arrive at this theory in any way we choose, but we must formulate it in such a way that it leaves itself open to empirical tests.  The empirical tests and experiments (EE) that we devise for the theory have the aim of trying to falsify it.  These experiments usually generate further problems (P2) because they contradict other experimental findings, or they clash with the theory’s predictions, or they cause us to widen our questions.  The new problems give rise to a new tentative theory and the need for more empirical testing.

Popper thus gives empirical experiments and observation a completely different role: their job now is to test a theory, not to prove it, and since this is a deductive approach it escapes the problem of induction. Popper takes advantage of the asymmetry between verification and falsification: while no number of empirical observations can ever prove a theory is true, just one such observation can prove that it is false.  All you need is to find one black swan and the theory “All swans are white” is disproved.
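The asymmetry can be sketched in elementary logic (again, my own notation rather than Popper’s): let T be a theory and O an observational prediction that T entails. Refutation is valid modus tollens; “verification” is the fallacy of affirming the consequent:

```latex
\text{valid (modus tollens):} \quad T \rightarrow O,\; \neg O \;\vDash\; \neg T
```

```latex
\text{invalid (affirming the consequent):} \quad T \rightarrow O,\; O \;\nvDash\; T
```

One black swan supplies the ¬O that refutes “All swans are white”; a million white swans supply only O, from which nothing about the truth of T follows.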

Appendix 2: Empiricism and epistemology

Moving to Patrick’s second question, I meant to say that “pure” or “extreme” forms of empiricism are now generally rejected. Those who adopt a relativist epistemology (e.g. most post-modernists) and those who are ignorant of the philosophy of science (e.g. Thornbury) wrongly label their opponents (rationalists who base their arguments on logic and empirical observation) as “positivists”. In fact, nobody in the scientific community is a positivist these days. The last wave of positivists belonged to the famous Vienna Circle. The objective of the members of the Vienna Circle was to continue the work of their predecessors (most importantly Comte and Mach) by giving empiricism a more rigorous formulation through the use of recent developments in mathematics and logic. The Vienna Circle, which comprised Schlick, Carnap, Gödel, and others, and had Russell, Whitehead and Wittgenstein as interested parties (see Hacking, 1983: 42-44), developed a programme labelled Logical Positivism, which consisted first of cleaning up language so as to get rid of paradoxes, and then limiting science to strictly empirical statements: in the grand tradition of positivism they pledged to get rid of all speculation on “pseudo problems” and concentrate exclusively on empirical data. Ideas were to be seen as “designations”, terms or concepts, that were formulated in words that needed to be carefully defined in order that they be meaningful, rather than meaningless. The logical positivists are particularly well-known for their attempt to answer Hume’s criticism of induction through Probability Theory, which, crudely, proposed that while a finite number of confirming instances of a theory could not prove it, the more numerous the confirming instances, the more probable it was that the theory was true. This, like just about all of their work, ended in failure.

Empiricism in Linguistics: Behaviourism

But empiricism lived on, and in linguistics the division between “empiricist” and “rationalist” camps is noteworthy. The empiricists, who held sway, at least in the USA, until the 1950s, and whose most influential member was Bloomfield, saw their job as field work: armed with tape recorders and notebooks, researchers recorded thousands of hours of actual speech in a variety of situations and collected samples of written text. The data was then analysed in order to identify the linguistic patterns of a particular speech community. The emphasis was very much on description and classification, and on highlighting the differences between languages. We might call this the botanical approach, and its essentially descriptive, static, “naming of parts” methodology depended for its theoretical underpinnings on the “explanation” of how we acquire language provided by the behaviourists.

Behaviourism was first developed in the early twentieth century by the American psychologist John B. Watson, who, influenced by the work of Pavlov and Bekhterev on the conditioning of animals, attempted to make psychological research “scientific” by using only objective procedures, such as laboratory experiments which were designed to establish statistically significant results. Watson formulated a stimulus-response theory of psychology according to which all complex forms of behaviour are explained in terms of simple muscular and glandular elements that can be observed and measured. No mental “reasoning”, no speculation about the workings of any “mind”, were allowed. Thousands of researchers adopted this methodology, and from the end of the First World War until the 1950s an enormous amount of research on learning in animals and in humans was conducted under this strict empiricist regime. In 1950 behaviourism could justly claim to have achieved paradigm status, and at that moment B.F. Skinner became its new champion. Skinner’s contribution to behaviourism was to challenge the stimulus-response idea at the heart of Watson’s work and replace it with a type of psychological conditioning known as reinforcement. Important as this modification was, it is Skinner’s insistence on a strict empiricist epistemology, and his claim that language is learned in just the same way as any other complex skill, by social interaction, that is important here.

In sharp contrast to the behaviourists and their rejection of “mentalistic” formulations is the rationalist approach to linguistics championed by Chomsky. Chomsky (in 1959 and subsequently) argued that it is the similarities among languages, what they have in common, that is important, not their differences. In order to study these similarities we must allow the existence of unobservable mental structures and propose a theory of the acquisition of a certain type of knowledge.

Well, you know the story: Chomsky’s theory was widely adopted and became the new paradigm. Currently, badly-informed people like Larsen-Freeman and Thornbury (as opposed to serious scholars like O’Grady, MacWhinney and others) are claiming that no appeals to innate, unobservable mental processes or to modules of mind are necessary to explain language learning. What they don’t appreciate is that, unless, like William O’Grady or Brian MacWhinney, they deal properly with epistemological questions about the status of psychological processes, mental states, mind versus brain, and so on, they are either trying to have their cake and eat it or adopting an untenable empiricist epistemology.

 

References

Gregg, K.R. (2010) Shallow draughts: Larsen-Freeman and Cameron on complexity. Second Language Research, 26(4), 549 – 560.

Hacking, I. (1983) Representing and Intervening. Cambridge: Cambridge University Press.

Hume, D. (1988) [1748] An Enquiry Concerning Human Understanding. Amherst, NY: Prometheus.

Popper, K. R. (1959) The Logic of Scientific Discovery. London: Hutchinson.

Popper, K. R. (1963) Conjectures and Refutations. London: Hutchinson.

Popper, K. R. (1972) Objective Knowledge. Oxford: Oxford University Press.

Popper, K. R. (1974) Replies to Critics, in P.A. Schilpp (ed.), The Philosophy of Karl Popper. La Salle, Ill.: Open Court.

Thornbury, S. (2013) ‘The learning body,’ in Arnold, J. & Murphey, T. (eds.) Meaningful Action: Earl Stevick’s influence on language teaching. Cambridge: Cambridge University Press.

Thornbury, S. (2012?) Language as an emergent system. British Council, Portugal: In English. Available here: http://www.scottthornbury.com/articles.html

 

 

30 thoughts on “A Final Tilt at the Windmill of Thornbury’s A to Z”

  1. Hi Geoff,

    Another fascinating post. May I suggest, though, if it’s not too presumptuous, that, for many readers, claims such as that it is logically impossible to prove that a theory is true, or that empiricism pure and simple is a bankrupt epistemology, will require some explanation. There is, I think, a widespread view according to which it is perfectly possible to prove a theory true, and the second of these claims, the one about empiricism, I should admit I’m unclear about myself. Do you mean that empiricism, in its pure and simple version, is a bankrupt epistemology, or do you mean that empiricism is, purely and simply, a bankrupt epistemology?


    1. Hi Patrick,

      Thanks for your questions. I’ve added 2 appendices to the post which attempt to deal with why you can’t prove a theory is true and empiricism.


  2. Hi Geoff,
    (This is what makes ELT so much more interesting.) I think this lays bare the bones of the conflict over lexical priming and UG steered language acquisition. I am not quite sure I fully see how a relativist epistemology, and a post-modern view, versus, I guess, the modern counterpart are relevant to a theory of language and language acquisition. I will stare at this for a while.

    TK.


  3. Hi Thomas,

    There aren’t many academics who support Hoey’s lexical priming theory of language learning – partly because he’s hardly bothered to do more than offer a sketch of it, being far more interested in, and better-informed about, lexis than language learning. Hoey borrows a bit from connectionists but offers no serious treatment of how priming works. As far as I know, Leo Selivan and Hugh Dellar think Hoey’s theory is great stuff, but they’ve never managed to say anything very coherent about it. I actually emailed Leo and asked him if he was serious in his praise of Hoey’s theory and Dellar’s unconditional support of it. “Absolutely!” he replied.

    So, with regard to theories of SLA, there are some not very serious views out there, like Larsen-Freeman’s and Hoey’s, and also a lot of relativist nonsense, mostly in the realm of sociolinguistics. That leaves the interesting debate going on between those who think SLA is best seen as a psychological process involving developing cognitive skills which rely on a special, uniquely human, inherited capacity for language learning, and those who see language learning as the same as other kinds of learning, capable of being explained without resorting to any “nativist” constructs.

    Kevin Gregg says that Nick Ellis and most other emergentists have got almost nowhere in their efforts, whereas William O’Grady is the exception. He’s an emergentist, who “both denies everything Chomsky stands for and recognizes (..unlike the rest of them) the challenges he has to face to compete with generative theory; the POS being one of them.” I think Bates was getting close, and that MacWhinney’s Competition Model also recognises the challenges that any empiricist approach faces – although MacWhinney, like O’Grady, is not “strictly” empiricist.


    1. Hi again,
      I can see why Hoey falls short of offering a full-fledged theory of language acquisition. He came to priming as a conclusion after a long and close look at text. That is, as discussed in another post, he was looking for a psychological mechanism that would explain the omnipresence of collocation in language (p.7). Priming seemed to fit the bill. I studied priming some years ago. I felt that priming, as in associative learning, and priming in memory recall, have been understood and described for some time. The kinship of priming and implicit learning, as I think there is, seems to be very interesting. To me, implicit learning and priming describe the same process. (I found implicit learning far from being clearly defined or agreed upon. So, I find myself in the uncomfortable position of trying to betroth an unknown pretender to a mysterious contender.) If I can establish this correlation, priming assumes a more serious role, as nobody by now would deny that we learn a language, and that much of what we can do with language has come about, in the absence of conscious / explicit learning. I do not mean by this absence of attention.

      I read some of Ellis’s papers. I am eager to read what Gregg has to say; I wonder what “got almost nowhere” would mean. And I will take a look at O’Grady.

      Best,

      Thomas k


    2. Dear Geoff,
      The fact that Hoey’s view is not developed should not mean that it cannot be a serious option. As to the dichotomy you point out, those who appeal to innateness enriched with a specific language endowment, versus those who appeal to an innate learning capacity unique to humans sufficient to explain language, priming would be an example of the latter innate learning capacity. The physiological counterpart would be Hebb’s theory of synaptic plasticity. It’s the way the brain works. It’s the way we learn music, the way we orient ourselves in our physical surroundings, the way we build memory, etc. Certainly, this is not enough to provide a full explanation of language – to me the big question is how to bridge the gap from brain functions to consciousness, i.e. questions of how we assign meaning, the nature of meaning, etc. – but whatever comes bottom up, the empirical data, can be explained or interpreted with reference to priming. (In Popperian fashion, I would hazard a guess, my pet theory, that our uniqueness, that we have language, does not come into play with some guiding mechanism for structural qualities. Rather, the human species has a unique meaning-making ability. It’s captured in the Adamic myth of naming the animals. I have no idea whatsoever how that can be explained.)

      I am reading your Theory Construction in SLA. I find it very helpful.

      TK


      1. “… the human species has a unique meaning-making ability…”

        Tomasello (2008) argues that ‘this unique meaning-making ability’ evolved to serve the need for collaborative social structures that in turn presupposed a capacity for joint attention (mediated, for example, through pointing), a precondition for the successful deployment of the three macrofunctions of language, i.e. to request, to (helpfully) inform, and to share experiences, emotions and attitudes (in order to expand common ground and forge social cohesion). Tomasello’s extensive work with great apes suggests that only the first of these functions is shared with our primate kin, i.e. that the meanings that chimpanzees are able to formulate and communicate are purely imperative and not declarative. Whereas here-and-now requests can be realised non-verbally, e.g. through deictic gestures, or simply lexically, telling and, especially, storytelling, require much more sophisticated linguistic devices – what we call grammar. Similarly, jargons evolve into pidgins which are in turn conventionalized into creoles.

        There is some similarity here with Halliday’s view of language as a social semiotic, and that the meaning potential of language is encoded in its lexicogrammar to serve at least three kinds of meaning: interpersonal, ideational, and textual. Children, in Halliday’s formulation, ‘learn to mean’, that is, they learn to use language in order to function socially. ‘Language is as it is because of what it has to do.’ Chomsky, of course, was not in the least interested in this meaning potential (“the notion ‘grammatical’ cannot be identified with ‘meaningful’ or ‘significant’ in any semantic sense”).


      2. Tomasello is indeed expressing Halliday’s views. It’s interesting that you, Scott, rather like Larsen-Freeman, state the view that language derives from social interaction as if it were a fact, and as if the mere assumption that language is a social semiotic somehow gives weight to your criticisms of Chomsky.

        What needs to be made clear are the differences between formal and functional approaches to language, partly exemplified in differences between Chomsky’s and Halliday’s work. Whereas Chomsky studies linguistic knowledge (competence), Halliday studies language use: one concentrates on psychology, the other on sociology. There’s obviously a case to be made for both approaches, but let’s at least admit that the case for each has to be made. If you’re interested in pragmatics, discourse analysis, and aspects of sociolinguistics, well fine, but let’s not give the impression that Halliday has resolved the POS argument, or that we are bound to accept the assumptions on which his approach is based. The two fields have developed alongside each other, neither camp being disposed to lock horns with the other, both camps preferring to pursue their separate agendas.

        Halliday eschews any attempt to describe a rule system which generates the grammatical sentences in a language, and he sees language learning as “learning how to mean”. I won’t bother to go into any critique of Halliday here, enough to say that it’s not obvious to everybody that language is best described as a social semiotic (language isn’t exclusively a tool for social interaction, is it?) or that structural regularities in language are best described as common, constantly shifting strategies which respond to communicative needs.


      3. “… it’s not obvious to everybody that language is best described as a social semiotic (language isn’t exclusively a tool for social interaction, is it?) …”

        No, but the few attested cases of children who were deprived of social interaction and didn’t learn their mother tongue (Genie et al.), set against the non-existence of cases of isolated children who DID learn it, would suggest that – if nothing else – social interaction is a necessary, if not sufficient, condition for FLA. ‘The social semiotic is the system of meanings that defines or constitutes the culture; and the linguistic system is one mode of realisation of these meanings. The child’s task is to construct the system of meanings that represents his own model of social reality. This process takes place inside his own head; it is a cognitive process. But it takes place in contexts of social interaction, and there is no way it could take place except in these contexts. As well as being a cognitive process, the learning of the mother tongue is also an interactive process. It takes the form of the continued exchange of meanings between the self and others. The act of meaning is a social act.’ (Halliday 1975: 139–40).

      4. To learn a language, children need to hear certain types of sentences in certain types of contexts. The question is, to paraphrase O’Grady (2005, p. 182): what does the brain of a child do when it’s exposed to speech, and how does that result in a fully fluent child three years later? Of course the cognitive process of language learning takes place in social contexts, but that’s not an explanation of the learning process. And quoting Halliday’s assertion that “the act of meaning is a social act” gets us precisely nowhere in answering O’Grady’s question or the POS argument. Those interested in O’Grady’s answer, which offers an interesting alternative to Chomsky’s (unlike Scott’s protestations, reviewed in this blog post, which draw on the unpromising treatment offered by Larsen-Freeman), need only read Chapter 7 of How Children Learn Language.

        O’Grady, W. (2005) How Children Learn Language. CUP

      5. But there IS no POS argument. That was the whole point of my blog. As Dabrowska puts it (much better than I could):

        “Strikingly, most expositions of the poverty of the stimulus argument in the literature do not take the trouble to establish the truth of the premises: it is simply assumed. In a well-known critique of the POS argument, Pullum and Scholz (2002) analyze four linguistic phenomena (plurals inside compounds, anaphoric one, auxiliary sequences, auxiliary placement in Y/N questions) which are most often used to exemplify it, and show that the argument does not hold up: in all four cases, either the generalization that linguists assumed children acquired is incorrect or the relevant data is present in the input, or both. With respect to the auxiliary placement rule, for example, Pullum and Scholz (2002) estimate that by age 3, most children will have heard between 7500 and 22000 utterances that falsify the structure independent rule”.

        If the stimulus is not impoverished, the need to hypothesize an innate grammar is somewhat undermined, to say the least. And the feverish way that ‘innatists’ cling to the UG starts to sound a little bit like creationism, or paleo-diets, or neurolinguistic programming, and other daft kinds of magical thinking.

      6. Here we go again! Scott started his Poverty of the Stimulus blog post by saying “The quantity and quality of language input that children get is so great as to question Chomsky’s poverty of the stimulus argument.” Having read a couple of articles and had Svetlana confirm his hunch, he now dogmatically asserts “there IS no POS argument!”

        To support this assertion he cites Dabrowska (2015), whose argument against the POS rests on the well-known Pullum and Scholz (2002) article. This is cherry-picking at its very worst – Scott hasn’t bothered to give a fair review of Dabrowska (2015), who hasn’t bothered to give a fair review of Pullum and Scholz (2002). Dabrowska is far less strident than Scott, and IMO would be unlikely to support his gleeful rejection of the POS. Just one quote: “The crucial question is whether the relevant knowledge or abilities are language-specific or whether they can be attributed to more general cognitive processes—and this is far from clear.” Note: “far from clear”, not “conclusively settled” as Scott claims.

        Scott is probably blissfully unaware of how poor the Dabrowska (2015) argument is, but the author herself must have known, although she fails to mention it, that the Pullum and Scholz (2002) paper appeared in a special issue of The Linguistic Review 19, where a number of responses to the paper also appeared. These papers criticise Pullum and Scholz for failing to consider other researchers’ central points, for failing to acknowledge the significance of their findings, and for misinterpreting certain analyses or results. Here’s a summary of a few of them, taken from Tabua (2009) (who, BTW, offers more support for the POS argument in her own study):

        • Thomas (2002) points out that Crain’s (1991) central point is not addressed in Pullum and Scholz’s paper. Crain (1991) shows that infants are sensitive to ungrammaticality despite the lack of negative evidence in the input.

        • Lasnik and Uriagereka (2002) point out that Pullum and Scholz ignore the findings of Freidin (1991), who investigated English children’s acquisition of auxiliary fronting with special emphasis on the extent to which such knowledge follows from the input the child is exposed to. Lasnik and Uriagereka treat those findings as an instantiation of an extreme form of the POS argument, which means that children reach the right conclusion despite positive evidence being insufficient.

        • Crain and Pietroski (2002) note that Pullum and Scholz fail to consider the data that nativists’ claims rest on, and that the patterns of unique child constructions that appear in the acquisition data are left unaccounted for. Crucially, these patterns might contravene the particular adult languages being acquired, but they never violate the principles of Universal Grammar.

        • Legate and Yang (2002) note that Pullum and Scholz only consider a very small subset of principles and ignore most of the corpus from which these data were taken. They also show that Pullum and Scholz do not establish the sufficiency of the disconfirming data they claim to have found in the input. In their review, Legate and Yang analyse indirect ways to test for data sufficiency, which consist in comparing two phenomena that are acquired around the same age and finding out whether they require the same amount of linguistic experience. The conclusion is that the principle of structure-dependence is innate, i.e. not learned on the basis of data, but part of some prior linguistic knowledge.

        • Fodor and Crowther (2002) point out that Pullum and Scholz exclude negative evidence from their discussion. Both the insufficiency of positive evidence and the lack of negative evidence lead to underdetermination, which is something that has to be overcome by innate principles. Concerning the lacunae in the data children are exposed to, Fodor and Crowther (2002: 109) note that, contrary to what Pullum and Scholz assume, adults use a distinctive style when speaking to children, one which involves phonological clarity and short sentences. Under this view, then, certain structures that are frequent in adult speech might be rare when children are addressed.

        Scott once again demonstrates his lack of scholarship and his willingness to grab at any flimsy bits of evidence which might support his preposterous assertions. Scott’s talk of the language body, slow-release grammar, babies desperate to socialise, etc., etc., amounts to less than serious attempts to contribute to an important and interesting debate. His breezy dismissal of Chomsky and his attempts to promote emergentism by leaning on the totally unreliable work of Larsen-Freeman are, IMO, regrettable.

        Serious work is currently underway which not only explores weaknesses in Chomsky’s theory, but also attempts to develop an alternative theory of language learning*. Scott has yet to make any serious contribution to this work.

        * See, for example, William O’Grady, Miseon Lee, & Hye-Young Kwak, “Emergentism and Second Language Acquisition”, downloadable from this link: http://www.ling.hawaii.edu/faculty/ogrady/Emergentism_%26_SLA.pdf

        Tabua, S. (2009) Early Catalan OV Sequences: Empirical Evidence for the Poverty of Stimulus Argument.

      7. Dear Scott,
        Thank you for the lead. At the same time, I am afraid these ideas will mess up the quest for consciousness, borrowed from Christof Koch’s book with the selfsame title, or my search for meaning (V. Frankl), beyond rescue. I think there are so many problems with attempts that try to work out origins for whatever is around today, that I keep them in another mental locker. Discussions in there are rather violent.

        Happy 2016!

        Thomas K

      8. Hi Thomas,

        Maybe Hoey’s view can be developed into a serious option, but it hasn’t been, yet.

        The questions you mention are at the heart of the search for an explanation of language and language learning. Our answers to epistemological questions and the mind / brain problem affect how we tackle the question of how we make sense of the input, whether it’s bottom-up or top-down or something else, and the rest of it. Your guess could well be right; the only way we’ll find out is to keep investigating. Everything I write on this blog is motivated by the belief that those of us who take an interest in on-going work in applied linguistics should be both rational and critical. We shouldn’t cherry pick evidence, and we shouldn’t uncritically adopt explanations that chime with our own prejudices. We should try as hard as possible to be intellectually honest and rigorous. Pace Scott, life is not too short to make the effort to critically examine theories before adopting them, and it’s not always (or even usually) the case that, when critically assessing rival arguments, the devil is in the detail.

      9. And hard it is–“intellectually honest and rigorous.” It’s just that we are sometimes more motivated to do the grueling work of thinking by proving a point, ours preferably. I for my part can get pretty excited about “my stuff”, and have come to suspect that my professionalism has grown as a bulwark to keep the opinionated self under control.

        It has been a treat to participate in the conversations.

        Cheers,

        Thomas K.

      10. Hi again Thomas,

        We can be motivated by whatever we like, and the history of science is the history of often huge egos sometimes fighting to the death to defend their views. That’s why science relies so much on the critical appraisal of theories, which in turn demands that theories are expressed in such a way that they can be appraised in terms of their clarity, logic and empirical evidence.

        Cheers to you.

        Geoff

  4. Hi Geoff,

    Thanks for adding the appendices; they’ve been very helpful. I’ve been wondering how someone can possibly falsify a theory someone else proposes, e.g. UG, when its existence can’t actually be proved (because, as you explained, it’s logically impossible to prove that a theory is true, especially when our knowledge rests entirely on observation). However, I think I do understand the example of ‘all swans are white’; here it’s just the word ‘all’ that we can dismiss as soon as we find one black swan.

    Anyway, I see why people like to cast doubt on all sorts of theories, even though they can’t falsify them. And I think it’s perfectly fine to constantly question ideas and opinions because if one becomes too identified with what they believe, they may get stuck in the end. And all the discussions that arise are truly fascinating.

    Hana

  5. Hi Hana,

    The principles and parameters version of UG stands up quite well to scrutiny as a scientific theory. It’s falsifiable – you find people who don’t know what the theory says they should know, for example. The theory of planetary motion developed by Ptolemy was falsified by observations made with better instruments; Newton’s theory of light was falsified by observations of the diffraction of light, and so on. Astrology as a theory relies on lots of supporting evidence from successful predictions, but is denied scientific status because it doesn’t accept that negative evidence falsifies it. But, as you imply, this is not the whole story. In fact, scientific theories are often difficult to falsify (including UG!); theories are rescued by adding ad hoc hypotheses to cover the “exceptions”, and so on. In brief, “naïve falsification” as it’s been called just doesn’t work, and it’s astute of you to sense this.

    But anyway, what is true, I think, is that the best theories are those that use well-defined constructs and clearly indicate the kind of empirical evidence that would support or challenge their claims. And, as you say, it’s by constantly questioning ideas and opinions that we learn.

  6. A good read, as always, Geoff. Thanks.

    “Popper suggested that rather than look for certainty, we should look for answers to problems, answers that stand up to rational scrutiny and empirical tests.”

    And I argue that the answer to the problem ‘How is a language acquired?’ need not presuppose an innate language acquisition device or UG, when it can be explained more parsimoniously by reference to existing, more general, cognitive faculties. As Nick Ellis argues (in VanPatten & Williams, eds. Theories of Second Language Acquisition: An Introduction, Routledge 2007), these explain at least some of the puzzles associated with language acquisition, such as the ‘natural order’, just as well as, if not better than, other competing theories. Empirical research (e.g. that summarised by Ellis in ‘Formulaic language and second language acquisition: Zipf and the phrasal teddy bear’ in the Annual Review of Applied Linguistics 2012) is available for those with the time or inclination to read it.

    As Michael Tomasello, a leading proponent of the so-called ‘usage-based’ (and yes, ‘embodied’) school of thought, sums it up (in Origins of Human Communication, MIT Press, 2008):

    ‘There is no question that there are general computational constraints on how languages may be created, acquired, and changed, and there are even implicational universals such that if the language accomplishes function X in this way, then it almost always accomplishes function Y in that way (Greenberg 1963). But the question is whether we need an innate universal grammar to account for these kinds of things. In recent research, many of these constraints and implicational relations have been accounted for in terms of the general way that people process information (Hawkins 2004) or the way that they focus on things informationally in different constructions (Goldberg 2006). In this view, then, universal computational constraints on all languages reflect general cognitive, social, and vocal-auditory principles and constraints inherent in human psychological functioning. Languages have been created within the constraints of pre-existing human cognition and sociality, and if these are understood well enough, on the current hypothesis, these constraints will supply what is needed. It is not that the evolution of some kind of innate syntactic template such as universal grammar is impossible, it is just that currently there is no evidence for it empirically, no precise formulation of it theoretically, and no need for it at all – if the nature of language is properly understood.’

    Finally, I’d refer you to the article by Dabrowska (‘What exactly is Universal Grammar, and has anyone seen it?’) that I referenced in my final comment on the P is for Poverty blog post, and which concludes: ‘Virtually everyone agrees that there is something unique about humans that makes language acquisition possible. There is a growing consensus, even in the generativist camp, that the “big mean UG” of the Principles and Parameters model is not tenable: UG, if it exists, is fairly minimal, and most of the interesting properties of human languages arise through the interaction of innate capacities and predispositions and environmental factors.’
    You can access it here:
    http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00852/full

  7. Hi Scott,

    Thanks for your reply.

    In my post, I suggest that in your discussion of the POS argument on your blog you show a poor grasp of Chomsky’s work and of fundamental considerations of theory construction. I also suggest that your writings on emergentism don’t give a full or well-considered account of language learning; show an inability to appreciate the severe weaknesses in Larsen-Freeman’s work; and fail to make your position on empiricism clear.

    In your reply you say that language acquisition can be explained without resorting to any innate language acquisition device or to UG. A more parsimonious explanation is offered by reference to general, cognitive faculties. In support of this you say:

    1. Nick Ellis (2007) in his chapter on the Associative-Cognitive CREED, explains “at least some” of the puzzles associated with language acquisition. Ellis (2012) summarises empirical research that supports an emergentist view.

    2. Tomasello (2008) “sums it up”. He says that recent research has accounted for “many” of the constraints that characterise natural languages, in terms of the general way that people process information. If the constraints of pre-existing human cognition and sociality are understood well enough, they will supply what is needed for language learning. Finally, there is no evidence for the existence of an innate syntactic template such as universal grammar, and no need for it if the nature of language is properly understood.

    3. An article by Dabrowska (2015) says “There is a growing consensus, even in the generativist camp, that the “big mean UG” of the Principles and Parameters model is not tenable: UG, if it exists, is fairly minimal, and most of the interesting properties of human languages arise through the interaction of innate capacities and predispositions and environmental factors.”

    Your reply thus consists of a list of other people’s assertions and says very little indeed in answer to my criticisms.
    * Which of the puzzles does Ellis explain?
    * What about the ones he doesn’t explain?
    * How are we to assess the empirical research he summarises?
    * What is the nature of language in Tomasello’s opinion?
    * What are the constraints he refers to?
    * What is the nature of the “interaction of innate capacities and predispositions and environmental factors” which Dabrowska refers to?

    The claim that general, cognitive faculties can explain language learning rests on describing what these putative general cognitive faculties are; you offer no such description and neither does Nick Ellis in his 2007 and 2012 articles. That’s hardly surprising, because nobody knows what they are, a fact which doesn’t seem to worry you. Ellis’ and others’ attempts to create connectionist models which prove the possibility of empiricist learning have so far failed to eliminate the need for innate, language-specific ideas. In the absence of some well-developed empiricist model of language learning, your appeals to embodied learning, the language body, babies’ desire for socialisation, and so on are really pretty thin soup. Just by the way, as Gregg (2006) points out, it’s possible that language skills develop (however that might be) in the same way that other skills develop (however that might be), while language knowledge is fundamentally different from other kinds of knowledge. “It is perfectly possible, that is, that our ability, say, to use active and passive forms of sentences improves according to the power law of practice (Ellis, 2002), while our possession of the concept SUBJECT is innate.”

    To paraphrase Gregg (2006), the argument for innateness in language is the argument from the poverty of the stimulus. The data that would be needed for choosing among sets of principles are in many cases not the sort of data that are available to an empiricist learner. Since children are in fact successful language learners, it follows that there must be innate, language-specific knowledge. Your post on the POS and your reply here don’t do much to challenge that argument. I’ve already quoted your assertion “Anyone who claims that children’s knowledge about an aspect of syntax could not have been acquired from language input has to prove that it couldn’t. Otherwise it remains another empirically-empty assertion.” Notwithstanding this absurd remark, the onus is actually on you and other emergentists to show how the environment is rich enough to provide universally successful language instruction to first language learners. That means not just asserting that associative learning is enough, or that a little stimulus goes a long way; but offering a coherent, persuasive explanation.

    References
    Dabrowska, E. (2015) What exactly is Universal Grammar, and has anyone seen it? Frontiers in Psychology. http://dx.doi.org/10.3389/fpsyg.2015.00852

    Ellis, N. (2007) The Associative-Cognitive CREED. In VanPatten, B. and Williams, J. (eds.) Theories of Second Language Acquisition: An Introduction, Routledge 2007

    Ellis, N. (2012) Formulaic language and second language acquisition: Zipf and the phrasal teddy bear. Annual Review of Applied Linguistics.

    Gregg, K.R. (2006) Taking a social turn for the worse: The language socialization paradigm for second language acquisition. Second Language Research, 22: 413 – 442.

    Tomasello, M. (2998) Origins of Human Communication, MIT Press.

    1. * Which of the puzzles does Ellis explain?
      * What about the ones he doesn’t explain?
      * How are we to assess the empirical research he summarises?
      * What is the nature of language in Tomasello’s opinion?
      * What are the constraints he refers to?
      * What is the nature of the “interaction of innate capacities and predispositions and environmental factors” which Dabrowska refers to?

      Life is short. The details are in the references. And the devil is in the details. Read them. As for me, I’m going to toast the new year. I think you should too. 😉

  8. Hi again, Scott,

    I quite understand your desire to enjoy life rather than make the effort required to explain yourself clearly. At this rate, by the time you get to my age, all we’ll get from you will be the occasional pithy bon mot. “Graze carelessly on collocations and stop worrying”; “Jeremy Harmer is a genius”; “My new book Natural Grammar 15 is out.”

    I’ve read all the stuff referred to, except Tomasello, M. (2998) 🙂 I wasn’t expecting you to answer the questions, just drawing attention to the fact that you hadn’t.

    I’ll certainly toast the new year tomorrow evening, and you too, while I’m at it. While you’re swinging from the rafters at some celebrity junket, I’ll watch the ball drop in the Plaza del Sol on telly, clink glasses with Judy and say “Here’s to us, the kids and our friends, including the splendid Scott Thornbury!”

    Happy New Year!

  9. Dear Scott,

    Before I get stuck into this special commemorative bottle of the Macallan, and after our latest little “frank exchange of views”, let me say how much I admire everything you’ve done to make the ELT world more interesting, more inquisitive, better. You’re a treasure, you really are, and an extraordinarily warm, funny, decent human being to boot. I wish you the very best for 2016.

    Your friend (I hope),

    Geoff
