* Theory Assessment in SLA


I’ve been asked by several students to say a bit more about theory assessment in SLA. Below is an abridged version of the text taken from my book on Theory Construction in SLA.

By what criteria do we judge theories of SLA which offer different explanations of the same phenomena? How do we assess to what extent they successfully achieve their aims? What makes one theory “better” than a rival? First, like any text, a theory needs to be coherent and cohesive, and expressed in the clearest possible terms. It should also be consistent – there should be no internal contradictions. Theories can be compared by these initial criteria, which may help to expose fatal weaknesses or simply invite a better formulation. In the field of SLA, there is a great deal of muddled thinking, there are poorly-argued assertions, and there are badly-defined terms. Similarly, research methodology is less of a problem in the natural sciences than it is in SLA, partly because in the former experiments are often easier to control, variables are easier to operationalise, etc., and partly because the latter is so relatively young. Whatever the reasons, it is certainly the case that when judging theories of SLA, we should favour those that are most rigorously formulated.

Once a theory passes the tests of coherence, cohesiveness, consistency, and clarity, we may pass on to questions of falsifiability and empirical adequacy. I should insist yet again that we are dealing with common sense notions of evaluation; the tests are never absolute.

Crucially, theories should lay themselves open to empirical testing: there must be a way that a theory can in principle be challenged by empirical observations, and ad hoc hypotheses that attempt to rescue a theory from “unwanted” findings are to be frowned on. The more a theory lays itself open to tests, the riskier it is, and the stronger it is. Risky theories tend to be the ones that make the most daring and surprising predictions, which is perhaps the most valued criterion of them all, and they are often also the ones that solve persistent problems in their domain. Generally speaking, the wider the scope of a theory, the better it is, although in practice many broad theories have little empirical content. There are often “depth versus breadth” issues, and yet again, how these two factors are weighted will depend on other factors in the particular situation where the theory finds itself.
Simplicity, often referred to as Occam’s Razor, is another criterion for judging rival theories: ceteris paribus, the one with the simplest formulation, and the fewest basic types of entity postulated, is to be preferred for reasons of economy.

Two views on constructing and assessing theories of SLA

Here, I offer a brief look at the views of Long, and McLaughlin. On separate pages, I discuss the Relativist view and Gregg’s view (see “Relativists and SLA” and “Explaining SLA” in the Menu on the right, under SLA).


Long on Theory types and Theory Assessment

Long (1999) distinguishes between set-of-laws theories, which limit themselves to observing a strong correlation between two variables and making a generalisation about them, and causal-process theories, which specify “how or why SLA will occur, not just that or when it will”. Long cites Spolsky (1989) as a source of set-of-laws theories and gives some examples from Spolsky, such as “The closer two languages are to each other genetically and typologically, the quicker a speaker of one will learn the other.” As for causal-process theories, Long cites Chomsky’s Principles and Parameters theory, among others.

Long (1990a, 1993, 1999) also makes the distinction between two research styles: theory-then-research and research-then-theory, which perhaps loosely correspond to the deductive and inductive approaches. Frequently, of course, researchers employ both styles at different stages of their work, but, Long argues, no proper explanation of any given phenomenon can be offered by those who do not attempt to construct some sort of causal theory of the phenomena under examination.

When discussing the assessment of different theories of SLA, Long (1993) examines the theory assessment strategies used by Darden (1991), and in a later paper (1999), suggests that theories can be assessed in absolute and relative terms. “In absolute terms, theories may be judged inadequate because they are too powerful, ad hoc, untestable, say nothing about relevant phenomena, and so on. In relative terms, they may be less adequate than rival theories of the same phenomena because they consistently make less accurate predictions, account for fewer data, require more mechanisms to handle the same data, etc. – and of particular importance, following Laudan (1977), in terms of their comparative ability to solve various kinds of differentially weighted empirical problems” (Long, 1999: 7).

The criteria for an “absolute” choice are noteworthy. It is not obvious to me why we would reject a theory on the grounds that it is “too powerful”, but earlier in his article Long suggests that “the goal of most SLA theorists is the least powerful theory that will handle the known facts, i.e. to identify what is necessary and sufficient for language acquisition.” (Long, 1999: 3) I presume that this is a reflection of the comparative youth of SLA research, and that Long is being “realistic” and expects theorists to aspire to higher things in good time. The other three “absolute” criteria Long gives seem to me to simply rule the putative theory out of court. As I have argued, the normal way to evaluate theories is to assess their ability to explain the phenomena in question in terms of their logical consistency and their ability to withstand empirical tests.

When choosing between rival theories of the same phenomena, Long’s relative criteria need clarifying. First, if theory A consistently makes less accurate predictions (i.e. its predictions are falsified by empirical observation) than theory B, then theory A is in deep trouble, regardless of theory B. It is, however, as Long suggests, quite possible that at a given moment we will be confronted by two theories with bad track records in terms of surviving empirical tests, and in this circumstance the obvious course of action is to examine both of them very closely and try to find the cause of the empirical anomalies – that, after all, is the reason for doing the observations in the first place. Second, while it is perfectly acceptable to say that theories account for data, we must remind ourselves of the distinction made already between phenomena and data, and insist that it is often impossible to identify all the causal factors responsible for the production of data. I have no doubt that Long would agree with me here; those discussing these issues often refer to a theory’s ability to account for and handle data, and I make the point only because it emphasises the mistaken view of empiricism, in the sense that theory construction is not correctly described as a process of patiently collecting reliable data and then deciding what to make of it. As for Long’s final criterion, I see no way to use Laudan’s criterion of “differentially weighted empirical problems” to any good effect.

Finally, Long, like Gregg and Beretta, favours some culling of SLA theories, which I see no need for, as I have already said. One question, of course, is what counts as a theory: most of the sixty-plus mentioned by Long would seem to be at McLaughlin’s “proto-theory” level, or occupying a very small domain. SLA is, as has already been noted, a very wide field, and SLA research draws on many methods from many disciplines. While this might present a problem to those hoping to develop a unified theory of SLA, it seems premature, to say the least, to expect one single theory to explain all aspects of SLA, and meanwhile the multi-methodological, multi-disciplinary nature of SLA research is not, in itself, a hindrance. This does not mean, however, that we should entirely dispense with the requirement for a causal explanation for each domain.

Another issue Long raises is whether or not a theory should be required to inform language teaching practice. This, to some extent, reflects a long-standing debate in the philosophy of science regarding the epistemological status of theories: the descriptive view and the instrumentalist view. The first regards a theory as a summary of the facts, and as being true or false depending on whether it fits the experimental data. The second regards a theory as an instrument of inquiry, and according to it, no theory is true or false, but some theories are better than others as instruments for guiding research. Again, this is a philosophical issue (and not uninteresting for that, I hope) where there is no need for any researcher to come down on one side or the other: it is not an “either/or” question.

As far as the specific, practical issue of to what extent research in SLA should be applicable to teachers, learners, policy decision-makers, etc., is concerned, Long’s common sense approach would seem best. While he defends the right of researchers to work on problems that have no obvious implications for practice, he accepts that researchers have a certain social responsibility to help improve the efficacy of classroom teaching and to forge a more liberal language policy.


McLaughlin on General Rational Requirements for a Theory of SLA

McLaughlin (1987), discussing theory construction in SLA, says that a theory of SLA should give a causal explanation of the phenomena. He agrees that first stage proto-theories (what Long (1985b) has called “storehouse” theories, and what Spolsky (1989) calls “set-of-laws” theories) are collections of often unrelated generalisations about phenomena. He gives these examples:

1. Adult SL learners learn faster than children but attain lower levels of ultimate proficiency.

2. Learners pass through a certain developmental sequence of structures.

3. Errors made by learners in acquiring certain structures in a particular L2 are similar for all L1s.

If these generalisations are not unified under a general theory, then, in McLaughlin’s opinion, they lead nowhere – they do not provide any coherent explanation of the phenomena we want to explain, nor can they lead to new hypotheses. McLaughlin sees one important task for theory builders in SLA as being to try to fit the different “bits” together.

McLaughlin suggests that an SLA theory should meet various types of requirements. First, there are requirements to do with meeting the correspondence norm.
• A theory should correspond to external reality. In effect this means that a theory must have empirical elements.

• The concepts employed in a theory must be described so that anyone will interpret them in the same way.

• Terms used in the theory may be drawn from everyday language, or the theorist may invent his own terms. If the terms are drawn from everyday language, then all ambiguity must be removed. If the term is a neologism, it can be precisely defined but risks being misunderstood; an example is “intake”. Operational definitions are very helpful.

• A theory must have explanatory power – good theories go beyond the facts and can be generalised.

A good theory meets the norms of correspondence when the explanation it provides applies to a specified range of phenomena and when the conditions suitable to its application are met.

Second, there are coherence norms. The simpler a theory is, the better. Do not multiply variables, do not use ad hoc explanations. A theory should be consistent with other theories in the field. McLaughlin gives the example of telepathy being suspect because it is the only form of transmitted information that is not affected by the distance travelled.

Third, there is the pragmatic norm: a theory should be practical – it should make predictions.

And finally, a theory must be falsifiable. An adequate hypothesis is one that repeatedly survives “probing”.

With the one exception of the requirement that the theory should be consistent with other theories in the field, I endorse these requirements, which are incorporated into my own “General Requirements” (see below). The “consistency” requirement seems unnecessary to me; I can see no good reason to make such a requirement. If there are those who want to suggest that telepathy is part of an explanation of SLA, then they must make their case within a rationalist framework, and then the community can decide what to make of it.

For my own view on theory assessment, see the page “General Rational Requirements for a Theory of SLA” in the SLA section of the Menu on the right.

All references can be found in the “Suggested Reading” page, under the SLA section – see Menu on the right.

2 thoughts on “* Theory Assessment in SLA”

  1. Hi Runankashfee,

    Thanks for taking the time to leave this comment. I was aiming my remarks at post-grad students, but anyway, how’s this:

    These criteria can be used to assess theories of SLA:

    1. Theories should be coherent and cohesive, and expressed in the clearest possible terms. They should also be free of internal contradictions.
    2. Theories should lay themselves open to empirical testing: there must be a way that a theory can be falsified by empirical observations.
    3. Theories should be “fruitful”, in the sense that they should make daring and surprising predictions, and they should solve persistent problems in their domain.
    4. Theories should be broad in scope. Ceteris paribus, the wider the scope of a theory, the better it is.
    5. Theories should be simple. Following the Occam’s Razor principle, ceteris paribus, the theory with the simplest formulation, and the fewest basic types of entity postulated, is to be preferred.

