Recently, I’ve been helping students get their dissertations for an MA in Applied Linguistics ready for submission. Some of the dissertations deal with different aspects of SLA, and a recurrent theme is whether the small studies that usually form part of a dissertation can be viewed as scientific research. This raises the general question: Can researchers in SLA construct a scientific theory? Many philosophers of science, Popper, Lakatos and Feyerabend among them, deny scientific status to the areas involved in SLA research (psychology, cognitive psychology, sociology, anthropology, social psychology, linguistics, applied linguistics), and there are also numerous academics working in the field of SLA who think that the so-called scientific method is inappropriate for their work. Among those in the field of SLA, some embrace a relativist view (the post-modernists, constructivists, etc.), while others believe that there is a fundamental difference between the natural sciences and areas of enquiry dealing with human beings.
There are obviously some important differences between a physicist working in a laboratory, devising experiments on inanimate matter under carefully controlled conditions, and a researcher working in SLA, whose subject matter is people. First, there is the problem of categorical imprecision. While nature can be put into relatively well-defined categories to which deductive arguments can be applied with reasonable certainty, it is much more difficult to categorise people. What is a French speaker? What is fluency? What is a dialect? What are learner strategies? In the study of SLA, if the elements of the conceptual framework are not well-defined, stable and meaningful, then formal logical implications cannot be applied to them.
Ziman paints this gloomy picture: “Unfortunately human behaviour is always so complex and varied that we can seldom make a sharply confirmable (or disconfirmable) prediction from the model. At best, the chain of inference can only be tested statistically; the model produces no more than “tendencies” in particular directions…. More harshly it might be said that the behavioural sciences are cluttered with innumerable half-articulated speculative models that have never been subject to critical validation. Standards of theory construction and confirmation have seldom been sufficiently high to distinguish clearly between what is well established, what is essentially conjectural, and what has been thoroughly disconfirmed. … Such a situation is, of course, deplorable; but it reflects the enormous difficulties of defining consensible observations and discovering consensual theories to explain them” (Ziman 1978: 171).
Ziman might be understood as simply saying that as a matter of fact the behavioural sciences have a bad record, not that there is any necessary reason why the domain of their research makes scientific research and theory construction impossible. But perhaps necessary ethical restraints on doing experiments on human beings, plus the serious problems involved in observing human behaviour due to the fact that humans respond to being observed in complex and unpredictable ways, make a scientific approach impossible. Surely the whole point of science is that it deals with a world that has nothing to do with human beings – it attempts to explain things that would be true even if human beings never existed, and it relies on the fact that the natural world is indifferent to human beings’ beliefs. As O’Hear puts it: “A scientific theory will characteristically attempt to explain some natural phenomena by producing some general formula or theory covering all the phenomena of that particular type. From this general formula, it will be possible to predict how future phenomena in the class in question will turn out. Whether they do or not will depend on nature rather than on men, and any scientist can observe whether or not they do, regardless of his other beliefs” (O’Hear, 1989: 7).
In any investigation of human beings, as much as researchers might try to set up experiments in carefully controlled and standardised situations, how can we know the effect that extraneous factors, such as the way the experiment was set up, the subjects chosen, the instructions given, had on the participants’ behaviour? How do we deal with experimenter variables? Can the problems of the self-consciousness of the subjects and the bias of the observer be overcome?
Winch (1970) argues that human actions are meaningful and that meaning is not a category open to causal analysis. Thus, human and social behaviour should be seen as rule-following behaviour, not as causally regular behaviour. Social science is distinguished from natural science by the unique property of its subject matter – it entertains beliefs about its own behaviour. The sociologist’s or anthropologist’s beliefs about the persons who make up the society under investigation have to take account of the beliefs of those persons about the very same facts. When we look at politics, we use political criteria; when we examine religion, we use religious criteria – we become, in effect, participants in the events.
Related to both the conceptual and observational problems of the social and behavioural sciences in general, and of SLA in particular, is the problem of what is observable. If science is supposed to explain the facts, if the empirical method involves limiting enquiries to questions about objects and properties which can be perceived by the senses, then how can we expect SLA research, which deals, among other things, with cognitive processes, emotional states, and social pressures, to conform to the scientific framework sketched above?
Arguments like these are used to suggest that investigation of SLA should be excluded from the scientific club, as if, in some way, it did not live up to the standards of science. But there are, of course, a lot of academics in the field of SLA who have no wish to join the club. They argue that the insistence on being “scientific” robs the study of SLA of its interest. If science insists on reducing the complexities of human communication to mechanisms, and if it insists on a certain type of causal explanation, then we are better off without it.
To some extent, doubts about, and objections to, the applicability of a scientific approach to SLA stem from a misunderstanding about what science is. If science is defined as the study of natural phenomena then obviously SLA is not part of science, and neither for that matter is mathematics. If those engaged in research in the areas of sociology, economics, anthropology and psychology, for example, wish to call themselves scientists in order to emphasise the rigorous nature of their work and to distinguish it from the work of astrologers, for example, then I see no reason to object, but there remain fundamental differences between the hard sciences and the social sciences. My intention, in any case, is not to argue that SLA research should or should not be scientific, but rather that it should be rational, in the sense that its explanations should be logically consistent, coherent and open to empirical tests. Science is in many ways the best example of rationality at work, and can, I suggest, be characterised by its insistence on the twin criteria of rational argument and empirical testing. I distance myself from those who claim that there is no such thing as objective knowledge, and also from those who claim that any legitimate explanation of SLA must conform to a narrowly-defined scientific framework. The aim of what follows is to dispel misunderstandings about what explanations and theories involve.
Phenomena and Data
One central problem mentioned in the last section is the problem of what is observable. The problem is not confined to the natural sciences; it concerns any and all rational investigation. In order to deal with the problem of observability, a clear distinction should be made between phenomena and data. Theories attempt to explain phenomena, and observational data are used to support and test those theories. This distinction, argued for succinctly by Bogen and Woodward (1988), helps repair the damage done by positivists, for whom “cognitive psychology” would be an oxymoron.
Phenomena are detected through the use of data, but in most cases are not observable in any interesting sense of that term. Examples of data include bubble chamber photographs, patterns of discharge in electronic particle detectors, and records of reaction times and error rates in various psychological experiments. Examples of phenomena, for which the above data might provide evidence, include weak neutral currents, the decay of the proton, and chunking and recency effects in human memory. Bogen and Woodward give the example of research into the frontal lobes by neurophysiologists. Two researchers, Milner and Teuber, compared the performance of patients with frontal lobe damage due to surgery or gunshot wounds with normal controls on a number of tests. The tests involved sorting cards, visual searches, and orientation tasks. The data consisted of drawings made by surgeons and X-ray photographs of the skull, together with the test scores. Milner interpreted her data as indicating that damage to the frontal lobes impairs a subject’s ability to give up unsuccessful problem-solving strategies and devise new ones. Teuber thought his data indicated an impairment of a certain kind of co-ordination between motor and sensory functions. If Milner was right, behavioural perseveration was the phenomenon her data indicated. If Teuber was right, the phenomenon indicated was a kind of dysfunction of sensory processing. (Bogen and Woodward, 1988: 316)
The important difference between data and phenomena is that instances of phenomena can occur in a wide variety of situations, since they are the result of the interaction of some manageably small number of causal factors, instances of which can also be found in a wide variety of situations. By contrast, many different sorts of causal factors play a part in the production of data, and their characteristics depend on the peculiarities of the experimental design, or data-gathering procedures, employed. “Data are idiosyncratic to particular experimental contexts, and typically cannot occur outside those contexts, whereas phenomena have stable, repeatable characteristics which will be detectable by means of different procedures, which may yield quite different kinds of data. … The psychological functions which Milner and Teuber ascribed to the frontal lobes ought to be exhibited in a wide variety of everyday behaviour. But the data (drawings, photographs and test scores) which they appealed to as evidence were idiosyncratic” (Bogen and Woodward, 1988: 317).
The point is, then, that empirical observation is needed to test hypotheses, not to be collected for its own sake: we do not go around the world impassively, objectively observing things. Our observation is theory-laden, and both the natural world and human experience are so complex that mere data can never describe or explain them. The claim by positivists and some empiricists that we have no good reason to believe in the existence of entities which we cannot perceive is both overly optimistic about what our sense organs and instruments can tell us about the world, and overly pessimistic about our resources for establishing the existence of phenomena. Bogen and Woodward conclude their paper: “In order to understand what science can achieve, it is necessary to reverse the traditional empiricist placement of trust and doubt. Our stance is to be modest and conservative in our estimation of what our senses and instruments can register, and to put more trust in the abilities of scientists to detect phenomena from the relatively little our senses and instruments do provide” (Bogen and Woodward, 1988: 352).
The previous section only partly deals with the fact that in all kinds of investigation, including that of SLA, there are appeals to things that cannot be observed in order to explain things that we can and do observe. To repeat: unobserved things include actual entities, like Neptune (which was claimed to exist – in order to explain the movements of Uranus – before it was eventually observed), electrons (which have never been observed, but can be, in principle), and forces, such as gravity. In all these examples, the claim of the theories involved is the same: some of the things we observe are manifestations of unobservable entities or forces.
Theories attempt to explain things by appealing to causes that often cannot be observed. The evidence for our theory is indirect: we argue that some of the things that we can observe are manifestations of, or effects of, the phenomena under study. If there are rival theories of a phenomenon (and if more than one gets through the conceptual and empirical tests we subject it to – see below), then we tend to favour the theory that gives a better explanation of the phenomenon, by which is meant a more complete entailment of the statement of the problem. This is popularly known as the “inference to the best explanation”, the inference being that explanatory power is taken to be a reason for belief. We cannot, of course, in strictly logical terms, allow this inference: the phenomena, if not observable, are only inferred in virtue of the explanation they are part of. Yet it seems extremely unlikely that a particular theoretical account can actually fit the facts so well by pure coincidence. The logical positivists would say that we have gone beyond permissible operations, but, as Popper says: “Realists not only assume that there is a real world but also that this world is by and large more similar to the way modern theories describe it than to the way superseded theories describe it. On this basis, we can argue that it would be a highly improbable coincidence if a theory like Einstein’s could correctly predict very precise measurements not predicted by its predecessors unless there is “some truth” in it” (Popper 1974: 7).
Hacking makes a similar point: “it would be an absolute miracle if for example the photo-electric effect went on working while there were no photons. The explanation of the persistence of this phenomenon – the one by which television information is converted from pictures into electrical impulses to be turned into electromagnetic waves in turn picked up by the home receiver – is that photons do exist” (Hacking, 1983: 54). It is also worth reminding ourselves that many postulated entities for which there was initially no observable proof have since been observed: Neptune, microbes, genes, molecules among them.
What then is a good explanation? What is it we are looking for when we ask for an explanation of some particular event that we find puzzling? The deductive view of explanation is that we look for information which, when appropriately put together, yields us an argument to the effect that the event in question was what we should rationally expect. For example, I go to get my motorcycle from the car park and find that it has a buckled front wheel. How did it happen, I wonder? My friend tells me he borrowed my bike and hit a curb at speed. Well, that explains it! The event C (buckled wheel) was preceded by event A (my friend borrowed my bike) and event B (he hit a curb at speed). (1) This explanation rests in turn on a causal law that hard objects will damage softer ones on impact, which, together with the initial conditions, makes up the explanans. A fully spelled out explanation takes the form of adducing a general law or laws, or generalisation, some set of initial conditions, and deducing from these the statement describing the event to be explained (the explanandum).
Another feature of a deductive explanation is that it should show why something else did not happen – why the event in question had to happen. Deductive argument is so strong because if we can cite a valid generalisation of the form “All As are Bs” we rule out the chance of finding a singular statement of the form “This is A but not B.” The basic point of a deductive argument is that we cannot both accept the premises and deny the conclusion. Such a clean and tidy type of explanation is not always available, however; often, pace Popper, we have no general laws, we make unwarranted inferences, we use inductive arguments, we ignore cases of Bs that are not quite As, and so on. Nevertheless, the deductive schema serves as an ideal model, and has its practical uses.
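The deductive schema described above can be set out explicitly. This is Hempel’s standard deductive-nomological formulation rather than anything spelled out in the motorcycle example itself, but applying it to that example makes the logical shape clear:

```latex
\[
\begin{array}{ll}
L_1,\ldots,L_k & \text{general laws (here: hard objects damage softer ones on impact)}\\
C_1,\ldots,C_n & \text{initial conditions (the bike was borrowed; it hit a curb at speed)}\\
\hline
\therefore\; E & \text{explanandum (the front wheel is buckled)}
\end{array}
\]
```

The laws and conditions above the line jointly constitute the explanans; the validity of the deduction is precisely what makes it impossible to accept the premises while denying the conclusion.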
Let us take another tack: how is an explanation arrived at? The first step in constructing a satisfactory explanation is often to propose a low-level theory: generalisations of what is immediately observable, such as that gases expand when heated, that glass is brittle, or, as Molière’s character has it, that opium puts people to sleep because it possesses “a dormitive power”. Such generalisations do not get us much closer to an understanding of the cause of the expansion of gas, the brittleness of glass, or the sleepiness induced by opium: they do not tell us why they happen. But at least these low-level generalisations can succeed in eliminating other alternatives under review. If we explain the broken window by adducing the fact that glass is brittle, we rule out the alternative explanation that the projectile possessed immense, non-obvious force.
A little further along in the development of an explanation, we might be able to make some more general statement about the relation between two observable events. Let us take the example of Boyle’s law. Boyle showed that reducing the volume of a gas by one half doubled its pressure; in other words, the pressure of a gas is inversely proportional to its volume (Asimov, 1975a). This is an example of an empirical law. The law generalises about a kind of event, not about a particular experiment with a particular cylinder using a particular gas, and is applicable to different events – other types of gases and/or cylinders. But it still does not tell us why an increase in pressure is linked with a decrease in volume. For this we need a theory of gases. Before we come to the theory, we should note that Boyle did not simply happen upon a J tube with some mercury in it, and observe that the pocket of air trapped in the closed end on the short side of the J shrank as he poured in more mercury. Boyle started with a problem – the density of air – and his experiment was designed to refute the accepted theory that the atmosphere was evenly dense all the way up.
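Stated compactly, Boyle’s empirical law says that, for a fixed quantity of gas at constant temperature:

```latex
\[
P \;\propto\; \frac{1}{V}
\qquad\Longleftrightarrow\qquad
P_1 V_1 \;=\; P_2 V_2 \quad (T \text{ constant}),
\]
```

so that halving the volume ($V_2 = V_1/2$) doubles the pressure ($P_2 = 2P_1$), exactly as Boyle observed. Note that the formula summarises the regularity without explaining it.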
Continuing with the explanation of gases (Asimov, 1975a), the kinetic theory of gases answers the question of why the pressure of a gas is inversely proportional to the volume. The kinetic theory invokes the atomic nature of gas, and sees it as being composed of a large number of molecules which, because they are moving, sometimes collide with each other and with the walls of the cylinder. Newtonian mechanics described the motion of these molecules and made it possible, in principle at least, to calculate the pressure on the walls of the cylinder by determining how many molecules are colliding with the walls at each instant, and the strength of each collision. With this picture in mind it is not surprising that when the cylinder’s volume is reduced, the pressure rises, since the molecules collide with the walls more frequently. The same theory explains why gases expand when heated. The kinetic theory of gases says that gases are made up of molecules in constant motion and that heat causes more and more violent motion of the particles that compose the gas. This more general theory has wider application: it answers more questions (changes in pressure and expansions in volume have common underlying causes) and gives us a more complete picture.
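The kinetic theory makes this link quantitative. For $N$ molecules of mass $m$ with mean squared speed $\overline{v^{2}}$, the standard textbook results (not given in Asimov’s informal account) are:

```latex
\[
PV \;=\; \tfrac{1}{3}\, N m\, \overline{v^{2}},
\qquad
\tfrac{1}{2}\, m\, \overline{v^{2}} \;=\; \tfrac{3}{2}\, k_{B} T .
\]
```

At constant temperature the right-hand side of the first equation is constant, which is just Boyle’s law; raising the temperature raises $\overline{v^{2}}$, which is why a gas expands, or its pressure rises, when heated. The deeper theory thus entails the empirical law and more besides.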
We should note two things. First, Boyle’s law was “the first step in the long series of discoveries about matter that eventually led to the atomic theory” (Asimov, 1975a: 170). A well-defined but unexplained empirical relationship is established between two phenomena, and this not only challenges the current paradigm but plays an important part in the development of a powerful new theory. Second, what has been improved is the causal narrative, not the deductive rigour.
In some treatments of theory construction, distinctions are made between four different stages of development: description, prediction, determining causes, and explaining phenomena. In the case of Boyle’s law and the kinetic theory of gases, such a development might be argued to have taken place, although Boyle’s law hardly serves as a good example of the first stage. To take another example, once it has been observed that watching violence on television and aggressive behaviour are systematically related to one another, it becomes possible to make predictions. We then try to determine the causes of this relation that we can now predict. And finally we need to explain the events described: even if we establish that watching violence on television causes aggressive behaviour, we have not yet explained why it does: we need a theory. (2) While this might be an appealing account, it does not really do justice to the complexity of the matter: all these four “steps” are very tightly intertwined, particularly determining causes of events and explaining them, and often one or more steps are left out.
By what criteria do we judge theories that offer different explanations of the same phenomena? What makes one theory “better” than a rival? First, like any text, a theory needs to be coherent and cohesive, and expressed in the clearest possible terms. It should also be consistent – there should be no internal contradictions. Theories can be compared by these initial criteria which may help to expose fatal weaknesses or simply invite a better formulation. In the discussions among philosophers of science about the natural sciences, these considerations are almost taken for granted: the big questions concern empirical adequacy, predictive ability, and so on. But I think Laudan (1977, 1996) is quite right to emphasise the importance of these “conceptual problems” although I disagree with his treatment of them.
Of the four types of conceptual problem Laudan lists in his taxonomy, the first – internal inconsistency or ambiguous theoretical mechanisms – is surely the most important. The second type of problem – assumptions made that run counter to other theories, prevailing metaphysical assumptions, widely accepted epistemology and methodology – seems to me to be relevant only to radical forms of relativism, which, according to my arguments in this book, means they should actually be excluded from serious consideration. There are no conceptual problems involved in assuming things that run counter to other theories, as long as the widely accepted epistemology and methodology is rationalist in the sense I have already defined it.
Laudan’s third category is reserved for conceptual problems that cause the theory in question to violate the “research tradition” of which it is a part. I have already said that I think this is an ill-defined technical term, and that anyway I can see no use for it. There is no need to pledge allegiance to any group, or to belong to any tradition, apart from the rationalist one, which has nothing to say about such “conceptual problems.” Once again, I suggest that Laudan’s attempt to champion the rationalist cause is misguided; we do not have to regard clashes of theories as evidence of irrationality or to suppose that this is a problem that can only be solved by inventing a research tradition.
Laudan’s final type of conceptual problem occurs when a theory fails to utilise concepts from more general theories to which it should be logically subordinate. Unfortunately Laudan does not expand on this, and it seems to me that this type of problem either belongs properly to the first category, or is no problem at all. If the argument is that a theory “should” be subordinate because of some obligation to the research tradition it is supposed to belong to, then I would say that if the theory has the impudence to challenge its presumed superiors, well good luck to it.
Despite my objections to important aspects of Laudan’s theory of scientific progress, I think Laudan is right to emphasise the importance of conceptual problems when assessing a theory. In the field of SLA, there is a great deal of muddled thinking, and there are poorly-argued assertions and badly-defined terms. Consequently, discussions among researchers and academics in SLA often deal with just these issues. Similarly, research methodology is less of a problem in the natural sciences than it is in SLA, partly because in the former experiments are often easier to control, variables are easier to operationalise, etc., and partly because the latter is relatively young. Whatever the reasons, it is certainly the case that when judging theories of SLA, we should favour those that are most rigorously formulated.
Once a theory passes the test, more or less strictly set and marked, of coherence, cohesiveness, consistency, and clarity, we may pass on to questions of falsifiability and empirical adequacy. I should insist yet again that we are dealing with common sense notions of evaluation; the tests are never absolute.
Crucially, theories should lay themselves open to empirical testing: there must be a way that a theory can in principle be challenged by empirical observations, and ad hoc hypotheses that attempt to rescue a theory from “unwanted” findings are to be frowned on. The more a theory lays itself open to tests, the riskier it is, and the stronger it is. Risky theories tend to be the ones that make the most daring and surprising predictions, which is perhaps the most valued criterion of them all, and they are often also the ones that solve persistent problems in their domain. Generally speaking, the wider the scope of a theory, the better it is, although in practice many broad theories have little empirical content. There are often “depth versus breadth” issues, and, yet again, how these two factors are weighted will depend on other factors in the particular situation where the theory finds itself.
Simplicity, often referred to as Occam’s Razor, is another criterion for judging rival theories: ceteris paribus, the one with the simplest formulation, and the fewest basic types of entity postulated, is to be preferred for reasons of economy.
There are no golden rules for theory assessment, no hard and fast rules even, except the obvious requirement that a theory has empirical content. How long should we defend a theory with bad test results – when do we say it has been falsified? To what extent do we ignore anomalies? When should we agree that a theory which is vague and confused is nevertheless a plausible candidate for development? Such issues must be decided on a case-by-case basis, and publicly debated among those working in the field.
Let me now give a brief summary of the relativist approach. In a subsequent post, I’ll deal with other approaches and suggest my own guidelines.
Those in the SLA academic community who adopt a relativist, postmodernist position deny the idea of any objective reality external to the observer, and claim that there is a multiplicity of realities, all of which are social constructs. The adoption of the view that the construction of reality is a social process means, as we have seen, that there can be no one “best” theory of anything: there are simply different ways of looking at, seeing, and talking about things, each with its own perspective, each with its own set of explicit or implicit rules which members of the social group construct for themselves. Thus science, for example, is just one specific type of social construction, a particular kind of language game which has no more claim to objective truth than any other. In SLA research, those who take this view see the need to fight what they see as the outdated “positivist” paradigm which currently dominates the field, and to replace it with their own methodology. Let us look at a few short examples of this point of view.
Schumann (1983) suggests that SLA research should be viewed as both art and science. As an example of the artistic perspective Schumann suggests viewing the opposing accounts of Krashen and McLaughlin of conscious and unconscious learning as “two different paintings of the language learning experience – as reality symbolised in two different ways… Viewers can choose between the two on an aesthetic basis, favouring the painting which they find to be phenomenologically true to their experience” (Schumann, 1983: 74).
Lantolf (1996a) suggests that scientific theories are metaphors, that the acceptance of “standard scientific language” within a discipline “diminishes the productivity of the scientific endeavour” and that “to keep a field fresh and vibrant, one must create new metaphors” (Lantolf, 1996a, 756).
Firth and Wagner (1997) argue that SLA research should be “reconceptualized” so as to “enlarge the ontological and empirical parameters of the field”. They continue: “We claim that methodologies, theories and foci within SLA reflect an imbalance between cognitive and mentalistic orientations, and social and contextual orientations to language, the former orientation being unquestionably in the ascendancy” (Firth and Wagner, 1997: 143). At the end of their paper they say: “although SLA has the potential to make significant contributions to a wide range of research issues, that potential is not being realised while the field in general perpetuates the theoretical imbalances and skewed perspectives on discourse and communication” (Firth and Wagner, 1997: 285).
Block (1996) argues that the field of SLA is under the sway of a ruling ideology, and in the course of a plea for a wider view of SLA research, Block challenges some central assumptions held by what he sees as the ruling clique. The assumptions that Block objects to include the claims that there is such a thing as “normal science”, that a multiplicity of theories is problematic, that replication studies are helpful, and that there is an “ample body” of “accepted findings” within SLA research. Finally, Block argues that one problem for the SLA community, which stems from its being under the sway of such misleading assumptions, is that those who attempt to challenge them do not get a fair opportunity to voice and promote their alternative views.
Markee (1994) notes that: “a few writers have valiantly attempted to stem the nomothetic tide”, but that “these have been voices crying in the applied linguistic wilderness.” (Markee, 1994: 91) The “hermeneutic scientific tradition” that Markee would like to see given at least equal footing with its nomothetic big brother replaces explanation with understanding, replaces “objective, value-free language” with “the ordinary language of social actors and their lay explanations of everyday experience.” (Markee, 1994: 92), and replaces a mathematical statistical explanation of a phenomenon with an explanation “that is constructed in terms of lay participants’ real-time understanding of the phenomenon.” (Markee, 1994: 93)
I would like here to separate two different issues which I think have been wrongly bundled together by the relativists. The two issues I refer to can be summed up in these questions:
1. What phenomena need explaining and what range of opinions should be expressed in the SLA research community?
2. How should we explain the phenomena of SLA?
When Firth and Wagner (1997) argue for “a reconceptualization of SLA research that would enlarge the ontological and empirical parameters of the field”, they would seem to be making a plea for more attention to be paid to sociolinguistics and discourse analysis, and for SLA research to be liberated from the domination of “Chomskian thinking.” But there is another argument in the Firth and Wagner paper, namely that those working in psycholinguistics are dominated by the views of a small group of researchers who insist that SLA research be carried out according to “established” and “normal” scientific standards. Firth and Wagner argue that there is something deeply wrong with such a position, and they go on to suggest that SLA research should throw off the assumptions of scientific enquiry and adopt a relativist epistemology which holds that there is not one reality, that all science is political, that all statements are theory-laden, that theories are a kind of story-telling, and so on. Here the two separate issues mentioned above have become tangled up. As Long puts it: “Firth and Wagner attempt to bolster their “social context” case by an unfortunate appeal to epistemological relativism thereby conflating what are two separate issues” (Long, 1999: 3).
Block (1996) makes exactly the same mistake as Firth and Wagner. Block claims that those who attempt to challenge the ruling clique in SLA do not get a fair opportunity to voice and promote their alternative views, and at the same time he claims that the field of SLA is dominated by a certain methodological orthodoxy which should be replaced by a more relativistic alternative. Again, we must separate the issues.
To argue for a shift in focus for SLA research, i.e. for a more multi-theoretical, multi-methodological approach, where research is done from a sociocultural perspective, where a more context-sensitive approach is adopted, and where concepts such as "non-native speaker", "learner", and "interlanguage" are re-examined with increased "emic" (i.e. participant-relevant) sensitivity, is one thing. To argue that there is no rational way to decide that Theory X is better than Theory Y is another, separate thing. The first issue is a political question about priorities in the distribution of limited research resources; the second concerns the fundamental questions of what we can know, and of how we should do research. The relativists have every right to argue for more resources to be devoted to their kind of research, and to argue the merits of their kind of approach to theory construction and assessment. But they should clearly separate what are, I repeat, two different issues.
If it is in fact the case that those professing to use a rationalist, deductive research methodology are imposing their methods on others, and are insensitive to the value of “home-grown ways of thinking”, then I would be the first to urge them to stop such an imposition, and to listen to different points of view. What I would not ask them to do is stop criticising, or to abandon their methodology.
The important issue concerns explanation. While I hold a rationalist, realist position, the relativists claim that such views are obsolete and blinkering. This is an epistemological issue. As an example we can take the suggestion that scientific theories are metaphors, that the acceptance of "standard scientific language" within a discipline "diminishes the productivity of the scientific endeavour", and that "to keep a field fresh and vibrant, one must create new metaphors." Nobody, I suppose, would question that terms like "input", "processing", and "output" are metaphors, and it is certainly worth reminding oneself that they are metaphors. But, from my side of the fence, scientific theories are not just metaphors: they are attempted explanations of events that take place in a real world, and they are open to empirical tests which support or falsify them and thus make it possible for us to choose rationally between them.
To the extent that researchers need to be flexible, to be imaginative, to open up to unlikely possibilities, to brainstorm, to "fly kites", etc., I would completely endorse Schumann's suggestion that SLA research should be viewed as both art and science. I have no objection to looking at Krashen's and McLaughlin's theories as "paintings", as reality symbolised in two different ways, but sooner or later, I suggest, we will need to scrutinise Krashen's and McLaughlin's accounts in order to check their validity, and to subject them to empirical tests. On the basis of such scrutiny, by uncovering ill-defined terms, contradictions, etc., and by seeing how they stand up to empirical tests, we will be able to evaluate the two accounts and make some tentative choice between them. First, they cannot both be correct: McLaughlin suggests that conscious learning affects language production, while Krashen denies this. Second, they suggest different ways of continuing the search for answers to the question of interlanguage development, and different pedagogical applications, and researchers have to have some reasons to choose between them. Krashen's account is seriously flawed since, first, its terms are almost circular, and second, there is very little empirical content in it. These, to the rationalist, are extremely serious defects. Schumann suggests that "Neither position is correct; they are simply alternative representations of reality" (Schumann, 1983: 75). It may well turn out that neither position is correct, and they are certainly alternative representations of reality; but if the implication is that there is no way, other than an appeal to our own subjective aesthetic sense, to decide between them, then here lies the fundamental disagreement between rationalists and extreme relativists.
See the "Suggested Reading" section on the left under SLA for all references.