Saturday 3 June 2017

3 Basic Problems of Language Assessment

The three basic problems of assessment are
  1. inference
  2. prediction and
  3. generalisation.
1. Inference
Inference is what we do with the performance of a test-taker. We make inferences about the language abilities of the test-taker based on how the test-taker has performed on the assessment in question. The problem is that making this inference requires many assumptions about language and language performance. We assume that language performance arises from underlying language competence or underlying language abilities. We also assume that language abilities have a particular structure and that different language abilities interact with each other. Based on these assumptions, we design our tests to systematically sample the test-taker's language. This sample is the raw material from which we make inferences about the underlying language competence or abilities.

The real problem is that we do not know for certain what the relationship between performance and abilities/competence is, how abilities are structured, or how they interact with each other. None of these questions has yet been answered satisfactorily.

2. Prediction
Prediction means saying in advance how language abilities will be used in the future, in actual situations in real time. A good test has high potential to predict actual performance. During performance, abilities interact with other performance conditions such as physical conditions, affective factors, etc. Therefore, prediction has to take these influences into account.

The problem is that if a test makes inaccurate or wrong predictions, the decisions based on it will have serious consequences.

3. Generalisation
Generalisation is about applying the prediction to other contexts of language use as well. This is an important quality, since a language test must be able to say something about a learner's language use across many situations and contexts of use; otherwise the test has very limited relevance. Tests generally characterise different contexts so that we know what varies from one context to another. Using this information, we can extend the prediction based on a test to other contexts too.

The problem arises when a test lacks the capacity to generalise: such a test is highly parochial, and its relevance remains too localised.

A larger problem
Each of the above problems conceptualises language sampling in a different way. Therefore, no single approach to language sampling can solve all of them at once. The approaches available today each have their own independent focus.

1. Abilities approach: This approach uses a model of communicative competence. Abilities are taken to underlie performance, so this approach tries to build tests that elicit performance driven by particular underlying abilities. Here, language processes and contexts are treated as secondary extensions.

2. Processing approach: The processing approach gives centrality to language processing, so real-time language processing in communication is the focus of assessment. Tests assess how well test-takers can handle the pressures of communicating online, in real time. In this approach, abilities play only a secondary, service role, and assessment relies on a sampling framework built around performance conditions. Generalisation is therefore limited to other language use contexts that involve the same kind of language processing.

3. Contextually driven approach: In this approach, the differences between contexts are the focus, and assessment concentrates on the characteristics of contexts. Test sampling therefore aims to cover a range of contexts so that generalising a prediction to those contexts is meaningful.

Solution to these problems
1. Develop a model of underlying abilities
2. Develop direct performance tests combining performance and contextual problems

The first solution could be systematic in portraying language abilities. This would align with the empirical methods used today in measuring abilities, so it could further the existing scholarship in the field. The process would be: define language constructs, then gather data to assess those constructs, then assemble effective tests.
Its problem is that it presents a static picture of proficiency. Since it assumes that there are underlying abilities and tries to uncover them, it is a difficult endeavour, especially because we are not sure what these underlying abilities are or how they are connected with performance.

The second solution has greater predictive power because of the restricted situations it deals with. It emphasises context validity by examining the characteristics of contexts and performance.
But the problem is that it deals with only a limited number of contexts or domains. Validation depends on needs analysis, which in turn depends on assessment, and vice versa. It works much like ESP (English for Specific Purposes) tests, so generalisation is limited. Moreover, there is little underlying theory to explain the differences in contexts and performance in relation to abilities.

Therefore, the former 'interactive ability' model seems the better choice in terms of its capacity for prediction and generalisation.
