Monday 29 May 2017

International English Language Testing System (IELTS)

The IELTS examination is jointly conducted by the British Council, IDP: IELTS Australia and Cambridge English Language Assessment. The test score is used for the purposes of study, migration and work. It is a high-stakes gatekeeping examination that screens people for these purposes, measuring the English language proficiency of people who want to work or study in an English-speaking country. Test-takers are awarded one of nine band scores based on their performance. The test is available in two separate versions- one for academic purposes, and another for general training. IELTS accepts all native-speaker varieties of English- British, North American, New Zealand and Australian English. Assessment covers all four skills (listening, speaking, reading and writing). The test promises to avoid bias through careful planning and execution, and can be taken at test centres across the world on about 50 different dates a year.

IELTS Academic and IELTS General Training versions
The IELTS Academic version is for people who wish to pursue higher education or to register with a professional body in an English-speaking country; its content is academic in nature. The General Training version is for those who go to an English-speaking country for secondary education, work experience or a training programme. Australia, Canada, the US and the UK accept this version as a criterion for migration. Its content is designed to test basic survival skills in everyday living situations.

The test is organized as two parts on two days. The Listening, Reading and Writing tests are completed on the same day in one continuous sitting of 2 hours and 45 minutes. The Speaking test is conducted within two weeks before or after this, according to the schedule of the test centre.
IELTS Listening

The Listening test lasts 30 minutes and has four recordings. Two are set in everyday social contexts- a conversation and a monologue; the other two are set in academic or educational contexts- a conversation and a monologue. Assessment is based on grasping factual information, the opinions and attitudes of speakers, the purpose of utterances, and the development of ideas. There are 40 questions, each carrying one mark. Task types include multiple-choice questions (MCQs), matching, plan/map/diagram labelling, form/note/table/flow-chart/summary completion, and sentence completion.

MCQs have three or more options to choose from and test a wide range of skills, from global comprehension to comprehension of specific points. Matching questions test the candidate's ability to relate and connect facts in the listening passage, to listen for details, and to follow a conversation. Plan, map and diagram-labelling questions assess the ability to understand descriptions, relate them to a visual representation, and follow spatial directions and locations. Form/note/table/flow-chart and summary completion questions test comprehension and attention to detail: candidates fill in the required information within a given outline. In this type of question the instructions are crucial and must be strictly followed; answers that exceed the stated word limit are marked wrong. Sentence completion intends to test the candidate's ability to identify key information in a listening text: candidates complete the given sentence using information from the listening passage, strictly within the given word limit. Short-answer questions test the ability to comprehend factual information from the listening; again, the word limit applies. Assessment is done by trained and certified markers and analysed by Cambridge English Language Assessment.

IELTS Reading
Reading has two versions- Academic Reading and General Training Reading.

IELTS Academic Reading
The Academic Reading test lasts one hour and has forty questions based on three reading passages. Task types used are multiple choice, identifying information, identifying the writer's views/claims, matching information, matching headings, matching features, matching sentence endings, sentence completion, summary completion, note completion, table completion, flow-chart completion, diagram label completion and short-answer questions. Texts are selected from books, journals, newspapers and magazines on topics of general interest. Passages can be narrative, descriptive, or discursive/argumentative in style, and may contain logical arguments and non-verbal information. Questions are given in the same order as the information appears in the text.

For MCQs, four, five or seven alternatives are given; they test global and local comprehension. In identifying information and the writer's opinion, candidates demonstrate their ability to locate and recognize information conveyed in the text; opinion tasks are used with argumentative texts. Matching information tests the ability to identify specific information, whereas matching headings tests the ability to identify main and supporting ideas. Matching questions in general test the ability to relate information, for which skimming, scanning and reading for detail are necessary skills. Matching sentence endings tests the ability to understand main ideas within sentences by selecting the best option to complete a given sentence; there are more options than questions, raising the level of challenge. Sentence completion questions test the ability to locate specific details in a text. Completion of a summary/note/flow-chart/table asks candidates to complete the given outline with information from the text, testing understanding of main ideas, collocation, and so on. Diagram labelling asks candidates to label a given diagram based on the text, testing understanding of detailed description and the ability to transform information into another form. Short-answer questions test the ability to locate specific information within the text.

IELTS General Training Reading
There are three sections in this one-hour General Training Reading test, with a maximum score of forty marks. The first section has two or three short texts and tests the candidate's social survival skills in retrieving information from notices, tables and advertisements. The second section tests workplace survival skills, using materials such as job descriptions and staff training materials. The third section involves general reading with more complex structures, using descriptive and instructive materials of a general nature. The question types are the same as in the Academic Reading test; the difference lies in the materials used, which raise or lower the difficulty level of the test.

IELTS Academic Writing
This is a one-hour test with two tasks. The first task is to describe some visual information (charts, tables, a diagram, a device or a process) in one's own words; the second is to respond to a point of view, argument or problem. The first answer expects an academic or semi-formal neutral style and should include the most relevant information from the given data. At least 150 words must be written in about 20 minutes, in full sentences (not broken sentences or notes). The second answer must be focused and relevant to the question; at least 250 words are expected in about 40 minutes.
Each task is assessed independently by certified IELTS examiners, and performance descriptors are clearly stated. The first task is assessed on the criteria of task achievement (how well task requirements are fulfilled), coherence and cohesion (logical sequencing and linking of ideas and fair use of cohesive devices), lexical resource (range and accuracy of vocabulary use) and grammatical range and accuracy. The second task is assessed on the same criteria except that task achievement is replaced by task response, which considers how the candidate develops a position in relation to the given question, how ideas are supported by evidence, and so on.

IELTS General Training Writing
As in Academic Writing, there are two writing tasks. The first is to respond to a situation in the form of an informal, semi-formal or formal letter of at least 150 words in about 20 minutes. Common day-to-day situations are presented; the skills needed include asking for or giving factual information, expressing likes, dislikes, needs and wants, and making requests or suggestions. The test taker's ability to write standard letters and to organize and link information cohesively and coherently is tested. The second task is to respond to a point of view, argument or problem in at least 250 words, for which candidates can take about 40 minutes. Answering might involve providing factual information, presenting a solution, justifying an opinion or argument, or evaluating evidence. General topics are used, and answering requires the use of more abstract and complex ideas than Task 1. Irrelevant or under-length answers are penalized.


Assessment criteria and performance descriptors are the same as in Academic Writing test.

IELTS Speaking 
The Speaking test is an oral interview with an examiner. The test is recorded for evaluation and quality maintenance. It lasts 11 to 14 minutes and has three parts with specific functions.

Part 1 is the introduction and interview. The test taker is asked his/her name, details about family, and other familiar everyday topics to put him/her at ease and to introduce examiner and candidate to each other. The ability to speak about everyday topics and to state opinions and describe experiences is tested in this part, which takes four to five minutes.

Part 2 takes about three to four minutes. This time, the candidate has to speak on a given topic in detail: the candidate prepares for one minute, speaks for up to two minutes, and then answers the examiner's questions. This part tests the ability to speak coherently at length about a topic without prompts, using appropriate language. The candidate may also have to speak about personal experiences.

Part 3 is a discussion of the topic from Part 2 in a more general but deeper manner. The focus is on the ability to express and justify opinions and to analyse, speculate about and discuss issues. This takes four to five minutes.

Marking is done by certified IELTS examiners against prepared performance descriptors. Fluency refers to the ability to speak fairly continuously at a fair rate so as to produce coherent and connected speech. Coherence refers to logical sequencing, marking the stages of a discussion, narration or argument, and the use of cohesive devices. Lexical resource means the range and accuracy of vocabulary choice; the ability to circumlocute is also noted. Grammatical range and accuracy refers to the appropriate use of grammatical resources: the length and complexity of spoken sentences, the use of subordinate clauses, the range of sentence structures, and the number of errors made and their effect on communication are the measures of grammatical ability. Pronunciation is measured by the amount of strain the listener has to put in, the amount of unintelligible speech and the influence of L1.

Figure 1- IELTS Score band

The final score is generated by averaging the individual scores of the four skills and ranges from 0 to 9, as shown in Figure 1. The test score is valid for two years.
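To make the arithmetic concrete, here is a minimal Python sketch of the band averaging. It assumes the commonly published IELTS rounding convention (averages ending in .25 round up to the next half band, and .75 up to the next whole band); the function name is my own.

```python
import math

def overall_band(listening, reading, writing, speaking):
    """Average four component bands and round to the nearest half band.

    Assumes the published IELTS convention: an average ending in .25
    rounds up to the next half band, .75 up to the next whole band.
    """
    mean = (listening + reading + writing + speaking) / 4
    return math.floor(mean * 2 + 0.5) / 2  # round half-up in half-band steps

print(overall_band(6.5, 6.5, 5.0, 7.0))  # mean 6.25 -> overall band 6.5
```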

IELTS claims to be fair to candidates of all nationalities, cultural backgrounds, genders and special needs by trialling test questions extensively with people from various backgrounds all over the world. It assesses language skills, not specialist knowledge, and is not based on any particular textbook or syllabus but on general knowledge of the English language and its use. To ensure quality and security, IELTS has established procedures to verify candidates' identity. The tests are unique- no candidate gets the same test twice. Assessment involves double marking and other security features, and results are available online.

The test development has the following stages.
a. Commissioning of language specialists to work on the test. Test development starts with test specifications and the characteristics of the four IELTS components. This team of specialists guides test writers by providing information on specific requirements, the approach to test writing, and the selection of appropriate materials.
b. Pre-editing is the stage where initial materials submitted by test writers are edited for topic, level of language, style, and level and focus of task. Suggestions for revision are given to the writers.
c. Based on pre-editing feedback, test material is worked on and resubmitted. This material is either sent for further revision or for pre-testing.
d. The pre-testing stage gives the test to representative groups of test-takers from around the world. Information on item difficulty and the ability to distinguish between strong and weak candidates is gathered. Based on this information, decisions are made whether or not to accept particular materials for testing.
e. The standards-fixing stage involves testing new listening and reading materials with representative test-taking groups. This ensures that the difficulty levels of the materials are appropriate to provide the same measure of language ability across all band scores. Once this is done, materials are ready for use in tests.
f. Test construction and grading is the final stage. Papers for all four tests are constructed. Test construction is based on item difficulty (mean of all items and that of individual items), range of skills tested, balance of task types, range of cultural perspectives and range of voices and accents in the listening versions. Data from tests is collected to ensure accurate grading and to feed into quality improvement.

Saturday 27 May 2017

A Framework to describe Language Ability

Bachman and Palmer (1996) discuss the need for a clear framework to talk about language ability. Since we use tasks in tests in order to make inferences about test takers' language abilities, or decisions about their future, we need to demonstrate how test tasks correspond to language use tasks in real life. To do this we need a framework that can efficiently and clearly describe language task characteristics, test task characteristics, test taker characteristics and language user characteristics.

What is language use?
Language use is the creation and interpretation of meaning in discourse by an individual, or the dynamic and integrated negotiation of intended meanings between two individuals. It involves multiple interactions and is complex, covering multiple language use situations including the testing situation. Since language use is made up of interactions, we need an interactional framework.
As we can see in the figure, the inner circle has the characteristics of the individual, and the outer circle those of tasks and settings. They interact in language use; hence the need for an interactional framework.

Individual characteristics are 
  • personal characteristics like age, sex, nationality, resident status, native language, level and type of general education, type and amount of preparation or exposure to the given test, etc.
  • topical knowledge: real-world knowledge, i.e. knowledge structures in long-term memory. This information is used by language users in the task, and some tasks presuppose certain kinds of topical knowledge.
  • affective schemata: the affective or emotional correlate of topical knowledge. On presentation of the task, test takers assess the task using this; affective responses are determined by affective schemata and task characteristics.
  • Language ability: the definition of language ability needs to be context-specific; there is no universal definition. Our inferences will be based on this definition, which is known as the 'CONSTRUCT DEFINITION'. The model of language ability presented here has two components: 1. Language Knowledge and 2. Strategic Competence.
1. Language Knowledge
Language knowledge is what is used by strategic competence to create and interpret language discourse. It has two components: Organisational knowledge and Pragmatic knowledge. They are described in the following diagram from Bachman and Palmer (1996).
2. Strategic Competence
It is the set of metacognitive components or strategies. The elements are goal setting, assessment and planning as can be seen from the figure below from Bachman and Palmer (1996). 
 

How are metacognitive strategies used in language use and in a language test? The following diagram from Bachman and Palmer (1996) shows this.

Language Skills
Traditionally, language skills meant the four language skills- listening, speaking, reading and writing. The new understanding suggested by Bachman and Palmer is that language skill is situated in particular contexts and specific tasks. It is not part of language ability, but a contextualised realisation of language ability in specific language use tasks. Therefore, they urge us not to think of language performance in terms of 'skills', but in terms of specific tasks or activities. 'Skill' can thus be better seen as a combination of language ability and task characteristics.
Advantage of this view
The advantage of this view is that the above framework can be used to design new tasks, or to select existing tasks for assessment, using a checklist of language abilities. See the checklist below.
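Since the original checklist figure is not reproduced here, the following hypothetical Python sketch shows what such a checklist might look like as a data structure. The component names come from Bachman and Palmer's model as summarised above; the checklist code itself is only illustrative, not their instrument.

```python
# Components of language ability per Bachman and Palmer (1996), as
# summarised above; the checklist structure itself is illustrative.
LANGUAGE_ABILITY_CHECKLIST = {
    "organisational knowledge": ["grammatical knowledge", "textual knowledge"],
    "pragmatic knowledge": ["functional knowledge", "sociolinguistic knowledge"],
    "strategic competence": ["goal setting", "assessment", "planning"],
}

def review_task(task_name, abilities_engaged):
    """Print which components of language ability a candidate task engages."""
    for area, components in LANGUAGE_ABILITY_CHECKLIST.items():
        engaged = [c for c in components if c in abilities_engaged]
        print(f"{task_name} - {area}: {engaged if engaged else 'not engaged'}")

# A designer vetting a hypothetical picture-description task:
review_task("picture description", {"grammatical knowledge", "planning"})
```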

This summary is from Bachman and Palmer (1996). I believe it will be useful to you as a test designer and test user.

Thursday 25 May 2017

Characteristics of Tasks for Language Tests

Like all language learning and teaching tasks, language test tasks have characteristics. But why are we talking about task characteristics in the context of tests? For the following reasons.


Uses of Task Characteristics

  • Knowing the characteristics of tasks will help us link test and non-test tasks better. That is, if we know the characteristics of a task, we will be able to see if test tasks reflect non-test tasks. In other words, we could ensure that the task we use in a test is much like a task in real life situations. 
  • Knowing the characteristics of tasks will give us information about what language ability of the test-taker is engaged while performing test/non-test tasks.
  • Knowing the characteristics of tasks will help us establish authenticity of test tasks. If the test task characteristics correspond to Target Language Use (TLU) task characteristics, we have an authentic test task.
  • If we know task characteristics, we will be able to control them while designing test tasks.
A test must be a clear, transparent process. In testing, it is important that the test taker understands how to perform, what performance is expected, how the performance will be rated and how the result will be used.

In order to talk about task characteristics in the context of language assessment, we must first define language use tasks. Language use tasks are the tasks used in language tests to gather information about the test-taker's language abilities. They are situated in particular contexts, goal-oriented, and involve the active participation of the test taker(s).

The TLU domain is the set of language use tasks that the test-taker might encounter outside the testing situation, and to which we want to generalize our inferences about language abilities/skills.

For our purposes, we can look at language use as a set of language use tasks, and language test as a procedure to elicit language use instances from which inferences can be made about test-taker's language abilities. 

Characteristics of Test Tasks
Task characteristics have a very clear influence on task performance. Since our intention is to elicit the best performance from test-takers, we ought to consider task characteristics so that test tasks are best suited to elicit it. Especially since each test task is a bundle of characteristics, we need a framework for clear understanding. Bachman and Palmer (1996) propose the following framework for understanding and using task characteristics in test development and design.

The framework intends to help us base test tasks on TLU tasks, ensure comparability of test and non-test tasks, and ensure authenticity. The elements of the framework are:

1. Setting
Setting refers to the physical circumstances of the test. It has three elements.
  • Physical settings (place, light, furniture, etc.), 
  • Participants (administrators, other participants in group tasks, etc.) and 
  • Time of task (conducted at what time, when test-takers are fresh/tired, etc.)
2. Test Rubric
Test rubric talks about structure and procedures of the test. Elements are:
  • Instructions: explicit so that test-taker is informed how to take the test, how it is scored, and how scores are used; Language of instruction, its presentation and specification of procedures must be conducive. 
  • Structure: how parts are put together to form the entire test.
  • Time allotment for each item, and the entire test.
  • Scoring method: Criteria of correctness, scoring procedure and explicitness of both of these must be informed clearly.
3. Characteristics of Input
Elements are:
  • Format
  • Channel- aural, visual or both
  • Form- language, non-language or both
  • Language- native, target or both languages
  • Length of input texts
  • Type of input- item or prompt
  • Degree of speededness- how fast the testee must process the input
  • Vehicle- how the input is delivered: live, reproduced or both
  • Language of input- organisational (grammar, vocabulary, syntax, morphology, etc.) and pragmatic (functional and sociolinguistic) characteristics, and topical (personal, cultural, social information) characteristics.
4. Characteristics of expected response
Elements are:
  • Format
  • Type of response expected: selected, limited production or extended production
  • Degree of speededness- time available/needed to process
  • Language- native, target or both languages
5. Relationship between Input and Expected Response
Elements are: 
  • Reactivity: how input or response directly influences subsequent input/responses
    • reciprocal tasks: with an interlocutor- has feedback and interaction
    • non-reciprocal tasks: no feedback or interaction
    • adaptive tests: new development. Subsequent tasks are varied in difficulty depending on previous response
  • Scope of relationship: The amount of language to be processed in order to respond as expected
    • broad scope- like in a prompt question
    • narrow scope- needs to process only limited amount of available input.
  • Directness of relationship: whether expected response is based directly on input or also on other background information/knowledge
    • Direct
    • Indirect
Application of this Framework: To compare TLU and test task characteristics, and to create new tasks by assembling different task characteristics.
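As a toy illustration of that application, here is a hypothetical Python sketch that records a task's characteristics as key-value pairs and flags where a test task diverges from the TLU task it is meant to mirror. The attribute names echo the framework above; everything else is invented for the example.

```python
def compare_tasks(tlu_task, test_task):
    """Flag characteristics on which a test task diverges from its TLU
    counterpart -- a rough, illustrative authenticity check."""
    for feature, tlu_value in tlu_task.items():
        test_value = test_task.get(feature)
        if test_value != tlu_value:
            print(f"{feature}: TLU={tlu_value!r} vs test={test_value!r}")

tlu = {"channel": "aural", "type of response": "extended production",
       "reactivity": "reciprocal", "scope": "broad"}
test = {"channel": "aural", "type of response": "selected",
        "reactivity": "non-reciprocal", "scope": "broad"}

compare_tasks(tlu, test)  # flags the response-type and reactivity mismatches
```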

Summary from Bachman and Palmer (1996)

Wednesday 24 May 2017

Six qualities of Test Usefulness by Bachman and Palmer

In their book Language Testing in Practice: Designing and Developing Useful Language Tests, Bachman and Palmer (available at this link) spend an entire chapter on Test Usefulness. They take this effort because usefulness is the foremost quality of a test: if a test is not useful, there is no point in having it at all.

Test Usefulness has six qualities or elements. They are discussed in brief below.

1. Reliability
Reliability is about consistency. Two versions of a test must provide comparable scores; two sets of test takers of comparable language abilities must obtain comparable scores; and the same test administered after a period of time must deliver comparable results. If a test provides very different scores under these circumstances, we cannot trust or rely upon it. Though it is not possible to attain 100% reliability at all times, reliability is a necessary quality of any good test.
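For instance, parallel-forms or test-retest reliability is commonly estimated as the correlation between the two sets of scores. The Python sketch below computes a Pearson correlation over paired scores; the data are invented for illustration.

```python
def pearson(xs, ys):
    """Pearson correlation between two sets of paired test scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

form_a = [55, 62, 70, 48, 81]  # illustrative scores on test version A
form_b = [58, 60, 73, 50, 79]  # the same candidates on version B
print(round(pearson(form_a, form_b), 3))  # close to 1.0 -> consistent forms
```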

2. Construct Validity
We interpret the score of a test and make decisions based on it: we promote students, rank them, and so on. How do we justify such actions? It is not enough to claim that our judgments or decisions are justified; we ought to demonstrate it. Construct validity implies that the test score reflects the areas of language ability the test claims to measure. To do this, the test must define the constructs it measures. Construct validity can be defined as the extent to which the test score can be interpreted as an indicator of the abilities, or constructs, that we want to measure. Another way of defining it is as the correspondence between the characteristics of the test task and the Target Language Use (TLU) task to which we want to generalise our test score. Therefore, to establish construct validity, we need to define the TLU task characteristics and the constructs to be measured.
What ensures construct validity? a) correspondence of the characteristics of test tasks to TLU tasks; b) engagement of the testees' language abilities by the test task characteristics.
Note that construct validation is an ongoing process, and that no interpretation is absolutely valid.

3. Authenticity
Authenticity is the correspondence between TLU task performance and test task performance. By ensuring that TLU and test tasks have the same characteristics, we can ensure authenticity. Generalisation beyond test tasks depends on authenticity, so this is a critical quality of test tasks.

4. Interactiveness
It is the extent and type of involvement of the test takers' individual characteristics in accomplishing a test task. Three major individual characteristics are language ability, topical knowledge and affective schemata. These characteristics interact with the test task characteristics. This interaction can be controlled by monitoring test task characteristics.

5. Impact
A test has impact on test takers, teachers, and society in general. The test taker's experience of taking the test, preparing for it, receiving feedback and facing the decisions based on the test score are impacts on the test taker. For teachers, a test might mean a change or adjustment of instruction style, teaching materials, assessment and feedback. For society at large, test methods imply allocation of funds, changes in decisions, and arrangement of other facilities and infrastructure. The characteristics of the testing situation are also important factors.

6. Practicality
Practicality is about the feasibility of a test. The elements influencing this are human resources, material resources and time. If one of these is not available, the test may not be practical.

For each testing situation/context we need to have a balance of these six elements so that the test is useful. Take one of these elements away, and the test becomes useless. For example, if all the first five qualities are there in a test, but the test is not practical, the test is not useful. Therefore, while designing or adapting tests, we need to carefully consider all these test qualities.

All-India English Language Testing Authority (AIELTA)

The All-India English Language Testing Authority (AIELTA) is the testing arm of the English and Foreign Languages University (EFL-U), Hyderabad. AIELTA develops and designs English language proficiency tests to enable students and professionals to assess their proficiency in English. Unlike many other language proficiency tests currently available, AIELTA is specially tailored to meet the demands of the Indian context. Developed by experts at EFL-U, AIELTA is open to anyone over sixteen regardless of their educational qualifications or employment status.

AIELTA also develops tests on demand for organisations that require a diagnostic tool to measure the language abilities of their employees.

AIELTA’s goals are:
• to produce standardized tests at various levels to measure functional abilities in English
• to offer consultancy services to public and private sector organizations; and
• to respond to the needs of state and central agencies as well as institutions at secondary and tertiary levels of education throughout the country.

The test lasts three hours and is offered at three levels: Class X, Graduation and Post-Graduation.

The cost of the test was ₹1500 when conducted in the year 2013.

Statistics show that in 2008, 26% of India’s rural children attended English-medium schools; that is, the demand for English is rising day by day. In this context, a contextualised test like AIELTA is necessary. The multilingual and multicultural context of India demands special considerations in test construction, and a test that understands the needs, training and knowledge of the candidates can better serve its purpose. AIELTA could have served this purpose using the understanding of the English language scenario in India built by EFL-U over the years. Unfortunately, the test was not well received for various reasons and has been discontinued. Little literature about the test is available, and we do not know about its future.

Link to University's information on AIELTA: http://www.efluniversity.ac.in/AIELTA.html

Monday 22 May 2017

Proficiency Tests

Proficiency tests measure candidates’ ability in a particular language, regardless of any training they may have undergone or any course they may have attended. The content of a proficiency test is thus based neither on the contents of any language course nor on its objectives.

What then is a proficiency test based on? Proficiency tests are based on a specification of what candidates should be able to do with the language in order to be called proficient in it. To understand proficiency tests, we must define the term ‘proficiency’. To have proficiency means to have sufficient command of the language for a particular purpose: for example, ‘proficient in English to be a proofreader in a publishing company’, or ‘proficient in English to manage postgraduate studies in English in the United Kingdom’. Proficiency is thus related to the purpose for which the test is taken, and this purpose is reflected in the content specification of the proficiency test, prepared at an early stage of test development. But proficiency is not a simple construct. As Spolsky (1983) states, an individual’s overall language proficiency might consist of different functional abilities, and it could be influenced by many variables. For example, Carrell and Grabe (2010) state that reading proficiency in L2 is influenced by reading ability in L1 and general proficiency in L2.
Some proficiency tests are of a more general nature, with no particular programme or purpose in mind. They are conducted for candidates to know where they stand with regard to their target language skills. The Cambridge Certificate of Proficiency in English (CPE), which assesses proficiency at CEFR level C2, is an example. These general proficiency tests show that the candidate has reached a certain level with respect to the abilities specified by the test. One advantage of such tests is that they are independent of any teaching institution; therefore, they are used by many professional and educational institutions for admission, and are trusted by potential employers and educators for comparing candidates from various backgrounds. This ‘gate-keeping’ function makes them very powerful. These tests specify in detail what a successful candidate will be able to do in the target language, and the tests are built on these specifications to see how well a candidate can fulfil them.
Most proficiency tests use available specifications like the Common European Framework of Reference for Languages (CEFR), which is one set of specifications used by many proficiency tests today. For a proficiency test to be effective and useful, well-defined specifications are necessary.

Though proficiency tests are not based on any particular course or programme, these tests may have powerful backwash effect on the courses of study available. These effects could be positive or negative. 

Monday 15 May 2017

Three types of task planning on Fluency, Accuracy and Complexity in L2 oral production

"The Differential Effects of Three Types of Task Planning on the Fluency, Complexity, and Accuracy in L2 Oral Production"- Paper by Rod Ellis, published in Applied Linguistics 30/4, 2009. Available at this LINK to download.

This paper by Rod Ellis is an excellent summary of research on the effects of planning time on L2 oral production up to 2009. Ellis puts the research to date in perspective and outlines questions for future research. A summary with comments is given below; I add my comments in italics wherever appropriate. Relevant references are copied from the original paper and given at the end of this post for quick reference.

Research on the effects of planning time on L2 oral production informs the methodology of task-based teaching for improvement.

Three kinds of planning are distinguished by Ellis (2005). They are Rehearsal planning, Strategic planning and Within-task planning. 
Rehearsal and Strategic are both Pre-task planning which is done before the learner actually does the task.
Rehearsal planning is when the learner rehearses the entire task before performing the task a second time. 
Strategic planning lets the learner plan what language and content to use during the task, but doesn't let the learner rehearse the task.
Within-task planning is when planning takes place during the performance of the task. There are two kinds: pressurised and unpressurised. Pressurised (or pressured online) planning has a specific time limit, so learners must strive to finish the task within this constraint. Unpressurised planning provides unlimited time to complete the task, which allows careful online planning.

Three aspects of language production are examined for the effect of planning time: accuracy, fluency and complexity. Together they constitute a learner's language proficiency. These terms are defined by Skehan (1998) and Skehan and Foster (1999): fluency is the capacity to use language in real time, complexity is the capacity to use more advanced language, and accuracy is the ability to avoid error in performance. In practice, however, researchers develop and use operational definitions to serve the goals of their research, which is problematic when it comes to comparing different studies.

The framework developed by Skehan (1998) speaks of two systems possessed by language speakers: a rule-based system and a memory-based system. The rule-based system consists of the underlying rules and patterns of language; the memory-based system consists of chunks of language or formulaic sequences which are readily accessible for use. The difference between the two lies in processing load: the former requires large amounts of attentional resources to process, while the latter requires much less. Language performance involves the variable use of both, depending on the demands of language processing at the time of production. This has three implications. Firstly, proficiency is a flexible phenomenon which adapts to the conditions available during performance. Secondly, if a learner is weak in one of the systems, he/she can compensate for that shortcoming using the other. Thirdly, when the situation demands greater dependence on one system, some aspects of performance might be affected; for example, when the rule-based system is heavily used for the sake of structural accuracy, fluency of speech might suffer. Individual differences can also affect performance in this manner.

1. Studies in Rehearsal Planning Time

Rehearsal planning gives the learner an opportunity to perform the entire task prior to the actual performance that is counted. For the same reason, rehearsal planning time is not practical in testing. Ellis cites three studies: Bygate (1996), Gass et al. (1999) and Bygate (2001). Two major questions are asked in these studies.
i. Does task repetition have any effect on the performance of the same task?
All three studies produced evidence of a beneficial effect of task repetition on the performance of the same task.
ii. Does task repetition have any effect on the performance of a new task?
All studies reported that there is no transfer of effects to new tasks, even when the new tasks are of the same type as the rehearsed task.
Which aspect of performance is affected by rehearsal?
Fluency and complexity are the most influenced aspects, as reported by all three studies. Bygate used a 10-week gap to check whether the effect was due to immediate recall; it was not. Rehearsal was found to have little or no effect on accuracy.
Acquisition-Performance connection
Since the studies showed no transfer of the effect to new tasks, it seems that performance in L2 does not by itself lead to acquisition; to lead to acquisition, learners need feedback on the initial performance to enable 'noticing' and subsequent acquisition. An interesting study reported is Sheppard (2006), an unpublished PhD thesis from the University of Auckland, which used the feedback intervention mentioned above.

Thus we can conclude that task repetition affects performance. When used along with other tools like feedback, it could even lead to acquisition and to positive influences on all three aspects of performance.

2. Studies in Strategic planning time

Ellis cites 19 studies here and uses four parameters to discuss their results: learners, settings, tasks and planning.

Learner Variables:
  1. Second and Foreign language learners 
  2. Proficiency level of learners (most studies looked at intermediate level learners). Wigglesworth (1997), Kawauchi (2005) and Tavakoli and Skehan (2005) manipulated proficiency as a variable. 
  3. Learner's orientation to planning 
  4. Individual learner difference factors
Settings Variables:
  1. Classroom
  2. Laboratory
  3. Testing
These variables are connected to the length of planning time: compared to the testing context, the other two settings provided learners with longer planning time.

Task Variables:
  1. Interactive Variables (Monologic vs. Dialogic tasks)
  2. Task Complexity (Simple vs. Complex tasks)
All the testing studies in Ellis's list used monologic tasks in order to control the multitude of influences that an interlocutor brings in. All the initial testing studies used monologues, but dialogic testing studies are now available.

Task complexity or task difficulty is a difficult topic because of the many variables involved. The factors affecting complexity used in the studies listed here are: degree of familiarity with the task context, the degree of structure in the information to be communicated, the number of distinct referents to be encoded, and temporal reference (Here-and-Now vs. There-and-Then).

Planning:

  1. Length of planning time
  2. Guided vs. Unguided planning
  3. Form-focus vs. Meaning-focus
Most teaching research used ten minutes of planning time; testing research uses shorter durations. Wigglesworth (1997) used 1 minute, Elder and Iwashita (2005) used 3 minutes, and Tavakoli and Skehan (2005) used 5 minutes. Mehnert (1998) studied the effect of varied planning times and found that longer planning time had more effect on performance.

Guided planning asks learners to do particular things while planning, while the unguided condition does not. Kawauchi (2005) studied three kinds of guided planning: writing what was planned, rehearsing what to say, and reading a model of what to say.

How do the above variables affect fluency, complexity and accuracy?

Fluency

Operationalisation of fluency: a. measures of temporal aspects of fluency (number of words/syllables per minute); b. measures of repair phenomena (false starts, repetitions, reformulations).
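As a rough illustration of such operational measures, the Python sketch below computes a speech rate and counts marked repairs in an annotated transcript. The annotation tags (<fs>, <rep>, <ref>) are a made-up scheme for the example, not a standard from the studies cited.

```python
def words_per_minute(word_count, seconds):
    """Temporal fluency: speech rate in words per minute."""
    return word_count * 60 / seconds

def repair_count(transcript):
    """Repair fluency: count false starts <fs>, repetitions <rep> and
    reformulations <ref> in an annotated transcript (invented tags)."""
    return sum(transcript.count(tag) for tag in ("<fs>", "<rep>", "<ref>"))

sample = "I went <fs> I go to the market <rep> the market every day"
print(words_per_minute(word_count=11, seconds=6))  # 110.0 words per minute
print(repair_count(sample))                        # 2 repairs
```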

The general finding in teaching studies is that strategic planning has a positive effect on both temporal and repair aspects of fluency. But in testing this is different: Elder and Iwashita (2005) found no effect, and Wigglesworth (2001) found a negative effect.

Second and foreign language learners experienced positive effects of planning on fluency.

The influence of proficiency level on the effect of planning on fluency was found to vary. Wigglesworth (1997) found greater effects on the fluency of high-proficiency test-takers. Kawauchi (2005) reports positive effects on low- and high-proficiency learners, but not on advanced learners (probably because they did not need planning time to speak well). Tavakoli and Skehan found that high-proficiency learners performed better than low-proficiency learners. Thus, with the available data, no conclusion can be drawn about the effect of planning time for learners of different proficiency levels.

Studies of learners' attitudes towards the opportunity to plan found varied results.

Learners' memory played a role in fluency in some studies. Guará-Tavares (2008) found that learners' working memory and measures of fluency correlated well when planning time was available.

The effects of settings allow clear conclusions. In laboratory and classroom settings, planning time has positive effects on performance; in the testing context, it seems to have little or no effect.

With the available studies, no conclusion can be drawn about the participatory structure of tasks where fluency is concerned. The majority of studies involving dialogic tasks showed a positive effect; only two studies of monologic tasks did not show an effect.

Task complexity was well studied in Foster and Skehan's work. They used personal information, narrative and decision-making tasks. The first was deemed easy because no external information was needed; the third was deemed most difficult because of the unfamiliarity of the information to be communicated and its lack of structure. Both Mehnert (1998) and Tavakoli and Skehan (2005) found a greater effect on fluency with more structured tasks. In short, planning interacts with task complexity/difficulty, and the effect on fluency is greater for less complex tasks.

The length of planning seems to have a clear effect on fluency: Mehnert (1998) found that the longer the planning time, the greater the effect.

The type of planning has an effect: guided planning had more effect on fluency than unguided planning. But more studies are needed to find out when guided planning works better than unguided planning, and whether this depends on the nature of the task, the proficiency of learners, etc.

Complexity

Operationalisation of complexity was done through the amount of subordination, the number of different verb forms used, the type-token ratio and the number of different word types. Results are more mixed than those of the fluency studies, although there is plenty of evidence that strategic planning raises the complexity of production: 13 of the 19 studies reported here showed positive effects, while the remaining six did not find any effect. Overall, strategic planning appears to have a greater effect on grammatical complexity.
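As a minimal sketch of two of these measures, the Python snippet below computes a type-token ratio and a subordination ratio, assuming the clauses have already been segmented and counted; real studies use much more careful tokenisation and clause analysis.

```python
def type_token_ratio(tokens):
    """Lexical variety: distinct word types divided by total tokens."""
    return len(set(tokens)) / len(tokens)

def subordination_ratio(total_clauses, subordinate_clauses):
    """Grammatical complexity: share of clauses that are subordinate."""
    return subordinate_clauses / total_clauses

tokens = "the dog that I saw chased the cat".lower().split()
print(round(type_token_ratio(tokens), 2))  # 7 types / 8 tokens = 0.88
print(subordination_ratio(2, 1))           # 1 subordinate of 2 clauses = 0.5
```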

Both second and foreign language learners recorded benefit from planning on complexity. 

Proficiency: Advanced learners may not benefit from planning in terms of complexity (Kawauchi, 2005).

Working memory was found to be significantly related to complexity in the planning group, but not in the no-planning group (Guará-Tavares, 2008).

Complexity does not depend on the laboratory/classroom setting. In testing, there is little or no effect on complexity.

Task factors interact with planning time to affect complexity, but how they do so is not clear. Most dialogic and monologic tasks showed a positive effect of planning on complexity.

Task complexity: Foster and Skehan (1996) found that planning had no effect on the complexity of the more complex decision-making task, while personal information and narrative tasks showed higher grammatical richness under the planning condition. But other studies give different results, and more studies are needed to reach conclusions.

Planning time variable: Mehnert (1998), the only study to vary planning time, reported no effect on complexity in any of her conditions. Different types of planning also did not have any effect on complexity, though the degree of guidance might affect the outcomes.

Accuracy 

Accuracy also showed mixed results across studies: thirteen studies found that planning enhanced accuracy, but six reported no effect.

Learner proficiency has an effect on accuracy. Kawauchi (2005) found less effect on advanced-level learners than on low-proficiency learners. Learner proficiency therefore needs to be controlled when investigating the effects of planning on accuracy.

Learners’ attitudes towards planning also have an effect on accuracy. 

Working memory was not found to be related to accuracy.

Task type is a potential influencing factor: there is an interaction between task type and planning where accuracy is concerned. Foster and Skehan (1996) reported that planning led to greater accuracy in personal information and decision-making tasks (the low- and high-difficulty tasks). No firm conclusion is possible about how task type influences the effect that planning has on accuracy. How learners orientate to the task is also an important factor.

Studies of the type of planning failed to find effects. However, Mochizuki and Ortega (2008) showed that when guided planning is focused, it can have an effect on accuracy.

General Comments

Overall, strategic planning has clear effects on task performance. In the light of existing research, some conclusions about the influence of learner and task variables on performance can be made, but we need more data and more studies that carefully control and manipulate these variables.

Ellis questions whether the three dimensions of performance chosen are perfect descriptors of performance: are ratings based on these dimensions sensitive to the influence of strategic planning? The doubt is natural, since no study reported an effect of planning on ratings even when there were differences in the discourse analysis results. It thus seems that ratings are sensitive to the influence of strategic planning on fluency in learning contexts, but not in testing.

The testing context thus seems insulated from the influence of strategic planning in terms of fluency; only Wigglesworth (1997) showed a significant effect (in the discourse analysis). Why such insulation? Ellis speculates that it is because test-takers know they are being tested, which leads to a focus on accuracy at the expense of fluency and complexity. That is, the testing context neutralizes the influence of planning otherwise available in learning contexts. Or maybe the planning time must be longer (although Mehnert (1998) suggests otherwise).

In learning studies, by contrast, strategic planning is found to have a consistent influence on fluency across almost all studies. Prior planning of what to say saves attentional resources, enabling the learner to speak fluently. Skehan proposes a trade-off between aspects of performance when learners focus on one aspect: depending on what is prioritized, the other aspects suffer. This explains the variable results of many studies. More studies are needed to say anything definite about the influence of learner characteristics and task characteristics.

Proficiency is one such variable that must be studied; outside the testing context, it has shown varied effects.

Task variables like degree of structure need to be studied in detail. Existing studies have shown effects. But what is less clear is how task structure interacts with planning. 

Other considerations are: 

  • Whether planning is guided or unguided.
  • Whether focus is invited to form or meaning.

3. Studies in Within-task planning time

In fact, there are only a few studies that report on this. Yuan and Ellis (2003) compared the performance of learners under pressurised (no within-task planning) and unpressurised (within-task planning) conditions. Learners in the within-task planning condition spoke for longer. Within-task planning resulted in greater syntactic complexity (without much difference in syntactic and lexical variety) and generated more accurate speech in terms of error-free clauses and verbs. Surprisingly, there was no significant effect on fluency for either group, probably due to the pressure to speak without prior planning. The study thus concludes that, given ample time to plan, we can expect more accurate and complex speech in learning contexts.

One problem with this kind of planning is that we do not know what the learners were doing during the planning stage. Other studies have shown that the effect of unpressurised planning is most beneficial in the initial stages of task performance.

Summary

Strategic planning is the most researched planning type so far. This preference is not theory-based. The other two types of planning must be examined with equal vigour. 

Rehearsal results in greater fluency and complexity. But these effects do not transfer to the performance of a new task unless there is some kind of additional intervention. That is, simple repetition of a task may not have a measurable impact on acquisition.
Strategic planning clearly benefits fluency. Results are mixed for complexity and accuracy; the possible trade-off between these two aspects must be the reason (i.e. learners tend to prioritize either complexity or accuracy). Other variables which have an impact on the effect of strategic planning are the learners' proficiency (the effects are less evident in very advanced learners), the degree of structure of the information in the task, and working memory.
Within-task planning may benefit complexity and accuracy without having a detrimental effect on fluency.

Theoretical Perspectives

Theories of variability
Levelt's model of speaking
Skehan's theory
Robinson's theory

Levelt's Model 

There are three overlapping processes: conceptualisation, formulation and articulation. Utterances can be monitored prior to and after production.
Two characteristics of speech production are: a) controlled and automatic processing, and b) incremental production. The conceptualiser and monitor operate under controlled processing; the formulator and articulator operate under automatic processing. However, native speakers and learners might differ in terms of formulation and articulation.

Putting Levelt's model together with the limited attentional resources of learners explains many findings of the studies discussed above. The relationships between planning and aspects of performance can be understood against this theoretical background.
  • Rehearsal and strategic planning are likely to assist conceptualization and thus facilitate fluency
  • They may have effects on formulation and articulation as well since the linguistic resources necessary are already accessed during conceptualisation. 
  • Fluency-oriented learners may not benefit in terms of complexity and accuracy. In other words, since only limited resources are available, focus on one aspect leads to problems in the other aspects of performance. 
  • For advanced learners, who have fewer problems with formulation and articulation, planning will show an effect on fluency, but much less on complexity and accuracy. 
  • In unpressurised within-task planning, complexity and accuracy may increase due to benefits to formulation. 
These accounts are helpful in understanding how things work. But we also need to know how individual differences interact with the different aspects of performance and the variables we study. Not all learners engage in all the processes to the same extent. Orientation, working memory, language aptitude, willingness to communicate and anxiety are a few such individual differences. 

A framework suggested by Ellis has four sets of variables. 



The model hypothesises that task and individual variables influence how learners plan, and mediate the effect planning has on production. We still need to answer the question of how planning assists the development of fluency and the acquisition of linguistic knowledge; this has yet to be studied. Fluency development and acquisition may be two different phenomena. As Skehan says, fluency may be an outcome of the development of an exemplar-based system, so fluency can develop independently of acquisition. Rehearsal and strategic planning help learners to develop an exemplar-based system, and therefore help build fluency. Within-task planning might influence the automatization of grammatical knowledge. 

There are three senses of acquisition: a) acquisition of new linguistic features, b) restructuring of existing linguistic resources, and c) development of greater control or accuracy over existing linguistic features. This understanding is necessary to theorize the relationship between planning and acquisition. The studies show that planning has very little influence on the first kind of acquisition, while the second and third kinds do show effects of planning. Planning affects restructuring through its effect on complexity (Skehan, 1998). These assumptions rest on the condition that more complex production leads to acquisition. 

Limitations and Directions for Future Research

  • There is a lack of information on what learners do during planning. 
  • Within-task planning, and the combined effects of within-task and pre-task planning, have hardly been studied. 
  • No longitudinal study of the effects of planning has been done; the maximum duration so far is ten weeks!
  • The studies listed did not collect baseline data from native speakers performing the same tasks. 
  • Studies did not give proficiency-level data for the learners. 
  • No study has investigated the extended performance of learners on a task (not just the early stage of the task, but the entirety of the task performance). 
  • How individual learner factors affect performance has not been studied yet.




References
Bygate, M. (1996). ‘Effects of task repetition: Appraising the developing language of learners’ in J. Willis and D. Willis (eds): Challenge and Change in Language Teaching. Heinemann.

Bygate, M. (2001). ‘Effects of task repetition on the structure and control of oral language’ in M. Bygate, P. Skehan, and M. Swain (eds): Researching Pedagogic Tasks, Second Language Learning, Teaching and Testing. Longman.

Elder, C. and N. Iwashita. (2005). ‘Planning for test performance: Does it make a difference?’ in R. Ellis (ed.): Planning and Task Performance in a Second Language. John Benjamins.

Ellis, R. (2005). ‘Planning and task-based research: theory and research’ in R. Ellis (ed.): Planning and Task-Performance in a Second Language. John Benjamins.

Foster, P. and P. Skehan. (1996). ‘The influence of planning on performance in task-based learning,’ Studies in Second Language Acquisition 18/3: 299–324.

Gass, S., A. Mackey, M. Fernandez and M. Alvarez-Torres. (1999). ‘The effects of task repetition on linguistic output,’ Language Learning 49: 549–80.

Kawauchi, C. (2005). ‘The effects of strategic planning on the oral narratives of learners with low and high intermediate proficiency’ in R. Ellis (ed.): Planning and Task-Performance in a Second Language. John Benjamins.

Mehnert, U. (1998). ‘The effects of different lengths of time for planning on second language performance,’ Studies in Second Language Acquisition 20: 52–83.

Mochizuki, N. and L. Ortega. (2008). ‘Balancing communication and grammar in beginning-level foreign language classrooms: A study of guided planning and relativization,’ Language Teaching Research 12: 11–37.

Sheppard, C. (2006). The Effects of Instruction Directed at the Gaps Second Language Learners Noticed in their Oral Production. Unpublished PhD thesis, University of Auckland.

Skehan, P. (1998). A Cognitive Approach to Language Learning. Oxford: Oxford University Press.

Skehan, P. and P. Foster. (1999). ‘The influence of task structure and processing conditions on narrative retellings,’ Language Learning 49/1: 93–120.

Tavakoli, P. and P. Skehan. (2005). ‘Strategic planning, task structure, and performance testing’ in R. Ellis (ed.): Planning and Task Performance in a Second Language. John Benjamins.

Wigglesworth, G. (1997). ‘An investigation of planning time and proficiency level on oral test discourse,’ Language Testing 14/1: 21–44.

Yuan, F. and R. Ellis. (2003). ‘The effects of pre-task and on-line planning on fluency, complexity and accuracy in L2 monologic oral production,’ Applied Linguistics 24/1: 1–27.

Monday 8 May 2017

How does planning time affect oral test performance? A multifaceted approach

"A multifaceted approach to investigating pre-task planning effects on paired oral test performance" is a paper written by Ryo Nitta and Fumiyo Nakatsuhara in the year 2014. The paper explores the effect of planning time on oral test task performance using a multifaceted approach. Most of the earlier studies looked at this issue by considering the performance of a group of test-takers as a whole. The problem with such an approach is that the fine differences between test-takers' performances, and the differences within particular test-taker's individual and collaborative performance will be lost. Ryo and Nakatsuhara's approach makes sure that intra-test-taker differences in performances are noticed too.

This study looked at 32 foreign language learners' performance on decision-making tasks under planned and unplanned conditions. The study used rating scores, discourse analysis and conversation analysis to understand co-constructed performance, alongside a questionnaire on test-taker attitudes to planning time. Conversation analysis provided valuable insights and implications for teaching and testing.

In teaching research, planning time is seen as beneficial because it eases the load on the learner's limited attentional capacity. Planning time activates the rule-based system; while performing the task, the learner can then draw on the pre-activated rule-based system and concentrate on using the memory-based system. It also encourages learners to access explicit analytic knowledge, since automatised implicit knowledge is less available during live performance. The nature of planning, the task type, learners' proficiency level, etc. all affect performance. In teaching research, longer planning time is generally found to benefit fluency, but to be less useful for accuracy and complexity.

Testing Research
In standardised testing, planning time is provided mainly for the sake of fairness: to control the level of cognitive demand imposed by potentially unfamiliar topics and to enable test-takers to produce their best performance.

The use of unguided planning for shorter periods in tests has produced mixed results in earlier research. The limited effects observed might be due to the high-stakes nature of the testing context, which focuses test-takers' attention on accuracy and results in careful online planning; the possible effects of pre-task planning might thus be overridden, says Ellis in his edited book "Planning and Task Performance in a Second Language" (2005). The measurement methods used in such research might also have influenced the results.

Dialogic Tasks
The nature of the task used is very important. Monologic and dialogic tasks differ vastly. Monologic tasks have no interlocutor; talking to a microphone in response to a voice prompt or written question is very different from interacting with a live interlocutor in person, and the two involve very different performance processes. In a monologue, test-takers use their own resources, solve problems on their own and construct the entire performance themselves. In a dialogic task, the discourse is co-constructed: language, ideas, vocabulary and constructions are exchanged, and the process is constantly open, depending on both (or all) parties involved in the dialogue.

Galaczi, in his paper "Peer–peer interaction in a speaking test: The case of the First Certificate in English examination" published in Language Assessment Quarterly (2008), says that there are three patterns of interaction in pair discussions: collaborative, parallel and asymmetric. In the collaborative pattern, participants exchange the roles of listener and speaker, supporting each other's topics and developing each other's ideas. The parallel pattern involves each participant developing their own argument, with little engagement with the other's ideas. The asymmetric pattern involves unbalanced contributions: one person takes a secondary role while the other does most of the speaking. In tests, it was the collaborative pattern that received the highest scores, and the parallel pattern the lowest.

Paired oral tasks are designed to measure interactional competence. Yet research usually looks only at the cognitive complexity of different tasks and the linguistic demands of task design, without attention to the 'co-construction' aspect of interaction. It is important to know how pre-task planning affects the interactive patterns of dialogue, because the kind of interaction pattern a task produces has important implications for the validity of the test.

Multifaceted approach
The performance of a particular test-taker can vary at different points within a task. Previous studies, however, looked at the collective performance of both (or all) parties involved in the task, assuming uniformity of performance throughout. This does not reflect how performance actually unfolds: interactions involve non-linear processes, as this study clearly shows.

This study is process-oriented (not summative). It examines differences and similarities in the performance processes of interactions under different planning conditions. It used two decision-making tasks of roughly this form: which items in the picture are important for a happy life? Choose the two most important. Rating scales from previous research were modified with extra bottom points to accommodate the current subjects' performance. Fluency, accuracy, complexity and interaction were the targets measured.

Results
Score analysis showed a slight improvement in fluency and complexity under the planning time condition.
Discourse analysis showed an improvement in breakdown fluency and longer turn lengths under the planning time condition. However, planning time reduced speed fluency (the number of words per minute).
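
To make these discourse measures concrete, here is a minimal sketch in Python of how such fluency indices might be computed from a coded transcript. The Turn structure, the pause threshold and the numbers are hypothetical; the paper's actual coding scheme is not reproduced here.

from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    words: int        # words produced in the turn
    pauses: int       # silent pauses (e.g. >= 0.4 s; threshold is assumed)
    seconds: float    # duration of the turn

def fluency_indices(turns):
    """Simple discourse-analytic fluency indices for one test-taker."""
    minutes = sum(t.seconds for t in turns) / 60
    words = sum(t.words for t in turns)
    return {
        "speed_fluency_wpm": words / minutes,                         # words per minute
        "breakdown_pauses_per_min": sum(t.pauses for t in turns) / minutes,
        "mean_turn_length_words": words / len(turns),
    }

# Hypothetical data for one speaker in a paired task
turns = [Turn("A", 42, 3, 20.0), Turn("A", 65, 2, 28.5), Turn("A", 58, 4, 25.0)]
print(fluency_indices(turns))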

The increase in complexity under the planned condition might be due to the delivery of language prepared during the planning stage. This increase was observed only at the beginning of the task; during the later stages of task performance, complexity levels were low. Moreover, the pattern of interaction was parallel: each participant gave their own points without building on each other's. The utterances were longer, but more like monologues than dialogue; there was no co-construction. Once what was planned during planning time was exhausted, the discourse fell into a stagnant period.

By contrast, the no-planning condition led to a gradual increase in turn lengths, with participants incorporating each other's points collaboratively.

Planned interactions ended clumsily, in contrast with unplanned interactions. Planned interactants only expressed their individual ideas during their turns.

Implications for Teaching and Testing
Use planning time for clear purposes, as intended by the task designer. For example, it is better not to use planning time for tasks aimed at developing interactional competence.
Pre-task planning before a pair task is not advisable, as it might change the interactional pattern of the task.
We ought to reconsider the duration of test tasks, especially paired oral tasks. As this study shows, when planning time was provided, interaction became collaborative only some way into the task. We must therefore reflect on whether the given task performance time is sufficient to reach the threshold point where performance can become collaborative.
There is scope to wonder whether providing planning time in paired oral tasks is wise at all, but further research is necessary before reaching any such conclusion.

The paper is available for download from Sage Publications' website.

Sunday 7 May 2017

The effects of careful online planning and task repetition on accuracy, complexity, and fluency in oral production: paper by Ahmadian and Tavakoli

"The effects of simultaneous use of careful online planning and task repetition on accuracy, complexity, and fluency in EFL learners' oral production" is a paper published by Ahmadian and Tavakoli in the year 2010 in the journal Language Teaching Research (Vol. 15, Issue 1). The following is a summary of the outcome, and possible implications for language testing research. Original paper is available at this link: Access original paper at Sage Publications.

The goal of the paper is to study the effects of careful online planning (COLP) and task repetition (TR) on EFL learners' oral production. Planning time was operationalised as COLP and was studied against pressurised online planning (POLP). TR was operationalised as with-repetition and without-repetition (+TR and -TR) conditions. COLP normally requires more time to complete a task, as there is no restriction on the amount of time that can be used.

The results of the experiment showed that COLP was successfully operationalised: learners under COLP took more time to complete the task than learners under POLP.

The COLP group produced more error-free clauses and correct verb forms than the POLP group. The conclusion, therefore, is that COLP enhances the accuracy of EFL learners' oral production.

COLP learners outperformed POLP learners in terms of both descriptive and inferential measures of complexity. The conclusion, therefore, is that COLP enhances the complexity of EFL learners' oral production.

The POLP (-TR) group produced more syllables per minute than the COLP (-TR) group, so COLP causes some disfluency in EFL learners' oral production. Task repetition, on the other hand, has a positive effect on fluency.

Task repetition assists the complexity of EFL learners' oral production: both measures of complexity were higher under the +TR condition than the -TR condition.

Simultaneous use of COLP and +TR in learning tasks showed the following effects on accuracy, complexity and fluency: the COLP +TR group produced more error-free clauses and correct verb forms than all other groups, and learners performing narrative tasks under this condition also produced more complex language on both measures of complexity.
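
For readers who want to see how measures of this kind are typically calculated, here is a minimal Python sketch. The clause-level annotations and the clauses-per-AS-unit complexity ratio are assumptions for illustration; the paper's exact operationalisations are not reproduced.

from dataclasses import dataclass

@dataclass
class Clause:
    error_free: bool     # clause contains no errors
    verbs: int           # verb forms used in the clause
    verbs_correct: int   # verb forms used correctly

def accuracy(clauses):
    """Accuracy indices of the kind reported: error-free clauses and correct verb forms."""
    total_verbs = sum(c.verbs for c in clauses)
    return {
        "error_free_clause_ratio": sum(c.error_free for c in clauses) / len(clauses),
        "correct_verb_form_ratio": sum(c.verbs_correct for c in clauses) / total_verbs,
    }

def complexity(n_clauses, n_as_units):
    """One common syntactic complexity measure: clauses per AS-unit (assumed here)."""
    return n_clauses / n_as_units

# Hypothetical coded sample
sample = [Clause(True, 1, 1), Clause(False, 2, 1), Clause(True, 1, 1)]
print(accuracy(sample))
print(complexity(n_clauses=3, n_as_units=2))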

Two interesting findings:
1. The COLP +TR group outperformed all other groups in terms of complexity measures.
2. Engaging in COLP resulted in a degree of disfluency, but COLP +TR learners exceeded POLP -TR learners' fluency on both measures of fluency.

Implications for Testing
In learning research, it is easy to operationalise careful online planning: one can simply give enough time to complete the task. In testing, however, we cannot provide unlimited time for planning because of the constraints imposed by the testing context. So although COLP supports eliciting best performance, we cannot apply it in testing as it is. But we can increase or decrease the amount of time provided in some testing contexts. For example, in computer-delivered tests, the provision of planning time can be adjusted or varied according to specifications previously set by the test designer (see the sketch below). Today most standardised tests provide planning time for fairness' sake; test designers ought to go beyond this, provide research-based, sufficient amounts of planning time, and make sure that test-takers actually use it for planning their performance.
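
A minimal sketch of what such a designer-set specification might look like in a computer-delivered test follows. All names and timings are hypothetical; a real delivery system would add prompt display, audio recording and timing enforcement.

from dataclasses import dataclass

@dataclass
class TaskSpec:
    task_id: str
    planning_seconds: int   # pre-task planning time chosen by the test designer
    response_seconds: int   # maximum response time for the task

# Hypothetical test specification: planning time varied by task type
TEST_SPEC = [
    TaskSpec("narrative-1", planning_seconds=60, response_seconds=120),
    TaskSpec("paired-decision-1", planning_seconds=0, response_seconds=180),
]

def run_task(spec):
    # A real system would display the prompt, run countdown timers and record audio;
    # this only shows where the designer-set planning time is applied.
    if spec.planning_seconds:
        print(f"{spec.task_id}: plan for {spec.planning_seconds} s")
    print(f"{spec.task_id}: respond within {spec.response_seconds} s")

for spec in TEST_SPEC:
    run_task(spec)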

Task repetition, likewise, cannot be applied in testing, since it threatens validity and encourages rote 'preparation'. But test designers can select tasks with specific characteristics that encourage the particular strategies and processes we want test-takers to use in a test. This is very much possible and advisable, especially in classroom-based tests of oral proficiency, where teachers can do it with a bit of careful planning and thought.
