
Common Problems in Assessing Speaking in Language Proficiency Tests


Mr Tich

Today, an increasing number of people take part in language proficiency tests for a variety of purposes. As a result, more and more language proficiency tests are administered, in diverse formats and with high frequency. What all test takers care about is whether the scoring of speaking proficiency tests is accurate and reliable. In other words, the question is whether raters face any problems when scoring candidates' speaking performances. This essay will discuss one common problem I often encounter when assessing Speaking in language proficiency tests and explain its causes.
When assessing Speaking in language proficiency tests, a number of problems can arise relating to input variables, context variables, test taker variables and rater variables. Among these, the problem I most often encounter is an inconsistent understanding of the assessment criteria and rubrics between myself and other raters or assessors. This regularly produces large gaps between the scores given by different raters, which then consumes a great deal of time in discussion before a final mark can be agreed. It can even lead to unfair scores that cause a candidate to pass or fail the exam undeservedly, and it prevents an accurate evaluation of the test taker's performance and proficiency. All of this seriously affects the results of Speaking assessment in language proficiency tests.

There are several causes behind this frequently occurring issue. The major one is that the assessment criteria are overwhelming and complicated for me and other raters alike. Different Speaking proficiency tests use distinct criteria, and every single test has many criteria, each characterized by a variety of descriptors and a wide range of marks. Take the IELTS Speaking test as an example: there are four criteria, each with descriptors for nine separate bands, and these criteria differ in some ways from those of CEFR-based tests. Moreover, the assessment criteria for different levels of the same type of test are described differently. For instance, the criteria vary among the VSTEP Speaking tests for levels 1, 2, 3 and levels 3-5. Even the assessment criteria and descriptors of the Level 2 LSS speaking test are not the same as those of the Level 2 Adults test, although both target the same level.

Another cause is the lack, or even complete absence, of careful and consistent discussion of the assessment criteria among raters before the candidates are examined. If the criteria were discussed carefully and agreed upon in advance, there would be little or no inconsistency between me and the other examiners in assessing Speaking proficiency tests.

In addition, carelessness and the habit of simplifying the criteria for ease of use contribute to this common issue: some examiners simplify the assessment criteria to make them easier to apply while others do not, so inconsistency and differences in understanding become unavoidable. Time constraints and rater fatigue are further reasons why other examiners and I develop inconsistent understandings of the assessment criteria and rubrics.

In summary, an inconsistent understanding of assessment criteria and rubrics can lead to inaccuracy when assessing Speaking in language proficiency tests. It is therefore crucial that raters in general, and I in particular, take this common problem and the causes discussed in this assignment into consideration, so as to improve accuracy and thereby contribute to the reliability, validity and practicality of the tests.