ASSESSING ENGLISH WRITING SKILLS AT UNIVERSITY
Many universities, particularly language universities, are concerned about the validity of
language competency assessments. As a result, they have adopted various criteria and procedures
for evaluating language abilities. Writing ability is evaluated based on student
performance and involvement throughout the course, as well as accomplishment
assessments administered at the end of the semester. Both English majors and
non-majors take four writing courses, in order from level 1 to level 4. Class
attendance, class participation, homework performance, regular tests, a mid-term test,
and an end-of-semester test together determine the final evaluation of writing
competence for each level. The student will pass if their final score is equal to or higher
than the cut score They will pass that level and move on to the next, but if their score falls
below the cut score, they will have to repeat the current level course. Learners must attend
all classes in order to receive a good grade. They must also participate actively and
enthusiastically in the writing tasks and activities that their professors design and lead. Furthermore,
students must complete homework and self-study assignments before coming to class. In
other words, they are graded on their progress and accomplishments both
during and at the end of the course. Finally, before graduating, students must take an
English proficiency test; scoring at or above its cut score is a requirement for
graduation clearance.
At my institution, I interviewed one of my colleagues who teaches English writing skills
to non-majored students. I focused my interview questions on the challenges of assessing
English writing ability at this institution, from which I can offer some recommendations
for enhancing the assessment.
The first obstacle she (the teacher I interviewed) has encountered in assessing English
writing ability at the school is the difficulty of rating test takers. The halo effect
influences her from time to time. In other words, she is occasionally impressed by a test
taker's neat handwriting or appealing layout and structure. As a result, she may score
and grade the examinee's writing ability on the basis of this favorable first impression
rather than on the actual quality of the writing. She also reports that when she is
dealing with personal issues or affairs, it is difficult for her to focus on scoring and
evaluating the candidates' writing ability. For example, when she is in a bad mood, or
when she is having trouble with her family or at work, she cannot concentrate or
maintain a consistently high standard when scoring and assessing test takers' writing.
This can distort evaluations. Her issues are similar to those of many other
raters. For the first issue, one possible solution is for examiners to remind themselves
that they must always adhere to the assessment standards and rubrics. A rater's job entails
not only evaluating English writing ability, but also guaranteeing the accuracy, validity,
and consistency of the results.
To address the latter issue, the rater must strike a balance between work and rest in
order to maintain a positive attitude and remain alert, fair, and focused throughout the
writing skill evaluation process, particularly during scoring sessions. Like a match
referee or a game-show judge, raters must remain conscious of the importance of their
role.