Objectivity of knowledge assessment — necessary condition of improving quality of teaching




Section: Pedagogy

Published in «Молодой учёный» No. 25 (159), June 2017

Publication date: 28.06.2017

Article viewed: 135 times

Bibliographic description:

Мавлонова, М. Д. Objectivity of knowledge assessment — necessary condition of improving quality of teaching / М. Д. Мавлонова // Молодой ученый. — 2017. — No. 25 (159). — Pp. 295–298. — URL: https://moluch.ru/archive/159/44869/ (accessed: 17.12.2024).



The problem of monitoring and evaluating students' knowledge is an important part of the learning process. At present, when globalization has reached the sphere of education, many universities have begun to revise their internal policies for monitoring educational activities and to propose new methods of assessing students' knowledge. The main objectives of revising the knowledge assessment system are, first and foremost, improving the quality of education and the level of specialist training, since the assessment of students' knowledge is a feedback mechanism that allows teachers to see the results of their work objectively and to correct existing problems. At the same time, the assessment of students' knowledge is itself a kind of teaching tool [1].

Despite the interest of researchers, teachers and students in this topic, the system of assessing knowledge is still very far from perfect [2, 3].

The problem of objective assessment of students' knowledge becomes even more urgent because, when knowledge is not adequately assessed, students sometimes exaggerate their level of preparedness in various subjects. This phenomenon is explained by the theory of social comparison, as well as the theory of causal attribution, according to which such behavior is natural but fraught with serious complications in later life, both for the student and for others [4].

Moreover, an inadequate and biased system of assessing students' knowledge undermines the reputation of the university and the trust of employers, students and teachers in its diplomas. A competently organized system of knowledge assessment is the foundation for strengthening the reputation of the university and, as a consequence, the key to the success of its graduates in their professional activities.

What is the reputation of a university based on? In my opinion, there are two main factors: an objective assessment of the student's knowledge and the correspondence of the content of the disciplines studied to the requirements of world standards of education. And this is not accidental.

An employer who hires, say, a Harvard graduate is sure, firstly, that all the disciplines shown in the diploma were taught at the appropriate level and, secondly, that the grades in the graduate's diploma really reflect his or her knowledge. If the diploma shows that the student passed a discipline with «excellent», he really does know it excellently, without exaggeration. Likewise, a grade of «satisfactory» means exactly that, and not that the examiner «felt sorry for the student and therefore did not give a failing mark». In this regard, it is extremely important that students' knowledge be assessed objectively according to certain established criteria. The criteria and process for assessing knowledge must be transparent and clear. Only in this way will it be possible to increase employers' confidence in graduates.

To sum up, an objective evaluation is one that really reflects the level of knowledge and preparedness of a student in a given field. The factors that, on the contrary, prevent us from objectively assessing students' knowledge include, to my mind, the following:

1) Vague, overly long exam questions that force the examiner to take a subjective approach to evaluation.

2) The examiner's lack of assessment criteria prepared before the exam, that is, an unclear understanding of how points will be distributed within each question. For example, if a certain exam question carries a maximum of 10 points, the examiner should know exactly how these 10 points will be distributed within the question.

3) The examiner's tendency to compare the work of different students and to grade the best and worst work relative to the rest of the group.

The problem is obvious: the best work in the group does not necessarily deserve an «excellent». The examiner must evaluate the work according to the pre-established criterion and not shift the mark in one direction or another simply because «the others wrote even worse». Such an approach to evaluation lowers the requirements for students' knowledge and ultimately undermines the reputation of the university, since the assessments are biased.

4) Questions built on the memorization of minor details that do not affect the general understanding of the subject in any way. Such questions arise very often in tests, which is quite logical, since this kind of knowledge assessment does not involve any detailed answers from the student. This can lead to a situation where students simply learn a set of facts without bothering to understand the whole picture. With such exam questions, there is always a possibility that the student who received the highest score does not have a clear, holistic view of the discipline.

5) The prejudice of the examiner. Very often the student's appearance, manners and so on prevent the examiner from objectively evaluating the student's knowledge. Many examiners tend to grade more favorably the student who never missed classes, sat at the front desk, wrote everything down thoroughly and asked many additional questions. Many may not even be aware of this, or may excuse such a student: «he is just inattentive or tired today». Conversely, toward a student who dressed casually, did not take notes during lectures and missed classes, a teacher may harbor dislike or prejudice in advance. However, it is obvious that a student's appearance is not a criterion of his knowledge.

What needs to be done to make the assessment of students' knowledge more objective? One solution many have proposed is the use of tests (multiple-choice questions). Indeed, assessing students' knowledge from their answers to such tests does not require any subjective judgment from the examiner.

Therefore, the first, second and third factors that hinder the objective evaluation of students' knowledge are safely eliminated. However, I would like to note that test questions do not give the examiner an opportunity to see how well the student grasped the essence of the discipline and how clear an idea he has of the «whole picture».

Over time, individual details and facts may be forgotten, but the essence of what was studied should remain in memory.

Oral questioning and examination, on the contrary, can lead to a biased evaluation, since the student's appearance, diction and so on can influence the examiner. Moreover, in an oral exam the student can deftly «bypass» aspects that he does not know or remembers badly.

The examiner will also find it rather difficult to apply a pre-prepared evaluation criterion, since the answer cannot be listened to again. In addition, if disputes or disagreements about the mark arise between the student and the examiner in an oral examination, it will be difficult for either side to defend their point of view.

The type of questioning and examination that allows a student's work to be assessed as objectively as possible is the written one. When the student's name on the script is anonymized, the problem of prejudice is completely solved. However, simply using a written exam (that is, open questions) is not a panacea. To increase the objectivity of assessments, we propose the following scheme for evaluating knowledge.

First, I would like to specify the nature of the exam questions. Here it is appropriate to consider the experience of the UK, where in most universities examinations take place in written form with open questions. At the same time, the questions are never purely descriptive: it is not enough to remember a theory, it is important to be able to apply it to solve a problem or to draw conclusions based on it.

The student must demonstrate the ability to think critically. Also, when evaluating, great importance is attached to ensuring that students avoid unfounded statements in their answers. It is important that students back up their responses with relevant examples, statistics and references to recent research in the area. If the questions are constructed in this way, it is much more difficult for students to prepare cheat sheets in advance, and mere «memorization» without understanding the underlying meaning will not give good results either. This construction of exam questions not only guarantees a more objective assessment but also stimulates students to prepare better for exams and to try to really understand the discipline rather than just learn it mechanically. Thus, this approach positively influences the learning process, which, as many foreign authors note, is one of the main tasks of assessing students' knowledge [5].

Secondly, the examiner must prepare a clear criterion for assessing knowledge in advance.

This practice is also used in most foreign universities. In addition to the distribution of points across questions, the examiner must determine, before the examination, how the score will be distributed within each question. For example, suppose the maximum score for a question is 15. Before the examination, the examiner prepares a scale: say, 5 points for the correct definition of a phenomenon or theory, another 5 points for the proof of this theory, 3 points for generalizations and conclusions, and 2 points for examples and statistical facts. With this criterion in hand, it will be much easier for the examiner to remain objective.
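The 15-point scale described above can be sketched as a small data structure; this is only an illustration, and the component names are mine, not the article's:

```python
# Hypothetical sketch of the per-question marking scale: a 15-point question
# split into named components with a maximum number of points each.
RUBRIC = {
    "definition": 5,   # correct definition of the phenomenon or theory
    "proof": 5,        # proof of the theory
    "conclusions": 3,  # generalizations and conclusions
    "examples": 2,     # examples and statistical facts
}

def score_answer(awarded: dict) -> int:
    """Sum the awarded points, capping each component at its rubric maximum."""
    return sum(min(awarded.get(part, 0), cap) for part, cap in RUBRIC.items())

# A student with a full definition, a slightly weak proof, good conclusions
# and one example:
print(score_answer({"definition": 5, "proof": 4, "conclusions": 3, "examples": 1}))  # 13
```

The point of fixing such a table before the exam is exactly what the text argues: the examiner's discretion is limited to judging each component, not inventing the weighting after reading the answers.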

As already noted, the reputation of the university depends on the objectivity of its assessments. As is well known, the more assessors evaluate a work independently, the more objective the final mark will be. In this regard, the experience of the UK, where each examination paper is checked at least twice, is important in our view. Prior to the examination, the chief examiner (usually the course teacher) prepares the exam itself and a detailed rating criterion (scale) similar to the one described above. After the exam, all the scripts are anonymized; the examiner checks them and assigns a score according to the previously prepared scale. No marks are made on the script itself; scores are entered on a separate form. The checked scripts are then given to an external examiner for re-evaluation. The external examiner is usually a teacher from another institution; external examiners change every year, and the teacher usually does not know who the external examiner in his discipline will be. The external examiner assigns his scores according to the criterion prepared by the chief examiner, without knowing the first examiner's scores, and likewise records them on a separate form. The two sets of scores are then compared. If there is a significant discrepancy, a commission meets to establish why the discrepancy occurred. After the commission's meeting, the final mark is issued and reported to the student.
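The double-marking workflow can be illustrated with a short sketch. The discrepancy threshold and the averaging rule below are assumptions for illustration only: the article says that a commission meets when the two scores diverge significantly, but does not specify the threshold or how agreeing scores are combined.

```python
# Illustrative sketch of double-blind marking: two independent scores per
# script; a large disagreement sends the script to a commission.
THRESHOLD = 3  # assumed number of points of disagreement that triggers review

def reconcile(internal: int, external: int):
    """Return (final_mark, needs_commission).

    If the scores are close, take their (rounded) average; otherwise the
    final mark is left undecided (None) pending the commission.
    """
    if abs(internal - external) > THRESHOLD:
        return None, True
    return round((internal + external) / 2), False

print(reconcile(12, 13))  # close scores: averaged, no review
print(reconcile(14, 8))   # large gap: sent to the commission
```

Whatever the exact rule, the essential property is the one the text emphasizes: neither examiner sees the other's scores before the comparison.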

We believe that this experience is extremely interesting and can be successfully used in our country. At the same time, I would like to note that preparing a detailed evaluation criterion for each question is feasible only if no more than three variants are used in the exam. With our traditional «ticket system», this approach is almost impossible to apply, since the examiner would need to prepare rating criteria for at least 90 questions (based on 3 questions in each of 30 tickets). The preparation of such a scale becomes a very labor-intensive process that requires a lot of time. In addition, 90 questions are never of equal difficulty, so a student's mark often depends on «luck». In this regard, we consider it expedient to prepare only 2–3 exam variants. In many highly ranked foreign universities, students are invited, for example, to answer any 4 questions out of 8.

Sometimes the questions are divided into mandatory and optional ones. For example, a compulsory question carries 50 % of the mark, and the remaining 50 % is divided between two questions of the student's choice; thus, the student must answer one compulsory question and any two optional ones (say, 2 out of 5). Of course, the questions themselves are not disclosed to the students in advance. With such an approach to exams, it becomes possible, without excessive effort, to prepare a detailed evaluation criterion (within each question) and to invite an external examiner to re-check the scripts. Such a system, firstly, will increase the objectivity of assessments and employers' confidence in them and, secondly, will avoid the situation where the need to prepare a large number of questions (more than 90) leads to questions that carry the same number of points but are not of equal difficulty. As noted above, excessive detailing does not always give an objective picture.
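The 50/50 weighting described above reduces to simple arithmetic. The function name and the percent-based inputs below are my own illustrative choices, assuming each of the two chosen questions contributes equally to the optional half:

```python
# Sketch of the weighting from the text: one compulsory question worth 50%
# of the mark, two chosen questions sharing the remaining 50% (25% each).
def exam_total(mandatory_pct: float, chosen_pcts: list) -> float:
    """Scores are given in percent (0-100) per question; returns the overall percent."""
    assert len(chosen_pcts) == 2, "the student answers exactly two optional questions"
    return 0.5 * mandatory_pct + 0.25 * sum(chosen_pcts)

# 80% on the compulsory question, 60% and 70% on the two chosen ones:
print(exam_total(80, [60, 70]))  # 0.5*80 + 0.25*(60+70) = 72.5
```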

Conclusions:

  1. To improve the objectivity of assessing students' knowledge, exam questions must be prepared correctly and carefully; they must be problem-oriented, so that in answering them students can show a deep, creative understanding of the subject rather than mere «memorization».
  2. Re-evaluation of scripts by an external examiner makes it possible to achieve a more objective assessment of students' knowledge and increases confidence in the diplomas of universities that practice such a policy.
  3. Examinations in the form of tickets do not allow a detailed evaluation criterion to be prepared for each question, and they can lead to unnecessarily detailed questions and the risk of using questions of unequal difficulty in different tickets.

References:

  1. Талызина Н. Ф. Управление процессом усвоения знаний. М., 1984.
  2. Ромашкина Г. Ф. Оценка качества образования: опыт эмпирического исследования. Университетское управление: практика и анализ, 2005, № 5, c. 83–88.
  3. Слободин А. В., Часовских В. П. Совершенствование оценки знаний методом тестирования. Телематика 2002. Труды Всероссийской научно-методической конференции. СПб., 2002.
  4. Ишгали Ишмухаметов, Марина Брук. Проблемы оценки знаний студентов в процессе освоения предметов гуманитарного цикла. Starpaugstskolu zinātniski praktiskās un mācību metodiskās konferences raksti. Институт транспорта и связи, Латвия. 2005.
  5. McMillan, James H. (2000). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research & Evaluation, 7(8). Retrieved December 23, 2011 from http://PAREonline.net/getvn.asp?v=7&n=8.

