Accuracy and Error
EDU 645 Educational Testing & Measurement
Dr. William Ross
August 9, 2012
Accuracy and Error
In this week’s journal prompt, the authors raise a question about accuracy and error, stating that “all tests and scores are imperfect and are subject to error” (Kubiszyn & Borich, 2010). I believe this statement means that because we are human, there is always a chance for error when constructing a test. After reading it, I reflected on my last final exam. I remember the instructor saying how pushed for time she was to get our test out. During the test I tried to call and email her about some errors, but I was racing the timer and did not want to be penalized further, so I chose an answer at random because the question was incomplete and there were no instructions. At other times I have seen misprints in students’ assessment booklets and workbooks, and even in college-level testing materials; yet we test anyway, causing the scores to be fallible, and we accept whatever grade is given. Kubiszyn and Borich state clearly that “there are some degrees of goodness but no test or score is completely valid or reliable.” By saying that “all tests and scores are imperfect and are subject to error” (Kubiszyn & Borich, 2010), I believe the authors are describing how people and machines make mistakes all the time. When teachers, instructors, mechanical devices, and test-scoring professionals score tests, we should expect scoring errors.
It is important to recognize that there is no perfect test, because all tests are subject to various sources of error that impair both the reliability and the accuracy of their scores. Once we realize that there are no perfect tests, we can begin to look for the degree of error. The textbook states:
“Just as you can expect to make scoring errors, you can expect to make errors in test construction. No test you construct will ever be perfect; in fact, these imperfect tests will include inappropriate, invalid, or otherwise deficient items” (Kubiszyn & Borich, 2010, p. 227).
Item analysis can be used to identify items that are deficient in some way so they can be improved or eliminated, resulting in a better test overall (p. 228). Knowing this, what can we do to ensure that teachers and students obtain the most accurate assessment results possible? Going back to the beginning of our text, we learned that “there can be no one-size-fits-all test or assessment” (p. 6). Teachers and qualified staff should put quality time into constructing tests, but even then we are still human and subject to mistakes. Assessments can help teachers as well as their students: teachers can plan and provide effective instruction in the academic content standards and can tailor instruction directly to individual student needs. Assessments also benefit students by identifying their areas of strength and weakness, so teachers can help them prepare for standardized testing. In my SPED course I learned about pre- and post-assessments that allow educators to assess students’ learning and mastery of content, skills, and strategies. The post-assessment provides data on changes that have occurred since a previous assessment; to complete a post-assessment, the initial pre-assessment must first be administered, scored, and a summary received by the respondent.
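To make the idea of item analysis concrete, here is a minimal sketch of two common item statistics, the difficulty index and the upper-lower discrimination index. The data and function names are hypothetical illustrations, not taken from Kubiszyn and Borich; the statistics themselves are standard in classroom measurement.

```python
# A minimal sketch of basic item analysis (hypothetical data).
# Deficient items flagged here could be improved or eliminated,
# as the text suggests, to yield a better test overall.

def item_difficulty(responses):
    """Proportion of students answering the item correctly (the p-value).
    Items with p near 0 or 1 give little information and may be deficient."""
    return sum(responses) / len(responses)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """Upper-lower discrimination index D: proportion correct in the
    top-scoring group minus proportion correct in the bottom-scoring group.
    A negative D flags an item that stronger students miss -- a likely flaw."""
    n = max(1, int(len(total_scores) * fraction))
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = ranked[:n], ranked[-n:]
    p_upper = sum(item_scores[i] for i in upper) / n
    p_lower = sum(item_scores[i] for i in lower) / n
    return p_upper - p_lower

# Hypothetical class of 10 students: 1 = correct, 0 = incorrect on one item
item = [1, 1, 1, 0, 1, 0, 1, 0, 0, 1]
totals = [9, 8, 8, 3, 7, 2, 9, 4, 3, 8]  # each student's total test score

print(item_difficulty(item))              # 0.6 -- a moderately difficult item
print(discrimination_index(item, totals)) # 1.0 -- item discriminates strongly
```

Here the item is answered correctly by the top scorers and missed by the bottom scorers, so it discriminates well; an item with a near-zero or negative index would be a candidate for revision.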
This summer I was able to observe a teacher who assessed students on mathematical content from the class materials, and I saw how well she helped them learn to apply the problem-solving strategies she had taught them.
The chapter on accuracy defines accurate tests as those that yield sufficient evidence of being valid and reliable for the purposes for which they are used and the individuals with whom they are used (p. 329).
In conclusion, according to the text, reliability is equal to the ratio of the variance of the true score to the variance of the observed score. Calculating the ratio of the estimated variance of the true score to the variance of the observed score is the same as calculating the correlation between two observed scores; therefore, the correlation of two repeated measures of the same test is accepted as an appropriate estimate of the test's reliability (pp. 295-298). The accompanying diagram illustrates accuracy and error, showing what good accuracy with poor precision looks like.
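The reliability relationship summarized above can be written compactly in classical test theory notation. This notation is standard in measurement texts rather than copied from Kubiszyn and Borich:

```latex
% Classical test theory: an observed score X is a true score T plus error E
X = T + E, \qquad \sigma_X^2 = \sigma_T^2 + \sigma_E^2

% Reliability is the ratio of true-score variance to observed-score variance,
% which equals the correlation between two parallel administrations X and X'
\rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2}
```

Because error variance $\sigma_E^2$ can never be driven all the way to zero, $\rho_{XX'}$ is always less than 1, which restates the authors' point that no test or score is completely reliable.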
Kubiszyn, T., & Borich, G. (2010). Educational testing & measurement: Classroom application and practice (9th ed.). Hoboken, NJ: John Wiley & Sons.