This study identified factors associated with teachers' knowledge and beliefs that affect the scoring of mathematics constructed-response (CR) assessment tasks. Five groups of teachers who differed in teaching experience or in cultural beliefs about teaching and learning mathematics were selected to score 28 students' responses to seven CR mathematics tasks. Four factors were found to have significant effects on rating differences: teaching experience, experience with students at particular grade levels, the nature of students' responses, and beliefs about teaching and learning mathematics. Identifying these factors has implications both for promoting the validity of test scores and for examining teachers' understanding of student learning targets.
Table of Contents
CHAPTER 1: INTRODUCTION
Overview
Purpose of the Study
Research Question
CHAPTER 2: LITERATURE REVIEW
Grading Philosophy
Grading Plan
Absolute Grading Methods
Fixed Percent Scale
Total Point Method
Content-Based Method
CHAPTER 3: RESEARCH METHODOLOGY
Research Design and Data Resource
Data Collection
CHAPTER 4: RESULTS AND DISCUSSION
Results from Four Groups of Chinese Teachers
Results from Two Groups of In-Service Teachers
CHAPTER 5: CONCLUSION
Summary
END NOTES
BIBLIOGRAPHY
APPENDIX
CHAPTER 1: INTRODUCTION
Overview
Philosophies and instructional approaches change as curriculum changes; teachers need to be prepared to adjust their grading plans accordingly. With experience in assigning grades, reporting to students, and observing the impact of grading on learning, many teachers rethink their responses to the philosophical questions enumerated in the "Developing a Grading Philosophy" section. The meanings of the symbols, the characteristics to be judged, the components to include in a grade, and the method used for assigning grades are all issues of value that take on new importance or new meaning as teachers accumulate grading experience and observe the practices of colleagues.
Grading practices also may change as a teacher's instructional approach changes. For example, a teacher who begins experimenting with cooperative learning strategies may come to depend more on group projects and presentations for assessment information. The nature of the grading components being used may need to change, as may any grading practices that foster competition among learners.
In short, a teacher's grading practices are likely to evolve slowly over time as his or her grading philosophy changes, as experience in grading accumulates, and as a base of grading data from several classes becomes available. As the curriculum changes and teachers fine-tune or modify their instructional approaches, the procedures outlined here can be revisited to resolve inconsistencies between philosophy and practice.
Purpose of the Study
Analyzing and scoring students' written responses to constructed-response (CR) assessment tasks is a complex process, and numerous factors can affect the scoring of student responses to such tasks. To improve the objectivity of scores from these assessments, and ultimately to support the validity of test scores, the measurement community has made ongoing efforts to minimize rater effects in scoring such tasks. An extensive literature focuses on training raters to address concerns about rating consistency and objectivity (Fitzpatrick et al., 1998; Mashburn & Henry, 2004; Moon & Hughes, 2002; Schafer et al., 2001). However, little attention has been given to investigating what factors may influence how teachers analyze and score students' responses.