Book Review: Reading and Learning to Read

Content validity is the degree to which an assessment represents the content domain it is designed to measure. When an assessment is judged to have high content validity, the content of the assessment is considered to be congruent with the testing purpose and with prevailing notions of the subject matter tested.

Evaluating content validity is an important part of validating the use of a test for a particular purpose, especially when the test is educational in nature, such as when it is designed to measure a person's knowledge of specific subject matter (e.g. achievement and licensure tests). Although the process of test validation is dynamic and a test is never 'validated' per se, in many cases evidence of content validity is a fundamental component of defending the use of a test for a particular purpose.

Content validity is a notion that lay people can readily identify with and understand. For example, parents and teachers expect items on an elementary mathematics test to be consistent with the elementary mathematics curriculum taught in the schools. Similarly, the general public expects items on a test used to license public accountants to be congruent with the tasks public accountants confront when performing their duties. Evaluating test content vis-a-vis testing purpose is a natural first step in judging the utility and quality of an assessment. Thus, it is not surprising that the term content validity was included in the earliest efforts to develop standards for test development and evaluation (i.e. American Psychological Association [APA], 1952).

Most, but not all, studies of content validity require content experts to review test specifications and items according to specific evaluation criteria. Thus, a content validity study typically involves gathering data on test quality from professionals with expertise in the content domain tested. Content validity studies differ according to the specific tasks presented to the experts and the types of data gathered. One example of a content validity study is to give content experts the test specifications and the test items and ask them to match each item to the content area, educational objective, or cognitive level that it measures. In another type of study, the experts are asked to rate the relevance of each test item to each of the areas, objectives, or levels measured by the test. Extensive discussions of such studies are provided by Crocker, Miller, and Franks (1989), Hambleton (1984), and Sireci (1998a). The data gathered from these studies can be summarized using simple descriptive statistics, such as the proportion of experts who classified an item as it was listed in the test specifications or the mean relevance ratings for an item across all areas tested. A 'content validity index' can be computed for a test by averaging these statistics over all test items. More sophisticated procedures for analysing these data have been proposed by Aiken (1980) and Crocker et al. (1989).
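As a rough illustration of the summary statistics described above, the short Python sketch below averages item-level expert agreement into a simple content validity index. The ratings, content areas, and variable names are hypothetical examples invented for illustration; they are not drawn from the studies cited, and the cited authors' more sophisticated procedures are not reproduced here.

# Hypothetical example: four content experts classify five test items into
# content areas; the test specifications list the intended area for each item.
expert_classifications = [            # rows = items, columns = experts
    ["algebra", "algebra", "algebra", "geometry"],
    ["geometry", "geometry", "geometry", "geometry"],
    ["algebra", "algebra", "number", "algebra"],
    ["number", "number", "number", "number"],
    ["geometry", "algebra", "geometry", "geometry"],
]
intended_areas = ["algebra", "geometry", "algebra", "number", "geometry"]

# Item-level statistic: the proportion of experts whose classification
# matches the area listed in the test specifications.
item_agreement = [
    sum(c == intended for c in ratings) / len(ratings)
    for ratings, intended in zip(expert_classifications, intended_areas)
]

# A simple 'content validity index' for the test: the mean of the
# item-level proportions over all items.
content_validity_index = sum(item_agreement) / len(item_agreement)

print(item_agreement)          # [0.75, 1.0, 0.75, 1.0, 0.75]
print(content_validity_index)  # 0.85

The same structure applies to relevance ratings: replace the match proportions with each item's mean relevance rating and average over items to obtain a test-level index.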

Sireci and Geisinger (1992, 1995) introduced a newer method ...