Error Sources Influencing Performance Assessment Reliability or Generalizability [electronic resource] : A Meta-Analysis / Ying Hong Jiang and Others.

Bibliographic Details
Online Access: Full Text (via ERIC)
Main Author: Jiang, Ying Hong
Format: Electronic eBook
Language: English
Published: [S.l.] : Distributed by ERIC Clearinghouse, 1997.
Subjects:
Description
Summary: As performance-based assessments have gained wider use, there are increasing concerns about their dependability. This study is a synthesis of existing studies regarding the reliability or generalizability of performance assessments. The meta-analysis involves summarizing, examining, and evaluating research findings. Articles on the dependability of performance assessments, analyzed through traditional means or within a generalizability framework and published after 1980, were selected. The literature search yielded 22 studies meeting the criteria for inclusion. These 22 studies yielded 258 different reliability or generalizability coefficients. Task and occasion facets contributed the greatest proportion of variance to estimates of error in the measurement procedure; both are inherent in the construction of many performance tasks. The judge facet did not contribute a large proportion of error variance. Critics of performance assessment therefore need not worry that the use of professional judgment to score performance assessments will be a major source of measurement error. (Contains 3 tables and 25 references.) (SLD)
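For context on the coefficients the summary describes: in generalizability theory, a relative generalizability (G) coefficient is the ratio of universe-score (person) variance to itself plus relative error variance, where each person-by-facet interaction component is divided by the number of conditions averaged over. The sketch below is purely illustrative, with hypothetical variance components chosen to echo the pattern the study reports (large task and occasion interactions, small judge interaction); it is not the computation or data from the study itself.

```python
# Illustrative sketch (not the study's own analysis): a relative
# generalizability coefficient for a persons x tasks x occasions x
# judges (p x t x o x j) design, using hypothetical variance components.

def g_coefficient(var_p, var_pt, var_po, var_pj, var_resid,
                  n_tasks, n_occasions, n_judges):
    """Relative G coefficient: universe-score variance divided by
    universe-score variance plus relative error variance. Each
    interaction component is divided by the number of conditions
    averaged over in the measurement design."""
    rel_error = (var_pt / n_tasks
                 + var_po / n_occasions
                 + var_pj / n_judges
                 + var_resid / (n_tasks * n_occasions * n_judges))
    return var_p / (var_p + rel_error)

# Hypothetical components: person-by-task and person-by-occasion
# variance large, person-by-judge variance small, mirroring the
# finding that tasks and occasions (not judges) drive error.
g = g_coefficient(var_p=0.40, var_pt=0.30, var_po=0.20,
                  var_pj=0.02, var_resid=0.10,
                  n_tasks=4, n_occasions=2, n_judges=2)
```

With these made-up numbers, averaging over more tasks or occasions raises the coefficient substantially, while adding judges changes it little, which is the practical upshot of the study's variance-component findings.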
Item Description:ERIC Document Number: ED409342.
ERIC Note: Paper presented at the Annual Meeting of the American Educational Research Association (Chicago, IL, March 24-28, 1997).
Physical Description:20 p.