Full record view

This is a static export from the catalog as of 11 May 2024.

Bibliographic citation

Bennett, Randy E. Advancing Human Assessment. Cham: Springer International Publishing AG, 2017. 1 online resource (717 pages). ISBN 9783319586892.
EB
ONLINE
Cham : Springer International Publishing AG, 2017
1 online resource (717 pages)
External link    Full text PDF
   * Remote access instructions


ISBN 9783319586892 (electronic bk.)
ISBN 9783319586878
Methodology of Educational Measurement and Assessment Ser.
Print version: Bennett, Randy E. Advancing Human Assessment. Cham : Springer International Publishing AG, c2017. ISBN 9783319586878
Intro -- Foreword -- Preface -- References -- Contents -- About the Editors -- Chapter 1: What Does It Mean to Be a Nonprofit Educational Measurement Organization in the Twenty-First Century? -- 1.1 What Is an Educational Nonprofit? -- 1.2 Where Did ETS Come From? -- 1.3 What Does the Past Imply for the Future? -- 1.4 Summary -- References -- Part I: ETS Contributions to Developing Analytic Tools for Educational Measurement -- Chapter 2: A Review of Developments and Applications in Item Analysis -- 2.1 Item Analysis Indices -- 2.1.1 Item Difficulty Indices -- 2.1.2 Item Discrimination Indices -- 2.2 Item and Test Score Relationships -- 2.2.1 Relating Item Indices to Test Score Characteristics -- 2.2.2 Conditional Average Item Scores -- 2.3 Visual Displays of Item Analysis Results -- 2.4 Roles of Item Analysis in Psychometric Contexts --
2.4.1 Differential Item Functioning, Item Response Theory, and Conditions of Administration -- 2.4.2 Subgroup Comparisons in Differential Item Functioning -- 2.4.3 Comparisons and Uses of Item Analysis and Item Response Theory -- 2.4.3.1 Similarities of Item Response Theory and Item Analysis -- 2.4.3.2 Comparisons and Contrasts in Assumptions of Invariance -- 2.4.3.3 Uses of Item Analysis Fit Evaluations of Item Response Theory Models -- 2.4.4 Item Context and Order Effects -- 2.4.5 Analyses of Alternate Item Types and Scores -- References -- Chapter 3: Psychometric Contributions: Focus on Test Scores -- 3.1 Test Scores as Measurements -- 3.1.1 Foundational Developments for the Use of Test Scores as Measurements, Pre-ETS -- 3.1.2 Overview of ETS Contributions -- 3.1.3 ETS Contributions About -- 3.1.4 Intervals for True Score Inference --
3.1.5 Studying Test Score Measurement Properties With Respect to Multiple Test Forms and Measures -- 3.1.5.1 Alternative Classical Test Theory Models.
3.1.5.2 Reliability Estimation -- 3.1.5.3 Factor Analysis -- 3.1.6 Applications to Psychometric Test Assembly and Interpretation -- 3.2 Test Scores as Predictors in Correlational and Regression Relationships -- 3.2.1 Foundational Developments for the Use of Test Scores as Predictors, Pre-ETS -- 3.2.2 ETS Contributions to the Methodology of Correlations and Regressions and Their Application to the Study of Test Scores as Predictors -- 3.2.2.1 Relationships of Tests in a Population’s Subsamples With Partially Missing Data -- 3.2.2.2 Using Test Scores to Adjust Groups for Preexisting Differences -- 3.2.2.3 Detecting Group Differences in Test and Criterion Regressions -- 3.2.2.4 Using Test Correlations and Regressions as Bases for Test Construction --
3.3 Integrating Developments About Test Scores as Measurements and Test Scores as Predictors -- 3.4 Discussion -- References -- Chapter 4: Contributions to Score Linking Theory and Practice -- 4.1 Why Score Linking Is Important -- 4.2 Conceptual Frameworks for Score Linking -- 4.2.1 Score Linking Frameworks -- 4.2.2 Equating Frameworks -- 4.3 Data Collection Designs and Data Preparation -- 4.3.1 Data Collection -- 4.3.2 Data Preparation Activities -- 4.3.2.1 Sample Selection -- 4.3.2.2 Weighted Samples -- 4.3.2.3 Smoothing -- 4.3.2.4 Small Samples and Smoothing -- 4.4 Score Equating and Score Linking Procedures -- 4.4.1 Early Equating Procedures -- 4.4.2 True-Score Linking -- 4.4.3 Kernel Equating and Linking With Continuous Exponential Families -- 4.4.4 Preequating -- 4.4.5 Small-Sample Procedures -- 4.5 Evaluating Equatings -- 4.5.1 Sampling Stability of Linking Functions --
4.5.1.1 The Standard Error of Equating -- 4.5.1.2 The Standard Error of Equating Difference Between Two Linking Functions -- 4.5.2 Measures of the Subpopulation Sensitivity of Score Linking Functions.
4.5.3 Consistency of Scale Score Meaning -- 4.6 Comparative Studies -- 4.6.1 Different Data Collection Designs and Different Methods -- 4.6.2 The Role of the Anchor -- 4.6.3 Matched-Sample Equating -- 4.6.4 Item Response Theory True-Score Linking -- 4.6.5 Item Response Theory Preequating Research -- 4.6.6 Equating Tests With Constructed-Response Items -- 4.6.7 Subscores -- 4.6.8 Multidimensionality and Equating -- 4.6.9 A Caveat on Comparative Studies -- 4.7 The Ebb and Flow of Equating Research at ETS -- 4.7.1 Prior to 1970 -- 4.7.2 The Year 1970 to the Mid-1980s -- 4.7.3 The Mid-1980s to 2000 -- 4.7.4 The Years 2002-2015 -- 4.8 Books and Chapters -- 4.9 Concluding Comment -- References -- Chapter 5: Item Response Theory -- 5.1 Some Early Work Leading up to IRT (1940s and 1950s) -- 5.2 More Complete Development of IRT (1960s and 1970s) --
5.3 Broadening the Research and Application of IRT (the 1980s) -- 5.3.1 Further Developments and Evaluation of IRT Models -- 5.3.2 IRT Software Development and Evaluation -- 5.3.3 Explanation, Evaluation, and Application of IRT Models -- 5.4 Advanced Item Response Modeling: The 1990s -- 5.4.1 IRT Software Development and Evaluation -- 5.4.2 Explanation, Evaluation, and Application of IRT Models -- 5.5 IRT Contributions in the Twenty-First Century -- 5.5.1 Advances in the Development of Explanatory and Multidimensional IRT Models -- 5.6 IRT Software Development and Evaluation -- 5.6.1 Explanation, Evaluation, and Application of IRT Models -- 5.6.2 The Signs of (IRT) Things to Come -- 5.7 Conclusion -- References -- Chapter 6: Research on Statistics -- 6.1 Linear Models -- 6.1.1 Computation -- 6.1.2 Inference -- 6.1.3 Prediction -- 6.1.4 Latent Regression --
6.2 Bayesian Methods -- 6.2.1 Bayes for Classical Models -- 6.2.2 Later Bayes -- 6.2.3 Empirical Bayes -- 6.3 Causal Inference -- 6.4 Missing Data.
6.5 Complex Samples -- 6.6 Data Displays -- 6.7 Conclusion -- References -- Chapter 7: Contributions to the Quantitative Assessment of Item, Test, and Score Fairness -- 7.1 Fair Prediction of a Criterion -- 7.2 Differential Item Functioning (DIF) -- 7.2.1 Differential Item Functioning (DIF) Methods -- 7.2.1.1 Early Developments: The Years Before Differential Item Functioning (DIF) Was Defined at ETS -- 7.2.1.2 Mantel-Haenszel (MH): Original Implementation at ETS -- 7.2.1.3 Subsequent Developments With the Mantel-Haenszel (MH) Approach -- 7.2.1.4 Standardization (STAND) -- Standardization’s (STAND’s) Definition of Differential Item Functioning (DIF) -- Standardization’s (STAND’s) Primary Differential Item Functioning (DIF) Index -- Extensions to Standardization (STAND) -- 7.2.1.5 Item Response Theory (IRT) -- 7.2.1.6 SIBTEST -- 7.2.2 Matching Variable Issues -- 7.2.3 Study Group Definition -- 7.2.4 Sample Size and Power Issues -- 7.3 Fair Linking of Test Scores -- 7.4 Limitations of Quantitative Fairness Assessment Procedures -- References -- Part II: ETS Contributions to Education Policy and Evaluation -- Chapter 8: Large-Scale Group-Score Assessment -- 8.1 Organization of This Chapter -- 8.2 Overview of Technological Contributions -- 8.2.1 Early Group Assessments -- 8.2.2 NAEP’s Conception -- 8.2.3 Educational Opportunities Survey (EOS) -- 8.2.4 NAEP’s Early Assessments -- 8.2.5 Longitudinal Studies -- 8.2.6 Scholastic Aptitude Test (SAT) Score Decline -- 8.2.7 Calls for Change -- 8.2.7.1 The Wall Charts -- 8.2.8 NAEP’s New Design -- 8.2.9 NAEP’s Technical Dissemination -- 8.2.10 National Assessment Governing Board -- 8.2.11 NAEP’s International Effects -- 8.2.12 Other ETS Technical Contributions -- 8.3 ETS and Large-Scale Assessment -- 8.3.1 Early Group Assessments -- 8.3.1.1 Project Talent -- 8.3.1.2 First International Mathematics Study (FIMS).
8.3.2 NAEP’s Conception -- 8.3.3 Educational Opportunities Survey -- 8.3.4 NAEP’s Early Assessments -- 8.3.5 Longitudinal Studies -- 8.3.6 SAT Score Decline -- 8.3.6.1 Improvisation of Linking Methods -- 8.3.6.2 Partitioning Analysis -- 8.3.7 Call for Change -- 8.3.8 NAEP’s New Design -- 8.3.9 NAEP’s Technical Dissemination -- 8.3.9.1 Documentation of NAEP Procedures and Results -- 8.3.9.2 NAEP’s Secondary-Use Data and Web Tools -- 8.3.10 National Assessment Governing Board -- 8.3.10.1 Comparability of State and National Estimates -- 8.3.10.2 Full Population Estimation -- 8.3.11 Mapping State Standards Onto NAEP -- 8.3.11.1 Testing Model Fit -- 8.3.11.2 Aspirational Performance Standards -- 8.3.12 Other ETS Contributions -- 8.3.12.1 Rater Reliability in NAEP -- 8.3.12.2 Computer-Based Assessment in NAEP -- 8.3.12.3 International Effects -- 8.3.12.4 ETS Contributions to International Assessments -- 8.3.13 NAEP ETS Contributions -- 8.3.13.1 The FORTRAN IV Statistical System (F4STAT) -- 8.3.13.2 Fitting Robust Regressions Using Power Series -- 8.3.13.3 Computational Error in Regression Analysis -- 8.3.13.4 Interpreting Least Squares -- 8.3.14 Impact on Policy-Publications Based on Large-Scale Assessment Findings -- Appendix: NAEP Estimation Procedures -- The Early NAEP Estimation Process -- Scaling -- Conditioning -- Variance Estimation -- Sampling Error -- Measurement Error -- Alternative Psychometric Approaches -- Possible Future Innovations -- Random Effects Model -- Adaptive Numerical Quadrature -- Using Hierarchical Models -- References -- Chapter 9: Large-Scale Assessments of Adult Literacy -- 9.1 Expanding the Construct of Literacy -- 9.2 Developing a Model for Building Construct-Based Assessments -- 9.3 Expanding and Implementing Large-Scale Assessment Methodology.
9.3.1 Models Allowing the Derivation of Comparable Measures and Comparisons Across Literacy Assessments.
001895131
(Au-PeEL)EBL6422704
(MiAaPQ)EBC6422704
(OCoLC)1159387640
