Critical Language and Literary Studies

Article Type: Research Article

Authors

Shahid Beheshti University

Abstract

Reading comprehension is one of the most important English language skills, particularly at the academic level. This skill has been examined both on its own and alongside other skills, and from various angles, in the domains of instruction as well as assessment. One of the components investigated in educational measurement is the constituent sub-skills of reading. The present study examines the type, prevalence, and difficulty of these sub-skills, as well as item difficulty, using the G-DINA cognitive diagnostic model. Cognitive diagnostic assessment is a new theoretical framework in psychometrics which, unlike traditional measurement models, provides information about each examinee and their mastery of a domain's sub-skills, as well as about the test items themselves, instead of merely ranking examinees by their test performance. To conduct the study, the items of the reading comprehension section of the national university entrance exam at the master's level were selected, and the test's sub-skills were extracted from three sources: previous studies in the field, think-aloud protocols, and expert judgment. These sub-skills, together with the examinees' scored responses, were analyzed with the GDINA package (Ma & de la Torre, 2017) in the R programming environment. Four sub-skills were extracted from the test; their difficulty and prevalence, and subsequently the difficulty of selected items, were reported within the cognitive diagnostic assessment framework. Finally, the probable reasons for the obtained results were analyzed in comparison with other studies in the field.

Keywords: cognitive diagnostic model; sub-skill; G-DINA model; reading comprehension

Article Title [English]

An Investigation of the Prevalence and Difficulty of Reading Comprehension Sub-skills Using the G-DINA Model

Authors [English]

  • Zahra Javidanmehr
  • Mohammad Reza Anani Sarab

Shahid Beheshti University

Abstract [English]

Reading comprehension is one of the most important English language skills, particularly in academic settings. It has been investigated time and again from different perspectives, of which educational measurement is the focus of the present research. This study aims to define the sub-skills underlying reading comprehension, to examine their prevalence and difficulty, and to estimate the difficulty of the items of a large-scale exam. The G-DINA model, a cognitive diagnostic model, was selected as the statistical method of data analysis. To this end, the reading subtest of the National University Entrance Exam at the master's level was selected. The underlying sub-skills of the test were extracted from three main sources: the relevant literature, students' think-aloud protocols, and an expert panel's judgments. The extracted sub-skills, along with the students' scored responses, served as input for the GDINA package (Ma & de la Torre, 2017) in the R programming environment. Four sub-skills were defined for the test, and the outputs related to attribute/sub-skill prevalence, sub-skill difficulty, and item difficulty were reported within the CDM framework. Finally, the probable reasons for the obtained outputs were discussed in the context of reading comprehension.
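The analysis pipeline described above can be illustrated with a brief sketch in R. This is not the authors' actual script: the file names and dimensions below are hypothetical placeholders, and only documented entry points of the GDINA package (Ma & de la Torre, 2017) are used.

    # Minimal G-DINA sketch, assuming two hypothetical input files:
    # a scored response matrix (one row per examinee, one column per item)
    # and a Q-matrix (one row per item, one column per sub-skill).
    library(GDINA)

    responses <- read.csv("reading_responses.csv")  # N x J matrix of 0/1 item scores
    Q         <- read.csv("reading_Q_matrix.csv")   # J x 4 matrix linking items to four sub-skills

    # Fit the saturated G-DINA model (de la Torre, 2011).
    fit <- GDINA(dat = responses, Q = Q, model = "GDINA")

    # summary() reports model fit and attribute (sub-skill) prevalence;
    # coef() returns each item's success probabilities per latent group,
    # from which item difficulty in the CDM sense can be read;
    # personparm() gives each examinee's estimated mastery profile.
    summary(fit)
    coef(fit)
    head(personparm(fit, what = "EAP"))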

Keywords [English]

  • Cognitive Diagnostic Model
  • G-DINA model
  • Reading comprehension
  • Sub-skills
References

Alderson, J. C. (2005). Assessing reading. Ernst Klett Sprachen.
Alderson, J. C., & Lukmani, Y. (1989). Cognition and reading: Cognitive levels as embodied in test questions. Reading in a Foreign Language, 5(2), 253-270.
Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes in reading comprehension. Handbook of Reading Research, 1, 255-291.
Azevedo, R., & Cromley, J. G. (2004). Does training on self-regulated learning facilitate students' learning with hypermedia? Journal of Educational Psychology, 96(3), 523.
Butcher, K. R., & Kintsch, W. (2003). Text comprehension and discourse processing. Handbook of Psychology.
Choi, H.-J., Rupp, A. A., & Pan, M. (2012). Standardized diagnostic assessment design and analysis: Key ideas from modern measurement theory. 61-85.
Davey, B. (1988). Factors affecting the difficulty of reading comprehension items for successful and unsuccessful readers. The Journal of Experimental Education, 56(2), 67-76.
de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76(2), 179-199.
DiBello, L. V., Roussos, L. A., & Stout, W. (2006). Review of cognitively diagnostic assessment and a summary of psychometric models. Handbook of Statistics, 26, 979-1030.
Embretson, S. E. (1998). A cognitive design system approach to generating valid tests: Application to abstract reasoning. Psychological Methods, 3(3), 380.
Ericsson, K. A., & Simon, H. A. (1998). How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking. Mind, Culture, and Activity, 5(3), 178-186.
Gao, L. (2006). Toward a cognitive processing model of MELAB reading test item performance. Spaan Fellow Working Papers in Second or Foreign Language Assessment.
Garcia, P. E., Olea, J., & de la Torre, J. (2014). Application of cognitive diagnosis models to competency-based situational judgment tests. Psicothema, 26(3), 372-377.
Gierl, M. J., Leighton, J. P., & Hunka, S. M. (2000). An NCME instructional module on exploring the logic of Tatsuoka's Rule-Space Model for test development and analysis. Educational Measurement: Issues and Practice, 19(3), 34-44.
Gray, W. S. (1960). The major aspects of reading. In H. Robinson (Ed.), Sequential development of reading abilities (Vol. 90, pp. 8-24). Chicago: Chicago University Press.
Hale, G. A. (1988). Student major field and text content: Interactive effects on reading comprehension in the Test of English as a Foreign Language. Language Testing, 5(1), 49-61.
Hartz, S. M. (2002). A Bayesian framework for the unified model for assessing cognitive abilities: Blending theory with practicality (Doctoral dissertation). University of Illinois at Urbana-Champaign.
Jang, E. E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for Fusion Model application to LanguEdge assessment. Language Testing, 26(1), 31-73.
Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25(3), 258-272.
Kim, Y.-H. (2011). Diagnosing EAP writing ability using the reduced reparameterized unified model. Language Testing, 28(4), 509-541.
Lee, Y. H., & Chen, H. (2011). A review of recent response-time analyses in educational testing. Psychological Test and Assessment Modeling, 53(3), 359-379.
Lee, Y.-W., & Sawaki, Y. (2009). Application of three cognitive diagnosis models to ESL reading and listening assessments. Language Assessment Quarterly, 6(3), 239-263.
Leighton, J., & Gierl, M. (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge University Press.
Li, H., & Suen, H. K. (2013). Constructing and validating a Q-matrix for cognitive diagnostic analyses of a reading test. Educational Assessment, 18(1), 1-25.
Li, H. (2011). A cognitive diagnostic analysis of the MELAB reading test. Spaan Fellow Working Papers in Second or Foreign Language Assessment, 9, 17-46.
Ma, W., & de la Torre, J. (2017). The generalized DINA model framework (Version 1.2.1) [Computer software]. New Brunswick, NJ: Rutgers University.
Messick, S. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18(2), 5-11.
Narvaez, D. (2001). Moral text comprehension: Implications for education and research. Journal of Moral Education, 30(1), 43-54.
Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively responsive reading. Routledge.
Ravand, H. (2015). Application of a cognitive diagnostic model to a high-stakes reading comprehension test. Journal of Psychoeducational Assessment.
Rost, D. (1993). Assessing different components of reading comprehension: Fact or fiction? Language Testing, 10(1), 79-92.
Rupp, A. A., & Templin, J. (2008). The effects of Q-matrix misspecification on parameter estimates and classification accuracy in the DINA model. Educational and Psychological Measurement, 68(1), 78-96.
Rupp, A. A., Templin, J., & Henson, R. A. (2010). Diagnostic measurement: Theory, methods, and applications. New York: Guilford.
Sawaki, Y., Kim, H.-J., & Gentile, C. (2009). Q-matrix construction: Defining the link between constructs and test items in large-scale reading and listening comprehension assessments. Language Assessment Quarterly, 6(3), 190-209.
Shohamy, E. (1992). Beyond proficiency testing: A diagnostic feedback testing model for assessing foreign language learning. The Modern Language Journal, 76(4), 513-521.
Singer, M., Graesser, A. C., & Trabasso, T. (1994). Minimal or global inference during reading. Journal of Memory and Language, 33(4), 421-441.
Tatsuoka, K. K. (1983). Rule space: An approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20(4), 345-354.
von Davier, M. (2005). A general diagnostic model applied to language testing data (ETS Research Report No. RR-05-16). Princeton, NJ: ETS.
Weir, C. J. (1993). Understanding and developing language tests. Prentice Hall.