Analysis on Teacher Made Tests in JRMSU: Basis for Departmental Tests

https://doi.org/10.55559/sjahss.v4i11.575

Keywords:

Teacher-Made Tests, Assessment Quality, Bloom’s Taxonomy, Test Item Analysis, Departmental Testing

Abstract

This study evaluates the quality of teacher-made tests within the Jose Rizal Memorial State University (JRMSU) system. Guided by Bloom’s taxonomy and Gronlund’s typology of test formats, and informed by psychometric perspectives from Classical Test Theory and Item Response Theory, it documents how locally constructed assessments distribute cognitive demand, adhere to language-in-use standards (grammar and mechanics), and align with institutional learning outcomes. Using documentary analysis of test papers and Tables of Specifications across five JRMSU campuses, the research identifies an overreliance on lower-order thinking items, occasional misalignment between targeted and actual cognitive levels, and sporadic violations of item-writing conventions. The discussion argues for faculty development, peer review of test items, and the institutionalization of departmental testing to strengthen validity and fairness. The study concludes with actionable recommendations for assessment literacy, including calibration routines, item banks with annotated rationales, and alignment audits that link curriculum, instruction, and testing.

References

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.

Aquino, L. B. (2011). Study habits and attitudes of freshmen students: Implications for academic intervention programs. University of Saint Louis Monograph.

Bailey, S. (2002). Performance assessment and application skills in education. Harper Education.

Breyton, C. (2001). Target–objective misalignment in classroom tests: Causes and remedies. Assessment in Education, 8(3), 245–262.

Brown, H. D. (2005). Language assessment: Principles and classroom practices. Pearson.

Chambers, J., & Fleming, D. (2001). Teacher-made tests: A review of practices across grade levels. Journal of Educational Measurement, 38(1), 45–61.

Corder, S. P. (2003). Error analysis and interlanguage. Oxford University Press.

Earl, L. M. (2003). Assessment as learning: Using classroom assessment to maximize student learning. Corwin Press.

Embretson, S. E., & Reise, S. P. (2002). Item response theory for psychologists. Lawrence Erlbaum Associates.

Gareis, C. R., & Grant, L. W. (2008). Teacher-made assessments: How to connect curriculum, instruction, and student learning. Eye on Education.

Gronlund, N. E. (1998). Assessment of student achievement (6th ed.). Allyn & Bacon.

Haladyna, T. M. (2002). Developing and validating multiple-choice test items (3rd ed.). Lawrence Erlbaum Associates.

Herman, J. L., & Dorr-Bremme, D. (2004). Assessing writing and higher-order skills. Review of Educational Research, 74(4), 525–561.

Johnson, R., & Johnson, M. (2002). Teachers’ comfort with objective tests across subject areas. Educational Research Quarterly, 25(4), 15–30.

Kuhs, T., et al. (2001). Traditional assessment practices of teachers: Patterns and perceptions. Teaching and Teacher Education, 17(4), 541–557.

van der Linden, W. J., & Glas, C. A. W. (Eds.). (2001). Computerized adaptive testing: Theory and practice. Springer.

Monahan, T. (2002). Balancing memory and higher-order thinking in test construction. Practical Assessment, Research & Evaluation, 8(12), 1–7.

Nitko, A. J., & Brookhart, S. M. (2014). Educational assessment of students (7th ed.). Pearson.

Oescher, J., & Kirby, P. (2001). Violations of item-writing rules in teacher-made tests. Educational Measurement: Issues and Practice, 20(3), 18–24.

Pagcaliwagan, J. (2016). Quality of teacher-made tests in Philippine secondary schools. Philippine Journal of Education Research, 92(1), 45–58.

Popham, W. J. (2002). Modern educational measurement: Practical guidelines for educational leaders. Allyn & Bacon.

Rabia, M., Mubarak, N., Tallat, H., & Nasir, W. (2017). A study on study habits and academic performance of students. Bulletin of Education and Research, 39(1), 183–197.

Reise, S. P. (2002). The rediscovery of bifactor measurement models. Multivariate Behavioral Research, 37(4), 539–570.

Sharkness, J., & DeAngelo, L. (2011). Measuring student involvement: A comparison of CTT and IRT in the construction of scales. Research in Higher Education, 52(5), 480–507.

Stiggins, R. (2001). Student-involved classroom assessment (3rd ed.). Prentice Hall.

Tejero, E. G. (2004). Teaching reading in the elementary grades. Katha Publishing.

Verma, A. (2016). Academic achievement and study habits. International Journal of Research in Humanities, Arts and Literature, 4(3), 75–88.

Wainer, H., & Thissen, D. (2003). How is reliability related to the quality of test items? Educational Measurement: Issues and Practice, 22(2), 22–27.

Wallis, R. (2005). Assessment practices in science education: Cognitive demand and student reasoning.

Walstad, W. B., & Becker, W. E. (2004). Multiple-choice and constructed-response tests: A review. Journal of Economic Education, 35(2), 131–162.

Williams, J. M. (2001). Writing quality teacher-made tests: A handbook for teachers.

Zewdu, M. (2010). Analysis of teacher-made tests in preparatory schools of Bahir Dar. VDM Verlag.

Published on: 31-01-2026

How to Cite

Dapiton, J. (2026). Analysis on Teacher Made Tests in JRMSU: Basis for Departmental Tests. Sprin Journal of Arts, Humanities and Social Sciences, 4(11), 24–31. https://doi.org/10.55559/sjahss.v4i11.575

Issue

Vol. 4 No. 11 (2026)

Section

Research Article

ISSN: 2583-2387