Deconstructing the Testing Mode Effect: Analyzing the Difference Between Writing and No Writing on the Test


Daniel M. Settlage
Jim R. Wollscheid


The testing mode effect has received increased attention as higher education shifted to remote testing during the COVID-19 pandemic. We argue that the testing mode effect should be broken into four distinct subparts: the ability to physically write on the test, the method of answer recording, the proctoring/testing environment, and the effect testing mode has on instructor question selection. This paper examines an area largely neglected by the literature on the testing mode effect: the ability (or lack thereof) to write on the test. Using a normalization technique to control for student aptitude and instructor bias, we find that removing students' ability to physically write on the test significantly lowers their performance. This finding holds across multiple question types classified by difficulty level and Bloom's taxonomy, as well as on figure/graph-based questions, and has implications for testing in both face-to-face and online environments.




How to Cite
Settlage, D. M., & Wollscheid, J. (2024). Deconstructing the Testing Mode Effect: Analyzing the Difference Between Writing and No Writing on the Test. Journal of the Scholarship of Teaching and Learning, 24(2).
Author Biographies

Daniel M. Settlage, University of Arkansas-Fort Smith

Professor of Economics, University of Arkansas-Fort Smith College of Business

Jim R. Wollscheid, University of Arkansas-Fort Smith

Professor of Economics, University of Arkansas-Fort Smith College of Business and Industry

