Show simple item record

dc.contributor.advisor Frick, Theodore W. en_US
dc.contributor.author Myers, Rodney Dean en_US
dc.date.accessioned 2013-05-16T00:49:43Z
dc.date.available 2013-05-16T00:49:43Z
dc.date.issued 2013-05-15
dc.date.submitted 2012 en_US
dc.identifier.uri http://hdl.handle.net/2022/16210
dc.description Thesis (Ph.D.) - Indiana University, School of Education, 2012 en_US
dc.description.abstract In order for simulations and games to be effective for learning, instructional designers must verify that the underlying computational models being used have an appropriate degree of fidelity to the conceptual models of their real-world counterparts. A simulation/game that provides incorrect feedback is likely to promote misunderstanding and adversely affect learning and transfer. Numerous methods for verifying the accuracy of a computational model exist, but it is generally accepted that no single method is adequate and that multiple methods should be used. The purpose of this study was to propose and test a new method for collecting and analyzing users' interaction data (e.g., choices made, actions taken, results and feedback obtained) to provide quantified evidence that the underlying computational model of a simulation/game represents the conceptual model with sufficient accuracy. In this study, analysis of patterns in time (APT) was used to compare gameplay results from the Diffusion Simulation Game (DSG) with predictions based on diffusion of innovations theory (DOI). A program was written to automatically play the DSG by analyzing the game state during each turn, seeking patterns of game component attributes that matched optimal strategies based on DOI theory. When the use of optimal strategies did not result in the desired number of successful games, here defined as the threshold of confidence for model verification, further investigation revealed flaws in the computational model. These flaws were incrementally corrected and subsequent gameplay results were analyzed until the threshold of confidence was achieved. In addition to analysis of patterns in time for model verification (APTMV), other verification methods used included code walkthrough, execution tracing, desk checking, syntax checking, and statistical analysis. The APTMV method was found to be complementary to these other methods, providing quantified evidence of the computational model's degree of accuracy and pinpointing flaws that could be corrected to improve fidelity. The APTMV approach to verification and improvement of computational models is described and compared with other methods, and improvements to the process are proposed. en_US
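The abstract's verification loop (play the game automatically with theory-based optimal strategies, count successes, and compare the success rate against a pre-set "threshold of confidence") can be sketched as follows. This is a minimal illustration of the idea only: the function names, success probabilities, game count, and threshold below are all hypothetical and are not taken from the actual Diffusion Simulation Game or the dissertation's code.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def play_one_game(model_success_prob):
    """Stand-in for one automated playthrough using optimal strategies.

    In the real APTMV method, each turn's game state would be scanned for
    patterns of game component attributes matching DOI-based optimal
    strategies; here a single probability stands in for the model's
    response to optimal play."""
    return random.random() < model_success_prob

def verify_model(model_success_prob, n_games=1000, threshold=0.90):
    """Run many automated games against the computational model.

    Returns the observed success rate and whether it reaches the
    threshold of confidence for model verification."""
    wins = sum(play_one_game(model_success_prob) for _ in range(n_games))
    rate = wins / n_games
    return rate, rate >= threshold

# A flawed model under-rewards optimal play and misses the threshold...
rate_flawed, ok_flawed = verify_model(model_success_prob=0.70)
# ...after incremental corrections, optimal play succeeds often enough.
rate_fixed, ok_fixed = verify_model(model_success_prob=0.95)
print(ok_flawed, ok_fixed)
```

In the study itself, a failed verification run triggered inspection of the computational model, correction of the flaw found, and another round of automated gameplay, repeating until the threshold was met.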
dc.language.iso en en_US
dc.publisher [Bloomington, Ind.] : Indiana University en_US
dc.rights Attribution-NoDerivs 3.0 Unported (CC BY-ND 3.0)
dc.rights.uri http://creativecommons.org/licenses/by-nd/3.0/
dc.subject design en_US
dc.subject games en_US
dc.subject learning en_US
dc.subject models en_US
dc.subject simulations en_US
dc.subject verification en_US
dc.subject.classification Instructional design en_US
dc.subject.classification Educational technology en_US
dc.title Analyzing Interaction Patterns to Verify a Simulation/Game Model en_US
dc.type Doctoral Dissertation en_US

