Theses and Dissertations
Permanent link for this collection: https://hdl.handle.net/2022/3086
Browsing Theses and Dissertations by Title
Now showing 1 - 20 of 682
Item The 2005 Lotus World Music and Arts Festival: Processes of Production and the Construction of Spatial Liminality ([Bloomington, Ind.] : Indiana University, 2010-06-01) Fass, Sunni M.; Stone, Ruth M.
The dissertation explores the role of space in the production and perception of meaning in the cultural performance genre of festival, using a case-study approach centered on the production of the 2005 Lotus World Music and Arts Festival in Bloomington, Indiana. The study expands the notion of "festival" far beyond the four days of its enactment and encompasses the festival's year-long production process, one significant element of which is how producers conceive and manipulate space to mobilize a "global" festival within a local geography. Drawing on data gathered via ethnographic methods such as interview and participant-observation, the dissertation analyzes the ways in which spatial considerations play into production decisions and become essential components of a uniquely "festivalized" and liminal participant experience. This study emphasizes space as an actor and prioritizes the affective role of space vis-à-vis the construction of meaning in festival contexts, and its conclusions examine how festival producers use spatial transformations, inversions, and juxtapositions to create powerful loci of ambiguity and symbolic tension.

Item Accumulating Character: Time and Ethos in Rhetorical Theory ([Bloomington, Ind.] : Indiana University, 2019-06) Bjork, Collin; Anderson, Dana
“Accumulating Character” develops a theoretical framework that highlights the rhetorical and interrelated workings of character and time. In rhetorical scholarship, character—or ethos—is typically construed as either fleeting or fixed. This dissertation makes explicit the links between these two temporal extremes, identifying the ways that ethos accumulates rhetorical force over time. To attend to the temporal dimensions of ethos, I first construct a rhetorical model of accumulated time.
Here, I conjoin Aristotle’s definition of chronos in the Physics with the scholarship of Karen Barad to craft a complex picture of accumulated time as a rhetorically significant material-discursive force that emerges from the deep entanglement of nonhuman and human ecologies. In Chapter 2, I sketch a framework for understanding how a rhetorical sense of accumulated time underscores the temporal dynamics of ethos. To make visible the ways that this cumulative ethos operates, I analyze multiple classical texts concerning the trial of Socrates and demonstrate the cumulative rhetorical effects of Socrates’ ethos on the city of Athens. In Chapter 3, I examine how the accelerated circulation of online media disrupts the temporal regularity that often governs the evolution of cumulative ethos. To account for these temporal complexities, I develop the paired concepts of rhetorical saturation and rhetorical rupture and explain how they function in dialectical tension with each other. I then elucidate how these twin ideas contribute to the irregular accumulation of character by analyzing Kanye West’s ethos as it circulates across a variety of online platforms. In Chapter 4, I investigate how cryptocurrencies test the limits of rhetorical character by disrupting the timeline for its accretion and decentering cumulative ethos from human subjects. By studying Bitcoin through the lens of cumulative ethos, I foreground the centrality of both character and time to commerce as well as the consequences of attempting to replace a complex rhetorical figure like ethos with a mechanical facsimile like the blockchain protocol. In sum, “Accumulating Character” emphasizes the importance of accumulated time to rhetorical activity, and especially to the concept of ethos.
This, in turn, makes visible the complicated ways that character accrues rhetorical influence as it negotiates shifting temporalities.

Item Activity Theory as a lens for considering culture: A descriptive case study of a multinational company developing and supporting training around the world ([Bloomington, Ind.] : Indiana University, 2010-06-01) Marken, James; Schwen, Thomas M.
Activity Theory has often been used in the literature as a way to examine human activity, but the bulk of that work has been done in educational settings. Where it has been used in workplace environments, it has typically been used to enhance theoretical understandings of work and the humans who engage in work. It has not typically been used with an eye to advancing the business causes of the companies it has been used with. In addition, it has not been used internationally with multi-national companies. This is a shame, for with its Elements of Activity and its idea of contradiction, Activity Theory does seem to hold much promise for being able to shed light on cultural issues encountered by companies operating across national boundaries. This research presents a descriptive case study of a company using Activity Theory to shed light on the potential cultural conflicts the company faced as it designed and developed training interventions for use in its affiliates around the globe. The research focused on being practical--on creating tools the company could use, and on detailing the methodology sufficiently that other instructional designers could employ Activity Theory in a similar way in other situations which they felt were relevant.
Although Activity Theory was not completely internalized by the company, with the assistance of a facilitator coaching them in its use the company was able to use the theory to avoid cultural conflicts, enhance understandings about cultural conflicts which did occur, debrief cross-cultural training interventions, identify improvements for future training interventions, and publicly share internally held cultural knowledge and beliefs.

Item Adapting Anime: Transnational Media between Japan and the United States ([Bloomington, Ind.] : Indiana University, 2013-05-15) Ruh, Brian; Klinger, Barbara
This dissertation examines Japanese animation, or anime, as an example of how a contemporary media product crosses national and cultural borders and becomes globalized. Bringing together the theories of Hiroki Azuma and Susan J. Napier, it develops a theory called the "database fantasyscape" as a way of discussing such transnational flows. In short, the "database" refers to how contemporary media products are assembled from a matrix of constituent elements into combinations that are simultaneously unique and familiar, while the "fantasyscape" element expands on Arjun Appadurai's concept of global flows in order to posit a way in which desire travels transnationally. The dissertation discusses how anime came to the United States and the role this had in anime's development in Japan by examining Tetsuwan Atom (Astro Boy), the first half-hour television animation produced in Japan. It examines how anime has been adapted and distributed in overseas markets like the US by analyzing successful media franchises like Robotech and Voltron, as well as unsuccessful ones like Warriors of the Wind. It analyzes the complex and often fraught relationship between anime fans and producers/distributors and discusses the role played by fansubs (subtitled copies created by fans and often illegally distributed).
Bringing in Matt Hills's concepts of cult texts, the dissertation discusses how in certain respects anime can be seen as cult and what this means with regard to transnational reception. Finally, it examines the relationship between anime and physical space, both in a temporally-limited fan-oriented space like an anime convention as well as within the city of Tokyo, with anime-ic perspectives providing ways of perceiving and processing the city.

Item ADAPTIVE OPTICS IMAGING OF THE TEMPORAL RAPHE IN NORMAL AND GLAUCOMATOUS SUBJECTS ([Bloomington, Ind.] : Indiana University, 2015-06) Huang, Gang; Burns, Stephen A.
Adaptive optics scanning laser ophthalmoscopy (AOSLO) allows high-resolution in vivo imaging of the retina. It provides a new way to observe and measure the retinal nerve fiber layer (RNFL) in vivo. In particular, it opens the possibility of imaging the RNFL in the temporal raphe, which can be affected in early glaucoma. The main objective of this thesis is to use an AOSLO to observe and measure the RNFL in the temporal raphe in both normal and glaucomatous subjects. To do this, we first improved AOSLO imaging with the following efforts: 1) A novel adaptive optics (AO) image processing algorithm was developed to improve the contrast of AO images. 2) A clinical planning module was developed to enhance data acquisition efficiency, especially for large-scale RNFL imaging. With the improved AOSLO imaging, we investigated the temporal raphe in young healthy subjects. Moreover, we evaluated changes in the RNFL in the temporal retina between patients with glaucoma and age-similar controls. The results shed light on generalizations that have been drawn about retinal anatomy. We found that the temporal raphe was not a perfect horizontal dividing line. Its angle varied between individuals but was related to the optic disc position. The angle between the temporal raphe and the line that connects the fovea and the center of the optic disc was about 170 degrees on average.
The temporal raphe changed with aging and glaucoma. Aging increased the separation between nerve fiber bundles in superior and inferior retina, forming a larger gap in the temporal raphe in AO images. In glaucomatous subjects, this gap significantly increased even when the corresponding local visual-field loss was relatively mild. A bundle index, which integrates information about the density and relative reflectivity of nerve fiber bundles, also decreased in glaucomatous subjects. The thesis demonstrated that AOSLO imaging can elucidate the normal anatomy of the temporal raphe in vivo, and the AOSLO can serve as a tool for understanding individual differences of the temporal raphe. This thesis also opened the possibility of using the temporal raphe as a site for glaucoma research and clinical assessment.

Item Adult Learners' Motivation in Self-Directed e-Learning ([Bloomington, Ind.] : Indiana University, 2010-05-24) Kim, Kyong-Jee; Frick, Theodore W.
As with traditional instruction, learner motivation is important in designing effective e-learning courses. However, lack of motivation has been a major concern in theory and practice for facilitating successful online learning environments. A review of literature indicated that there is little empirical knowledge on how to motivate online learners, particularly in self-directed e-learning settings (SDEL). Research questions addressed in this study included: 1) what motivates or inhibits adult learning in SDEL? 2) does adult learner motivation change as he or she goes through SDEL? 3) what factors are related to motivational change during SDEL? This study used mixed methods. A content analysis was conducted on three SDEL courses in order to better understand the learning context. Twelve qualitative interviews of typical learners were conducted to identify major motivational factors.
Analysis of these interview results led to construction of a 60-item Web survey of adult learners who had taken one or more SDEL courses (n = 368). Approximately 60 percent of the respondents were from corporate settings and 40 percent from higher education. A factor analysis of 33 survey items led to identification of three strong factors: 'e-learning is not for me'; 'e-learning is right for me'; and 'I don't want to be all by myself'. Results from both qualitative and quantitative analyses indicated that learners started SDEL for personal or professional development, and that they chose the online training option because of its flexibility and convenience. Both qualitative and quantitative results suggested that lack of motivational quality in the e-learning course was a key factor for some learners who decided not to complete the course, followed by lack of time. A stepwise multiple regression analysis resulted in five factors that significantly contributed to predicting the learner's reported motivational change: 1) E-learning is right for me; 2) satisfaction with their learning experience; 3) interactivity with an instructor or technical support personnel; 4) age (negative relationship); and 5) learning setting (corporate more than higher education). Implications of findings from this study are discussed for design of self-directed e-learning environments that may help increase or sustain learner motivation.

Item Age-Related Changes in Human Anatomical and Functional Brain Networks ([Bloomington, Ind.] : Indiana University, 2015-09) Betzel, Richard Frank; Sporns, Olaf
i) The first component characterizes age-related changes in specific connections. We find that functional connections within and between intrinsic connectivity networks (ICNs) follow distinct lifespan trajectories. We further characterize these changes in terms of each ICN’s “modularity” and find that most ICNs become less modular (i.e. less segregated) with age.
In anatomical networks we find that hub regions are disproportionately affected by age and become less efficiently connected to the rest of the brain. Finally, we find that, with age, stronger functional connections are supported by longer (multi-step) anatomical pathways for communication. ii) The second component is concerned with characterizing age-related changes in the boundaries of ICNs. To this end, we used a multi-layer variant of modularity maximization to decompose networks into modules at different organizational scales, which we find exhibit scale-specific trends with age. At coarse scales, for example, we find that modules become more segregated, whereas modules defined at finer scales become less segregated. We also find that module composition changes with age, and specific areas associated with memory change their module allegiance with age. iii) In the final component we use generative models to uncover wiring rules for anatomical brain networks. Modeling network growth as a spatial penalty combined with homophily, we find that we can generate synthetic networks with many of the same properties as real-world brain networks. Fitting this model to individuals, we show that the parameter governing the severity of the spatial penalty weakens monotonically with age and that the overall ability to reproduce realistic connectomes for older individuals suffers. These results suggest that, with age, additional constraints may play an important role in shaping the topology of brain structural networks.

Item An Agent Based Model of Disease Diffusion in the Context of Heterogeneous Sexual Motivation ([Bloomington, Ind.] : Indiana University, 2010-06-01) Nagoski, Emily; Lohrmann, David; Janssen, Erick
This project focused on building and analyzing an agent-based model of disease diffusion in order to explore the hypothesis that the relative risk associated with an individual's "sexual motivation profile" (SMP) is influenced by the distribution of strategies represented in the population - that is, that sexual motivation functions as a frequency-dependent trait. Sexual motivation is hypothesized to be composed of a sexual inhibition system (SIS) and a sexual excitation system (SES), following the Dual Control Model of Sexual Response. Results of the model show that the relative risk of an SMP does vary depending on its relative representation within a population, but that this variance is constrained by agents' absolute values of SIS and SES. The model produced several parallels with empirical data on humans, suggesting that the model accurately reproduced some aspects of human sexual behavior. For example, agents' SES was a better predictor than SIS of total number of partners, while SIS was a better predictor than SES of Age at Infection. Also, the more accurately the agent population matched the human population, the more the model produced human-like results. Future work should focus on increasing the verisimilitude of agents and their environments, in order to make models more practical for designing and testing intervention and policy strategies.

Item Agent-based Model Selection Framework for Complex Adaptive Systems ([Bloomington, Ind.] : Indiana University, 2010-06-01) Laine, Tei; Menczer, Filippo
Human-initiated land-use and land-cover change is the most significant single factor behind global climate change. Since climate change affects human, animal and plant populations alike, and the effects are potentially disastrous and irreversible, it is equally important to understand the reasons behind land-use decisions as it is to understand their consequences.
Empirical observations and controlled experimentation are not usually feasible methods for studying this change. Therefore, scientists have resorted to computer modeling, and use other complementary approaches, such as household surveys and field experiments, to add depth to their models. The computer models are not only used in the design and evaluation of environmental programs and policies, but they can also be used to educate land-owners about sustainable land management practices. It is therefore critical which model the decision maker trusts. Computer models can generate seemingly plausible outcomes even if the generating mechanism is quite arbitrary. On the other hand, with excess complexity the model may become incomprehensible, and proper tweaking of the parameter values may make it produce any results the decision maker would like to see. The lack of adequate tools has made it difficult to compare and choose between alternative models of land-use and land-cover change on a fair basis. This is especially true if the candidate models do not share a single dimension (e.g., a functional form): a criterion for selecting an appropriate model, other than its face value (i.e., how well the model behavior conforms to the decision maker's ideals), may be hard to find. Due to the nature of this class of models, existing model selection methods are not applicable either. In this dissertation I propose a pragmatic method, based on algorithmic coding theory, for selecting among alternative models of land-use and land-cover change. I demonstrate the method's adequacy using both artificial and real land-cover data in multiple experimental conditions with varying error functions and initial conditions.

Item Air Quality, Mobility and Metropolitan Policy: Empirical Evidence from Mexico City, Los Angeles and San Francisco ([Bloomington, Ind.] : Indiana University, 2020-04) Iracheta Carroll, Jose Alfonso; Rubin, Barry M.
The flagship air pollution control program in the Mexico City Metro Area (MCMA), named Hoy No Circula (HNC, loosely translated as No Driving Day), regulates the frequency with which motor vehicles can be used in the city from Monday to Saturday, based on a biannual vehicle-emissions checkup. The mandate was first implemented in November 1989, changed in July 2014 from the emissions-based standard to regulation by vehicle age, and changed again to its original form in July 2015. It can be argued that neither of the two policy changes responded to shifts in the trends of air pollution concentrations in the city, but rather to increased levels of corruption in the emissions checkup centers in 2014 and to judicial contentions about the new rules in 2015, making them exogenous policy changes. The goal of this paper is to conduct an impact evaluation of HNC on MCMA's air quality at its three most relevant moments in time: its first implementation (1989) and the two policy changes (2014 and 2015). Past studies used interrupted time series to show that the program, when first implemented, was relatively ineffective at reducing pollution; however, it substantially increased the number of vehicles in the city, offsetting environmental quality improvements. While the effects are consistent in the short run, in the longer run they remain elusive. This research builds upon those studies by using two difference-in-differences specifications with alternative controls and by addressing potential spatial confounders between monitoring stations that are inherent to ambient data, thus providing a more robust quasi-experimental design. In addition, this research looks at the program's first implementation, but also at the latest two policy changes, which have not been subject to evaluation.
For HNC's first implementation, the results show statistically significant decreases in CO and O3 concentrations in the short run, and increases in the medium/long run. A similar pattern is observed for NOX and NO2 in the long run. This evidence supports the findings of past studies, where HNC had a positive impact on air quality right after its implementation, but a reversion of this effect after about six months due to increases in the size of the vehicle fleet and the amount of driving. For the policy change of 2014, the results show extremely modest improvements in air quality, close to nonexistent. CO experienced mild increases in concentrations; however, the opposite is true for NOX, NO2 and O3. Finally, the evidence from the 2015 return to the original rules suggests significant losses in air quality. CO, NO2 and O3 experienced short- and long-run increases in concentrations; however, this was not the case for NOX.

Item ALGEBRAIC INFORMATION EFFECTS ([Bloomington, Ind.] : Indiana University, 2021-08) Chen, Chao-Hong; Sabry, Amr
From the informational perspective, programs that are usually considered pure have effects. The simply typed lambda calculus, for example, is considered a pure language, yet β-reduction does not preserve information and thus embodies information effects. To capture the idea of pure programs in the informational sense, a new model of computation, reversible computation, was proposed. This work focuses on type-theoretic approaches to reversible effect handling. The main idea of this work is inspired by compact closed categories. Compact closed categories are categories equipped with a dual object for every object. They are well established as models of linear logic, concurrency, and quantum computing.
This work gives computational interpretations of compact closed categories for conventional product and sum types, where a negative type represents a computational effect that “reverses execution flow” and a fractional type represents a computational effect that “allocates/deallocates space”.

Item Alignment as a Process of Enabling Organizational Adaptation: Extending the Theory of Alignment as Guided Adaptation ([Bloomington, Ind.] : Indiana University, 2010-05-24) Ward, Kerry W.; Vessey, Iris
This dissertation seeks to generalize and extend the theory of alignment as guided adaptation (TAGA) (Ward & Vessey Working Paper). TAGA is a descriptive theory that views alignment from a multilevel, process-oriented perspective. It is based upon the premise that in the short run each alignment factor adapts independently of the others in the alignment system. In the long run, however, the alignment factors form an interdependent system. TAGA was developed based on a small firm that had a non-strategic view of IS. This dissertation therefore seeks to generalize the theory to firms that have a formal IS strategy and planning process and are large in size. The dissertation also extends the theory by examining the role that changes external to the alignment factors play in the alignment factor adaptation process. Three case studies were conducted using semi-structured interviews with 31 high-level business and IS managers as data sources. The data were coded into change episodes demarcated by changes in business strategy (intent and initiatives) and analyzed using alternative templates, visual mapping, and temporal bracketing strategies (Langley 1999; Ward & Vessey Working Paper). The results indicate that TAGA generalizes to large firms and to firms with a formal IS strategy and planning process. Within these additional contexts, TAGA was able to explain the patterns of change in the alignment episodes while the traditional view of alignment as synchronization could not.
The results also indicate that changes in the outer environment influenced the need for change in the internal alignment factors; the influential characteristics were the level at which the change occurred in the factor hierarchy, the magnitude of the change initiating adaptation, and the pace at which the change occurred. This research has implications for both academic and practitioner communities. The research shows that TAGA is applicable to firms that have a formal IS strategy and planning process, and that factors such as the level, magnitude, and pace of changes impact the adaptation process. From a practitioner perspective, this research provides insight into managing the alignment process by redefining how to view alignment.

Item All-Atom Multiscale Computational Modeling Of Viral Dynamics ([Bloomington, Ind.] : Indiana University, 2010-06-16) Miao, Yinglong; Ortoleva, Peter J.
Viruses are composed of millions of atoms functioning on supra-nanometer length scales over timescales of milliseconds or greater. In contrast, individual atoms interact on scales of angstroms and femtoseconds. Thus viruses display dual microscopic/macroscopic characteristics involving processes that span widely separated time and length scales. To address this challenge, we introduced automatically generated collective modes and order parameters to capture viral large-scale low-frequency coherent motions. With an all-atom multiscale analysis (AMA) of the Liouville equation, a stochastic (Fokker-Planck or Smoluchowski) equation and equivalent Langevin equations are derived for the order parameters. They are shown to evolve on timescales much larger than the 10^(-14)-second timescale of fast atomistic vibrations and collisions.
This justifies a novel multiscale Molecular Dynamics/Order Parameter eXtrapolation (MD/OPX) approach, which propagates viral atomistic and nanoscale dynamics simultaneously by solving the Langevin equations of order parameters implicitly, without the need to construct thermal-average forces and friction/diffusion coefficients. In MD/OPX, a set of short replica MD runs with random atomic velocity initializations estimates the ensemble-average rate of change in order parameters, extrapolation of which is then used to project the system over long times. The approach was implemented using NAMD as the MD platform. Application of MD/OPX to the cowpea chlorotic mottle virus (CCMV) capsid revealed that its swollen state undergoes significant energy-driven shrinkage in vacuum during a 200 ns simulation, while for the native state as solvated in a host medium at pH 7.0 and ionic strength I=0.2M, the N-terminal arms of capsid proteins are shown to be highly dynamic and their fast fluctuations trigger global expansion of the capsid. Viral structural transitions associated with both processes are symmetry-breaking, involving local initiation and front propagation. MD/OPX accelerates MD for long-time simulation of viruses, as well as other large bionanosystems. By using universal inter-atomic force fields, it is generally applicable to all dynamical nanostructures and avoids the need for parameter recalibration with each new application. With our AMA method and MD/OPX, viral dynamics are predicted from laws of molecular physics via rigorous statistical mechanics.

Item AMERICAN MUSLIM UNDERGRADUATES’ VIEWS ON EVOLUTION ([Bloomington, Ind.] : Indiana University, 2016-05) Fouad, Khadija Engelbrecht; Akerson, Valarie L.
A qualitative investigation into American Muslim undergraduates' views on evolution revealed three main positions on evolution: theistic evolution, a belief in special creation of all species, and a belief in special creation of humans with evolution for all non-human species.
One can conceive of the manner in which respondents chose their respective positions on evolution as a means of reconciling their religious beliefs with scientific evidence in support of current evolutionary theory. Of 19 theistic evolutionists, 18 affirmed that revelation is a source of knowledge. 74% were convinced by the scientific evidence that evolution happens and did not see evidence in the Quran that contradicts this. 37% stated that it is consistent with God’s attributes that He would have created organisms to evolve. The importance of seeking knowledge in Islam was mentioned by 21%. All 19 participants with a belief in special creation of humans affirmed the idea that revelation is a source of knowledge and considered scientific evidence a source of knowledge as well. Their positions on evolution can be seen as a means of reconciling their religious beliefs with scientific evidence. They found the scientific evidence convincing for all non-human species. They thought that humans could not have evolved because the creation of humans is treated with more detail in the Quran than is the creation of other species. Most accepted microevolution, but not macroevolution, for humans. Those with a belief in the special creation of all species found the evidence in the Quran and hadith more convincing than scientific evidence. They interpreted the Quran and hadith as indicating special creation of all species. They accommodated scientific evidence by accepting microevolution for all species. Because most respondents accepted microevolution for all species, teaching microevolution before macroevolution might be beneficial for Muslim students. Teachers helped some students navigate the relationship between science and religion to allow them to accept evolution without negating their religious beliefs.
Providing role models who reconcile science and religion, Muslim evolutionary biologists, and examples of Muslim scientists from history can help foster acceptance of evolution among Muslims.

Item “AMERICANS ALL?” – MESSAGES IN MINIATURE ([Bloomington, Ind.] : Indiana University, 2023-07) Bennett, Janna Merrill; Robertson, Nancy Marie
A small white-collar project of the Works Progress Administration called the Museum Extension Project (MEP) operated in the latter half of the 1930s in at least twenty-four states, including Indiana. A product of this visual aid program was the twelve-inch miniature figure dressed in clothing to reflect periods in US history or countries and cultures throughout the world. Museum and Indiana school educators used the MEP figures, as part of a broader intercultural learning agenda, to demonstrate or encourage ethnic appreciation and inclusion, while also fostering “otherness” – all in the safety of classrooms and informal educational settings. The figures simultaneously expanded the definition of membership in a majority white cultural group by adding and validating recent white immigrants while they continued to differentiate “the other” – Black and Native Americans as well as non-European immigrants – through the cultural construct of race. These miniature figures allowed students to learn about the ethnic populations of their state and made the world available to all. At the same time, they prescribed the role of “other” to Indigenous Peoples throughout the world, the inhabitants of South and Central American countries, and those perceived as “non-white” peoples in places like Palestine and Egypt.
This research examines educational philosophy in the first quarter of the twentieth century, combined with material culture analysis of these figures, to demonstrate how three-dimensional objects were powerful educational tools.

Item An Analysis of Muon Neutrino Disappearance from the NuMI Beam Using an Optimal Track Fitter ([Bloomington, Ind.] : Indiana University, 2015-09) Baird, Michael David; Messier, Mark
The NOvA experiment is a long-baseline neutrino oscillation experiment based out of Fermi National Accelerator Laboratory (Fermilab) that uses two liquid scintillator detectors: one at Fermilab (the "near" detector) and a second 14 kton detector in northern Minnesota (the "far" detector). The primary physics goals of the NOvA experiment are to measure neutrino mixing parameters through both the $\nu_{\mu}$ disappearance and $\nu_{e}$ appearance channels using neutrinos from the newly upgraded NuMI beam line. The NOvA $\nu_{\mu}$ disappearance analysis can significantly improve the world's best measurement of $\sin^{2}\theta_{23}$. This analysis proceeds by using the measured $\nu_{\mu}$ charged-current energy spectrum in the near detector to predict the spectrum in the far detector, and comparing this to the measured spectrum to obtain a best fit for the oscillation parameters $\sin^{2}\theta_{23}$ and $\Delta m^{2}_{32}$. Since this fit is governed by the shape of the energy spectrum, the best fit will be maximized by obtaining the best possible energy resolution for the individual neutrino events. This dissertation describes an alternate $\nu_{\mu}$ disappearance analysis technique for the NOvA experiment, based on the idea that estimating the energy resolution of the individual events will allow them to be separated into different energy resolution samples in order to improve the final fit. This involves using an optimal tracker to reconstruct particle tracks and momenta, and multivariate methods for estimating the event energies and energy resolutions.
The data used for this analysis were taken by the NOvA experiment between February 2014 and May 2015, representing approximately $3.52 \times 10^{20}$ protons on target from the NuMI beam. The best-fit oscillation parameters obtained by this alternate technique are $|\Delta m^{2}_{32}| = 2.49^{+0.19}_{-0.17} \times 10^{-3}~{\rm eV}^{2}$ and $\sin^{2} \theta_{23} = 0.51 \pm 0.08$, which are consistent with the hypothesis of maximal mixing and with the results from T2K and MINOS+ published in 2015.
Item ANALYSIS OF NEUROTENSIN RECEPTOR 1 CONFORMATIONAL DYNAMICS AND ALLOSTERIC INFLUENCES IN ACTIVATION(2022-10) Dixon, Austin D.; Ziarek, Joshua J.G protein-coupled receptors (GPCRs) are the largest integral membrane protein class in eukaryotes, with over 800 unique members that regulate numerous biological processes including mood, body temperature, taste, and sight, among others. Due to their broad physiological importance and numerous etiological roles, GPCRs are the targets of more than 30% of all therapeutic drugs on the market. A more nuanced mechanistic understanding of the GPCR activation landscape could dramatically expand their therapeutic value. Unfortunately, accurately and efficiently assessing GPCR activation and cognate transducer stimulation is difficult due to inherently poor protein stability and expression; this is further hindered by challenging protein purification requirements. This dissertation seeks to expand knowledge of GPCR conformational dynamics and their connection to receptor activation and ternary complex formation with the transducer molecules G protein and arrestin. Neurotensin receptor 1 (NTS1) has quickly become one of the most well-characterized GPCRs, with structures of the apo state, complexes with various pharmacological ligands, and ternary complexes with both the heterotrimeric Gi protein and β-arrestin-1 (βArr1) transducers.
NTS1 is a class A, β group receptor expressed throughout the central nervous system and the gastrointestinal (GI) tract. Activation by its endogenous tridecapeptide ligand neurotensin (NT) mediates a variety of physiological effects, including lowered blood pressure, elevated blood sugar, lowered body temperature, mood modulation, and GI motility. It is also a long-standing therapeutic target for Parkinson’s disease, schizophrenia, obesity, hypotension, psychostimulant substance use disorders, and cancer. Using NTS1 as a model GPCR, this dissertation details the development of a novel method for Selective 19F-Labeling Absent of Probe Sequestration (SLAPS), which allows accurate and efficient assessment of GPCR activation by ligands and of transducer complexation via solution nuclear magnetic resonance (NMR) spectroscopy. Through 19F NMR and other biophysical techniques, this dissertation shows that the NTS1 allosteric activation mechanism may be alternatively dominated by induced fit or conformational selection depending on the coupled transducer, and that the available static structures do not represent the entire conformational ensemble observed in solution. Furthermore, this dissertation explores the pleiotropic effects of biased allosteric modulators and lipids on NTS1 activation and ternary complex formation.
Item AN ANALYSIS OF THE DYNAMICS OF PERCEPTION AND DECISION([Bloomington, Ind.] : Indiana University, 2021-05) Harding, Samuel; Shiffrin, RichardEight initially novel objects with four features were learned by three participants over about 70 sessions in a variety of present-absent search tasks. This article analyzes and models trials in which a single object was presented for test. The features of the object were presented simultaneously, or successively at rates fast enough that the objects appeared simultaneous (ISIs were 16, 33, or 50 ms). Classification of a test object as target or foil required a conjunction of two features.
When features were presented successively, those diagnostic of target presence could arrive either first or last, and likewise for features diagnostic of foil presence. Two results were particularly important: 1) the order in which target-diagnostic or foil-diagnostic features appeared produced large changes in accuracy and response times; 2) simultaneous feature presentation produced lower accuracy than sequential presentation with target-diagnostic features arriving first, despite the delay before those features arrived. The results required a dynamic model of perception and decision. The Hidden Markov Model has features perceived at independent times. It accumulates evidence at each moment based on the particular features perceived up to that time and the diagnosticities of those features for classifying the test object as target or foil. The model also assumes that configurations of features provide evidence as processing continues: when all four features of an object are perceived, the evidence points without error to the correct response. The results and modeling support the view that perceptual and decision processes operate concurrently and interactively during identification, recognition, and classification of well-learned objects, rather than in successive stages. How an object’s features are perceived over time, how they provide evidence, and how the evidence leads to a decision expressed with a binary response are explored using cursor movements. An object is presented on a computer monitor, and a cursor controlled by a mouse is moved to one of two response regions to indicate the type of object presented. The object has two features, one 100% diagnostic of the desired response and the other 75% diagnostic. The features are presented simultaneously or sequentially.
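The kind of inference such a Hidden Markov Model performs — recovering which features have been perceived at each time step from coarse observations — can be sketched with a toy Viterbi decoder. All state names, transition probabilities, emission probabilities, and observation labels below are invented for illustration and are not values from the study:

```python
# Toy Viterbi decoder: infer the most likely hidden "perceptual state"
# (which of two features have been perceived) at each time step from a
# coarse, binned observation of behavior at that step.

# Hidden states: neither feature perceived, only F1, only F2, or both.
STATES = ["none", "f1", "f2", "both"]

# Transitions: features, once perceived, stay perceived (no backtracking).
TRANS = {
    "none": {"none": 0.6, "f1": 0.2, "f2": 0.2, "both": 0.0},
    "f1":   {"none": 0.0, "f1": 0.7, "f2": 0.0, "both": 0.3},
    "f2":   {"none": 0.0, "f1": 0.0, "f2": 0.7, "both": 0.3},
    "both": {"none": 0.0, "f1": 0.0, "f2": 0.0, "both": 1.0},
}

# Emissions: a coarse observation per step, e.g. binned cursor speed
# toward a response region ("idle", "drift", or "commit").
EMIT = {
    "none": {"idle": 0.80, "drift": 0.15, "commit": 0.05},
    "f1":   {"idle": 0.30, "drift": 0.60, "commit": 0.10},
    "f2":   {"idle": 0.30, "drift": 0.60, "commit": 0.10},
    "both": {"idle": 0.05, "drift": 0.25, "commit": 0.70},
}

START = {"none": 1.0, "f1": 0.0, "f2": 0.0, "both": 0.0}


def viterbi(observations):
    """Return the most likely hidden-state path for the observations."""
    # v[t][s] = probability of the best path ending in state s at time t
    v = [{s: START[s] * EMIT[s][observations[0]] for s in STATES}]
    back = []
    for obs in observations[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev, p = max(
                ((r, v[-1][r] * TRANS[r][s]) for r in STATES),
                key=lambda x: x[1],
            )
            col[s] = p * EMIT[s][obs]
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    # Trace back from the most probable final state.
    state = max(STATES, key=lambda s: v[-1][s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))


# One decoded "best guess" path per observation sequence:
print(viterbi(["idle", "drift", "drift", "commit", "commit"]))
# → ['none', 'f1', 'f1', 'both', 'both']
```

The full method additionally runs this inference every ten milliseconds per trial, per condition, per participant, and tracks accumulated evidence alongside the perceived-feature state; the sketch shows only the decoding core.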
The cursor movements are analyzed with a Hidden Markov Model, a method that provides exceptionally detailed inferences about the moment-to-moment processes governing perception and evidence change that produce a final decision: for each trial in each condition for each participant, the method provides a best guess, at each ten milliseconds from the trial’s start to its end, of the features that have been perceived by that time and the evidence that has been accumulated by that time. These inferences can be analyzed in myriad ways to provide insights about the dynamic processes of cognition and about differences among conditions and individuals. A selection of such analyses is provided to illustrate the power of the approach.
Item An analysis of the privatization of public AAU institutions and their changing resource acquisition before and after the Great Recession([Bloomington, Ind.] : Indiana University, 2015-03) Lee-Garcia, Rebecca Patricia; Priest, Douglas M., Ed.D.Public institutions have historically adapted to their changing external environment in order to best serve their students and achieve their goals. Part of this adaptation included dealing with decreasing state support. While state funding for higher education has typically rebounded after past recessions, the Great Recession proved different. As a result, public institutions have become increasingly privatized, with increasing proportions of their revenue coming from students and declining proportions coming from the state. The purpose of this study was to examine the changing state appropriations and tuition revenue that public AAU institutions received before and after the start of the Great Recession in order to better understand whether they received a changing amount of total revenue per student and simultaneously became more heavily funded by students.
This study used IPEDS data collected between 2003-04 and 2011-12 to create the three main variables used to answer six research questions: gross tuition revenue per FTE, net tuition revenue per FTE, and state appropriations revenue per FTE. All data were analyzed through descriptive statistics. The results showed that, on average, public AAU institutions received increasing amounts of total revenue per FTE between 2003-04 and 2011-12 in terms of tuition revenue and state appropriations per FTE combined. Many of these institutions also became increasingly privatized during this time, as the proportion of total revenue that came from tuition revenue per FTE increased. The findings also showed that while all institutions became increasingly privatized after the start of the Great Recession, an increasing number of institutions began operating with decreasing levels of total revenue per FTE, while others received increasing amounts of total revenue as a result of increases in tuition revenue per FTE. Regardless, this study showed that students have continued to bear increasing proportions of the cost of higher education. It also provided a new perspective on the amount of revenue these institutions believed they needed in order to continue providing quality education to their students.
Item Analyzing Interaction Patterns to Verify a Simulation/Game Model([Bloomington, Ind.] : Indiana University, 2013-05-15) Myers, Rodney Dean; Frick, Theodore W.In order for simulations and games to be effective for learning, instructional designers must verify that the underlying computational models have an appropriate degree of fidelity to the conceptual models of their real-world counterparts. A simulation/game that provides incorrect feedback is likely to promote misunderstanding and adversely affect learning and transfer.
Numerous methods for verifying the accuracy of a computational model exist, but it is generally accepted that no single method is adequate and that multiple methods should be used. The purpose of this study was to propose and test a new method for collecting and analyzing users' interaction data (e.g., choices made, actions taken, results and feedback obtained) to provide quantified evidence that the underlying computational model of a simulation/game represents the conceptual model with sufficient accuracy. In this study, analysis of patterns in time (APT) was used to compare gameplay results from the Diffusion Simulation Game (DSG) with predictions based on diffusion of innovations theory (DOI). A program was written to automatically play the DSG by analyzing the game state during each turn, seeking patterns of game component attributes that matched optimal strategies based on DOI theory. When the use of optimal strategies did not result in the desired number of successful games, here defined as the threshold of confidence for model verification, further investigation revealed flaws in the computational model. These flaws were incrementally corrected and subsequent gameplay results were analyzed until the threshold of confidence was achieved. In addition to analysis of patterns in time for model verification (APTMV), other verification methods used included code walkthrough, execution tracing, desk checking, syntax checking, and statistical analysis. The APTMV method was found to be complementary to these other methods, providing quantified evidence of the computational model's degree of accuracy and pinpointing flaws that could be corrected to improve fidelity. The APTMV approach to verification and improvement of computational models is described and compared with other methods, and improvements to the process are proposed.
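The core of the automated-play analysis — scanning logged game states for a theory-predicted temporal pattern and reporting how often games match — can be sketched as follows. The attribute names, actions, and the pattern itself are hypothetical illustrations, not the DSG's actual variables or DOI-derived strategies:

```python
# Illustrative sketch of analysis of patterns in time (APT) for model
# verification: check whether each logged game exhibits an ordered
# pattern of conditions predicted by theory, then compute a match rate
# to compare against a chosen threshold of confidence.

def matches_in_order(turns, predicates):
    """True if each predicate is satisfied by some turn, in order."""
    it = iter(turns)  # shared iterator enforces temporal order
    return all(any(pred(turn) for turn in it) for pred in predicates)

# Hypothetical optimal-strategy pattern: contact an opinion leader early,
# then run a demonstration, then see adoption exceed 50%.
PATTERN = [
    lambda t: t["action"] == "talk_to_opinion_leader",
    lambda t: t["action"] == "demonstration",
    lambda t: t["adopters"] > 0.5,
]

games = [
    [  # game 1: follows the predicted order, so it counts as a match
        {"action": "talk_to_opinion_leader", "adopters": 0.1},
        {"action": "demonstration", "adopters": 0.3},
        {"action": "pilot_test", "adopters": 0.6},
    ],
    [  # game 2: demonstration precedes the contact, so no match
        {"action": "demonstration", "adopters": 0.1},
        {"action": "talk_to_opinion_leader", "adopters": 0.2},
        {"action": "pilot_test", "adopters": 0.4},
    ],
]

success_rate = sum(matches_in_order(g, PATTERN) for g in games) / len(games)
print(f"{success_rate:.0%} of games match the predicted pattern")
# → 50% of games match the predicted pattern
```

In the APTMV workflow described above, a match rate below the chosen threshold would prompt inspection of the computational model for flaws, followed by correction and re-analysis until the threshold is achieved.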