Psychometric Portfolio
Permanent link for this collection: https://hdl.handle.net/2022/24482
Browsing Psychometric Portfolio by Author "Chiang, Yi-Chen"
Now showing 1 - 6 of 6
Item: Construct Validity (Faculty Survey of Student Engagement, 2015)
Chiang, Yi-Chen; BrckaLorenz, Allison

Starting with FSSE 2013, sets of items were grouped within several scales. Forty-two survey items were included in these scales: Higher-Order Learning, Reflective and Integrative Learning, Learning Strategies, Quantitative Reasoning, Collaborative Learning, Discussions with Diverse Others, Student-Faculty Interaction, Quality of Interactions, and Supportive Environment. For details about the construct validity of these scales, see the FSSE Psychometric Portfolio. A tenth scale, Effective Teaching Practices, was added to the FSSE scales in 2014. The purpose of this study was to evaluate the quality of the Effective Teaching Practices scale, with particular focus on its internal structure.

Item: Construct Validity: Effective Teaching Practices (Faculty Survey of Student Engagement, 2015)
Chiang, Yi-Chen; BrckaLorenz, Allison

Starting with FSSE 2013, sets of items were grouped within several scales. Forty-two survey items were included in these scales: Higher-Order Learning, Reflective and Integrative Learning, Learning Strategies, Quantitative Reasoning, Collaborative Learning, Discussions with Diverse Others, Student-Faculty Interaction, Quality of Interactions, and Supportive Environment. For details about the construct validity of these scales, see the FSSE Psychometric Portfolio. A tenth scale, Effective Teaching Practices, was added to the FSSE scales in 2014. The purpose of this study was to evaluate the quality of the Effective Teaching Practices scale, with particular focus on its internal structure.

Item: Equivalence reliability: How often is often? (Faculty Survey of Student Engagement, 2015)
Chiang, Yi-Chen; BrckaLorenz, Allison

Without reliability, valid score interpretation is meaningless (Thorndike & Thorndike-Christ, 2010). Building on a similar earlier study (Nelson Laird, Korkmaz, & Chen, 2008), this study assesses the equivalence reliability of the updated FSSE; in particular, whether two parallel forms, or different versions of the same survey items, produce similar results (equal means, variances, errors, and so on). Survey researchers often wonder what vague quantifiers such as "sometimes" or "often" actually mean to respondents. These analyses examined a set of FSSE questions asked in two ways, first with vague quantifiers and second with a quantifiable time allocation. If the two versions were essentially asking for the same information, we would expect most of the following to hold: each response option would have a distinct meaning (Often would mean something different than Sometimes), the options would correspond to progressively increasing frequencies from Never to Very often, and the intervals between options would be approximately equal (for example, Very often meaning nine times per week, Often six, and Sometimes three). A minimal illustration of this check appears after the listing.
Item: Internal Consistency (Faculty Survey of Student Engagement, 2013)
BrckaLorenz, Allison; Chiang, Yi-Chen; Nelson Laird, Tom

One way to estimate the reliability of FSSE results, specifically their internal consistency, is to calculate Cronbach's alphas and intercorrelations for the FSSE scales. Internal consistency is the extent to which a group of items measures the same construct, as evidenced by how well the items vary together, or intercorrelate. A high degree of internal consistency enables the researcher to interpret the composite score as a measure of the construct (Henson, 2001). Assuming the FSSE scales effectively measure an underlying construct, we would expect to find high estimates of their internal consistency. A computational sketch of Cronbach's alpha appears after the listing.

Item: Item Nonresponse Bias (Faculty Survey of Student Engagement, 2015)
Chiang, Yi-Chen; BrckaLorenz, Allison

The purpose of this study was to investigate the prevalence of item nonresponse among FSSE participants and its impact on the estimates of the ten FSSE scale scores. Item nonresponse patterns were compared across faculty-level characteristics such as gender identity, racial or ethnic identification, citizenship, employment status, academic rank, and the number of undergraduate or graduate courses taught. A sketch of such a comparison appears after the listing.

Item: Temporal Stability (Faculty Survey of Student Engagement, 2015)
Chiang, Yi-Chen; BrckaLorenz, Allison

One way to estimate the reliability of FSSE results, specifically their temporal stability, is an institution-level correlation analysis. Assuming no major shifts in an institution's policies, we would expect an institution to have relatively similar FSSE scale scores from one year to the next. Results for a given institution may vary substantially from one administration to another; this is more likely at schools with small numbers of respondents. Gradual change over longer periods of time is far more likely and should not be interpreted as unreliability. Overall, however, we would expect the correlation between institutions' scale scores from year to year to be relatively high. A sketch of this correlation appears after the listing.
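The check described in "Equivalence reliability: How often is often?" can be illustrated with a short Python sketch. Everything below is hypothetical: the column names (vague, per_week) and the data are placeholders standing in for paired versions of an item, not the study's actual FSSE items or results.

# Minimal sketch of the vague-quantifier check: do the quantifier
# categories map onto increasing, roughly evenly spaced frequencies?
import pandas as pd

# Hypothetical paired responses: the same behavior asked two ways.
df = pd.DataFrame({
    "vague": ["Never", "Sometimes", "Often", "Very often"] * 25,
    "per_week": [0, 3, 6, 9] * 25,  # quantified version, times per week
})

order = ["Never", "Sometimes", "Often", "Very often"]

# Mean quantified frequency within each vague-quantifier category.
means = df.groupby("vague")["per_week"].mean().reindex(order)
print(means)

# If both versions ask for the same information, category means should
# increase monotonically and the gaps between them should be similar.
gaps = means.diff().dropna()
print("monotonic:", bool((gaps > 0).all()))
print("interval sizes:", gaps.tolist())

With real responses, the interesting question is how far the observed category means depart from this idealized, evenly spaced pattern.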
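For the Internal Consistency study, here is a minimal sketch of Cronbach's alpha on synthetic data. The function and the simulated item matrix illustrate the general coefficient only; FSSE's actual scale compositions and scoring are documented in the Psychometric Portfolio.

# Illustrative computation of Cronbach's alpha on synthetic data.
import numpy as np

def cronbach_alpha(items):
    """items: respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
construct = rng.normal(size=(200, 1))                     # shared construct
items = construct + rng.normal(scale=0.8, size=(200, 4))  # four correlated items
print(round(cronbach_alpha(items), 2))  # high alpha: items vary together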
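For the Item Nonresponse Bias study, a group-level comparison of item missingness might look like the following sketch; the rank, item1, and item2 columns and all values are hypothetical placeholders, not FSSE data.

# Sketch of comparing item-level missingness across a characteristic.
import numpy as np
import pandas as pd

# Hypothetical survey extract: NaN marks a skipped item.
df = pd.DataFrame({
    "rank": ["Assistant", "Associate", "Full"] * 4,
    "item1": [1, np.nan, 3, 4, np.nan, 2, 3, 3, np.nan, 1, 2, 4],
    "item2": [np.nan, 2, 2, 1, 3, np.nan, 4, np.nan, 2, 2, 1, 3],
})

# Share of missing responses per item within each group; large gaps
# between groups would suggest nonresponse varies by academic rank.
missing_rates = df.drop(columns="rank").isna().groupby(df["rank"]).mean()
print(missing_rates)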
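Finally, the institution-level correlation from the Temporal Stability study can be sketched as follows; the scores are simulated and the year labels are placeholders for two consecutive administrations.

# Sketch of the institution-level year-to-year correlation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
year1 = rng.normal(50, 5, size=100)         # 100 institutions' scale scores
year2 = year1 + rng.normal(0, 2, size=100)  # mostly stable across years

scores = pd.DataFrame({"y2014": year1, "y2015": year2})
# A high correlation indicates institutions keep similar scale scores
# from one administration to the next.
print(round(scores["y2014"].corr(scores["y2015"]), 2))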