Psychometric Portfolio

Permanent link for this collection: https://hdl.handle.net/2022/24482

Recent Submissions

  • Item
    Internal Consistency Statistics
    (Faculty Survey of Student Engagement, 2018) Strickland, Joe; Fassett, Kyle; BrckaLorenz, Allison
  • Item
    Internal Consistency Statistics
    (Faculty Survey of Student Engagement, 2017) Paulsen, Justin; BrckaLorenz, Allison
  • Item
    Internal Consistency Statistics
    (Faculty Survey of Student Engagement, 2016) Wang, Xiaolin; BrckaLorenz, Allison
  • Item
    Internal Consistency Statistics
    (Faculty Survey of Student Engagement, 2015) Wang, Xiaolin; BrckaLorenz, Allison
  • Item
    Internal Consistency Statistics
    (Faculty Survey of Student Engagement, 2014) Wang, Xiaolin; BrckaLorenz, Allison
  • Item
    Internal Consistency Statistics
    (Faculty Survey of Student Engagement, 2013) Wang, Xiaolin; BrckaLorenz, Allison
  • Item
    Internal Consistency
    (Faculty Survey of Student Engagement, 2017) Paulsen, Justin; BrckaLorenz, Allison
    One way to estimate reliability, specifically the internal consistency, of FSSE results is by calculating Cronbach’s alphas and intercorrelations for the FSSE scales. Internal consistency is the extent to which a group of items measure the same construct, as evidenced by how well they vary together, or intercorrelate. A high degree of internal consistency enables the researcher to interpret the composite score as a measure of the construct (Henson, 2001). Assuming the FSSE scales effectively measure an underlying construct, we would expect to find high estimates of their internal consistency.
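    For reference, the internal consistency estimate described here is Cronbach’s alpha, which for a scale of k items can be written as follows (standard notation, not drawn from this item’s text):

    ```latex
    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
    ```

    where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the total score X = \sum_i Y_i; alpha approaches 1 as the items covary more strongly.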
  • Item
    Internal Consistency
    (Faculty Survey of Student Engagement, 2013) BrckaLorenz, Allison; Chiang, Yi-Chen; Nelson Laird, Tom
    One way to estimate reliability, specifically the internal consistency, of FSSE results is by calculating Cronbach’s alphas and intercorrelations for the FSSE scales. Internal consistency is the extent to which a group of items measure the same construct, as evidenced by how well they vary together, or intercorrelate. A high degree of internal consistency enables the researcher to interpret the composite score as a measure of the construct (Henson, 2001). Assuming the FSSE scales effectively measure an underlying construct, we would expect to find high estimates of their internal consistency.
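    A minimal sketch of how such an estimate could be computed, assuming a scale’s item responses sit in a pandas DataFrame with one column per item (the function name and sample data are illustrative, not FSSE’s actual code or data):

    ```python
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for one scale: one column per item, one row per
        respondent, with missing responses already removed."""
        k = items.shape[1]                         # number of items in the scale
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Illustrative usage with made-up responses to a four-item scale.
    responses = pd.DataFrame({
        "item1": [1, 2, 3, 4, 3],
        "item2": [2, 2, 3, 4, 4],
        "item3": [1, 3, 3, 4, 3],
        "item4": [2, 2, 4, 4, 3],
    })
    print(cronbach_alpha(responses))
    ```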
  • Item
    Construct validity: 2017 scales
    (Faculty Survey of Student Engagement, 2018) Paulsen, Justin; BrckaLorenz, Allison
    FSSE 2017 grouped 50 survey items into several scales: Higher-Order Learning, Reflective and Integrative Learning, Learning Strategies, Quantitative Reasoning, Collaborative Learning, Discussions with Diverse Others, Student-Faculty Interaction, Effective Teaching Practices, Quality of Interactions, and Supportive Environment. The purpose of this study was to evaluate the quality of these scales, with particular focus on their internal structure.
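    One common way to examine a scale’s internal structure is to inspect the eigenvalues of its item correlation matrix: a dominant first eigenvalue is consistent with the items reflecting a single construct. A rough numpy sketch under that reading (the simulated data are invented; this is not the study’s actual procedure):

    ```python
    import numpy as np

    def eigen_check(item_matrix: np.ndarray) -> np.ndarray:
        """Eigenvalues of the item correlation matrix, largest first.
        item_matrix: respondents x items array for one scale."""
        corr = np.corrcoef(item_matrix, rowvar=False)  # item intercorrelations
        return np.linalg.eigvalsh(corr)[::-1]          # eigvalsh returns ascending

    # Simulated five-item scale: items share one underlying construct plus noise,
    # so the first eigenvalue should stand well above the rest.
    rng = np.random.default_rng(0)
    construct = rng.normal(size=(200, 1))
    items = construct + 0.8 * rng.normal(size=(200, 5))
    print(eigen_check(items))
    ```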
  • Item
    Construct Validity: Effective Teaching Practices
    (Faculty Survey of Student Engagement, 2015) Chiang, Yi-Chen; BrckaLorenz, Allison
    Starting with FSSE 2013, sets of items were grouped into several scales. Forty-two survey items were included in these scales: Higher-Order Learning, Reflective and Integrative Learning, Learning Strategies, Quantitative Reasoning, Collaborative Learning, Discussions with Diverse Others, Student-Faculty Interaction, Quality of Interactions, and Supportive Environment. For details about the construct validity of these scales, see the FSSE Psychometric Portfolio. A tenth scale, Effective Teaching Practices, was added to the FSSE scales in 2014. The purpose of this study was to evaluate the quality of the Effective Teaching Practices scale, with particular focus on its internal structure.
  • Item
    Construct Validity
    (Faculty Survey of Student Engagement, 2015) Chiang, Yi-Chen; BrckaLorenz, Allison
    Starting with FSSE 2013, sets of items were grouped into several scales. Forty-two survey items were included in these scales: Higher-Order Learning, Reflective and Integrative Learning, Learning Strategies, Quantitative Reasoning, Collaborative Learning, Discussions with Diverse Others, Student-Faculty Interaction, Quality of Interactions, and Supportive Environment. For details about the construct validity of these scales, see the FSSE Psychometric Portfolio. A tenth scale, Effective Teaching Practices, was added to the FSSE scales in 2014. The purpose of this study was to evaluate the quality of the Effective Teaching Practices scale, with particular focus on its internal structure.
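    As a companion to the eigenvalue sketch above, one can also estimate how strongly each item loads on the dominant component; uniformly strong loadings would support treating the Effective Teaching Practices items as one scale. Again a simplified illustration, not the study’s confirmatory method:

    ```python
    import numpy as np

    def first_component_loadings(item_matrix: np.ndarray) -> np.ndarray:
        """Loadings of each item on the first principal component of the
        item correlation matrix (the overall sign is arbitrary)."""
        corr = np.corrcoef(item_matrix, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(corr)   # ascending eigenvalues
        return eigvecs[:, -1] * np.sqrt(eigvals[-1])

    # Hypothetical usage, with etp_items as a respondents x items array:
    # print(first_component_loadings(etp_items))
    ```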
  • Item
    Equivalence reliability: How often is often?
    (Faculty Survey of Student Engagement, 2015) Chiang, Yi-Chen; BrckaLorenz, Allison
    Without reliability, valid score interpretation is meaningless (Thorndike & Thorndike-Christ, 2010). Based on a similar study conducted earlier (Nelson Laird, Korkmaz, & Chen, 2008), this study focuses on assessing the equivalence reliability of the updated FSSE. In particular, the emphasis is on whether two parallel forms or different versions of survey items produce similar results (have equal means, variances, errors, etc.). Survey researchers often wonder about the meaning of vague quantifiers such as “sometimes” or “often” as employed in surveys. These analyses examined a set of FSSE questions asked in two different ways, first with vague quantifiers and second with a quantifiable time allocation. If the two versions of items were essentially asking for the same information, we would expect much of the following to be true: each response option would have a distinct meaning (Often means something different than Sometimes, etc.), the response options would progressively increase in frequency from Never to Very often, and the intervals between them would be approximately equal (e.g., Very often means nine times per week, Often means six times per week, and Sometimes means three times per week).
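    As an illustration of the equal-interval expectation described above, one could compare the mean time-allocation response within each vague-quantifier category and inspect the gaps between adjacent categories. A sketch with made-up paired responses (the column names are hypothetical):

    ```python
    import pandas as pd

    # Made-up paired answers: the vague-quantifier version and the
    # times-per-week version of the same question.
    paired = pd.DataFrame({
        "quantifier": ["Never", "Sometimes", "Sometimes", "Often", "Often",
                       "Very often", "Very often", "Never", "Often", "Sometimes"],
        "times_per_week": [0, 3, 2, 6, 7, 9, 8, 1, 5, 4],
    })

    order = ["Never", "Sometimes", "Often", "Very often"]
    means = paired.groupby("quantifier")["times_per_week"].mean().reindex(order)
    print(means)         # mean frequency implied by each response option
    print(means.diff())  # roughly equal gaps would support equal intervals
    ```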
  • Item
    Temporal Stability
    (Faculty Survey of Student Engagement, 2015) Chiang, Yi-Chen; BrckaLorenz, Allison
    One way to estimate reliability, specifically the temporal stability, of FSSE results is through an institution-level correlation analysis. Assuming no major shifts in an institution’s policies, we would expect an institution to have relatively similar FSSE scale scores from one year to the next. Results for a given institution may vary substantially from one administration to another; this is more likely to occur for schools with a small number of respondents. Gradual changes over longer periods of time are much more likely and should not be interpreted as unreliability. Overall, however, we would expect the correlation between institutions’ scale scores from year to year to be relatively high.
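    A minimal sketch of the institution-level analysis described here, assuming each administration’s scale scores are available as a pandas Series indexed by institution (the scores below are invented for illustration):

    ```python
    import pandas as pd

    # Invented scale scores for the same institutions in two administrations.
    year1 = pd.Series({"inst_a": 40.2, "inst_b": 35.7, "inst_c": 44.1, "inst_d": 38.9})
    year2 = pd.Series({"inst_a": 41.0, "inst_b": 34.9, "inst_c": 43.5, "inst_d": 39.6})

    # Align institutions and correlate; a high Pearson r across institutions
    # would indicate temporal stability of the scale scores.
    scores = pd.DataFrame({"year1": year1, "year2": year2}).dropna()
    print(scores["year1"].corr(scores["year2"]))
    ```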
  • Item
    Known-Groups Validity: 2017 FSSE Measurement Invariance
    (Faculty Survey of Student Engagement, 2018) Paulsen, Justin; BrckaLorenz, Allison
    A key assumption of any latent measure (any questionnaire trying to assess an unobservable construct) is that it functions equivalently across different groups. In psychometrics this is called measurement invariance: members of different groups understand and respond to the scales similarly, and items have the same relationship with the latent measure across groups (Embretson & Reise, 2000). Having ascertained this, data users can confidently assert that differences between groups are actual differences rather than artifacts of measurement error.
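    Measurement invariance is usually tested with multi-group confirmatory factor models; as a much simpler screen (a stand-in for illustration, not the authors’ procedure), one can compare corrected item-total correlations across groups, since similar values suggest the items relate to the scale similarly for each group:

    ```python
    import pandas as pd

    def item_total_by_group(df: pd.DataFrame, items: list, group: str) -> pd.DataFrame:
        """Corrected item-total correlations, computed within each group:
        each item is correlated with the sum of the remaining items."""
        out = {}
        for level, sub in df.groupby(group):
            out[level] = {
                item: sub[item].corr(sub[items].drop(columns=item).sum(axis=1))
                for item in items
            }
        return pd.DataFrame(out)  # rows = items, columns = groups

    # Hypothetical usage with an fsse_df containing a "faculty_rank" column:
    # print(item_total_by_group(fsse_df, ["item1", "item2", "item3"], "faculty_rank"))
    ```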
  • Item
    Relations to other Variables: 2018 Convergent & Divergent Correlations
    (Faculty Survey of Student Engagement, 2018) Paulsen, Justin; BrckaLorenz, Allison
    One of the primary sources of validity evidence established in The Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) comes via relations between the measure of interest and other established constructs. This is accomplished through convergent and divergent correlations. The idea behind this type of evidence is that an instrument should correlate positively with similar, established constructs and should not correlate with unrelated constructs. Ideally, convergent correlations would be positive but not extremely high, as correlations that are too high would suggest the instrument under review offers little beyond what the established instrument already measures. FSSE’s relationships to other variables have not previously been demonstrated. Thus, this study examines whether FSSE scales relate to other variables as expected.
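    A minimal sketch of this kind of check, assuming FSSE scale scores and the comparison measures sit in one DataFrame (all column names below are hypothetical):

    ```python
    import pandas as pd

    def convergent_divergent(df: pd.DataFrame, scale: str,
                             convergent: list, divergent: list) -> pd.Series:
        """Correlate one scale with related (convergent) and
        unrelated (divergent) measures."""
        return df[convergent + divergent].corrwith(df[scale])

    # Hypothetical usage: we would expect a moderate positive r with a similar
    # construct and a near-zero r with an unrelated one.
    # print(convergent_divergent(fsse_df, "effective_teaching",
    #                            convergent=["course_challenge"],
    #                            divergent=["campus_size"]))
    ```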
  • Item
    Response Process Validity
    (Faculty Survey of Student Engagement, 2018) Yuhas, Bridget; BrckaLorenz, Allison
    Response process validity is the extent to which the actions and thought processes of survey respondents demonstrate that they understand the construct in the same way it is defined by the researchers. In other words, it asks whether respondents understand survey questions to mean what we intend. There is no statistical test for this type of validity; rather, it is examined through respondent observation, interviews, and feedback. This document summarizes findings from a response process validity study of the Faculty Survey of Student Engagement (FSSE).
  • Item
    Social Desirability Bias
    (Faculty Survey of Student Engagement, 2017) Miller, Angela; Dumford, Amber
    The use of surveys in higher education for assessment and accreditation purposes is steadily increasing, and institutions must provide a variety of evidence on their effectiveness (Kuh & Ewell, 2010). While surveys are a relatively easy means of gathering a large amount of data, the use of self-reports sometimes leads to concerns about data quality. If certain items have the potential to prompt untruthful answers as respondents attempt to provide a socially appropriate response, researchers may want to examine whether social desirability bias is present in the data (DeVellis, 2003). This bias can affect interpretations of survey results, as well as the design of future data collection and analysis. Although encouraging student engagement is not what one might consider a “sensitive” topic, faculty may be aware that their institutions favor responses that display higher levels of engagement, and they may want to appear to be “good” employees. Therefore, the current study was developed to address the issue of social desirability bias in self-reported engagement behaviors at the faculty level.
  • Item
    Item Nonresponse Bias
    (Faculty Survey of Student Engagement, 2015) Chiang, Yi-Chen; BrckaLorenz, Allison
    The purpose of this study was to investigate the prevalence of item nonresponse bias among participants in the FSSE survey and its impact on estimates of the ten FSSE scale scores. Item nonresponse patterns were compared across faculty-level characteristics such as gender identity, racial or ethnic identification, citizenship, employment status, academic rank, and the number of undergraduate or graduate courses taught.
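    A minimal sketch of one such comparison, computing per-item nonresponse rates within each level of a faculty characteristic (the DataFrame and column names are hypothetical):

    ```python
    import pandas as pd

    def nonresponse_by_group(df: pd.DataFrame, items: list, group: str) -> pd.DataFrame:
        """Share of missing responses for each item within each level of `group`."""
        return df.groupby(group)[items].apply(lambda g: g.isna().mean())

    # Hypothetical usage: large between-group gaps in these rates would flag
    # items worth examining for nonresponse bias.
    # print(nonresponse_by_group(fsse_df, scale_items, "employment_status"))
    ```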