Is a HIP Always a HIP? The Case of Learning Communities

With the increasing adoption of learning communities, it is imperative to document their effectiveness. Using a large, longitudinal, multi-institutional dataset, we found that linked-learning communities positively impact students’ engagement and perceived gains. We also found that the estimated effect of learning communities varies widely across institutions on various measures. Some learning communities are very beneficial, while others have a negligible impact on students.

In this article, we seek to quantify how the effectiveness of learning communities varies across a host of outcomes. Some view the mere presence of a HIP as a sign of educational quality (Hatch, 2012); thus, additional research is needed to provide a more nuanced account of their effectiveness. We hope our findings contribute to the ongoing conversations about HIP quality and inspire practitioners to continuously improve their programming.

Previous Literature
Defining learning communities has been difficult for researchers because they take varied forms and designs. Generally speaking, learning communities are built on intentionally formed groups of students and faculty designed to maximize learning (Lenning & Ebbers, 1999). Lenning and Ebbers (1999) classified learning communities into four typologies along the following dimensions: (a) curricular, (b) classroom, (c) residential, and (d) student type. Their classifications draw attention to the scope of learning communities, which ranges from small groups of students learning together in a classroom to the more intensive community of students living and learning together in a residence hall.
The Washington Center, a national resource center for learning communities, attempts to delimit the wide range of practices designated as "learning communities." It identified three minimum standards for learning communities: (a) "a strategically-defined cohort of students taking courses together which have been identified through a review of institutional data," (b) "robust, collaborative partnerships between academic affairs and student affairs," and (c) "explicitly designed opportunities to practice integrative and interdisciplinary learning" (Washington Center, n.d., para. 4). Despite these standards, this definition is more of an ideal than an enforced norm (Cross, 1998; Kuh, 2008). Due to this variability, many learning communities do not follow the best practices established by the field.
Although a great deal of literature has highlighted the benefits of learning communities, some question the assumption that participation has uniformly positive impacts. Heaney and Fisher (2011) demonstrated that learning communities engender high social integration, but the relationship varies across student types. Their results indicated that students who struggled with social adaptation and homesickness did not experience the positive effects their peers did. Jaffee (2007) pointed out that "positive outcomes are a contingent rather than the automatic result" of learning communities (p. 66). Jaffee also found that learning communities can unintentionally foster conditions that delay academic development, precipitating regressive behavioral conformity, inciting role conflict, and encouraging groupthink, among other adverse outcomes.

Is a HIP Always a HIP?
Due to the mixed findings regarding learning communities, it is essential to better understand their efficacy (Finley & McNair, 2013). For example, a well-run learning community program might be highly effective, whereas a program hastily created to appease accreditors may have limited effectiveness or even be detrimental to students' learning and development. We suspect that the implementation of HIPs is not uniformly optimal, as programs frequently fail to follow basic best practices such as identifying learning outcomes (Finley & McNair, 2013; Washington Center, n.d.). In this study, we take a step toward improving the field's understanding of the impacts of learning communities by examining the estimated variability in their effectiveness on various dimensions of engagement across 83 institutions. Our findings help identify whether learning communities are a simple solution to many higher education issues or whether they require continuous assessment to improve student outcomes.

Student Engagement and Social Capital Theory
Student engagement and social capital theory guided the study. Both theories offer cogent explanations as to why learning communities improve outcomes. Student engagement theory argues that the quality and quantity of students' effort and institutional effort matter to student success. In contrast, social capital theory asserts that social networks, and the corresponding trust developed among individuals within them, demarcate opportunities (Halpern, 2005). Student affairs administrators construct learning communities to foster valuable social connections.
Engagement goes beyond Astin's (1984) concept of involvement, as it considers both student efforts and institutional policies and practices, like learning communities, that contribute to student success (Kuh, 2005). Institutional efforts include creating a climate, programs, and curriculum that encourage learning and development for all students and that are cross-culturally responsive (Quaye & Harper, 2015). Learning communities are programs designed by institutions to foster student engagement in and outside of the classroom, and they can be targeted and tailored to small groups of students. Learning communities enhance students' likelihood of success through the intentional time and effort expended by both students and institutions.
While a variety of definitions of social capital exist, Halpern (2005) argued the theory has three central tenets: (a) a network, (b) shared norms, values, and expectations within the network, and (c) sanctions enforced by the network, that is, positive or negative consequences for behavior that maintain the status quo. Social capital's potential comes from access to power and resources (e.g., financial, human, material) through the social group(s) to which an individual belongs. Learning communities build relationships between a student, their peers, and faculty. The networks formed through learning communities can increase information flows, allowing a student to more easily become aware of campus events and special educational opportunities and to access campus resources that promote learning and development. Additionally, social capital theory contends that connections such as those fostered in learning communities engender reciprocity and trust among individuals, facilitating students' perceptions that they are valued members of the campus community. As their name suggests, learning communities seek to build mini-communities within larger institutions and facilitate students' entry into the broader institutional culture.

Research Questions
While much research has correlated learning community participation with positive student outcomes, the existing research focuses on a single institution or averages the effects across institutions. Little is known about the variability across institutions in how learning communities influence student outcomes. We examined the influence of learning communities using a multi-institutional sample, focusing on the variability in programmatic effects across institutions. The research questions guiding this inquiry were the following:
1. What is the relationship between learning community participation and student engagement and self-perceived gains for bachelor's degree-seeking first-year students, controlling for other factors?
2. To what extent does the relationship between learning communities and student engagement and self-perceived gains vary between institutions, controlling for other factors?

Data
To answer these research questions, we utilized a sample of full-time, first-year students who responded to the 2013 administration of the Beginning College Survey of Student Engagement (BCSSE) and the 2014 administration of the National Survey of Student Engagement (NSSE). BCSSE is administered during orientation or before enrollment and examines first-year students' high school experiences and college expectations. In contrast, NSSE is administered to first-year and senior students in the winter of the academic year and focuses on students' college experiences. We excluded from our sample students who attended a special-focus institution (e.g., seminaries, conservatories, and engineering schools) due to these institutions' specialized nature. Given our focus on learning communities, we eliminated students who attended institutions where less than 5% of the first-year respondents indicated they participated in a learning community, to ensure a critical mass of participants. Additionally, we removed international students from our dataset, as many learning communities are designed for and targeted at domestic students; our analyses also utilized many variables focusing on respondents' high school experiences, which may not apply to international students.
After accounting for these exclusions, our sample included 9,986 students who attended 83 institutions. Seventeen percent of the sample participated in a learning community. Seven out of 10 students sampled were females. Seventy-five percent of the students identified as White, while Asians, African Americans, and Hispanics or Latinos each comprised 5% of the sample. The remaining students were multi-racial or classified as "Other." About a third of the students were first-generation. A quarter of the sample attended doctoral universities, 44% were enrolled in master's colleges and universities, while the remaining students attended baccalaureate colleges.
Our primary dependent variables were the 10 NSSE Engagement Indicators and a perceived gains index. The Engagement Indicators were introduced in conjunction with the updated NSSE in 2013 and are summary measures of student engagement. The 10 Engagement Indicators are Higher-Order Learning, Reflective & Integrative Learning, Quantitative Reasoning, Learning Strategies, Collaborative Learning, Discussions with Diverse Others, Student-Faculty Interaction, Effective Teaching Practices, Quality of Interactions, and Supportive Environment. The Engagement Indicators were chosen as our dependent variables to represent students' participation in effective educational practices, which prior research has associated with students' learning and development (McCormick et al., 2013). The Cronbach's α values of the Engagement Indicators range from .77 to .87, with all but one greater than .80. National Survey of Student Engagement's (n.d.) Psychometric Portfolio contains additional information on their validity. In addition to the Engagement Indicators, we also used a perceived gains index as a dependent variable. The index comprises 12 items inquiring about how students' college experience improved their knowledge, skills, and personal development. The Cronbach's α for Perceived Gains was 0.90. All of the dependent variables were standardized to a mean of 0 and a standard deviation (SD) of 1 to allow for the efficient estimation of effect sizes by learning community participation.
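The standardization and internal-consistency statistics described above are standard computations. The following is a minimal illustrative sketch using hypothetical data (not the NSSE items or the authors' code); the two helper functions are our own labels.

```python
import numpy as np

def standardize(x):
    """Rescale a variable to mean 0, SD 1, as done for each dependent variable."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=0)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

With standardized outcomes, a regression coefficient for learning community participation can be read directly as an effect size in SD units.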
Our key independent variable was learning community participation. Students who indicated they participated or were currently involved in a "learning community or some other formal program where groups of students take two or more classes together" were coded as participating in a learning community. We coded all other students as not having participated in a learning community.
The control variables included various data on students' characteristics, high school experiences, and expectations for college. Our data on students' sex, race, and standardized test score (SAT or ACT equivalent) were reported by their institutions, while all other data were self-reported by students on BCSSE and NSSE. We used many student characteristics to control for self-selection into a learning community. These characteristics included high school grades, parental education, distance from home to college, and the number of friends attending the same college. We selected these variables as proxies for students' knowledge of special opportunities, like learning communities, via their social networks, which could influence their decision to participate. Additionally, we included students' anticipated major, as learning communities are frequently themed, and interest in a field of study could be an eligibility requirement or a factor leading students to participate. We captured high school experiences through BCSSE scales on high school engagement and selected items related to the quality of high school engagement in effective educational practices: quantitative reasoning, learning strategies, time spent preparing for class, participating in co-curricular activities, relaxing and socializing, and the extent to which students' courses challenged them. Other BCSSE scales were selected because they act as pretests for our within-college dependent variables: collaborative learning, student-faculty interaction, discussions with diverse others, and importance of the campus environment (the previously mentioned quantitative reasoning and learning strategies scales also served as pretest measures). Variables on students' involvement in performing or visual arts programs, athletic teams, student government, publications, vocational clubs, and volunteering were also utilized.
These variables were selected to account for students' inclination to participate in social activities and be a prerequisite for participation in a themed learning community. Finally, data on the following BCSSE scales accounted for students' expectations for college: expected academic perseverance, expected academic difficulty, and perceived academic preparation. For results on the validity and reliability of the BCSSE scales, see Cole and Dong (n.d.). The codebook for the dataset is available from National Survey of Student Engagement (2014).

Our theoretical framework guided our selection of the control variables, which was particularly important due to the lack of previous research indicating the correlates of learning community participation. In addition to focusing on common demographic characteristics, we selected variables such as the number of friends also attending the same college to indicate social networks' power in distributing information on the possibility and value of participating in a learning community. Similarly, we included variables on high school experiences and major choice to represent students' interest in activities or topics that may form learning communities like community service or the sciences.
We utilized multiple imputation by chained equations (MICE) to impute missing data (Raghunathan et al., 2001; Rubin, 1987). Our primary rationale for imputing data was to have consistent sample sizes across the analyses described below. MICE uses a series of regression models to impute missing data, modeling each variable according to its distribution. We created a total of 20 imputed datasets to minimize the loss of statistical power while keeping the time to run the imputation and analytic models reasonable (Graham et al., 2007). Continuous variables were imputed using predictive mean matching. Binary, ordinal, and nominal variables were imputed using the appropriate form of logistic regression.
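The chained-equations idea can be sketched as follows. This is a deliberately simplified illustration using plain linear regression for every variable; the study's actual procedure used predictive mean matching for continuous variables and logistic models for categorical ones, and drew 20 stochastic imputations rather than one deterministic fill.

```python
import numpy as np

def mice_impute(X, n_iter=10):
    """Simplified chained-equations imputation for a numeric matrix X,
    with np.nan marking missing cells. Each variable with missing values
    is regressed on all other variables, cycling until convergence."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    # initialize missing cells with column means
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])  # intercept + predictors
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta  # refill from the fitted model
    return X
```

Repeating this with random draws around the fitted values (rather than the point predictions above) would yield the multiple imputed datasets the analysis requires.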

Analyses
We performed the following procedures for each of our 11 dependent variables. Our analyses utilized multilevel modeling due to the nested nature of our data and the technique's ability to describe how the relationship between a dependent and independent variable varies across institutions. We began by estimating a null model to calculate the intra-class correlation (ICC), the proportion of variance attributable to between-institution factors. We also used the ICCs to calculate the design effects, which compare the variability in a clustered dataset to that of a simple random sample of the same size; design effects greater than 2 indicate a need for multilevel modeling (Peugh, 2010). Next, we estimated a multilevel model that included an indicator for learning community participation, the control variables noted in the data section, an institution-specific random intercept, and an institution-specific random coefficient (also known as a random slope) for the learning community variable. Due to our focus on the influence of learning community participation on student engagement and its variability across institutions, the model's key parameters of interest were the fixed-effect estimate of learning community participation and its random effect, which describes its variability. For each dependent variable, we used the fixed-effect and random-effect estimates to calculate the estimated effect for a program at −2, −1, +1, and +2 standard deviations (SDs) from the fixed-effect estimate to highlight how learning communities differ across institutions. This range of estimates approximates the interval expected to contain 95% of institution-level effects.
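The three quantities described above, the ICC, the design effect, and the ±1/±2 SD range of institution-level effects, are simple functions of the variance components. The sketch below uses hypothetical numbers; fitting the full random-slope model itself would typically be delegated to a package (e.g., statsmodels' `smf.mixedlm(formula, df, groups="school", re_formula="~lc")`).

```python
def icc(var_between, var_within):
    """Intra-class correlation: share of total variance between institutions."""
    return var_between / (var_between + var_within)

def design_effect(icc_value, avg_cluster_size):
    """Design effect; values greater than 2 suggest multilevel modeling (Peugh, 2010)."""
    return 1 + (avg_cluster_size - 1) * icc_value

def school_effect_range(fixed_effect, random_slope_sd):
    """Estimated learning-community effects at -2, -1, +1, and +2 SDs of the
    random coefficient around the fixed-effect estimate."""
    return [fixed_effect + k * random_slope_sd for k in (-2, -1, 1, 2)]
```

For example, even a modest ICC of .04 with roughly 120 students per institution yields a design effect of about 5.8, well above the threshold of 2.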
As our analyses utilized multiple imputation, all results are the mean estimates from the 20 imputed datasets. Standard errors were adjusted to account for the uncertainty of the imputation, according to Rubin's (1987) rules for multiple imputation, and for the nesting of students within schools.
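Rubin's pooling rules combine the estimates from the imputed datasets and inflate the standard error for between-imputation uncertainty. The function below is a simplified scalar illustration with hypothetical inputs, not the authors' actual code.

```python
import numpy as np

def rubin_pool(estimates, std_errors):
    """Pool M imputed-dataset estimates via Rubin's (1987) rules.
    Returns the pooled point estimate and its pooled standard error."""
    q = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    m = len(q)
    q_bar = q.mean()                       # pooled point estimate
    within = (se ** 2).mean()              # average within-imputation variance
    between = q.var(ddof=1)                # between-imputation variance
    total = within + (1 + 1 / m) * between # total variance with finite-M correction
    return q_bar, np.sqrt(total)
```

When the imputed datasets agree exactly, the between-imputation term vanishes and the pooled standard error equals the average within-imputation standard error.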

Positionalities
We conducted our research at the Center for Postsecondary Research (CPR), which "investigates processes and practices that influence student success and institutional excellence in higher education and promotes those found to be effective" (Center for Postsecondary Research, n.d.). We believe in the CPR mission and embarked on this study believing in leveraging data to improve student outcomes. One author worked in admissions for two years and subsequently has focused on student success research as a graduate student and faculty member. A quantitative researcher by training, his work seeks to identify programs and policies that facilitate student success. The other author spent the early part of her career in student affairs, developing a living-learning program at a small, liberal arts institution in the Midwest. She believes that interventions such as learning communities have the potential to impact student outcomes positively. She is convinced that impacts often vary by identity characteristics due to sociocultural factors, including discrimination.

Limitations
Before presenting the results, we need to acknowledge the study's limitations. Our data are not a random sample of all postsecondary students in the United States; instead, they come from students who attended institutions that chose to participate in both BCSSE and NSSE. Our results may be subject to self-selection bias, particularly at the institution level. In particular, our sample contains more White students than the overall first-year population attending four-year colleges (75% vs. 57%; National Center for Education Statistics, n.d.). Due to a lack of data, our results neglect the variability of learning community programs within institutions. Institutions have different models for administering learning communities, and the effect of learning communities is most likely not constant across the different types. For example, a learning community that involves both curricular and residential components may have a different impact on students than one featuring only a curricular component, even at the same institution, making our variability estimates conservative. Additionally, our results focus on changes in various facets of student engagement and students' perceived gains in learning and holistic development. A learning community may focus on improving only some aspects of the student experience; the lack of an effect in a specific domain is not an indictment of the entire program.

Results
The ICCs from our null models were generally low (<0.05), indicating that most of the variance in the dependent variables was attributable to within-school differences. The design effects were all greater than 2.0, indicating a need for multilevel modeling (Peugh, 2010). Table 1 contains the fixed-effects estimates of learning community participation on student engagement and perceived gains. As the dependent variables were standardized with a mean of 0 and SD of 1, the estimates represent the expected effect-size change in the dependent variable associated with participating in a learning community, holding other factors constant. Overall, the estimates indicate that learning community participation is positively and significantly associated with student engagement across various domains and with students' self-perceived gains. The exception, Effective Teaching Practices, had a positive coefficient; however, the estimate narrowly missed the threshold for significance at p < 0.05.
The estimated coefficients for Higher-Order Learning, Quantitative Reasoning, Learning Strategies, Discussions with Diverse Others, Quality of Interactions, Supportive Environment, and Perceived Gains all ranged between 0.10 and 0.19 in SD units. The coefficients for Reflective & Integrative Learning and Collaborative Learning fell between 0.20 and 0.29 in SD units. The largest estimate, 0.33 SDs, was for Student-Faculty Interaction. Snijders and Bosker's level-one R² coefficient indicated that the amount of variance accounted for at the student level ranged between 6% and 19%.
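The Snijders and Bosker level-one R² reported here is the proportional reduction in total (within-school plus between-school) variance relative to the null model. A minimal sketch with hypothetical variance components:

```python
def snijders_bosker_r2_level1(var_e_null, var_u_null, var_e_model, var_u_model):
    """Snijders & Bosker level-one R^2: 1 minus the ratio of total residual
    variance (within + between) in the fitted model to that in the null model."""
    return 1 - (var_e_model + var_u_model) / (var_e_null + var_u_null)
```

With a standardized outcome, a null model with variances (0.96, 0.04) and a fitted model with (0.82, 0.03) would imply a level-one R² of 0.15, in line with the 6% to 19% range reported above.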

Next, we focused on how the relationship between learning community participation and our dependent variables varied across institutions. Figure 1 visually displays the variability in the estimated effects of learning community participation. The black boxes represent the estimated population-mean effect sizes plus or minus one standard deviation of the estimated random coefficient for learning community participation; we would expect the estimated effects of learning communities at two out of three schools to fall within the black boxes. The gray lines extend to plus or minus two standard deviations, and we would expect about 95% of the estimated effects to fall within this range.
We observed virtually no variability in the estimated effects between institutions for the following dependent variables: Learning Strategies, Collaborative Learning, Quality of Interactions, and Perceived Gains. The relationship between learning community participation and these variables appears to be highly consistent across institutions. In contrast, the range from the 5th percentile to the 95th percentile estimates was at least 0.33 SDs for Higher-Order Learning, Reflective and Integrative Learning, Student-Faculty Interaction, and Supportive Environment. The effect of learning communities appears to differ substantially across institutions for these variables. In between these extremes, the results for Quantitative Reasoning, Effective Teaching Practices, and Discussions with Diverse Others demonstrated some variability in the estimated relationships across institutions; however, the estimates' magnitude was not as substantial.
The variability in the estimated relationship between learning community participation and our dependent variables across institutions indicates that learning communities can negatively influence students' engagement and perceived gains. While the fixed-effects estimates suggest that learning community participation has a positive influence across various engagement domains, some programs appear to have a trivial impact on engagement. In particular, our results suggest that low-performing programs may have little to no influence (< 0.10 SDs) on Higher-Order Learning, Discussions with Diverse Others, and Supportive Environment. In contrast, high-performing learning community programs may have a substantial influence (> 0.30 SDs) on Higher-Order Learning, Reflective & Integrative Learning, Student-Faculty Interaction, and Supportive Environment.

Note (Table 1). Estimates expressed in effect-size units; R² values are Snijders and Bosker's (1994, 1999) level-one and level-two R² for multilevel models. Results are the average of 20 imputed datasets. Standard errors were adjusted to account for the uncertainty of the imputation. The models hold constant other characteristics.

Discussion
As a HIP, learning communities have the potential to serve as a pathway to student success. This article examined the efficacy of learning communities on students' engagement and perceived gains, focusing on the variability in the relationship between learning community participation and outcomes across institutions. Our results comport with previous research on the impacts of learning community participation on student engagement and self-perceived gains. On average, we estimated that learning communities have a significant and positive effect on multiple facets of student engagement and students' self-perceived gains (see Table 1). The lone exception occurred for Effective Teaching Practices, as learning community participation was positively but not significantly associated with this dependent variable.

Note (Figure 1). The black boxes represent the population average effect size ± 1 SD of the random coefficient. The gray bars represent the population average effect size ± 2 SDs of the random coefficient. Estimates are the average from 20 imputed datasets and hold constant other characteristics.

While we found significant differences in our student engagement measures by learning community participation, learning community participation does not appear to have an equivalent influence across the various engagement domains examined. Our results show that learning communities have the most substantial effects on Student-Faculty Interaction, Reflective and Integrative Learning, and Collaborative Learning. The results for the latter two dependent variables are less surprising, as learning communities are frequently designed to pair students and integrate coursework across a common theme, helping to integrate students into their institution's culture, norms, and expectations. The increase in Student-Faculty Interaction is more surprising given the lack of a relationship between learning community participation and Effective Teaching Practices. Following social capital theory, learning communities appear to alter students' relationships with professors outside of the classroom and help incorporate students into their institution's social networks. However, we did not observe a corresponding increase in instructors' Effective Teaching Practices inside the classroom. This finding is intriguing, as learning communities frequently are part of an intentional restructuring of the curriculum, which should influence the classroom experience. One possibility is that this result is attributable to program effect diffusion, as non-participants may also benefit from improved pedagogy inside the classroom if the curriculum is changed wholesale.
While learning communities appear to be useful across various domains, on average they do not appear to drastically alter the student experience. A study of effect sizes for the NSSE Engagement Indicators recommends classifying effect sizes smaller than 0.10 as trivial, 0.10 to 0.29 as small, 0.30 to 0.49 as medium, and 0.50 or above as large (Rocconi & Gonyea, 2018). Except for the Effective Teaching Practices (trivial) and Student-Faculty Interaction (medium) estimates, all of our effect-size estimates fell between 0.10 and 0.29 and were thus small.
While the above results reflect the population-average effect of learning communities, we found substantial variation in the estimated effects across institutions for more than half of the dependent variables. Our findings comport with other research demonstrating that learning communities vary in their effects, even learning communities at the same institution (Purdie & Rosser, 2011). In particular, the estimates for Higher-Order Learning, Reflective & Integrative Learning, Student-Faculty Interaction, and Supportive Environment ranged from trivial at institutions with the least impactful programs to medium or large at the most impactful institutions. The effects of learning communities on students' experiences appear to vary substantially across institutions; thus, the effectiveness of a program is linked to its nature and structure. Consequently, implementing a HIP alone should not be a marker of academic quality. Rather, program staff and administrators should assess their programs to document their effectiveness and quality.
As demonstrated in this article, implementing a learning community program will not automatically improve all facets of students' learning and development. The quality of institutional time and effort likely varies. As we mentioned previously, engagement theory maintains that learning communities enhance students' likelihood of success through the intentional time and effort expended by both students and institutions. Differences in outcomes may result from inconsistent quality of effort and time by institutional personnel or students. The estimates for Collaborative Learning, Perceived Gains, Quality of Interactions, and Learning Strategies were highly consistent across schools and non-trivial. We conclude that the primary feature of learning communities, a cohort of students attending the same courses together, appears to be a practical approach to promoting these types of student engagement and perceived gains.

Implications for Research
This study attempted to provide more concrete evidence for the effectiveness of learning communities. Our results comport with previous research on learning communities, validating others' findings that learning communities are a vital tool for improving student learning and success. We also discovered that not all learning communities are created equal. The natural follow-up question is which types of learning communities are most effective and why. Besides answering this vital question, research should also focus on how learning communities' effects vary across student populations (e.g., race and ethnicity, gender, socio-economic status).

Implications for Practice
Our study corroborates previous research that learning communities effectively encourage students to engage in activities that previous research has demonstrated to promote student learning and success (McCormick et al., 2013). Practitioners must consider the variance among learning communities and the relatively small effect sizes and have realistic expectations. In other words, there are better and worse learning communities, and even the best ones will not in and of themselves solve all student success concerns. This understanding should inform resource allocation decisions and encourage decision-makers to thoughtfully develop or sustain learning communities on their campuses (Finley & McNair, 2013). In particular, we echo Finley and McNair's (2013) recommendation that the administrators of learning communities should be more intentional in articulating the learning outcomes associated with a learning community and how learning community participation will help prepare students for their careers and civic engagement.
Targeted Assessments to Ensure Quality. In this vein, we recommend targeted and routine assessment of individual learning communities. Assessments should focus on both long-term outcomes and short-term effectiveness. Assessors should measure long-term metrics, including persistence, grades, and alumni giving. Short-term assessments of programming and generalized student engagement should occur in academic and social domains, directly connecting to goals particular to the individual learning community. We recommend learning community staff partner with institutional researchers to analyze and report on data routinely. We also encourage disaggregating data to assess how learning communities influence student subgroups. For example, a large-scale study identified that most of the benefits of residential learning community participation on persistence accrue to male students (Fosnacht et al., 2021). A better understanding of how a learning community impacts particular student groups will allow decision-makers to deploy their resources strategically. To ensure data leads to informed action, learning community directors and decision-makers should meet at least annually to review data and allocate resources accordingly.
We also recommend that learning communities frequently assess student demand, particularly for themed communities. A benefit of many learning communities' structure is that they can be adapted annually to suit student needs. Annual demand assessments will indicate whether programs should be restructured or rebranded. Administrators should also monitor new course offerings that could seed new communities. If student demand for a topic is sufficient to sustain a learning community, administrators should suggest that the appropriate department's faculty consider adding a new themed course to the curriculum. We view learning communities as living organisms that should be highly responsive to student needs and desires while simultaneously upholding their purpose of building community and supporting student success.

Is a HIP Always a HIP?
Participating in Cross-institutional Information Sharing. It is prudent for practitioners to draw on learning community scholarship to inform practice. For example, our research suggests that learning communities may not sufficiently address pedagogical issues: while learning communities can foster greater integrative learning and faculty-student interaction, ineffective programming and teaching practices can persist within them. Staff could use what they learn from the literature to improve practice; our findings might prompt staff to involve colleagues from teaching and learning centers to strengthen learning community teaching practices. Creating structures that incentivize staff to stay current with the literature will help make doing so a priority. For example, a staff member could periodically report on a relevant article or book at a staff meeting, and these reports could be compiled and reviewed annually alongside institution-specific data to inform practice.
Increased information sharing and networking across campuses could also address inconsistent quality among learning communities. We recommend resourcing and encouraging staff to participate in professional organizations and affinity groups such as ACUHO-I. Publishing in journals such as the Washington Center's Learning Communities Research and Practice (LCRP) and presenting at conferences will likewise promote knowledge sharing and expand professional networks.

Conclusion
With the increasing adoption of learning communities by postsecondary institutions seeking to improve student learning and development, it is imperative to document their effectiveness. This study found that learning communities positively impact students' engagement and perceived gains, confirming previous research; however, on average, this effect does not drastically alter the student experience. Additionally, we found that the estimated effect of learning communities varies widely across institutions on various measures. It appears that while some learning communities are exceptionally beneficial, others have a negligible impact on students. Institutions should not assume that merely having and maintaining a program termed a "learning community" is beneficial. Instead, careful attention must be given to incorporating best practices, assessing their benefits, and making the changes necessary for learning communities to have a genuinely "high impact" on student success.