Development and Evaluation of Scales for Measuring Self-Efficacy and Teaching Beliefs of Students Facilitating Peer-Supported Pedagogies

Two scales measuring teaching self-efficacy and teaching beliefs were developed from previous instruments for use with near-peer facilitators assisting with peer-supported pedagogies. Construct and face validity, measurement reliability, and factor structure were determined using a population of near-peer facilitators working in a peer-led team learning chemistry classroom at a large research-intensive postsecondary institution in the Southeastern United States. Results suggest that the scales produce valid and reliable data. Teaching self-efficacy and beliefs were found to increase between pre and post administrations with small to medium effect sizes. The scales can provide a means to evaluate peer-supported pedagogies and can serve as discussion points for faculty members training near-peer facilitators.

Student perceptions of their learning, as measured by Finn and Campisi (2015), have been shown to increase. Critical thinking skills, as measured by the California Critical Thinking Skills Test, have been shown to increase (Quitadamo, Brahler, & Crouch, 2009). A key critique is that "time on task," or time engaged with tasks, is greater for PLTL learning experiences, and thus student-level metrics are expected to increase. The last two themes from Wilson and Varma-Nelson (2016) consider how the PLTL experience affects peer leaders and how varying the PLTL experience can affect the process. When interviewed after participating in a PLTL course as near-peers, 92% of former peer leaders rated their peer leading experience positively, citing an increased appreciation of small-group learning, different learning styles, and the efforts made by teachers, as well as increased confidence in presenting and working as a team (Gafney & Varma-Nelson, 2007). Peer leaders who adopted a facilitator approach to their interactions with students were more likely to acknowledge, build upon, and elaborate on students' ideas, whereas a more instruction-based approach led to students working individually when not listening to the peer leader, being answer-focused, and participating unequally (Brown, Sawyer, Frey, Luesse, & Gealy, 2010). Integrating active collaboration was found to be a potentially crucial element: organic chemistry students who participated in cyber PLTL (a synchronous online version of PLTL) had significantly less success drawing the correct predicted product of a chemical reaction (Wilson & Varma-Nelson, 2018). Facilitating collaboration is necessary to catalyze social constructivist learning experiences.

Learning Assistants (LAs)
Learning assistants (LAs) are similar to peer leaders of PLTL in that their primary goal is to facilitate learning and reduce the student-to-instructor ratio (Otero, Pollock, McCray, & Finkelstein, 2006; Otero, Pollock, & Finkelstein, 2010). A key component of LA programs is the focus on pedagogical content knowledge (Shulman, 1986) as the underlying theoretical framework, with an emphasis on content, pedagogy, and practice (Otero, Pollock, & Finkelstein, 2010). Weekly planning sessions with the course instructor are used to review the content. Occasionally, LAs enroll in a teaching and learning course to gain a better understanding of learning processes and how to best facilitate learning (Otero, Pollock, McCray, & Finkelstein, 2006; Otero, Pollock, & Finkelstein, 2010). Learning assistants are incorporated into instruction in two ways: first, facilitating small group work activities similar to the PLTL pedagogical model; second, assisting with clicker questions, similar to Mazur's (1997) peer instruction pedagogical model, wherein the LAs serve as additional instructors during the peer instruction experience. Otero et al. (2006) have reported that fostering interest in the teaching profession (particularly, K-12 instruction) is a secondary goal of learning assistant programs. Unlike PLTL, with its origin in chemistry, the origin of LA programs is not attributed to one discipline; LA programs are now found in many disciplines: biology (Sellami, Shaked, Laski, Eagan, & Sanders, 2017); physics (Otero, Pollock, McCray, & Finkelstein, 2006); and chemistry (Jardine & Friedman, 2017).

Teaching and Learning Beliefs
An instructor's beliefs about teaching are related to the instructional practices implemented in their courses (Lotter, Harwood, & Bonner, 2007; Simmons et al., 1999; Gibbons, Villafañe, Stains, Murphy, & Raker, 2018). The implication is that instructors implement pedagogies deemed to be beneficial to learning. When instructors perceive that the best way of learning is through transmission of knowledge, more lecture-based pedagogies are reported by such instructors and observed in their classrooms. When instructors perceive that learning occurs best through construction of knowledge, more small-group-work-based pedagogies are reported and observed. These beliefs about learning have origins in how the instructor believes they learn best (Simmons et al., 1999). Thus, an instructor's experience as a student has a powerful influence on their views of teaching (Smith, 2005; Trigwell, Prosser, & Waterhouse, 1999; Kember & Kwan, 2000).
Unlike instructors who predominately have experienced more lecture-based pedagogies in their postsecondary and graduate education, near-peer facilitators have the unique experience of typically having participated as a student in active learning pedagogies prior to their participation in peer-supported instructional pedagogies. Self-selection to be a near-peer facilitator could be, in part, the result of a belief in the effectiveness of the pedagogy. We expect that near-peer facilitators will have some foundational belief in collaborative approaches to learning. Streitwieser and Light (2010) found, through qualitative interviews, that peer instructors implementing PLTL had strong student-centered beliefs about teaching; they also found that peer leaders had positive or no changes in teaching beliefs as a result of their peer leading experience. Johnson, Robbins, and Loui (2015) found through reflection journals that leaders learned to appreciate intellectual diversity among students and that the leaders expressed an increased interest in teaching. French and Russell (2002) found that as graduate teaching assistants gained experience implementing inquiry-based laboratory experiments, they conceptualized their role in learning more as a guide than a conveyor of information. This 'guide' role is a typical characterization of how peer instructors should perceive their role in instruction (Gosser et al., 1996; Hockings, DeAngelis, & Frey, 2008; Kampmeier, Varma-Nelson, & Wedegaertner, 2000).
[Authors] (accepted) found that peer leaders report different interactions with students based on how they perceive their role; for example, peer leaders viewing themselves as "mentors" reported engaging with students beyond the scope of the assignment, including providing broad study-skill advice and sharing their experience in the course, whereas peer leaders viewing themselves as "teachers" reported more transmission-of-knowledge interactions, including feeling the need to "give students the answers" when the learning activity was challenging.

Teaching Self-Efficacy
Self-efficacy refers to an individual's belief about their capability to achieve a specific task (Bandura, 1986). Lack of confidence in a task can lead to avoidance of the task. Typically, within STEM disciplines, we think about the confidence a student has in solving problems and answering questions, and how that confidence relates to their achievement on an assessment (e.g., Pajares, 1996; Ferrell & Barbera, 2015; Britner & Pajares, 2006; Cheung, 2015; Zeldin, Britner, & Pajares, 2008; Villafañe, Xu, & Raker, 2016). Teaching self-efficacy is confidence in one's ability to teach in specific ways, and how that confidence relates to how and what occurs in the classroom (cf. Gibbons, Villafañe, Stains, Murphy, & Raker, 2018).
While there is an absence of literature on the teaching self-efficacy of near-peer facilitators, investigations into the teaching self-efficacy of graduate teaching assistants provide insight into what to expect with near-peer facilitators. Bond-Robinson and Bernard Rodriques (2006) found that low confidence may preclude effective teaching by graduate teaching assistants. Reeves et al. (2018) analyzed pretest/posttest data from first-time biology and chemistry laboratory graduate teaching assistants using the Anxiety and Confidence in Teaching scale; they found statistically significant gains in graduate teaching assistants' teaching self-efficacy and pedagogical knowledge, with significant reductions in teaching anxiety. Research has shown that teaching self-efficacy impacts teacher behaviors, and by association student outcomes. A teacher's self-efficacy beliefs positively impact student learning and the actual success or failure of a teacher's behavior (Henson, 2002). Teachers with high teaching self-efficacy tend to perform better, have a greater desire to continue teaching, and their students have higher achievement metrics (Ashton & Webb, 1986; Tschannen-Moran, Hoy, & Hoy, 1998). Teaching self-efficacy typically develops early in a teacher's career and becomes relatively stable over time (Morris & Usher, 2011; Tschannen-Moran, Hoy, & Hoy, 1998). Morris and Usher (2011), studying twelve teaching-award-winning professors, found that early successful instructional experiences, which are a combination of mastery experiences (i.e., having a command of the course content) and positive feedback from students in the course and fellow instructors, are important for developing high teaching self-efficacy, and that the professors' teaching self-efficacy solidified within their first few years as faculty members.
These studies suggest that experiences in peer-supported instruction, and as a near-peer facilitator, may lead to more active learning experiences being incorporated into postsecondary educational settings as these postsecondary students begin to seek and commence careers in academia.

Research Purpose and Questions
The purpose of our study is to develop and evaluate an instrument to measure the teaching and learning beliefs and teaching self-efficacy of peer instructors. Our work is guided by two key questions: 1. Do the Teaching Belief Scale and Self-Efficacy Scale produce valid and reliable data? 2. What changes in teaching and learning beliefs and teaching self-efficacy occur as a result of participation as a peer instructor?

Research Setting
Data were collected at a large research-intensive university in the Southeastern United States between Fall 2017 and Spring 2019. PLTL is implemented in two variations at the research setting: First, PLTL is incorporated into weekly 50-minute recitation sessions for the first semester general chemistry course. Peer leaders facilitate up to six small groups of three to four students per recitation session, completing worksheets created by the course instructors; on average, 1,500 students are enrolled in the course each term, with peer leaders facilitating up to three recitation sessions per week. Second, PLTL is incorporated into half of the second semester general chemistry course lecture periods. In this variation, students in the course watch instructional videos prior to each peer-learning lecture period (i.e., a flipped-class approach). Peer leaders then facilitate up to four small groups of two to three students within the context of a large lecture hall, completing worksheets created by the course instructors; up to 24 peer leaders are simultaneously assisting in the lecture period. The course instructor is also present in the classroom, assisting with small group facilitation and interjecting classroom response questions (i.e., clickers) to formatively assess learning throughout the lecture period. On average, 500 students are enrolled in the course each term.
Peer leaders enrolled in a three-credit training course for both the first and second semester general chemistry courses. The training course was instructed by chemistry faculty members with experience implementing and evaluating PLTL. Within the training course, peer leaders discussed how to facilitate learning, potential problems and opportunities encountered in implementing PLTL, and experienced the small group learning activity from the perspective of a student.

Scale Development
Our teaching self-efficacy and beliefs scales evolved from the Teaching Assistant Professional Development (TAPD) survey reported by Wheeler, Maeng, Chiu, and Bell (2017); the TAPD survey originated from the College Teaching Self-Efficacy Scale (Navarro, 2005) and the STEM Graduate Teaching Assistant-Teaching Self-Efficacy Scale (DeChenne, Enochs, & Needham, 2012). The TAPD is composed of two scales: beliefs (8 items) and self-efficacy (13 items). The TAPD instrument was intended for use with graduate teaching assistants, and thus revisions and additions were necessary to focus the instrument for use with near-peer facilitators.
We first removed mentions of specific course structures (e.g., "Laboratory courses should be used primarily to reinforce a science idea that the students have already learned in lecture") to broaden the utility of the tool across multiple chemistry courses that may or may not have instructional laboratory components. TAPD items addressing two ideas were split into two items. Referents to "chemistry" were added to multiple items to focus respondents on the particular course. Eight beliefs items were added to the instrument to address constructivist underpinnings of peer-supported pedagogies. Nineteen self-efficacy items were added to the instrument to address the numerous tasks expected of near-peer facilitators as reported in the literature on PLTL and LA programs. A five-point confidence scale from "not at all confident" to "extremely confident" was adopted in congruence with the TAPD survey. A total of 14 beliefs items and 32 self-efficacy items were evaluated in our study. The resulting items were reviewed by four chemistry education researchers and two general chemistry instructors to establish face validity.

Participants
Peer leaders completed the instrument during the first week of the term before they led a peer leading session (pre), and again at the end of the term after their last peer leading session (post). Data were collected via Qualtrics over four academic terms (Fall 2017, Spring 2018, Fall 2018, and Spring 2019). Peer leaders received credit for completing the instrument amounting to 5% of their overall grade in the training course. The instrument was administered to 227 peer leaders, with 211 peer leaders (93%) completing all items at both administrations and 9 peer leaders completing just one administration; therefore, 431 individual response instances were collected. Peer leaders can serve for only one term at the research setting; therefore, participants had no prior experience serving in the role.

Data Analysis
Data were pooled and then split into an exploratory analysis set (n = 217 responses) and a confirmatory analysis set (n = 214 responses). These samples are sufficient for conducting the proposed analyses (Costello & Osborne, 2005). Principal components exploratory factor analyses (EFA) with varimax rotation, the Kaiser criterion, and Scree tests were conducted using SPSS 24.0 on each scale (i.e., beliefs and self-efficacy) to determine the internal structure. Confirmatory factor analysis (CFA) was conducted using Mplus 7.31 on each scale to verify internal structure. Comparative fit index (CFI) values greater than 0.90 and root mean square error of approximation (RMSEA) values less than 0.08 indicate good fit (Browne & Cudeck, 1993). RMSEA values can be unreliable, however, for models with small degrees of freedom (Kenny, Kaniskan, & McCoach, 2015). Internal consistency was measured using JASP (https://jasp-stats.org) to compute McDonald's omega values; an omega coefficient greater than 0.60 indicates acceptable consistency (Cortina, 1993). Because of the randomization process, it is possible that some individuals had both their pre and post responses recorded in either the EFA or CFA set.
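The retention and reliability criteria described above (eigenvalues greater than one for the Kaiser criterion; McDonald's omega above 0.60) can be sketched in a few lines of code. This is an illustrative sketch only; the loading and correlation values below are invented for demonstration and are not the study's estimates, which were computed in SPSS, Mplus, and JASP.

```python
import numpy as np

def kaiser_retained(corr):
    """Kaiser criterion: count components of a correlation matrix
    whose eigenvalues exceed 1."""
    return int(np.sum(np.linalg.eigvalsh(corr) > 1.0))

def mcdonalds_omega(loadings):
    """McDonald's omega for a one-factor model with standardized
    loadings: omega = (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    lam = np.asarray(loadings, dtype=float)
    common = lam.sum() ** 2          # variance explained by the factor
    error = np.sum(1.0 - lam ** 2)   # unique (error) variances
    return common / (common + error)

# Invented loadings for an 8-item, one-factor scale
loadings = [0.55, 0.60, 0.35, 0.52, 0.64, 0.66, 0.48, 0.38]
print(round(mcdonalds_omega(loadings), 2))  # -> 0.75
```

Under this formula, omega rises with both the size and the number of loadings, which is one reason the index is sensitive to how many items a scale retains.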

Teaching Beliefs Scale -Development
Exploratory factor analysis of the Teaching Beliefs Scale originally suggested between one- and five-factor solutions with support from the Kaiser criterion (eigenvalues greater than one). Inspection of the Scree plot indicated either a two-factor or a three-factor solution. Loadings from the three-factor solution did not yield an interpretable result, so the two-factor solution was examined, with one item (item 14; see Table 1) removed because it cross-loaded on both factors. Closer inspection of the two-factor solution revealed that one factor was a collection of items that would be considered non-supportive of social constructivism. To verify this, the five items (1, 2, 5, 8, 12) were reverse coded; the resultant EFA was again two-factor, with the non-supportive items grouping together. Because of the redundancy of two factors differing only in positive or negative valence, the five non-supportive items were removed. This left one factor with eight items in the Teaching Beliefs Scale (see Table 2). This parsimonious set of items resulted in a one-factor solution with support from the Kaiser criterion and Scree plot. All factor loadings were significant at p < .05.
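The reverse-coding check described above has a simple mechanical form: on a five-point Likert scale, a response x becomes 6 - x, so agreement with a non-supportive item becomes disagreement with its supportive reading. A minimal sketch (the helper name is ours, not from the study's analysis scripts):

```python
def reverse_code(response, scale_min=1, scale_max=5):
    """Reverse-code a Likert response: on a 5-point scale,
    1 <-> 5, 2 <-> 4, and 3 stays 3."""
    return scale_max + scale_min - response

print([reverse_code(r) for r in [1, 2, 3, 4, 5]])  # -> [5, 4, 3, 2, 1]
```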

Table 1. Teaching Beliefs Scale -First iteration and reasons for item removal.
Item (Reason removed)
1. Chemistry instruction should cover many topics superficially to maintain interest from the largest variety possible of students (NS)
2. Students learn chemistry best when grouped with students of similar abilities (NS)
3. Inadequacies in students' chemistry knowledge and skills can be overcome through effective teaching
4. Students should be provided with the reason for why the content they are learning is important
5. Personal studying is the best way to learn chemistry (NS)
6. Chemistry instruction should be aimed at helping students make connections between their science courses
7. Students learn chemistry best when grouped with students of differing abilities
8. Learning from peers is not helpful in chemistry because they do not have the same level of understanding as a professor (NS)
9. Small group work should be used to learn chemistry
10. Chemistry courses should provide opportunities for students to share their thinking and reasoning
11. Small group work should be used to reinforce concepts already learned in lecture
12. Chemistry instruction that makes connections to other science courses can lead to confusion (NS)
13. Chemistry instruction should focus on ideas at an in-depth level, even if that means covering fewer topics
14. Small group work should be used to learn new concepts (CL)
Note. Items are listed in the order in which they were presented to the respondent. "CL" denotes cross-loading. "NS" denotes a non-supportive item.
CFA on the confirmatory data set, using a WLSMV parameter estimator (required for ordinal and categorical data), supports the one-factor solution with 8 items: χ²(20) = 52.553, p = .0001, CFI = 0.908, RMSEA = 0.087 (see Figure 1). Item statistics and Spearman rho correlations for the Teaching Beliefs Scale are reported in Appendix 1.

Figure 1: Confirmatory Factor Analysis of Teaching Beliefs Scale.
McDonald's omega is 0.61 for the factor, indicating acceptable reliability for a low-stakes test measuring change in beliefs about teaching. While McDonald's omega is sensitive to the number of items, 8 items is reasonable to give appropriate results (Cortina, 1993; Murphy & Davidshofer, 2005). Items TB4 and TB8 have lower than normally accepted loadings (< .400); however, we believe that these items are integral to the overall theoretical construct. We agree with Bandalos and Finney (2019) that while variable elimination is an important part of the process for creating a model, researchers should be less cavalier with the elimination of variables because doing so changes the construct. Bandalos and Finney (2019) suggest retaining any questionable variable until further research can verify whether the problematic variable repeats upon replication of the study. These psychometric measures suggest that the scale produces valid and reliable data.

Self-Efficacy Scale -Development
Exploratory factor analysis of the initial 32-item self-efficacy scale (see Table 3) using the exploratory data set suggested a one-factor solution based on the Scree plot; the Kaiser criterion suggested up to four factors, but three of those factors had eigenvalues near one. As such, a one-factor solution is probable. In Table 3, items are listed in the order in which they were presented to the respondent; "DNL" denotes an item that does not load onto the factor, "NN" denotes non-normal, and "HC" denotes highly correlated (e.g., the item "Improve the critical thinking skills of my students" was flagged HC).
To obtain a more parsimonious self-efficacy scale, we engaged in multifaceted item reduction. First, examination of EFA factor loadings showed that one item ("Learn all my students' names") did not sufficiently load (< 0.300) on the factor. Second, one item ("Show my students respect through my actions") was extremely non-normal (kurtosis = 6.19). Lastly, Spearman correlations between scale items were evaluated to determine redundancy; values greater than 0.4 were examined, and 15 items were removed because they correlated with a large number of other items. An EFA was run on the resulting 15 items of the exploratory set; per the EFA criteria, a one-factor solution was best. Factor loadings are between 0.50 and 0.68 for all items of the self-efficacy scale (see Table 4).
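Two of the three filters (extreme non-normality and inter-item redundancy) can be illustrated with scipy on simulated responses; the loading filter is omitted here because it requires a fitted factor model. The data and random seed below are invented, though the kurtosis flag and the rho > 0.4 cutoff mirror the procedure in the text:

```python
import numpy as np
from scipy.stats import kurtosis, spearmanr

rng = np.random.default_rng(0)
# Simulated responses: 200 respondents x 5 Likert items (values 1-5)
X = rng.integers(1, 6, size=(200, 5)).astype(float)
# Make item 4 a near-duplicate of item 3 so the redundancy filter fires
X[:, 4] = np.clip(X[:, 3] + rng.integers(-1, 2, size=200), 1, 5)

# Filter 1: flag extremely non-normal items (large excess kurtosis)
peaked = [j for j in range(X.shape[1]) if kurtosis(X[:, j]) > 3.0]

# Filter 2: flag redundant pairs with Spearman rho above 0.4
rho, _ = spearmanr(X)
redundant = [(i, j) for i in range(5) for j in range(i + 1, 5)
             if rho[i, j] > 0.4]
print(peaked, redundant)
```

In this simulation only the constructed pair (items 3 and 4) exceeds the cutoff; on real data the flagged pairs would then be reviewed by hand, as the authors did, rather than removed automatically.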

Impact of Participation in Peer Leading
Spearman's rho correlations between the Teaching Beliefs Scale and the Self-Efficacy Scale by pre and post measures are reported in Table 5; only peer leaders who completed all pre and post items are included in this analysis (n = 211). These correlations suggest that the constructs are related; however, the constructs are independent (rho < .75) and are not autocorrelated between pre and post measures. Differences between pre and post measures were determined using Wilcoxon signed rank tests (see Table 6), a non-parametric analogue of the paired t-test. Significant pre/post differences were observed for both factors, with increasing self-efficacy and increasingly constructivist teaching beliefs; these differences have small to medium effect sizes, where r = z / √(n_pre + n_post) (Cohen, 1988; Pallant, 2007).
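The effect-size calculation can be sketched as follows: run the Wilcoxon signed rank test on paired pre/post scores, recover an approximate z from the two-sided p-value, and divide by the square root of the combined sample size. The paired scores below are invented for illustration and are not the study's data:

```python
import math
from scipy.stats import wilcoxon, norm

# Invented paired pre/post scale means for 10 respondents
pre  = [3.0, 3.2, 2.8, 3.5, 3.1, 2.9, 3.3, 3.0, 2.7, 3.4]
post = [3.4, 3.3, 3.1, 3.6, 3.5, 3.0, 3.6, 3.2, 3.1, 3.5]

stat, p = wilcoxon(pre, post)            # non-parametric paired comparison
z = abs(norm.isf(p / 2))                 # approximate z from two-sided p
r = z / math.sqrt(len(pre) + len(post))  # r = z / sqrt(n_pre + n_post)
print(round(p, 4), round(r, 2))
```

By Cohen's (1988) benchmarks, r near 0.1 is small, 0.3 medium, and 0.5 large.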

Discussion and Implications
Two scales, a Teaching Beliefs Scale and a Self-Efficacy Scale, were developed to measure the impact of peer-supported instruction experiences on near-peer facilitators in postsecondary chemistry courses. Exploratory factor analyses were conducted on half of the data set, followed by item-reduction procedures in order to obtain parsimonious measures. Confirmatory factor analyses were conducted on the remaining half of the data set. Suitable psychometric evidence for the validity and reliability of the data was obtained to justify initial use of the instrument.
The developed instrument serves two purposes: First, as used in this study, administration of the instrument in a pre/post manner can provide evaluative data on the combined impact of any professional development experiences (i.e., weekly peer leader training in our study) and experiences implementing peer-supported instruction (i.e., enacting PLTL experiences). Use of the scales at multiple settings should include additional reliability and validity investigations. Second, results of the two scales can inform trainers of peer leaders and learning assistants as to initial confidence levels and teaching beliefs prior to professional development experiences; thus, we suggest the scales be used as a formative assessment tool to measure the current state of the near-peer facilitators. Administration of the instrument followed by a whole-group discussion could serve to further prepare the near-peers for their learning facilitator roles. Because of the convenience and prevalence of online surveys, the complete instrument for each scale is presented within the paper, complete with the 5-point Likert scale.
We hope that these scales will be adopted by near-peer programs across the globe. Our tool was developed for chemistry programs, which limits its transferability, as near-peer programs exist in a variety of disciplines (Wilson & Varma-Nelson, 2016). Previous instruments such as the Achievement Emotions Questionnaire (AEQ; Pekrun, Goetz, Frenzel, Barchfeld, & Perry, 2011) have been taken from a general context and converted into chemistry-specific versions (AEQ-OCHEM; Raker, Gibbons, & Cruz-Ramírez de Arellano, 2019); we hope that future researchers will do the reverse, creating discipline-specific variations of our scales so that their impact can be broader.
Examining the long-term effects of peer leading, Gafney and Varma-Nelson (2007) found related results: 32% (n = 38) of surveyed former peer leaders described a new appreciation for differences among people, particularly in how they learn or understand new material. In the same study, 28% (n = 33) reported increased confidence, comfort, or patience in working with people, particularly in teaching-learning situations, which relates well to our finding of increased self-efficacy (Gafney & Varma-Nelson, 2007). Providing students with activities that give them opportunities for growth is vital. In a study comprising 875 students from 10 institutions, Cress, Astin, Zimmerman-Oster, and Burkhardt showed that when students are involved in leadership activities, they "showed growth in civic responsibility, leadership skills, multicultural awareness, understanding of leadership theories and personal and societal values." While that study did not examine near-peer facilitating specifically, we believe that the principles learned during near-peer facilitating support these leadership values and will continue to play a role in the betterment of near-peer facilitators.
Positive impacts of the peer instruction experience on self-efficacy mirror those found with graduate teaching assistants (Burton, Bamberry, & Harris-Boundy, 2005; Prieto & Altmaier, 1994; Prieto, Yamokoski, & Meyers, 2007; Tollerud, 1990). The effect sizes of our pre/post teaching beliefs differences are much lower, potentially confirming that teaching beliefs are malleable but may be resistant to change; such a conclusion is supported by studies on the teaching beliefs of postsecondary instructors (Morris & Usher, 2009; DeChenne, Enochs, & Needham, 2012; Simmons et al., 1999). Given the importance of learning experiences both as a student and as a facilitator of learning on future choices to enact instructional practices (Sunal et al., 2001), the data from our developed scales show promise for a long-term, broader impact on instruction should our participants choose to pursue a career in education.
Teaching beliefs and self-efficacy, by proxy through how these constructs are related to the use of more effective pedagogies, are associated with increased course performance (Ashton & Webb, 1986; Tschannen-Moran, Hoy, & Hoy, 1998). While such an investigation is beyond the scope of the study we report herein, our scales could be used in further work to identify the association between peer instructors' espoused beliefs and self-efficacy and the performance of the students whose learning the peer instructors assist in facilitating. Analogous studies have been conducted considering the beliefs and efficacy of graduate teaching assistants (e.g., Prieto & Altmaier, 1994; Prieto, Yamokoski, & Meyers, 2007; DeChenne, Enochs, & Needham, 2012; Wheeler, Maeng, Chiu, & Bell, 2017).

Conclusions
Two scales were created to measure the teaching self-efficacy and beliefs of near-peer facilitators assisting with peer-supported pedagogies. The scales were adapted from previous instruments that addressed teaching assistants and general teaching; however, the unique context of near-peer facilitators warranted more specific scales. Construct and face validity, measurement reliability, and factor structure were determined and show that the scales produce reliable data, although we recommend that additional research be conducted in order to extend the scope and validity of our work. Teaching self-efficacy and beliefs were found to increase among near-peer facilitators between pre and post administrations with small to medium effect sizes. These newly developed scales can provide a means for faculty training near-peer facilitators to efficiently evaluate their students and programs and can serve as discussion points for improving their programs.

Limitations
Three key limitations should be noted for our study: First, the development of instruments that produce valid and reliable data necessitates a sufficient number of respondents in order to conduct thorough psychometric evaluations. Four iterations of data collection were necessary at our research setting in order to collect a sufficient number of respondents, even with the large number of peer leaders facilitating general chemistry courses each term; we expect that smaller institutions and smaller courses would require even more data collection iterations. Despite our sufficient sample size, we acknowledge that more data are needed to further confirm our results and establish stronger evidence for the reliability and validity of data generated by our instrument.
Second, while our instrument is designed for near-peer facilitators, our instrument development and psychometric evaluations were conducted with a specific type of near-peer facilitator: peer leaders in a peer-led team learning pedagogical environment. Given the parallel roles of peer leaders and learning assistants, we do not anticipate that the instrument will function differently; however, we recommend thorough psychometric evaluations when using the tool in any new setting, and strongly recommend them when using the tool with learning assistants.
Third, Likert-scale self-report is one form of data from which to gather teaching beliefs and self-efficacy data. Interview data, reflection essays, and even observation data can provide additional insights into the experiences of near-peer facilitators; such methods have proven valuable in studies of teachers and graduate teaching assistants. These additional data sources would provide a more holistic understanding, including triangulation of assertions. While data collected from all methods synthesized in a single study may be impractical (and a burden on participants to provide such copious data), studies parallel to those of teachers and graduate teaching assistants would further illuminate the dimensionality of teaching beliefs and self-efficacy of near-peer facilitators.

Appendix 1. Spearman rho correlations for the Teaching Beliefs Scale items.
       TB2    TB3    TB4    TB5    TB6    TB7    TB8    TB9
TB2   1.00   .30**  .10*   .17**  .28**  .12*   .14**
TB3          1.00   .15**  .24**  .39**  .20**  .22**
TB4                 1.00   .22**  .20**  .22**  .10*
TB5                        1.00   .43**  .42**  .14**
TB6                               1.00   .45**  .14**
TB7                                      1.00   .16**
TB8                                             1.00
TB9                                                    1.00
Note. * p < .05; ** p < .01.