To Chat or Not To Chat: Text-Based Interruptions From Peers Improve Learner Confidence in an Online Lecture Environment

Journal of the Scholarship of Teaching and Learning, Vol. 23, No. 2, June 2023. josotl.indiana.edu

Abstract: Technology-driven interactions are becoming commonplace, particularly as online classes, telecommuting, and virtual meetings across distances and time zones have all increased in popularity. Platforms such as Google Meet, Skype, Webex, and Zoom use synchronous audio-visual communication supported by text-based chat, emoticon responses, and other supplementary functions. Given this uptick in the use of video conferencing with dynamic integrated features, it is important to understand how attention and cognitive resources may be taxed in these environments, and what that may ultimately do to participants' ability to comprehend the target content. In the current study, we investigated the impact of topically-relevant, student-initiated text chat frequency on comprehension during an online lecture. The findings revealed that chat involvement alone did not affect learning itself; when controlling for other outside distractions, chat activity was not a distraction but rather a facilitator of increased confidence in learning in an online lecture environment. Overall, the findings suggest that relevant chat content is not distracting and can be helpful in reinforcing concepts through supportive examples in adjacent modalities.

Given this uptick in the use of video conferencing with these enriched features, it is important to understand how attention and cognitive resources may be taxed and what that may ultimately do to participants' ability to comprehend the target content. Specifically, the impact of text-based chat on comprehension of online video lecture content is of interest in the present study.
Effective learning draws heavily on an array of cognitive processes, including memory and attention. Research in cognitive psychology makes plain the necessary conditions and resources for effective retention of learned information (Bjork & Bjork, 2011; Dunlosky et al., 2013; Putnam et al., 2016). In memory, information is encoded, or acquired and represented in the mind, stored for a period of time, and then retrieved in order to be used (McDermott & Roediger, 2014; Melton, 1963). In order for effective learning to occur, a student must engage in successful encoding of the material as well as successful retrieval (Bjork & Bjork, 2011). Meaningful encoding can be accomplished through connecting newly learned information to already-learned information, and by developing strong, detailed representations of a new construct (Bjork & Bjork, 2011; Dunlosky et al., 2013). Once a learner has invested some time and effort into developing an adequate representation of the newly learned information, the new information can be held in storage indefinitely (McDermott & Roediger, 2014). When it is time for the information to be used, it must be pulled out of memory via a retrieval process (Bjork & Bjork, 2011; Melton, 1963). Retrieval is successful when the targeted information is called back and can be reported, used, or shared. Effective retrieval can be difficult to accomplish, especially when the process is unpracticed (Bjork & Bjork, 2011; Dunlosky et al., 2013). Many strategic interventions to improve learning in college students focus on retrieval practice and the development of effective retrieval strategies (Dunlosky et al., 2013; Putnam et al., 2016). In short, the memory processes that drive successful learning are effortful and require both time and cognitive resources.
Because of this high demand for cognitive resources during the learning process, it is no secret that multitasking affects comprehension and learning outcomes. Early groundwork in both cognitive load theory (Chandler & Sweller, 1991; Sweller, 1988) and working memory theory (Baddeley, 1998) argues that engaging in multiple tasks can demand more resources than are available, leading to deficits in processing, performance, and learning. In addition to the cognitive demands of the task at hand, distractions in a learner's environment further contribute to the mental workload required to engage and learn. Even simple background distraction promotes sharing of attentional resources, which contributes to declines in performance. For example, participants completing a digit span task performed better when working in silence than when distracting auditory stimuli, such as instrumental or vocal music, were played in the background (Alley & Greene, 2008). The effects of distraction are especially pronounced when individuals have to effortfully attend to more than one thing at a time. In one study of reading comprehension, participants were asked to study a passage either in silence or while an informative video played in the background (Lee et al., 2012). Participants who read in silence performed better on a comprehension test than participants who read with a background video playing.
In addition to everyday environmental distractions, the modern classroom is rich with distraction opportunities as well. The increased availability of technology in learning spaces presents a tremendous opportunity for attentional disruption to take hold (Ravizza et al., 2017). In a study of introductory psychology students, Ravizza and colleagues (2017) showed that students engage in a wide variety of technology-based distractions during class, including web browsing, engaging on social media platforms, streaming television or sporting events, and playing games. Multiple studies have demonstrated that these multitasking behaviors during class can impact a student's learning and comprehension outcomes (Sana et al., 2013), and ultimately affect course grades (Ravizza et al., 2017). In addition to demonstrating the negative effects of in-class laptop multitasking on lecture comprehension, Sana and colleagues (2013) showed that laptop multitasking was detrimental to the learning outcomes of students who were not multitasking themselves, but who could view the screens of in-class multitaskers. That is, even when a student attempts to focus on the lecture material, their attention can be disrupted by the behavior of a nearby multitasking classmate. In short, increased access to technology has lifted the cap on distraction opportunities in the traditional classroom.
Online learning environments are a unique context for distraction research, as students may engage in synchronous online work in the face of a multitude of environmental distractions. Not surprisingly, a wide variety of run-of-the-mill distractions that affect in-person learning can affect comprehension of lecture content in online learning environments as well (Blasiman et al., 2018). In one study, participants watched an online lecture while simultaneously engaging in one of a number of tasks, ranging from motor tasks (e.g. folding laundry, playing a video game) to passive communication tasks (e.g. playing either a low- or high-arousal video in the background) to active communication tasks (e.g. taking a phone call or texting). Having a conversation was most detrimental to measures of comprehension, but all six forms of distraction resulted in significant performance declines when compared to participants in a no-distraction control condition. Blasiman and colleagues' (2018) findings strongly suggest that any form of distraction, regardless of modality, arousal, or degree of demand, significantly affects student success in online learning. These findings build on those of Zeamer and Fox Tree (2013), which indicate that concurrent speech (e.g. overhearing a nearby conversation) can impair learning and comprehension for short recorded lecture sessions.
Much of the research regarding the cost of managing two competing streams of language (i.e. written text messages and auditory lecture material) has been motivated by the ubiquity of cell phones, and is particularly relevant now given the introduction of chat functions in online learning environments. Texting while listening to a lecture reliably impairs comprehension, regardless of the texter's level of proficiency, experience, or confidence. In multiple studies, participants have been assigned to text messaging or non-text messaging conditions while being asked to attend to a lecture (Barks et al., 2011; Dietz & Henrich, 2014; Gingerich & Lineweaver, 2013). Consistently, participants assigned to engage in text messaging during the lecture performed significantly worse than participants who did not text (see Chen & Yan, 2016 for review). Interestingly, proficient texters performed worse on lecture comprehension assessments than text messaging novices, suggesting that texting proficiency contributed to more frequent switching between the two tasks (Barks et al., 2011). Participants' awareness of distraction (Dietz & Henrich, 2014) did not moderate the relationship between texting and test performance, but participants who did not engage in text messaging felt more confident in their performance on the comprehension task (Gingerich & Lineweaver, 2013). In short, engaging in text message distractions has been shown to contribute to poorer learning outcomes when the text information was irrelevant to the target information being presented.
The specific cost of competition for the same attentional resources across different modalities is particularly relevant to understanding the impact of concurrently processing audio-based lecture content and text-based chat, but the mechanism for this performance cost is a source of extensive debate. When two simultaneous tasks demand the same cognitive resources (e.g. perceptual discrimination, tactile/spatial manipulation, language), interference between the two tasks may lead to a cost in overall performance (Bourke et al., 1996; Chandler & Sweller, 1991; Sweller, 1988). Salame and Baddeley (1989) argue that simultaneous reading and processing of auditory information compete for shared phonological processing resources, resulting in decreased performance. Pashler (1990) suggests that when two tasks require the same mechanisms or resources, queuing takes place prior to execution, whereby one task takes priority and the other waits. Cognitive Theory of Multimedia Learning (CTML; Mayer, 2005) suggests that the working memory resources required for both auditory processing of spoken language and visual processing of written language are the same, as both require the organization of words to contribute to the verbal model. Competition for these shared verbal resources in a multimedia learning environment could therefore inhibit comprehension and performance. Regardless of the attentional mechanism, it is clear that sharing language resources across auditory and visual processing of language tasks results in poorer comprehension (Lee et al., 2012; Zeamer & Fox Tree, 2013).
In each of these studies, text-based information was irrelevant to the target lecture information. In addition to the required attentional shift from texting to lecture-watching, the contents of the message were also distracting. However, research in multimedia learning indicates that topically relevant text information may not necessarily be subject to the same rules and principles. Mayer and Moreno (2003) describe several mechanisms through which cognitive load is increased in multimedia learning. When both essential and incidental processing are required simultaneously, as is the case when students are viewing a lecture and managing text-based information (e.g. chat activity), the essential processing (e.g. attending to the lecture) is encroached upon by incidental processing (i.e. following the conversation in the chat; Mayer & Moreno, 2003). For example, Wecker (2012) shows that students who are asked to listen to a lecture while viewing a set of accompanying text-heavy presentation slides tend to suppress auditory processing of the oral lecture information in service of reading the slide text. To avoid significant impairments in comprehension, a weeding strategy is recommended, such that the extraneous information is eliminated to better allow students to focus on the essential processing task (Mayer & Moreno, 2003). This weeding strategy may give rise to a coherence effect, where participants better comprehend multimedia information when "interesting but extraneous material" is eliminated (Mayer & Moreno, 2003). As confirmed by Wecker's (2012) test of concise, reduced-text slides, students retain more information during lectures when supplemental material is not simultaneously provided.
It is possible that integrating relevant text-based information alongside targeted lecture information may improve, or at least not hinder, comprehension and retention. Some studies indicate that leveraging the interactive or dynamic nature of multimedia interfaces can be advantageous and is preferred by learners. Xie (2018) showed that learners retained information better when both visual and auditory cues were coordinated during a lecture, compared to visual-only cues or no cues. In contrast with Mayer and Moreno (2003), Xie (2018) showed that coordinated presentation of essential and extraneous information improved test performance compared to presentation of essential information alone, suggesting that co-presentation of relevant information can help students solidify their own comprehension. In addition to improvements in comprehension, multimodal presentation also can change the frequency of important learning-related behaviors. Lee and colleagues (2013) showed that students who used an online home learning system preferred using both text and video to communicate, rather than a single modality. Students in these multimodal environments asked questions more frequently than they did in traditional in-person school settings. In addition to these changes in behavior, students also reported beliefs that their comprehension was better in a multimodal environment than in a non-video conferencing platform (Lee et al., 2013). These studies indicate that incorporating relevant text-based information alongside lecture information can improve beliefs about comprehension and personal judgments of learning (JOLs).
In these new remote learning environments, interfaces such as Zoom are equipped with built-in chat functions that are fully integrated with the lecture environment. Instead of spreading attentional demands across multiple devices, Zoom chat windows allow for the secondary extraneous information to be presented in the same general visual field that the essential information occupies, reducing the visual task-switching cost (LaBerge & Brown, 1986). There is limited research on the impact of text chat-integrated environments for learning, but some preliminary work indicates that using the text chat function may be helpful in some aspects of secondary language learning (Kozar, 2016). Text chat can be used to introduce new terms, provide additional information, deliver feedback and corrections, or maintain an agenda for the session (Kozar, 2016). When used in alignment with the contents of the lecture or learning task, text-based chat may facilitate, or at least not harm, comprehension of the target material.
Beyond actual performance outcomes, students' perceptions of their own learning are easily influenced by distraction (Alley & Greene, 2008; Barnes & Dougherty, 2007; Blasiman et al., 2018; Gingerich & Lineweaver, 2014). These perceptions are commonly measured using JOLs, or self-reported judgments of learning (Dunlosky et al., 2005; Koriat, 1997). Making accurate JOLs requires students to be aware of the potential threats to their own learning, and to account for the impact these factors may have on learning outcomes. This metacognitive task often results in overconfident judgments, whereby students provide JOL estimates that exceed their actual performance on an assessment (Koriat & Bjork, 2005). Surprisingly, Blasiman and colleagues (2018) have shown that moderate levels of distraction during a learning task can improve JOL accuracy, suggesting that students' awareness of the impact of certain distractions may contribute to more accurate metacognitive judgments. These JOLs can contribute to students' decisions about how to manage distractions in their own learning environments in the future. In the current study, we investigate the impact of both imposed distractions (i.e. incidental information via text-based chat) and students' awareness of existing distractions (e.g. other environmental noise) on JOL accuracy for essential information.
With the growing integration of technology in classrooms (and everyday life), the prevalence of distractions and the pressure to manage them in real time have grown as well (Calderwood et al., 2014; Jacobsen & Forste, 2011; Lee et al., 2012). Depending on the lens through which the current virtual learning context is viewed, it is possible that text-based chat interjections could be a harmful distraction (Blasiman et al., 2018; Wecker, 2012) or a helpful supplement (Kozar, 2016; Lee et al., 2013; Xie, 2018). The competition for resources, required task switching, and increase in cognitive load may compromise comprehension and undermine learner confidence. Alternatively, the added stream of relevant information may serve to facilitate comprehension and bolster confidence. In the current study, we investigate the impact of topically-relevant, student-initiated text chat frequency on comprehension and confidence during an online lecture. The current study evaluates the impact of increasing extraneous information on comprehension of essential information in a multimedia learning environment while accounting for the impact of existing environmental distractions. Participants engaged in a brief online lecture session and were exposed to varying amounts of topically-relevant text chat provided by actors presenting as fellow participants. Participants provided their JOLs (Blasiman et al., 2018) prior to completing a comprehension assessment comprised of a variety of free recall, aided recall, and recognition questions (Srivastava, 2013). Participants also completed a working memory task (Kirchner, 1958; Stoet, 2010; Stoet, 2017) and a digital learning self-report assessment.

Hypotheses
In the current study, we evaluate three hypotheses regarding the impact of topic-relevant chat interruption frequency and environmental distractors on test performance, JOLs, and JOL accuracy.
H1: Participants' comprehension of lecture content will increase with exposure to topically-relevant chat activity, and this positive relationship will be impacted by both working memory capacity and awareness of other environmental distractors as covariates (Alley & Greene, 2008; Baddeley, 1998).
H2: Participants' confidence in their own learning, as reported through JOLs, will increase with exposure to topically-relevant chat activity. This relationship will be affected by the impact of both frequency and awareness of other environmental distractors as covariates.
H3: The accuracy of participants' JOLs will be negatively associated with the degree of exposure to distracting chat information, as shown in previous work by Blasiman and colleagues (2018). As chat frequency increases, we expect that participants' JOL accuracy will decrease.

Method

Design
This study utilized a three-group between-subjects design; the independent variable was chat condition, with three levels (No Chat, Moderate Chat, and Heavy Chat). The primary dependent variables were comprehension performance and JOLs. Additionally, individual differences variables (e.g. working memory capacity, distraction estimates, and the digital learning and experiences assessment) were used in the analysis.

Participants
Participants were 89 undergraduate students (77 female, 10 male, 2 non-binary) enrolled in introductory psychology courses at a small liberal arts college in the United States. The average age of participants was 18.23 years (SD = 0.61). The majority of participants (89.89%) were first-year students. The sample was composed mostly of white students (91.0%). Participants received one research participation credit for their voluntary involvement in the study, which took place outside of their psychology course.

Power Analysis and Sample Size Justification
An a priori power analysis was performed for sample size estimation using G*Power 3.1 software. The effect size for this analysis (ηp² = 0.15) was selected as a conservative estimate based on Srivastava's (2013) report (ηp² = 0.2). Using α = .05 and power set at .95, the estimated sample size required was approximately n = 93. Given the challenges inherent in recruiting participants for a study conducted via entirely synchronous remote sessions in the midst of a global pandemic, the current study sample was just shy of this targeted sample size. Ultimately, the achieved sample size of n = 89 was deemed sufficient for the objectives of the study.
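The power analysis above can be reproduced approximately without G*Power. The sketch below uses scipy's noncentral F distribution to find the smallest total n reaching .95 power for a three-group one-way ANOVA with Cohen's f derived from ηp² = 0.15; the exact total may differ slightly from G*Power's output due to rounding conventions.

```python
from math import sqrt
from scipy import stats

eta2p = 0.15
f = sqrt(eta2p / (1 - eta2p))  # Cohen's f, approx 0.42
alpha, target_power, k_groups = 0.05, 0.95, 3

def anova_power(n_total, f, k, alpha):
    """Power of the omnibus one-way ANOVA F test at total sample size n_total."""
    df1, df2 = k - 1, n_total - k
    crit = stats.f.ppf(1 - alpha, df1, df2)          # critical F under H0
    return stats.ncf.sf(crit, df1, df2, f**2 * n_total)  # noncentrality = f^2 * N

# Search upward for the smallest N that reaches the target power
n_total = k_groups + 2
while anova_power(n_total, f, k_groups, alpha) < target_power:
    n_total += 1
```

The result lands in the low nineties, consistent with the reported estimate of approximately n = 93.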

Materials
The synchronous data collection sessions were conducted via Zoom. All self-report questionnaires and assessments were delivered via Qualtrics survey software. To clarify the order in which materials were presented for both the non-counterbalanced and counterbalanced conditions, a task diagram is presented in Figure 1.

Lecture
The lecture delivered by the researcher was a brief but detailed overview of the fundamentals of language from a cognitive psychology perspective. The lecture consisted of 1,644 words and took approximately 8 minutes to deliver in its entirety at a typical conversational pace. To simulate an authentic synchronous online learning environment, this lecture was delivered live during each data collection session by the same speaker. The lecture script is presented in Appendix 1.

Chat Contents
Two researchers acting as student participants engaged in scripted conversation in the public-facing chat area during the lecture. In the No Chat condition, the actors did not enter any information into the chat. In the Moderate Chat condition, the actors engaged in 4 paired chat instances at spaced intervals throughout the lecture. In the Heavy Chat condition, the actors engaged in 8 paired chat instances throughout the lecture. These chat interactions were scripted and tied directly to the content of the lecture. Chat instances, timing, and content are provided in Appendix 2.

Working Memory Task
The n-back task (Kirchner, 1958) measured an individual's working memory capacity by presenting single-digit stimuli sequentially and asking the individual to report whether the current stimulus was the same stimulus presented "n" trials earlier. In this study, we implemented a 2-back version of this task. Participants were provided with one block of 25 practice trials before completing 2 blocks of 25 test trials each. This version of the n-back task was conducted using PsyToolKit (Stoet, 2010; Stoet, 2017). Accuracy and response time measures across all 50 test trials were collected for each participant, and both speed and accuracy measures of working memory capacity were computed. Mean accuracy was determined by computing the mean number of correct responses in each test block. Average speed was determined by computing the mean response time in milliseconds for each correct response across both test blocks.
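The scoring just described can be sketched as a small function. The dictionary-based trial format below is a hypothetical representation for illustration, not the actual PsyToolKit export format:

```python
from statistics import mean

def score_nback(trials):
    """Score n-back test trials.

    trials: list of dicts with keys 'block' (test block number),
    'correct' (bool), and 'rt' (response time in ms).
    Returns (mean accuracy per block, mean RT over correct trials).
    """
    # Mean accuracy within each test block
    block_acc = {}
    for b in sorted({t["block"] for t in trials}):
        block = [t for t in trials if t["block"] == b]
        block_acc[b] = mean(1 if t["correct"] else 0 for t in block)

    # Mean response time over correct responses across all blocks
    correct_rts = [t["rt"] for t in trials if t["correct"]]
    mean_rt = mean(correct_rts) if correct_rts else None
    return block_acc, mean_rt
```

A participant's record would thus reduce to one accuracy value per block plus a single correct-response latency average, matching the measures described above.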

Judgments of Learning
The Judgments of Learning Questionnaire, modeled after the questions used by Blasiman et al. (2018), examined how confident individuals felt that they understood the information presented in the Zoom lecture, using a 10-point Likert scale ranging from low to high confidence (Appendix 3). The questionnaire was modified to be specific to the current study. Participants provided individual JOLs for overall learning (Overall JOL), ability to list topics discussed in the lecture (JOL - Free Recall), completing fill-in-the-blank questions (JOL - Aided Recall), recognizing material not presented in the lecture (JOL - Recognition), and answering multiple choice questions (JOL - Multiple Choice). JOL scores were determined using the raw value (out of 10) reported by participants on each of the five JOL measures.

Distraction Estimates
The Distraction Estimate Inventory (Appendix 4) assessed whether individuals were distracted during this particular Zoom lecture and what other activities they may have engaged in. Participants responded to items regarding distractions such as engaging in other tasks during the lecture, using a cell phone, attending to chat feature interruptions, and having a conversation with another person not participating in the Zoom session. Participants' awareness of their own baseline level of distraction during the current study was assessed on a 5-point Likert scale (ranging from strongly disagree to strongly agree) in the Metacognitive Awareness of Distraction subset of the inventory. Metacognitive Awareness of Distraction was determined by computing the mean of the eight items, including three reverse-scored items. Participants' awareness of the frequency of distraction during the current study was assessed on a 4-point Likert scale (ranging from most of the time to never) in the Distraction Frequency Estimate subset of the inventory. Distraction Frequency was determined by computing the mean of these five items, including three reverse-scored items.
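A minimal sketch of this subscale scoring is shown below, using the standard reverse-coding convention for Likert items (flipping a raw value to (scale_max + 1) − raw before averaging). The item labels are hypothetical, and the 5-point scale here corresponds to the Metacognitive Awareness of Distraction subset; the same function with scale_max=4 would cover the Distraction Frequency Estimate subset.

```python
from statistics import mean

def subscale_mean(responses, reverse_items, scale_max=5):
    """Mean of a Likert subscale with reverse-scored items.

    responses: dict mapping item label -> raw response (1..scale_max).
    reverse_items: set of item labels to reverse-score before averaging.
    """
    scored = [
        (scale_max + 1 - value) if item in reverse_items else value
        for item, value in responses.items()
    ]
    return mean(scored)
```

For example, a raw response of 2 on a reverse-scored 5-point item is flipped to 6 − 2 = 4 before the subscale mean is taken.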

Lecture Comprehension
The Comprehension Assessment Questionnaire (modeled after the assessment used by Srivastava, 2013) evaluated how much content participants learned and remembered from the Zoom lecture through free recall, aided recall, and recognition questions. This assessment is presented in Appendix 5. The free recall prompt instructed participants to write as much information as they were able to remember from the lecture. The aided recall questions required participants to complete sentences with the appropriate word pertaining to the language lecture, similar to a typical fill-in-the-blank exam question. The recognition questions displayed various pieces of information that were covered by the lecture, covered in the chat, or not covered at all. For each piece of information, participants were asked to determine whether the statement was presented in the lecture using a binary "Yes" or "No" response.

Lecture Comprehension Scoring Methods
Four measures of comprehension were computed based on responses provided on the Comprehension Assessment Questionnaire. Free recall responses were scored by awarding one point per piece of lecture information correctly reported by the participant. Information that was incorrect or presented ambiguously was not counted toward the free recall score. All points were summed to determine the Free Recall Score. Two independent raters scored the free recall responses, and Cohen's κ was computed to determine the degree of agreement on free recall scores between raters. There was strong agreement between raters, κ = .868, p < .001. For the aided recall section, participants were awarded one point per correct target word, with a maximum of 10 points possible. Incorrect, blank, or ambiguous answers were not counted toward the aided recall score. All points were summed to determine the Aided Recall Score. For the recognition questions, participants were awarded one point per correctly identified target item, with a maximum of 24 points possible. Incorrect, blank, or ambiguous answers received no points. The Recognition Score was determined by summing all points. Overall Comprehension Scores, the primary dependent variable of interest, were computed by summing the Free Recall, Aided Recall, and Recognition Scores. Correlations for all performance measures (n-back, JOLs, comprehension, and distraction awareness) are presented in Table 1.

Digital Learning and Experiences Assessment

Given the rapid increase in use of and exposure to digital learning environments, participants were asked to complete a series of assessments to evaluate individual differences in digital learning preferences, experiences, and opinions. This assessment was composed of a series of existing questionnaires that were modified to reflect current technologies. These questionnaires are oriented toward general individual differences in typical behaviors outside of this study.
Each of the scales used in the Digital Learning and Experiences Assessment was scored by computing the mean response according to the guidelines from the original sources. Correlations between all digital learning measures are provided in Table 2 to evaluate the reliability of participant attitudes across measures.
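As a minimal sketch, the inter-rater agreement statistic reported for the free recall scores (Cohen's κ) can be computed directly from two raters' item-level scores, as below; the rater data in the usage test are hypothetical.

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical scores of the same responses.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the chance agreement implied by each rater's marginal score frequencies.
    """
    assert len(rater1) == len(rater2) and len(rater1) > 0
    n = len(rater1)
    categories = set(rater1) | set(rater2)

    # Observed proportion of exact agreements
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n

    # Expected agreement by chance, from each rater's marginal distribution
    p_expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)
```

Perfect agreement across mixed score categories yields κ = 1, and agreement at exactly chance level yields κ = 0, which is why the reported κ = .868 is interpreted as strong agreement.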

Computer Mediated Communication
The Computer Mediated Communication Assessment (CMCA; Scott & Timmerman, 2005) assessed individuals' experiences with computer-based interactions using a 5-point Likert scale ranging from strongly disagree to strongly agree. These questions address a variety of feelings and opinions individuals may have when communicating through computers.

Online Learning Perceptions
The Perceptions of Online Learning Questionnaire (adapted from Astani et al., 2010) addresses individuals' opinions of online courses compared to traditional in-person courses using a 5-point Likert scale ranging from strongly disagree to strongly agree.

Distractions
Distractions experienced while studying were assessed via a brief self-report measure developed by Mokhtari and colleagues (2015). In the current study, we refer to this measure as the Distractions While Studying Questionnaire (DWSQ). The DWSQ assesses the typical behaviors of individuals while they are completing assignments and studying for various courses. This questionnaire also addresses whether individuals feel multitasking impacts their concentration.

Student Preferences
The Student Preferences Questionnaire (adapted from Lee et al., 2013) assesses individuals' preferences regarding Zoom lectures and the use of the chat feature on Zoom using a 5-point Likert scale ranging from strongly disagree to strongly agree. The questionnaire was modified to assess recommendations and preferences between Zoom and in-person lectures, as well as preferences regarding the chat function on Zoom.

Zoom Chat Tendencies
The Zoom Chat Tendencies Questionnaire examined participants' existing chat feature usage in Zoom. Participants answered questions regarding frequency of chat use, specific conditions for chat use, and reasons why individuals may or may not use the chat feature. All questions and response options are presented in Appendix 6.

Procedure
After confirming consent via an online form, participants were provided a link to the current study's Zoom room. Participants were initially sent to the waiting room, a temporary digital holding space outside of the main Zoom meeting room. The instructions on the waiting room screen asked participants to change their current username to a unique numeric identifier, generated from components of their student ID number. The instructions also asked participants to close all other web browser windows, and to wait to be admitted to the main room. The two actors were also logged in as participants and followed these instructions accordingly.
At the start of the session, participants were admitted from the waiting room to the main meeting space. The researcher, whose camera remained on for the duration of the study, welcomed participants, reminded participants to update their usernames to reflect the unique identifier, and stated that the session would be recorded, including all chat, audio, and video activity. Participants were notified that they would remain muted, and that they had the option to turn their cameras on or off for the duration of the study. Both actors left their cameras on throughout the experiment.
The researcher explained that participants would answer some questions about themselves, would participate in a simple short term memory test, would listen to a short lecture, and would take a brief test to assess what they had learned. The researcher then explained that the various components of the study required that the participant use links to access different pages, and that these links would be provided in the chat area. Participants were notified that they should continue to remain logged into the Zoom room for the duration of the study, and that they should always return to the Zoom room after the completion of each task. At the conclusion of each task, participants were asked to use the "Raise Hand" function to confirm that they were ready to proceed with the next segment.
Participants were then provided with a link to the initial Qualtrics-based survey, which contained the demographic self-report and the Digital Learning and Experiences Assessment. At the conclusion of this set of assessments, the Qualtrics page reminded the participants to return to the main Zoom room and click the "Raise Hand" button to confirm that they were prepared for the next segment.
Participants were told that the next segment of the study involved a brief test of short-term memory. Participants were asked to access the task via a new link in the chat, and were reminded to follow the instructions on their screens. The link to the PsyToolkit-hosted n-back task was sent via the chat. Participants viewed the instructions and engaged in one block of 25 practice trials. Feedback on both speed and accuracy was delivered after the practice block. Participants then completed two blocks of 25 trials, receiving feedback on accuracy after each. At the conclusion of the final trial block, participants were reminded to return to the Zoom room, and to use the "Raise Hand" function to indicate completion of the n-back task.
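The scoring logic behind an n-back task is simple: a trial is a target when the current stimulus matches the one presented n positions earlier. The sketch below illustrates that rule with hypothetical letter stimuli; the actual PsyToolkit implementation and its feedback logic will differ.

```python
def nback_targets(stimuli, n=2):
    """Mark target trials: True when the stimulus matches the one n back."""
    return [i >= n and stimuli[i] == stimuli[i - n]
            for i in range(len(stimuli))]

def accuracy(responses, stimuli, n=2):
    """Proportion of trials where a yes/no response matches the true target status."""
    targets = nback_targets(stimuli, n)
    return sum(r == t for r, t in zip(responses, targets)) / len(stimuli)

# Example: 2-back matches occur at positions 2 and 4
seq = ["A", "B", "A", "C", "A"]
print(nback_targets(seq))  # [False, False, True, False, True]
```

A participant who responded "target" at exactly positions 2 and 4 would score an accuracy of 1.0 on this sequence.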
The researcher introduced the lecture segment and instructed participants to simply listen. The researcher delivered the scripted lecture. In the Moderate Chat and Heavy Chat conditions, the actors used the chat to provide their scripted input. At the conclusion of the lecture, participants were asked if they had any questions about the material.
Participants were sent a link to a final Qualtrics survey, which contained the JOL questionnaire, the Distraction Estimate Questionnaire, and the Comprehension Assessment Questionnaire. Participants returned to the Zoom room and used the "Raise Hand" function to indicate completion. Participants were thanked for their time and were dismissed. The Zoom session ended, and the recording of the session was saved.
In counterbalanced sessions, participants followed these same steps, but completed the initial round of self-report measures (including the demographic questions and the Digital Learning and Experiences assessment) at the end of the session after the final comprehension test. Counterbalancing was implemented to account for the potential influence of participants' experiences in the current experimental session on their Digital Learning and Experiences responses.

Chat-Based Disruption Does Not Affect Test Performance, Even When Controlling for Other Distractors
To test H1, that comprehension of lecture content would be affected by exposure to chat activity and that working memory capacity and external distractors may have an impact on this relationship, a one-way Analysis of Covariance (ANCOVA) was conducted with working memory capacity and metacognitive awareness of distractions as covariates. The one-way ANCOVA indicated that the frequency of chat-based disruption in an online learning environment does not affect comprehension and retention of lecture content, even when considering the impact of working memory capacity and awareness of other distractions during the testing session, F(2, 84) = .963, p = .386. Levene's Test for equality of error variances was conducted and the assumption was met, F(2, 84) = 1.308, p = .276. The first covariate, working memory capacity, was not significantly related to comprehension test score, F(1, 84) = 3.42, p = .068. The second covariate, awareness of distractions, significantly impacted participants' scores on the comprehension test, F(1, 84) = 6.07, p = .016. Once participants' awareness of distractions was controlled for, there was no significant effect of chat frequency on test scores. Estimated marginal means, reflecting average comprehension scores when controlling for covariates, are displayed in Figure 2. This suggests that topically-relevant text messages delivered throughout a lecture do not detract from students' ability to learn the target material. In contrast to Mayer and Moreno (2003), we demonstrate that student learning in online environments is not necessarily sensitive to contemporaneous presentation of relevant incidental information. All means and standard deviations for the three components of the comprehension test (free recall, aided recall, and recognition) are presented alongside the total comprehension test score in Table 3.

Chat-Based Disruptions Improve Confidence in Learning When Adjusting for Other Distractions
To evaluate H2, the effect of chat-based disruption on JOLs while accounting for awareness and perceived frequency of other distractions, a one-way ANCOVA was used. Levene's Test for equality of error variances was conducted and the assumption was met (F(2,86) = 2.483, p = .089).
Participants' metacognitive awareness of their own level of distraction significantly affected JOLs, F(1, 84) = 14.359, p < .001. Participants' reports of the frequency of these distractions were also included as a covariate in the model. Pairwise comparisons of estimated means with a Bonferroni adjustment for multiple comparisons revealed that participants reported significantly more confidence in learning when exposed to a moderate level of chat activity (Moderate Chat) than when the chat was not used at all (No Chat), p = .029. JOLs provided by participants assigned to the Heavy Chat condition did not differ significantly from JOLs provided by participants in the Moderate Chat condition (p = .977) or the No Chat condition (p = .351). Means and standard deviations for all JOL sub-scales are displayed in Table 4.

Finally, to test H3, whether students' ability to accurately assess their own performance was influenced by chat disruption, correlations and correlation comparisons were conducted. The correlations between Overall JOL and Total Comprehension Test Score were computed for the No Chat (r = .443, p = .011), Moderate Chat (r = .093, p = .637), and Heavy Chat conditions (r = .381, p = .042). JOLs produced by participants in the Moderate Chat condition were not significantly correlated with performance, whereas participants in the No Chat and Heavy Chat conditions reported JOLs that were significantly correlated with their actual test performance, suggesting greater accuracy of JOLs.
Fisher's r-to-z transformations were used to compare the relative strengths of these correlations between JOLs and test performance. Similar to findings presented by Blasiman et al. (2018), we found that JOL accuracy was not compromised by the presence of distractors. Specifically, there were no significant differences in the accuracy of JOLs across levels of distraction (Table 5).
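The Fisher r-to-z comparison of two independent correlations has a closed form and is easy to compute directly. The sketch below uses two of the correlations reported above; the per-condition sample size of 30 is an assumption for illustration, not a figure taken from the study.

```python
from math import atanh, sqrt
from scipy.stats import norm

def fisher_rz_compare(r1, n1, r2, n2):
    """Two-tailed test comparing two independent Pearson correlations
    via Fisher's r-to-z transformation."""
    z = (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = 2 * norm.sf(abs(z))
    return z, p

# No Chat (r = .443) vs. Moderate Chat (r = .093); n = 30 per cell is assumed
z, p = fisher_rz_compare(0.443, 30, 0.093, 30)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With cells of this size, even the largest gap among the reported correlations does not reach significance, which is consistent with the pattern summarized in Table 5.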

Relationships Between Digital Learning and Experiences, Comprehension Test Performance, and JOLs
Given the widespread adoption of video conferencing tools in educational settings, we seized the opportunity to understand how individual differences in digital learning and experiences are linked to JOLs and learning outcomes. To explore the relationships between participants' opinions and beliefs about computer-mediated learning, distractions, JOLs, and comprehension of lecture material, a series of exploratory bivariate correlations was conducted. All correlations are displayed in Table 6. Overall JOLs were positively correlated with positive attitudes toward online learning (r = .38, p < .001) and use of Zoom (r = .352, p = .001). Overall comprehension test performance was also positively correlated with positive attitudes toward online learning (r = .278, p = .008) and positive attitudes toward the use of Zoom (r = .262, p = .013). Computer-Mediated Communication Apprehension (CMCA) was negatively correlated with preferences for Zoom (r = -.571, p < .001), Zoom chat (r = -.298, p = .008), and online learning (r = -.576, p < .001). These correlations hint at the possibility of strong, consistent dispositional beliefs about online learning that can contribute to differences in confidence in learned material as well as performance on assessments.
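Each entry in a correlation table like Table 6 is a Pearson r with its two-tailed p-value, which `scipy.stats.pearsonr` returns directly. The toy ratings below are hypothetical stand-ins for an attitude measure and an overall JOL, included only to show the computation.

```python
from scipy.stats import pearsonr

# Hypothetical ratings, one pair per participant:
# attitude toward online learning (1-7) and overall JOL (0-100)
attitude = [2, 3, 3, 4, 5, 5, 6, 6, 7, 7]
jol      = [40, 45, 55, 50, 60, 65, 70, 68, 75, 80]

r, p = pearsonr(attitude, jol)
print(f"r = {r:.3f}, p = {p:.4f}")
```

Running each pair of measures through the same call, and collecting the r and p values into a matrix, reproduces the structure of an exploratory bivariate correlation table.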

General Discussion
The aim of this study was to evaluate the impact of text-based chat interruptions on comprehension and learner confidence in an online lecture environment. Findings from this study indicate that increasingly frequent chat from peers in a synchronous online lecture environment does not serve as a significant distractor. Contrary to existing findings (Barks et al., 2011; Blasiman et al., 2018; Dietz & Henrich, 2014; Gingerich & Lineweaver, 2013), exposure to these text-based interjections does not detract from students' ability to learn and master lecture content. When considering the already noisy environment students are often immersed in as they learn in these online spaces, it is possible that the effect of chat disruption is simply drowned out by the effects of other distractors students experience at the same time (e.g., listening to music, socializing, watching TV, social networking). However, when controlling for participants' other self-reported distraction experiences and their working memory capacity, we find that test performance is unaffected by chat input, which echoes existing findings regarding text messaging and learning (Dietz & Henrich, 2014). Put simply, interruption via chat activity isn't distracting enough to affect learning. These findings strike a contrast with the framework posited by Mayer and Moreno (2003), who initially suggest that the cognitive load associated with multimedia learning can be taxed through the introduction of incidental (i.e., extraneous and seemingly unnecessary) information. In this study, we show that the introduction of incidental information does not introduce enough distraction to affect learning.
Instead, the current study demonstrates that incidental information can boost confidence in learned material. When controlling for outside distractions, participants in this study showed a significant increase in JOLs when exposed to moderate levels of topic-relevant chat in concert with the lecture when compared to participants who were not exposed to topic-relevant chat content. The predictive accuracy of JOLs did not differ based on the frequency of chat activity, indicating that the impact of chat is relegated specifically to students' perceptions of learning. To phrase this in terms of Mayer and Moreno's (2003) work, the incidental information available via moderately frequent chat can support students' confidence in learning the essential information presented in the lecture. This incidental information does not need to be deliberately integrated into the learning environment; in the current study, the lecturer does not engage with or acknowledge the chat content. The chat serves as a separate stream of topically-related incidental information provided by peers, and does not require effortful incorporation into the essential information. In short, the leveraging of embedded chat functions in online learning environments is not harmful for learning, is extremely easy for participants to use, and can build confidence in the essential information without involvement from the lecturer.
These findings exist in clear contrast with the existing literature on the detrimental effects of text messaging and learning. How is it possible that exposure to more information in an online learning environment can have no detrimental effect on learning and can, in some cases, facilitate increased confidence in learned material? One potential explanation for these findings is that participants find chat engaging in an otherwise unengaging and isolated lecture environment. During lecture sessions, students are expected to remain engaged by leveraging sustained attention. However, this degree of engagement is difficult to maintain for an extended period of time, and can lead to increased instances of mind wandering and distraction (Risko et al., 2011; Szpunar et al., 2013a; Szpunar et al., 2013b). The increased exposure to topically relevant information may serve to keep students engaged in the material and in the digital lecture space, much like interpolated testing has been shown to do (Szpunar et al., 2013a).
It is also possible that exposure to topically-relevant chat in small doses is helpful in reinforcing concepts covered in the lecture. The substance of the chat itself may not be sufficient to bolster learning, but could serve as confirmation that the students are understanding the essential information in real time. Peer support has been shown to increase learning confidence in contexts such as teacher education (Prince et al., 2010), computer science (Packard et al., 2020), and nursing education (Gray et al., 2019). Leveraging existing digital space to allow for the exchange of examples and information among peers without added involvement from the instructor may reduce uncertainty and elevate confidence without significantly taxing students' cognitive load.
Despite extensive research showing the predictive value of WMC on performance in cognitively demanding environments, findings from the current study do not show any effect of WMC on comprehension outcomes across levels of distraction. Although WMC is usually an excellent predictor of a person's ability to manage distractions and allocate attention, a growing collection of research has demonstrated evidence to the contrary. In a meta-analysis of several WMC studies, Sörqvist and colleagues (2017) demonstrate that individual differences in WMC may not predict the ability to handle varying levels of distracting information, particularly in contexts where visual and verbal tasks are competing for resources. The lack of effect of WMC on comprehension outcomes in the current study may provide an additional piece of evidence to support this new perspective. It is also possible that instead of relying on executive functions to mitigate the effects of ever-present disruptions, students have developed effective strategies to manage distractions in online learning environments, such as closing the chat window or disabling notifications. By strategically offloading this attentionally demanding task, distraction management would require fewer cognitive resources, reducing the potential impact of individual differences in WMC on task performance outcomes. Ball and colleagues (2021) have shown that the impacts of WMC on task performance outcomes are eliminated when students are encouraged to use offloading strategies. Although it is initially surprising that WMC does not seem to play a role in the relationship between distraction and comprehension, this null relationship presents some interesting opportunities for future research regarding strategic management of distractions during the learning process.

Future Directions & Limitations
The current study explores one specific instance of chat use in an online learning environment. Given the novelty and increased popularity of these platforms, a number of questions still remain. Certainly, it's important to confirm the impact of topically-irrelevant chat on learning outcomes and JOLs, in line with the existing literature on text messaging and learning (Barks et al., 2011; Chen & Yan, 2016; Dietz & Henrich, 2014; Gingerich & Lineweaver, 2013). In addition, an exploration of the effects of lecturer engagement with the contents of the chat could begin to bridge the gap between what is considered essential and what is considered incidental information in an integrated online learning platform (Mayer & Moreno, 2003). If the instructor acknowledges and incorporates useful chat information into the lecture, this may blur the line between essential and incidental information, further clarifying the boundaries of a coherence effect (Mayer & Moreno, 2003). Finally, as previously mentioned with regard to the impact of WMC, investigation into student strategies for managing incoming chat information may shed light on the individual differences at play in online learning spaces. Although the vast majority of participants in the current study were aware of the chat contents, some participants may exercise strategies to avoid interruptions in an attempt to regulate focus on the task at hand. Understanding these strategic forms of distraction management may allow educators and other users of online meeting interfaces to encourage and support successful attention regulation.
Future research may also target the limitations of the current research and attempt to address some of the shortcomings of the current study design. In the current study, the lecture portion lasts for less than 10 minutes. The brevity of the lecture is not reflective of the lengthier lectures that are common in college courses. Because the lecture portion was not long-lasting, students may have encountered fewer difficulties with maintaining attention than they would for a lengthier lecture (Ravizza et al., 2017). Additional research may evaluate the effects of distraction in online learning environments over a longer period of time. The role of the instructor as director of attention is also largely omitted in the current study. Instead of drawing attention to the chat window by addressing chat activity and answering questions, the instructor in the current study does not deliberately acknowledge or incorporate chat contents into the learning session. Future research may aim to investigate the effects of effortful incorporation of these multimedia learning components on the part of the instructor. By further evaluating the impacts of capitalizing on extraneous information (e.g. chat contents) in an online learning environment, recommendations for best practices in synchronous online teaching can be developed with the goal of leveraging the power of these online tools to improve student learning.

Conclusion
Despite initial apprehensions about the ample opportunities for distraction in online learning environments, the current study shows that the use of the chat function for topically relevant chat is not harmful for learning. Moderate amounts of relevant chat from peers can bolster confidence in learned material, even when the contents of the chat are unacknowledged by the lecturer, and even when the learners themselves are not actively participating in the chat. These findings present an exciting opportunity for educators to support student confidence during the learning of tough concepts. Providing supportive examples in adjacent modalities and highlighting the usefulness of these concurrent information streams can encourage students to continue engaging in the target material. Instead of viewing these features as distractors and ultimately attempting to minimize their use, teachers can use simple prompts to leverage these integrated features to improve student outcomes. The small practice of taking a simple pause to remind students to "drop an example in the chat!" may be enough of a nudge for students to engage with adjacent learning opportunities, and an opportunity for instructors to take a small step toward embracing peer-led text chat as a learning tool rather than an attentional liability.

Lecture Script

Language can be broken down into units that vary in terms of size. If we investigate smaller pieces of language, we can consider the individual sounds that make up a language. If we investigate the largest levels of language, we can think about a speaker's intent when uttering a sentence or paragraph (like in situations where you'd use sarcasm). Let's go through the units or levels of spoken language from smallest [M1], [H1] to largest. Signed languages like American Sign Language share most of these attributes, but can differ slightly in some places, so we'll stick to talking about spoken languages.
The smallest unit of spoken language is a phoneme. A phoneme is a fundamental unit of sound. Each word is made up of one or more phonemes. For example, the word dog contains three phonemes, /d/, /o/ and /g/. There are three distinct sounds. This should not be confused with the number of letters in a word. In many cases, one phoneme is represented by several letters. For example, the word "three" has three phonemes. The /th/ is one sound unit. It can't be broken down any further. The /r/ is a second sound unit, and the long /e/ is a third, even though it's spelled with two letters. So the word "three" is spelled with five letters, but only uses three phonemes, or fundamental sound units [H2].
Each language differs in terms of the number of phonemes it's made of. North American Spoken English contains approximately 42 phonemes, which means that the entirety of our spoken linguistic system is made up of a little over 40 sounds. When we consider the complexity of spoken language, it's pretty remarkable that everything we have to say can be broken down into these few pieces. Other linguistic systems are composed of different numbers and kinds of phonemes. Some linguistic systems have far more phonemes than North American English. For example, Taa, a language spoken by many people in Botswana and Namibia, contains about 140 phonemes, including five distinctive types of clicks. Other linguistic systems are composed of far fewer phonemes, like Hawaiian, which is made of approximately 13 sound units.
To recap, all spoken languages are composed of a specific set of phonemes, or sound units. These sound units are different from the letters required to spell each word, and really have to do with what sounds are required for a speaker to produce the language. These phonemes can vary dramatically between languages.
On their own, phonemes do not necessarily mean anything. For example, the "th" phoneme doesn't carry informative meaning by itself. This takes us to the next level of spoken language, which is the meaning unit. Morphemes are the smallest units of meaning in a language. Each morpheme is made of one or more phonemes, and carries meaning on its own. Morphemes can be root words like "cook" [M2], [H3], which is definable by itself. It can't be broken down into smaller parts and still retain the same meaning. Morphemes can also be prefixes or suffixes, like "un-" or "-ed" [H4]. We can define "un-" and "-ed" without having to attach them to a root word. "Un-" means "not" and "-ed" refers to something that has already happened. These affixes have meaning on their own, but aren't standalone words. Morphemes can be combined to layer complex meanings. For example, the word "uncooked" is made up of three morphemes: /un/, /cook/ and /ed/.
Our minds need to hold representations of all of these morphemes and phonemes, as well as the rules for how to appropriately put morphemes together to make meaningful words. All of this information is stored in your mental lexicon. You can think of the mental lexicon as a combination of a dictionary and a concept map. The morphemes in your vocabulary are stored here in close proximity to other morphemes that sound the same. For example, the morphemes cat, bat, sat, hat and mat are all clustered together in your mental lexicon because they sound similar. The morphemes in your mental lexicon are also organized by meaning. Your mental lexicon likely represents the morpheme doctor alongside nurse, surgery, medical, and health because all of those morphemes have meaningful relationships with one another. Because your mental lexicon stores all of your morphemes and the rules for putting them together, you are able to stick several morphemes together to make meaningful words on the fly. Instead of storing "tie", "untie", "untied" and "untying" as separate items, you store "tie" as one morpheme, the prefix "un-" as another, and the suffixes "-ed" and "-ing" as two other morphemes. All of these items hold meaning on their own, and you know the rules required to string them together appropriately to create words. Think about how much information needs to be stored in the mental lexicon. Every meaningful linguistic unit in your vocabulary lives here. In your head, try to estimate how many morphemes exist [M3], [H5] in the average adult English speaker's mental lexicon. The answer is about 80,000 morphemes. Everything you communicate through language is encoded in these 80,000 linguistic building blocks.
If we move beyond morphemes and step a few levels up, we can start to investigate the meaning of linguistic components. If we focus on word meaning, we are thinking about something called semantics. Semantics can refer to a word's "textbook definition". For example, a "skyline" refers to a view of a horizon. However, this textbook definition can vary from the way that the word is conventionally used. We don't typically use the term "skyline" to refer to all horizon views. Instead, we usually reserve this term for city landscapes (for example, the Chicago skyline). So "skyline" technically means one thing, but in reality means another. One remarkable thing about semantics is that word meanings can and do change dramatically over time. Words that were once used in one way (and that mean one specific thing in a dictionary sense) can be adopted for use in a completely different way, and language users generally just agree that this is okay. This is called a semantic shift or semantic change. Take the term "literally". "Literally", in a dictionary-definition sense, means absolutely, directly, or exactly. Over time, the meaning of "literally" [H6] has shifted, and now it gets used as an exaggeration, or to mean the exact opposite of its original definition. It's not uncommon to hear someone say "I literally died after I walked up those three flights of stairs", and to understand that the meaning behind that statement is that the person was really tired by the time they finished climbing the stairs. Even though some sticklers for language will try to correct people when they use "literally" in the figurative sense, most speakers of a language will ultimately agree on the new, intended meaning. As a result, the semantic nature of the term changes! Other contemporary examples include the terms "lit" and "dead". When used in casual conversation, "lit" and "dead" don't mean "on fire" and "deceased", respectively.
Instead, "lit" is used to indicate that something is exciting or intense, and "dead" can be used to refer to a response to something particularly funny or outrageous (as if you died laughing).
Semantic shift is one really common example of how language can change gradually over time. Another important example of language change is the development of brand new words. Language is dynamic, which means it has to grow, change and adapt to accommodate new phenomena. When we come up with these new words to refer to these new things, we are developing neologisms. It's important to note that a neologism is made up of existing morphemes to create a new meaningful word or phrase, but is different from semantic shift, where an existing word takes on a new meaning. So the components are old but the word is new [H7]! One example of a neologism is the term "staycation", which combines the terms "stay" and "vacation" to create a new word to reflect a vacation that is taken without straying far from home. A staycation might involve a day trip to a local theme park, or a fort-building contest in the back yard, or an at-home spa day. Staycations [M4], [H8] have become commonplace, and families may find themselves being less inclined to make a big vacation trip to some far-off destination. With these behaviors on the rise, we needed a concise term to refer to them. And thus, "staycation" was born! Another very timely example of a new word that reflects current events is "doomscrolling". Doomscrolling refers to a social media behavior whereby a person gets stuck in a pattern of scrolling through a feed in shock or horror as a result of a substantial amount of negative news or information. Doomscrolling didn't exist as a word in 1850 because it wasn't necessary and wouldn't have reflected anything meaningful. The ability to scroll through social media, and the increase in bad, sad news has motivated the creation and development of this new word.
Overall, what we see with semantic shift and neologisms is that language has an important job in expanding and modifying to accommodate new phenomena and cultural practices.
Overall, what we've seen here is that language has many levels at which it can be understood. These range from individual sounds you produce as you utter a word all the way up to the intended meaning behind a specific word or phrase. We know that language has evolved dramatically over past centuries, and it will certainly be interesting to see how linguistic systems change in the future.

Distraction Estimate Questionnaire Items

During the lecture, I had a conversation with another person not participating in the Zoom session.
During the lecture, I did not use my computer for reasons unrelated to the Zoom session.*
During the lecture, I stayed focused on the content of the lecture.*
During the lecture, I did not engage in other tasks.*
* Item was reverse scored.

Appendix 5. Lecture Comprehension Assessment.
Based on the method used by Srivastava et al. (2013). For the following questions, consider the language lecture you just listened to. Please answer the questions to the best of your ability.

Lecture Content Questions
Free Recall Instructions: Write down as much information as you are able to remember from the lecture you just heard. You may use bullet points, sentences, phrases, and lists.

Aided Recall Instructions:
For the following questions, you will be asked to complete several different sentences with the appropriate word pertaining to the language lecture. Please complete the sentence to the best of your ability.
(Note: Italicized, underlined items indicate target items; these are the correct answers.)

We use language to exchange information, express thoughts and discuss ideas.
The smallest unit of spoken language is called a phoneme.
The smallest unit of meaning in a spoken language is called a morpheme.
Suffixes like "-ed" and prefixes like "un-" are considered to be morphemes.
Taa, the language spoken in regions of Africa, has phonemes that include five different types of tongue clicks.
New words that get added to a language system (such as "staycation") are called neologisms.
Semantic shift is when an existing word takes on a new meaning/definition.
The word "unhelpfulness" contains four morphemes.
Each person's mental dictionary, which contains their entire vocabulary, is called the mental lexicon.
The mental lexicon of an adult English speaker contains approximately 80,000 morphemes.
Recognition Instructions: For each statement, determine whether or not the information was presented in the lecture. For each statement, select whether the information or specific example was delivered by the lecturer, or was not delivered by the lecturer. Please answer each question to the best of your ability.

(Note: * indicates a false target. This information was not presented by the lecturer.)
The morphemes in your vocabulary are mentally stored in close proximity to other morphemes that sound the same.

Do you prefer the whole group chat function, one-on-one chat function, or mobile texting on a separate device?

Open-Ended
For what reasons (if any) do you use the chat function?
For what reasons (if any) do you not use the chat function?