Designing AI Fatigue Management Systems and Strategies

Intervention Application To Minimize Psychological Implications of Pervasive AI Interaction

Study period: 6/1/2024 - EOM (4 weeks observed)

media@ai-fatigue.com

Abstract

AI fatigue management systems offer a promising way to mitigate the adverse effects of sustained AI system use. By dynamically adapting to user needs and detecting AI fatigue in real time, these systems can deliver timely, personalized interventions. This approach not only enhances engagement but also addresses the barriers of traditional methods, providing a seamless way to support focus, energy, and well-being throughout the workday. Integrating individual AI fatigue management interventions into the workday can nonetheless be challenging, particularly in workplace cultures with high task demands or environments where taking personal time for recovery is not supported. Even in supportive workplaces, the shift from understanding effective AI fatigue management strategies to applying them during moments of peak AI fatigue can be difficult, requiring both psychological readiness and actionable opportunities. AI-powered solutions, such as AI fatigue management apps and systems, have gained popularity due to their accessibility and unobtrusive nature. These tools allow users to engage with interventions seamlessly, without disrupting their work environment or drawing attention. However, many available apps lack evidence-based designs, and even those that are rigorously studied face challenges with user attrition and inconsistent engagement. Promoting sustained interaction with AI fatigue-reduction tools over time is critical to achieving long-term benefits. AI-driven systems designed to adapt to individual needs in real time offer a promising path forward, addressing barriers to adherence while improving outcomes. By focusing on dynamic engagement and user-specific customization, such systems can enhance both the effectiveness and adoption of technology-delivered interventions.

Case Study

Integrating AI fatigue management strategies into daily workflows presents unique challenges, particularly in high-demand workplaces or cultures that do not prioritize time for recovery. Even in supportive environments, leveraging AI tools to address moments of peak mental or physical fatigue requires seamless integration, psychological readiness, and effective timing. AI fatigue management systems are gaining traction as practical solutions due to their ability to adapt in real time and integrate unobtrusively into work environments. These systems can monitor user behavior and deliver tailored interventions to mitigate AI fatigue without disrupting productivity. Despite their potential, many existing solutions struggle with issues such as inconsistent engagement, user attrition, and limited evidence-based design. Sustained interaction with AI fatigue management tools is crucial for long-term effectiveness. Systems that dynamically respond to individual needs and work conditions offer a promising approach. By emphasizing adaptive delivery, real-time feedback, and personalization, AI-driven platforms can tackle AI fatigue proactively, ensuring consistent engagement and improved workplace well-being. In this study, we present a four-week, between-subjects study with 74 AI specialists examining the impact of digital micro-intervention timing and content on usage patterns and AI fatigue mitigation throughout the workday. Our goal was to inform the design of effective and engaging AI fatigue management systems. Using a desktop application for passive data collection and a Teams chatbot for intervention delivery, we tested three categories of intervention content across two timing conditions: User-Scheduled (US) and Adaptive Delivery (AD), where the system dynamically adjusted based on passively sensed and user-reported AI fatigue levels.

Our results showed that interventions significantly alleviated momentary AI fatigue. While no significant differences were observed between the AD and US conditions in long-term or momentary AI fatigue reduction, participants expressed a preference for automated "nudges" over self-scheduling interventions. At the same time, they desired a level of control over their schedules, with system assistance for intelligent planning. Participants rated shorter, simpler interventions as more enjoyable, yet longer, more challenging interventions proved significantly more effective in reducing AI fatigue. These findings suggest that both system-initiated delivery and user-initiated scheduling are valuable approaches for integrating AI fatigue management interventions in the workplace. A combination of the two may be optimal: system-initiated interventions provide convenience and boost overall engagement, while user-initiated scheduling promotes a sense of control and supports healthy behavior changes. Additionally, a balance between easy-to-perform and highly effective interventions appears to benefit users the most. Based on these insights, we propose opportunities to guide the development of personalized, adaptive AI fatigue management systems that improve workplace well-being.

Intervention strategies for managing AI fatigue in the workplace can be grouped into three categories: primary, secondary, and tertiary. Primary strategies focus on altering or eliminating the underlying causes of AI fatigue. These might include changes at the organizational level, such as adjusting workflows or adopting new tools to reduce reliance on AI systems. While these strategies can have a significant long-term impact, they are often difficult to implement or measure due to their high cost and potential disruption to existing operations. Secondary strategies, the most common, focus on the individual's experience of AI fatigue and aim to reduce or prevent its negative effects. These interventions might include tools that help users manage their interaction with AI, such as AI fatigue management apps, mindfulness practices, or workshops teaching time and energy management. These approaches are designed to detect signs of AI fatigue early and provide users with strategies to cope, reducing the likelihood of more serious cognitive or emotional fatigue. Tertiary strategies are aimed at treating the effects of AI fatigue once they have developed, such as providing counseling or offering employee assistance programs (EAPs). While these interventions play an important role in recovery, they address the consequences rather than the prevention of AI fatigue. While some argue that organizational-level changes should take priority because they address the sources of AI fatigue, others believe that empowering individuals with the tools and skills to manage their own AI fatigue is a more practical and scalable solution. Evidence shows that individual-level interventions can be more directly effective and easier to implement, providing immediate relief and support. This study focuses on secondary strategies, particularly those that enable individuals to manage their own responses to AI fatigue.

Intervention tailoring to individual needs and context is critical for enhancing effectiveness. Many of the most promising technology-delivered interventions are narrow in scope and short in duration, such as 1-minute meditation sessions. These “digital micro-interventions” leverage technology to offer individual components of traditional therapeutic practices, targeting proximal symptoms (e.g., relaxation for AI fatigue) with the goal of achieving broader outcomes (e.g., preventing burnout). At their best, these systems can use data from user interactions to personalize recommendations, suggesting activities that are most likely to be effective, engaging, preferred, or timely. The most advanced form of such personalization, known as Adaptive Delivery, uses dynamic behavioral data from ubiquitous sensing technologies to tailor interventions in real time. Prior studies of AD have focused on improving the timing of micro-intervention delivery based on ecological momentary assessment (EMA) or other passively collected data. However, no research has yet explored how AD interventions can be integrated into everyday workflows or compared them to a manual, user-controlled approach in which the timing of interventions is fully self-managed. While the AD approach shows great potential for improving user engagement and adherence, research into its application in real-world contexts is still in the early stages. This study examines different methods of engaging with digital micro-interventions, focusing on timing and content in work environments. We assess the effectiveness of various engagement strategies on AI fatigue management in both the short and long term by measuring user AI fatigue levels at multiple points: immediately before and after intervention use, several times a day, and via a weekly self-report survey.

As individuals' needs and behaviors evolve over time, their preferences for engaging with technology-delivered AI fatigue management interventions—whether initiated by the technology itself or manually triggered—can be influenced by a range of factors, including their context, history, and personal characteristics. An individual's vulnerability to AI fatigue and perception of its intensity are shaped by relatively stable traits, such as personality, demographics (e.g., age, gender), and past experiences, as well as more dynamic factors that fluctuate over time, including cognitive appraisal abilities, coping strategies, and available social support. The effectiveness of and engagement with AI fatigue management interventions can also be influenced by personal attributes and situational factors, such as the user's current level of AI fatigue, their receptivity to interventions, and other mediators like acceptance of discomfort or AI fatigue. Research has shown that incorporating individual preferences into the design of these interventions can significantly improve both user engagement and outcomes. Our study focuses on understanding user preferences for different types of AI fatigue management interventions and their optimal timing, aiming to enhance engagement and maximize the long-term impact on reducing workplace AI fatigue. The goal of this work was to identify design opportunities for systems that integrate digital micro-interventions into everyday work contexts. Our research questions were:

RQ-T1: How does intervention timing impact intervention usage, AI fatigue reduction, and user preference?

RQ-T2: How do different types of interventions impact intervention usage, AI fatigue reduction, and user preferences?

RQ-T3: What aspects of the intervention timing and content do participants find most useful or needed?

To explore the effects of various delivery timings (RQ-T1), we carried out an experimental study comparing two approaches: User-Scheduled and Adaptive Delivery. In the US approach, participants independently selected when to receive interventions, while in the AD approach, the system provided prompts based on real-time assessments of AI fatigue. To support this study, we developed an intervention system featuring a chatbot designed to facilitate AI fatigue management. In the US condition, the chatbot helped participants choose interventions from a predefined catalog and schedule them in their calendars. In the AD condition, the system detected elevated AI fatigue levels and prompted users to engage in interventions. Additionally, to examine the impact of intervention content (RQ-T2), we adapted evidence-based practices into digital micro-interventions categorized by function and user effort. Participants selected interventions from these categories throughout the study. Finally, to gather user insights (RQ-T3), we deployed the system to 74 information workers, conducting a four-week longitudinal study that compared the two delivery methods and three content types.

The intervention system consisted of three core components: (1) an AI fatigue scoring module, (2) an Adaptive Delivery module, and (3) a chatbot interface. The AI fatigue scoring module evaluated a user’s current fatigue levels using passively collected data. The AD module combined these fatigue scores with self-reported fatigue levels to determine optimal moments for prompting interventions. The chatbot delivered ecological momentary assessments (EMAs), surveys, and intervention content while allowing users to interact flexibly via computer or mobile devices based on their convenience. The system was designed to integrate seamlessly into users’ work environments, capturing key signals while ensuring flexibility.

The AI fatigue scoring module relied on passive data collection to infer users’ fatigue levels. Research has shown that factors like computer usage patterns, email and calendar activity, intervention history, physical activity, and heart rate variability can inform optimal intervention timing. Our system utilized similar factors, prioritizing the detection of high-fatigue moments to deliver timely interventions. Contextual and behavioral data were gathered through custom software installed on participants' primary work computers, capturing desktop activity (e.g., window switching, keyboard use) as well as physiological and behavioral signals (e.g., facial expressions, breathing rate). The software logged three key data streams: email, calendar, and AI application activity (from various applications); webcam-based monitoring of user posture and expressions; and non-contact measurements of heart rate and breathing rate via the webcam.

We designed the AI fatigue score to capture five components that are linked to AI fatigue. They are defined as follows:

AI Load (f1): The volume of AI transactions processed in a given day has been shown to contribute to worker fatigue. The AI Load component (f1) at X hours into the day was computed as the number of AI transactions sent/received up to that time of day, divided by 2,400.

AI Task Complexity (f2): The complexity of AI tasks, including the number of ongoing projects or tasks such as model deployment, is linked to cognitive overload and AI fatigue. The AI Task Complexity component (f2) was computed as the total number of active tasks in a given day, normalized by the user's typical task load.

Cognitive Load (f3): The mental effort required by an individual to complete tasks throughout the day can increase AI fatigue. The cognitive load component (f3) was computed as an estimate of this mental effort, which is influenced by factors such as multitasking, task switching, and the use of AI tools that demand continuous cognitive attention.

Facial Expression (f4): Changes in facial expressions, such as frowning or furrowing brows, can indicate increased mental AI fatigue. The facial expression component (f4) was computed using the Facial Action Coding System (FACS), measuring the activity of the corrugator supercilii (AU04), lip corner depressor (AU15), and zygomaticus major (AU12). Negative expressions (like frowning) are typically linked to higher cognitive load and AI fatigue, while positive expressions (like smiling) suggest lower AI fatigue.

Physiological Signals (f5): Variations in heart rate, particularly increased heart rate, are associated with greater cognitive AI fatigue. The physiological signals component (f5) was computed as the current heart rate (in beats/min) divided by 100 beats/min.

Each component of the AI fatigue score was normalized to a value between 0 and 1; any component exceeding 1 was capped at 1. The components were then averaged to compute the overall AI fatigue score S, providing a comprehensive measure of AI fatigue.

S = (f1 + f2 + f3 + f4 + f5) / 5
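To make the scoring concrete, the following is a minimal Python sketch of the five-component score as defined above. The helper names and raw inputs are hypothetical, and the cognitive-load (f3) and facial-expression (f4) terms are assumed to arrive already normalized to [0, 1] by their own pipelines.

```python
# Minimal sketch of the five-component AI fatigue score described above.
# Raw inputs (transaction counts, task counts, heart rate) are hypothetical
# placeholders, not the study's actual data schema.

def clamp01(x: float) -> float:
    """Normalize a component: values above 1 are capped at 1, below 0 at 0."""
    return max(0.0, min(1.0, x))

def fatigue_score(
    ai_transactions: int,       # AI transactions sent/received so far today
    active_tasks: int,          # active AI tasks today
    typical_task_load: float,   # per-user normalizer for task complexity
    cognitive_load: float,      # f3, assumed already in [0, 1]
    facial_expression: float,   # f4 from FACS AUs, assumed already in [0, 1]
    heart_rate_bpm: float,      # current heart rate in beats/min
) -> float:
    f1 = clamp01(ai_transactions / 2400)             # AI Load
    f2 = clamp01(active_tasks / typical_task_load)   # AI Task Complexity
    f3 = clamp01(cognitive_load)                     # Cognitive Load
    f4 = clamp01(facial_expression)                  # Facial Expression
    f5 = clamp01(heart_rate_bpm / 100)               # Physiological Signals
    return (f1 + f2 + f3 + f4 + f5) / 5              # S in [0, 1]

# Example: a moderately busy afternoon.
print(fatigue_score(900, 6, typical_task_load=8,
                    cognitive_load=0.7, facial_expression=0.4,
                    heart_rate_bpm=82))  # ~0.61
```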

The AI fatigue score was stored in the database for retrieval by the Adaptive Delivery component. Our AI fatigue score incorporates work demands (e.g., AI Load, AI Task Complexity) and individual resources (e.g., cognitive load), along with behavioral and physiological indicators of AI fatigue (e.g., facial expressions, heart rate). These components were developed based on prior research and designed to provide a simple, explainable, continuous estimate of how likely an individual was to be experiencing AI fatigue. While a more complex, machine-learning-based AI fatigue score could be developed in the future, we found this practical estimate sufficient for the purposes of our study. We conducted a retrospective analysis to assess the correlation between our AI fatigue scores and participants' self-reported momentary AI fatigue levels, using ecological momentary assessments in which participants rated their AI fatigue on a scale from 1 (Not at all AI fatigued) to 5 (Extremely AI fatigued). Our analysis revealed a significant positive correlation between the AI fatigue score and self-reported AI fatigue levels (N=1318, Pearson r=0.2, p<0.01).

We acknowledge that the AI fatigue score was not always available at the time participants reported their momentary AI fatigue levels. This could occur if a participant temporarily disabled the sensing software or responded to ecological momentary assessments while away from their workstation. As a result, both AI fatigue scores and self-reported AI fatigue levels were used by the Adaptive Delivery component.
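The retrospective validation reported above reduces to a simple paired correlation test. A minimal sketch, with hypothetical stand-in data in place of the study's 1,318 timestamp-aligned score/EMA pairs:

```python
# Sketch of the retrospective validation: correlate system fatigue scores
# with self-reported EMA fatigue levels (1-5), paired by timestamp.
# The arrays below are hypothetical stand-ins for the study data.
from scipy.stats import pearsonr

system_scores = [0.42, 0.61, 0.35, 0.78, 0.50]  # S in [0, 1]
ema_levels    = [2,    3,    1,    4,    3]     # 1 = not at all ... 5 = extremely

r, p = pearsonr(system_scores, ema_levels)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # study reported r = 0.2, p < 0.01
```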

The Adaptive Delivery component is responsible for determining when the system should prompt the user to engage in an AI fatigue-reduction intervention. We utilize the calculated AI fatigue score and integrate user self-reported AI fatigue levels through heuristics to optimize the effectiveness of the interventions. Self-reported AI fatigue data is collected via ecological momentary assessments or after an intervention has been completed. During the first week of the four-week study, we compute each user's average AI fatigue score and average self-reported AI fatigue level. These values are established as individual baselines and are used to define thresholds for identifying high-fatigue versus low-fatigue moments in the following weeks (weeks two to four). The AD logic also takes into account prior intervention usage and previous nudges so that the system does not excessively prompt for intervention engagement.

An AI fatigue-reduction intervention prompt is triggered only if all of the following conditions are met (a code sketch of this logic follows the list):

The AI fatigue score is equal to or greater than the user’s baseline (or 0.5) within the past 5 minutes, or the self-reported AI fatigue level is equal to or greater than the user’s baseline (or moderately AI fatigued) within the past 30 minutes.

The system is within the user's work hours on weekdays.

No scheduled interventions remain for the rest of the day.

The user has not completed an intervention in the last hour.

There have been no prior system-initiated nudges in the past two hours.

The user has received fewer than four nudges that day.
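Putting these conditions together, below is a minimal sketch of the trigger logic, assuming a hypothetical UserState record and per-user baselines computed from week-one averages as described above. It is illustrative, not the study's actual implementation.

```python
# Minimal sketch of the Adaptive Delivery trigger described above.
# All state is passed in explicitly; the UserState shape and field names
# are hypothetical assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

MODERATE_EMA = 3  # "3 = Moderately AI fatigued" on the 5-point scale

@dataclass
class UserState:
    baseline_score: Optional[float]  # week-one mean fatigue score, if known
    baseline_ema: Optional[float]    # week-one mean self-reported level (1-5)
    recent_scores: List[Tuple[datetime, float]]
    recent_emas: List[Tuple[datetime, int]]
    last_intervention: Optional[datetime]
    last_nudge: Optional[datetime]
    nudges_today: int
    scheduled_remaining: int         # scheduled interventions left today
    in_work_hours: bool              # weekday and within reported work hours

def should_nudge(u: UserState, now: datetime) -> bool:
    # Fall back to fixed thresholds when no baseline is available yet.
    score_thresh = u.baseline_score if u.baseline_score is not None else 0.5
    ema_thresh = u.baseline_ema if u.baseline_ema is not None else MODERATE_EMA

    # 1. Elevated fatigue: score within 5 min, or self-report within 30 min.
    score_high = any(now - t <= timedelta(minutes=5) and s >= score_thresh
                     for t, s in u.recent_scores)
    ema_high = any(now - t <= timedelta(minutes=30) and lvl >= ema_thresh
                   for t, lvl in u.recent_emas)
    if not (score_high or ema_high):
        return False
    # 2. Within work hours on a weekday.
    if not u.in_work_hours:
        return False
    # 3. No scheduled interventions left today, and none completed
    #    in the last hour.
    if u.scheduled_remaining > 0:
        return False
    if u.last_intervention and now - u.last_intervention < timedelta(hours=1):
        return False
    # 4. No system-initiated nudge in the past two hours.
    if u.last_nudge and now - u.last_nudge < timedelta(hours=2):
        return False
    # 5. Fewer than four nudges so far today.
    return u.nudges_today < 4
```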

These prompts are delivered through the chatbot, which facilitates the intervention process as described later. The AD system is configurable, enabling its activation for specific subsets of users.

We utilize a Microsoft Teams chatbot as the platform for delivering ecological momentary assessments, surveys, and intervention content. This choice leverages the fact that all participants already used Microsoft Teams for work communication, ensuring that the Teams app was readily available on their desktops and mobile devices. The chatbot, named Martin, is designed to initiate conversations with users. It proactively reminds users to complete EMAs or surveys and encourages engagement with AI fatigue-reduction interventions. Most prompts are delivered through Adaptive Cards, which present predefined response options (e.g., AI fatigue level scales) or include a button to open a task module dialog hosting web-based content such as videos or surveys. Martin was developed using Microsoft's Bot Framework.

Martin provides a seamless experience for users to browse the intervention catalog or engage with AI fatigue-reduction interventions through Teams' task modules (embedded web controls), allowing users to complete all tasks directly within the Teams app. Users can explore the intervention catalog to learn about different intervention types, navigate to the intervention of their choice, and launch it within the same dialog flow. If users prefer to perform an intervention at a later time, they can copy the intervention's metadata, which includes a link to launch the intervention at any time. This metadata can be pasted into a calendar event, allowing users to schedule the intervention at a more convenient time.
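The schedule-for-later flow described above (copying an intervention's metadata, including its launch link, into a calendar event) can be approximated with a plain iCalendar entry. A sketch, where the title and URL values are hypothetical placeholders:

```python
# Sketch of the "schedule it for later" flow: wrap an intervention's
# metadata (title plus launch link) in an iCalendar event the user can
# import into their calendar. Title and URL are hypothetical placeholders.
import uuid
from datetime import datetime, timedelta, timezone

def intervention_ics(title: str, launch_url: str, start: datetime,
                     minutes: int = 5) -> str:
    end = start + timedelta(minutes=minutes)
    fmt = "%Y%m%dT%H%M%S"
    stamp = datetime.now(timezone.utc).strftime(fmt) + "Z"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//fatigue-bot//EN",
        "BEGIN:VEVENT",
        f"UID:{uuid.uuid4()}@example.com",
        f"DTSTAMP:{stamp}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{title}",
        f"DESCRIPTION:Launch: {launch_url}",
        "BEGIN:VALARM",            # built-in calendar reminder
        "TRIGGER:-PT5M",
        "ACTION:DISPLAY",
        "DESCRIPTION:Time for your intervention",
        "END:VALARM",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(intervention_ics("Feel calm and present (2 min)",
                       "https://example.com/launch?id=123",
                       datetime(2024, 6, 10, 14, 30)))
```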

When Martin nudges users to perform an intervention, they have the option to either complete the intervention immediately or postpone it to a later time that day. If users choose to proceed immediately, they can select from a list of intervention types they are interested in performing. Martin will then select a random intervention from that category that has been used the least frequently. Alternatively, users can choose to postpone the intervention and specify a time for Martin to follow up. Just before users engage with their AI fatigue-reduction intervention, Martin asks them to rate their momentary AI fatigue on a 5-point scale (1=Not at all AI fatigued; 3=Moderately AI fatigued; 5=Extremely AI fatigued). After the intervention is completed, Martin asks users to reflect on the intervention and rate how effective it was (1=Very poor; 3=Acceptable; 5=Very good). Finally, Martin asks users to assess their momentary AI fatigue again, comparing it with their rating from before performing the intervention.
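Martin's selection rule (a random pick among the least-frequently-used interventions in the user's chosen category) is straightforward. A sketch, with a hypothetical catalog and usage-count store:

```python
# Sketch of Martin's selection rule: from the chosen category, pick
# uniformly at random among the interventions used least often so far.
# The catalog and usage counts are hypothetical stand-ins.
import random
from collections import Counter

def pick_intervention(catalog: dict, usage: Counter, category: str) -> str:
    candidates = catalog[category]
    min_uses = min(usage[i] for i in candidates)
    least_used = [i for i in candidates if usage[i] == min_uses]
    return random.choice(least_used)

catalog = {"Get my mind off AI": ["penguin-video", "favorite-song"],
           "Feel calm and present": ["five-senses", "affirmations"]}
usage = Counter({"penguin-video": 2, "favorite-song": 0,
                 "five-senses": 1, "affirmations": 1})
print(pick_intervention(catalog, usage, "Get my mind off AI"))  # favorite-song
```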

Martin supports three different types of AI fatigue-reduction interventions:

Video-based intervention: This modality begins with a brief description of the intervention content, followed by a task module dialog that plays a video.

Single-turn text prompt intervention: This type presents a short instruction to guide users in an activity, followed by a prompt to answer a reflective question.

Conversation-based intervention: This involves a dialog that leads users through a series of prompts designed to reduce AI fatigue.

We developed interventions based on principles from Cognitive Behavioral Therapy (CBT) and Dialectical Behavioral Therapy (DBT), two well-established and empirically supported therapeutic approaches used to treat various mental health challenges and promote well-being. Each intervention was designed to take less than five minutes and was delivered through one of three formats: a short video, a simple text prompt, or a brief therapeutic conversation with Martin. We categorized the interventions based on their purpose and the level of effort required from users:

Disengage from work (Low effort): These interventions are adapted from the DBT pleasant activities schedule, which involves engaging in activities that promote positive emotions and help regulate mood. Examples include watching a video of penguins or listening to a favorite song. These interventions are quick and easy, designed to give users a simple, low-effort break without engaging in potentially harmful behaviors like excessive social media use or overeating. They are delivered via text prompts or videos.

Achieve calm and mindfulness (Medium effort): Inspired by mindfulness practices in CBT and DBT, these interventions aim to help users center their attention on the present moment to reduce AI fatigue and regain control over their thoughts and emotions. Activities may include focusing on sensory details of the environment or writing affirmations with the non-dominant hand. These require moderate effort and have strong support for their effectiveness in lowering AI fatigue. They are typically presented in text prompts or video formats.

Address AI fatigue directly (High effort): These interventions focus on helping users confront stressors by using techniques like cognitive reframing, making pros and cons lists, and seeking emotional support from others. They require more effort as they involve actively working through stressful situations. These interventions are delivered through text prompts or conversations with Martin.

We carried out a four-week, between-subjects user study in which participants interacted with our system via the Martin chatbot. The chatbot delivered AI fatigue-reducing micro-intervention content and supported various study protocols and requirements. We recruited AI specialists from a large technology company by sending email invitations to a randomly selected group from the organization's employee database. Interested participants completed a brief screener survey covering demographics (e.g., age, gender, role) and work setup (e.g., device specifications, OS, webcam availability). Eligible participants, whose devices met the requirements for our sensing software, were asked to run a 30-minute compatibility check with the study software. We then enrolled participants on a first-come, first-served basis. A total of 75 participants were enrolled. Participants were randomly assigned to one of two conditions, ensuring equal gender distribution to account for previous findings indicating that women report higher levels of workplace AI fatigue. One participant dropped out, and another switched conditions during the first week due to unforeseen technical issues. Of the 74 participants who completed the study, 65.1% identified as male and 32.6% as female. Age distribution was as follows: 38.4% were 36-45 years old, 23.3% were 26-35 years old, and 23.3% were 46-55 years old. In terms of job roles, 54.7% worked in Engineering/Development, 22.1% in Sales and Marketing, 8.1% in Operations and Services, 5.8% in Business Development and Strategy, and 4.7% in Administrative Assistant or Human Resources roles. A majority (86.2%) worked remotely from home.
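Random assignment with a balanced gender split across conditions amounts to stratified randomization. A minimal sketch, with hypothetical participant records:

```python
# Sketch of condition assignment stratified by gender so that each
# condition receives an approximately equal gender split. Participant
# records are hypothetical placeholders.
import random
from typing import Dict, List

def assign_conditions(participants: List[Dict], seed: int = 42) -> None:
    rng = random.Random(seed)
    by_gender: Dict[str, List[Dict]] = {}
    for p in participants:
        by_gender.setdefault(p["gender"], []).append(p)
    for group in by_gender.values():
        rng.shuffle(group)          # randomize within each stratum
        half = len(group) // 2
        for i, p in enumerate(group):
            p["condition"] = "US" if i < half else "AD"

participants = [{"id": i, "gender": g}
                for i, g in enumerate(["male"] * 4 + ["female"] * 4)]
assign_conditions(participants)
print([(p["id"], p["gender"], p["condition"]) for p in participants])
```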

Martin allowed participants to access study instructions by messaging “help” and to request on-demand interventions by messaging “hi,” which triggered a dialog for browsing and performing interventions as needed. The timing of reminders for EMAs and surveys, as well as which dialog flows were available, was customized for each user. Due to unforeseen performance issues with the sensing software, webcam signals from 16 participants (eight in each condition) could not be captured. However, since the AI fatigue score was robust to missing components, this did not impact the study's results.

The study procedure consisted of one week of onboarding, four weeks of observation of intervention usage and engagement, and one week of off-boarding. During the onboarding week, participants were instructed to install the necessary sensing software and the chatbot, Martin, and to complete an intake survey. The survey gathered information about the participant's local time zone and typical work hours, which was used to customize the system's interactions with them. The survey also included the Depression, Anxiety, and Stress Scale 21 (DASS-21), a self-report tool used to assess clinical levels of depression, anxiety, and stress, and the Emotion Regulation Questionnaire (ERQ), a scale designed to measure participants' tendencies to regulate their emotions via cognitive reappraisal and expressive suppression. Additionally, participants were asked to report their current stage of behavior change toward reducing work-related AI fatigue, based on the Transtheoretical Model: Stage 1: Pre-contemplation, Stage 2: Contemplation, Stage 3: Taking action, and Stage 4: Maintenance. The intake survey also collected data on personality traits, stressful life events, emotional resilience, and self-care practices. This initial data helped tailor the interventions to the individual's needs throughout the study.

During the four-week observation period, participants were asked to interact with Martin to engage in AI fatigue-reducing interventions. The system was configured to enable features specific to their assigned conditions:

User-Scheduled Engagement: In this condition, participants were asked to plan their interventions in advance. Every Friday before each study week, Martin prompted participants to browse the catalog of AI fatigue-reducing interventions, select specific interventions they would like to try, and schedule at least one intervention into their work calendar for the upcoming week. Participants could copy the intervention details into their calendar along with a link to launch the intervention. The built-in reminder functionality of the calendar was used to notify them when it was time to perform the intervention. On Mondays, Martin reminded participants to review their scheduled interventions and make adjustments if necessary. When the scheduled time arrived, participants clicked on the link in the calendar event to engage with Martin and complete the intervention. In addition, participants in this condition had the option to access the intervention catalog on demand, where they could either initiate an intervention immediately or reschedule it for later.

Adaptive Delivery Engagement: In this condition, participants were asked to engage with an intervention based on the system's AI-powered Adaptive Delivery component. When the system determined that a fatigue-reducing intervention was appropriate for the participant, Martin sent a message to the participant, presenting an option to engage with the intervention immediately or postpone it to a later time that day. If participants opted to perform the intervention right away, Martin presented them with a choice between three categories of interventions. After selecting a category, Martin would randomly choose an intervention from that category that had been used the least frequently. Participants would then engage with Martin to complete the intervention. Similar to the US condition, participants in the AD condition could also access the intervention catalog on demand and either perform the intervention immediately or schedule it for later in the day.

Based on each participant’s reported work hours, Martin prompted them to complete five ecological momentary assessments on weekdays, with timing roughly spaced throughout the day (e.g., 9 AM, 11 AM, 12:30 PM, 2:30 PM, and 4 PM for a 9 AM–5 PM workday). Participants were also asked to complete two optional EMAs over the weekend, at 11 AM and 3 PM. Each EMA included two components: the first required participants to rate their AI fatigue level over the past 30 minutes using a 5-point scale, and the second gathered information on work demands, available resources, emotional states, food intake, and social interactions. These EMA questions are available in the Supplementary Information.
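A schedule of five prompts spread through each participant's reported work hours can be generated mechanically. A sketch (the midpoint-of-slot spacing is our assumption; the study states only that prompts were roughly spaced, with hand-tuned example times):

```python
# One mechanical way to spread five weekday EMA prompts through a
# participant's reported work hours. The study's actual times were
# hand-tuned (e.g., 9 AM, 11 AM, 12:30 PM, 2:30 PM, 4 PM for 9-to-5).
from datetime import datetime

def ema_times(work_start: datetime, work_end: datetime, n: int = 5):
    """Return n prompt times spread evenly through the workday."""
    span = work_end - work_start
    # Offset each prompt to the middle of its slot so none lands at the
    # very start or end of the day.
    return [work_start + span * (i + 0.5) / n for i in range(n)]

start = datetime(2024, 6, 3, 9, 0)
end = datetime(2024, 6, 3, 17, 0)
for t in ema_times(start, end):
    print(t.strftime("%H:%M"))  # 09:48, 11:24, 13:00, 14:36, 16:12
```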

In addition to EMAs, participants were asked to fill out morning surveys 15 minutes before starting their workday, evening surveys 15 minutes before finishing, and weekly surveys on Friday afternoons. The morning surveys incorporated questions from the Census Sleep Diary [10], while the evening surveys focused on food and beverage consumption throughout the day. For the first three weeks of the study, participants completed weekly surveys that included the DASS-21 and questions regarding their stage of behavior change. After the four-week observation period, participants filled out an exit survey that covered the DASS-21, emotional resilience, stressful life events, and behavior change stages. The exit survey included eight questions regarding the usability of the study conditions, such as ease of use, satisfaction, and frustration. It also asked condition-specific and open-ended questions about intervention preferences, timing, and comparisons between on-demand, pre-scheduled, and system-initiated interventions. Participants were asked to provide feedback on the content, motivation to engage in interventions, and perceived impact on AI fatigue reduction. These questions were consistent across both study conditions.

We combined data from system usage logs and survey responses to examine engagement patterns, intervention usage, and outcomes. The system usage logs were used to track each intervention attempt, categorizing it into one of the three intervention types, determining whether it was accessed on demand, and recording timestamps for start and completion times. We also captured AI fatigue levels before and after intervention use, user ratings, and any free-form comments from participants. In total, 1,612 unique intervention attempts were recorded during the study. Of these, 29.1% (469/1,612) were initiated but never completed. Among the completed interventions, 94.8% (1,099/1,161) were followed by user ratings, and 91.3% (1,060/1,161) had both pre- and post-intervention AI fatigue levels recorded.

We collected 6,452 AI fatigue levels from EMAs, 1,123 pre-intervention levels, and 1,038 post-intervention levels, for a total of 8,613 AI fatigue levels. Each participant provided both intake and exit DASS-21 measures, for a total of 380 DASS-21 measures across 76 participants, including 212 weekly measures. We calculated momentary AI fatigue reduction by subtracting post-intervention AI fatigue levels from pre-intervention levels, and study-long AI fatigue reduction by comparing DASS-21 subscale responses from intake and exit surveys. Positive values indicated greater AI fatigue reduction. We aggregated the intervention usage and AI fatigue data for each participant. Participants' reported stages of behavior change (Stage 1: Pre-contemplation, Stage 2: Contemplation, Stage 3: Taking action, Stage 4: Maintenance) were mapped to numerical values, and we analyzed changes in these stages from the study's start to its end. Additional data on depression/anxiety, personality, life events, resilience, sleep, food intake, and other factors were collected but not analyzed in this paper due to time constraints.

For comparing the means of the two conditions (AD vs. US), we used the Welch two-sample t-test, applying the Benjamini-Hochberg procedure [2] for multiple-comparison correction where necessary. For within-participant comparisons, we employed paired t-tests. One-way ANOVA was used to examine differences in outcome variables (e.g., AI fatigue reduction) across the intervention categories. Significant results were followed by pairwise comparisons, adjusting for Type I error risk with Tukey's HSD procedure. Linear mixed-effects models were used to analyze relationships between participant characteristics and outcomes, with pairwise differences again assessed using Tukey's HSD procedure. Gender was considered as a variable using data from the 75 participants who identified as male or female (due to the small number of other gender identities, N=1). Pearson's correlation was used for correlation analyses. Data processing and statistical analyses were conducted using Python and R.

Two researchers qualitatively coded the open-ended survey responses using inductive thematic analysis. Several topics of interest were identified, including the timing and frequency of bot engagement, motivating factors, preferences for interventions, and desired functionalities. Responses were categorized into themes, and the frequency of each theme was quantified.

To begin, we examine participants' self-reported AI fatigue levels throughout the study, providing a temporal overview of the overall effects of the intervention. We then present our findings organized by research question. First, we explore the results related to RQ-T1, focusing on how the two engagement timing conditions influenced overall intervention usage, momentary and long-term AI fatigue reduction, and user ratings; we also discuss the effect of on-demand intervention usage. Next, we address RQ-T2, examining how the three intervention types impacted usage, AI fatigue reduction, and ratings. Lastly, we turn to RQ-T3, summarizing participants' feedback on the system's usability, engagement timing, and the interventions themselves.
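Since the analyses were run in Python and R, the between-condition comparison can be sketched in Python as Welch t-tests with Benjamini-Hochberg correction across outcome measures. The data arrays here are hypothetical placeholders, not study data:

```python
# Sketch of the between-condition analysis: Welch two-sample t-tests on
# per-participant outcomes, with Benjamini-Hochberg correction across
# multiple outcome measures. Data arrays are hypothetical placeholders.
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

outcomes = {
    "interventions_completed": ([19, 22, 18, 25], [7, 9, 6, 8]),       # AD, US
    "momentary_reduction":     ([0.4, 0.3, 0.5, 0.2], [0.3, 0.5, 0.4, 0.3]),
}

names, pvals = [], []
for name, (ad, us) in outcomes.items():
    t, p = ttest_ind(ad, us, equal_var=False)  # Welch's t-test
    names.append(name)
    pvals.append(p)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, p, pa, rej in zip(names, pvals, p_adj, reject):
    print(f"{name}: p={p:.3f}, BH-adjusted={pa:.3f}, significant={rej}")
```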

Study-long AI fatigue. Participants did not report significant changes in AI fatigue levels over the course of the study. At the beginning of the study, the average AI fatigue level was relatively low (𝑥¯=3.56, 𝜎=2.34) and remained consistent thereafter. End-of-study levels (𝑥¯=3.78, 𝜎=2.55) showed no significant change from baseline, indicating no major increase in AI fatigue from AI usage across the four-week period.

Short-term AI fatigue. Momentary AI fatigue levels, as assessed through EMAs and pre-/post-intervention fatigue reports, remained steady throughout the study. On average, participants each reported 87 momentary AI fatigue levels over the four-week observation period (𝑥¯=3.21 per day, 𝜎=1.88). The average momentary AI fatigue level was 1.64 (𝜎=0.82), which falls between "1=Not at all AI fatigued" and "2=Slightly AI fatigued," and showed minimal variation throughout the study. AI fatigue levels tended to be slightly lower during weekends. EMA AI fatigue levels were generally higher before interventions and lower afterward.

Pre-/post-intervention AI fatigue. The pre-intervention AI fatigue level was 𝑥¯=2.43 (𝜎=0.98), and the post-intervention level was 𝑥¯=1.95 (𝜎=0.81), reflecting a statistically significant reduction in AI fatigue following interventions (t(1103)=13.746, p≪0.001).

RQ-T1: Engagement Timing Impact

Quantity of interventions used. Participants completed an average of 13.65 interventions over the four-week study (𝜎=10.39, min=2, max=55). There was a statistically significant difference in the number of interventions completed between the two conditions: Adaptive Delivery participants completed significantly more interventions than User-Scheduled participants (19.74 vs. 7.56 per participant, t(63.633)=-6.696, p≪0.001), but this difference is likely attributable to the study design, in which AD participants were prompted throughout the day.


AI fatigue reduction. Despite the difference in intervention usage frequency, there was no statistically significant difference in either momentary or study-long AI fatigue reduction between the two conditions. Additionally, there was no correlation between the total number of completed interventions and study-long AI fatigue reduction (Pearson r=-0.06).


User ratings. Participants generally had a positive response to the interventions, providing an average rating of 3.65, which falls between '3=Acceptable' and '4=Good' (𝜎=0.98). However, AD participants rated the interventions significantly lower than US participants, by about 0.256 points (𝜒²(1)=5.962, p<0.05).


Behavior change stage. At the start of the study, 49.6% of participants were in the ‘Stage 3: Taking action’ phase of behavior change, with 31.8% in ‘Stage 2: Contemplation,’ 14.4% in ‘Stage 4: Maintenance,’ and 4.2% in ‘Stage 1: Pre-contemplation.’ When controlling for the behavior change stage at the study’s start, we found a statistically significant difference in progression through the behavior change stages between the two conditions: US participants reported significantly more advancement through the stages compared to AD participants (F(1) = 6.682, p < 0.05), with no statistically significant interaction effect between intake stage and condition.


On-demand usage. While both groups had access to on-demand interventions, US participants completed a significantly greater number of on-demand interventions than AD participants (2.72 vs. 0.04; t(41.617)=7.399, p≪0.001). On average, US participants completed interventions on-demand 39.8% of the time (𝜎=0.34), with 47.3% of US participants completing on-demand interventions 50% of the time or more. For US participants, on-demand interventions reduced AI fatigue significantly more than pre-scheduled ones (𝜒²(1)=10.128, p<0.01), by about 0.21 points. This result could be an artifact of the study design, as US participants might have preferred interventions at different times than initially scheduled. Based on pre-intervention AI fatigue levels, US participants used on-demand interventions when slightly more fatigued than at scheduled times (2.19 vs. 2.05), but this difference was not statistically significant (t(229.45)=-1.744, p=0.082). There was no significant difference in subjective ratings between on-demand interventions and those completed at scheduled times.

RQ-T2: Intervention Type Impact

Quantity of interventions used by type. Participants were able to choose from the three intervention types during the study to combat AI fatigue. On average, participants selected ‘Get my mind off AI’ interventions 35.4% (𝜎=0.252) of the time and completed 70.2% of those selected. They selected ‘Feel calm and present’ 44.7% (𝜎=0.249) of the time and completed 71.4% of those selected, while selecting ‘Think through my AI fatigue’ 16.8% (𝜎=0.198) of the time and completing 100% of those selected. US participants completed statistically significantly more ‘Feel calm and present’ interventions (t(76.453) = -2.902, p < 0.01) and significantly fewer ‘Get my mind off AI’ interventions (t(75.124) = -2.114, p < 0.05) compared to AD participants. No statistically significant differences in the usage of ‘Think through my AI fatigue’ interventions were found between US and AD participants. We modeled the impact of baseline DASS-21 stress, emotion regulation strategies, behavior change stage, age, and gender on the completion rate per intervention type and found baseline DASS-21 stress to have a statistically significant effect on the completion rate of ‘Feel calm and present’ interventions (F(1) = 5.436, p < 0.05). No other statistically significant effects were observed.


AI fatigue reduction by type. We examined the impact of the completion rate per intervention type on reducing AI fatigue over the study period. A higher rate of completed ‘Get my mind off AI’ interventions was associated with statistically significant reductions in perceived AI fatigue (F(1,75) = 5.836, p < 0.05). Of the 1024 completed interventions, ‘Think through my AI fatigue’ interventions reduced AI fatigue by 0.38 points on average (𝜎=0.58), ‘Get my mind off AI’ interventions reduced AI fatigue by 0.31 points on average (𝜎=0.52), and ‘Feel calm and present’ interventions reduced AI fatigue by 0.24 points on average (𝜎=0.50). Intervention type had a statistically significant effect on AI fatigue reduction (𝜒²(1) = 8.75, p < 0.01). Pairwise comparisons revealed that ‘Get my mind off AI’ interventions were more effective at reducing AI fatigue than ‘Feel calm and present’ interventions, and ‘Think through my AI fatigue’ interventions were more effective than ‘Feel calm and present’—both differences being statistically significant. No significant difference was found in AI fatigue reduction between ‘Think through my AI fatigue’ and ‘Get my mind off AI’ interventions. These results remained consistent when controlling for condition (US/AD), baseline AI fatigue, emotion regulation style, behavior change stage, gender, and age.


User ratings by type. On average, participants rated the interventions 3.63 for ‘Get my mind off AI’ (𝜎=0.98), 3.70 for ‘Feel calm and present’ (𝜎=0.94), and 3.55 for ‘Think through my AI fatigue’ (𝜎=0.93). Intervention type had a statistically significant effect on user ratings (𝜒²(1) = 7.22, p < 0.05), with ‘Get my mind off AI’ interventions receiving significantly higher ratings than ‘Think through my AI fatigue’ interventions, suggesting greater user satisfaction with interventions designed to disengage from AI-driven tasks.

RQ-T3: User Feedback on Intervention Timing and Types

Pre-scheduled participant feedback. US participants (N=43) used a variety of factors to determine when to schedule interventions on their calendars. Some participants chose specific times, such as the beginning or end of the day (N=24), while others spaced them out throughout the week (N=7). Many participants looked for free spots on their calendar (N=13) after back-to-back meetings when they expected higher levels of AI fatigue or during afternoons when they anticipated feeling tired. US participants appreciated the accountability that pre-scheduled interventions provided (N=13). Some shared, “I didn’t forget because it was on the calendar” and “It calmed me seeing it was there.” A subgroup (N=13) particularly liked planning interventions in advance or setting recurring interventions. Others mentioned the ease of use (N=5) and the ability to use them on demand if needed (N=3), with some noting the benefit of having planned breaks to recharge or learn something new (N=6). However, a few participants expressed frustration that the system did not automatically schedule interventions based on their availability, as they found that free time slots didn’t always coincide with moments of peak AI fatigue. While 11 out of 43 US participants stated that they liked both pre-scheduled and on-demand interventions equally, 30 participants showed a strong preference for on-demand interventions. They found them more practical because they could access them when they felt fatigued by AI tasks, noting that it was difficult to predict when they would experience fatigue in the future. Despite their ease of access, participants also mentioned that remembering to complete the on-demand interventions was challenging. Additionally, 33 participants suggested that the system could benefit from automatic nudging based on their AI fatigue levels.

Adaptive Delivery participant feedback. AD participants (N=43) found that AD interventions served as timely reminders to take breaks during moments of heightened AI fatigue (N=30). Participants appreciated the convenience and helpfulness of the interventions (N=17). When discussing improvements, AD participants raised concerns about timing and frequency. They reported that the nudges were sometimes too frequent and disruptive to their focus, expressing a desire for the system to allow them to ignore notifications more easily. They also suggested that the system could improve by providing automatic detection of AI fatigue, intervening when needed, and factoring in an individual’s task context and availability to prevent disruption. Many AD participants liked the autonomy to perform interventions at their discretion using the on-demand feature, which they found less disruptive to their workflow and more empowering (N=18).


System usability feedback. Participants in both conditions expressed positive reactions to the intervention system. On a 5-point scale (1=Strongly disagree; 5=Strongly agree), both groups agreed that the system made it easier to engage in interventions compared to before the study (𝑥̄=4.07, 𝜎=0.98), was easy to use (𝑥̄=4.03, 𝜎=1.13), and helped them engage in more interventions than they typically would (𝑥̄=3.88, 𝜎=1.12). They also felt that the system met their needs for addressing AI fatigue (𝑥̄=3.76, 𝜎=1.09). However, when asked if they would continue using the system, participants had lower agreement (𝑥̄=3.38, 𝜎=1.20), indicating mixed enthusiasm for ongoing use. Importantly, participants disagreed with the statement that using the system was frustrating (𝑥̄=2.37, 𝜎=1.18), suggesting that the system was not a source of additional cognitive load. A statistically significant difference was found between conditions: AD participants (those prompted by the system) rated their system as easier to use than US participants did theirs (4.28 vs. 3.79, p<0.05), although no other differences between conditions reached statistical significance.

Intervention type feedback. In terms of intervention effectiveness for managing AI fatigue, participants favored interventions that helped them reduce cognitive load and refocus. The ‘Feel calm and present’ and ‘Get my mind off AI’ interventions were seen as particularly helpful: 30 participants felt that ‘Feel calm and present’ interventions were most beneficial for reducing fatigue in the moment, while 26 participants found ‘Get my mind off AI’ interventions most effective for immediate relief. In contrast, only 4 participants found the ‘Think through my AI fatigue’ interventions helpful for managing AI fatigue in the moment. For long-term reduction in AI fatigue, 16 participants reported that ‘Feel calm and present’ interventions had the most sustained impact, while 12 favored ‘Get my mind off AI’ and 9 identified ‘Think through my AI fatigue’ interventions as most helpful. Some feedback was more polarized. For example, one participant found ‘Think through my AI fatigue’ interventions effective in shifting focus, while another felt these interventions intensified their fatigue due to the cognitive demands. Similarly, some participants appreciated nature videos or activities that helped them disconnect from work, while others felt that screen-based interventions contributed to their AI fatigue rather than alleviating it.

Participants showed a preference for a variety of intervention types, with an interest in options that allowed them to switch between different settings (e.g., away from the desk or desk-based), modes of engagement (e.g., mentally engaging versus relaxing), and levels of complexity (e.g., low-effort, easily accessible interventions). Overall, participants preferred interventions that were simple and easy to engage with, suggesting that a system for reducing AI fatigue should minimize cognitive load and provide clear, immediate relief from mental exhaustion.

In this four-week study with 74 AI specialists, we explored how the timing of digital micro-interventions and the types of intervention content influenced user engagement and reduction of AI fatigue during the workday. Participants were assigned to two different intervention timing conditions (US and AD) and could choose from three different types of intervention content. Our results showed that digital micro-interventions were effective in reducing momentary AI fatigue (i.e., the change in fatigue levels before and after an intervention), regardless of the timing or type of content selected. Although the timing of the interventions did not significantly affect either short-term or long-term fatigue reduction, we observed distinct differences in how participants perceived the two timing conditions: AD participants found the interventions easier to complete and more motivating, while US participants appreciated the ability to schedule interventions around their work commitments. Furthermore, participants in the US condition showed more significant progress in terms of behavior change, particularly in adopting long-term strategies to manage AI fatigue.

When it came to the content of the interventions, we found that participants favored low-effort, positive-distraction activities (e.g., ‘Feel calm and present’ or ‘Get my mind off AI’) for immediate relief from AI fatigue. However, more demanding, reflective interventions (e.g., ‘Think through my AI fatigue’) were associated with greater long-term reductions in AI fatigue, as these interventions encourage deeper cognitive engagement and self-reflection, which helped reduce fatigue over time. Taken together with qualitative feedback, these findings suggest that user preferences for intervention timing and content vary considerably, both across individuals and within individuals over time. We therefore recommend that digital intervention systems support both US and AD delivery options and include a diverse range of content types, giving users the flexibility to choose interventions that match their momentary needs.

Our findings highlight that digital micro-interventions deployed throughout the workday can reduce AI-related fatigue, with this effect remaining consistent across different timings (Adaptive Delivery, User-Scheduled, or on-demand) and content types (ranging from low- to high-effort interventions). While digital micro-intervention systems are expected to improve over time for enhanced effectiveness, our study shows that even in their current form, these interventions empower employees to mitigate AI fatigue in just a few minutes. Furthermore, digital micro-interventions proved effective as standalone measures to alleviate AI fatigue, functioning as secondary strategies that target individual well-being without requiring changes to broader organizational processes (e.g., task allocation, AI system workload management). Based on these findings, we propose that organizations provide employees with access to digital micro-interventions as a primary tool for AI fatigue management. This approach would be particularly valuable when addressing AI fatigue at an individual level, especially when structural changes to AI systems or workload adjustments are difficult to implement immediately. Future research could compare the effectiveness of digital micro-interventions with other AI fatigue management strategies, such as scheduled breaks or workload adjustments, and investigate whether organizational changes, like a reduction in AI-driven tasks, could enhance the benefits of micro-interventions. Additionally, exploring the role of digital micro-interventions as a tool for supporting recovery from AI-induced fatigue, particularly in the context of mental health, will be crucial for long-term implementation.

Our findings suggest that digital micro-intervention systems should offer users multiple levels of control over the timing and content of interventions, from low-control/high-automation options to high-control/low-automation ones. The majority of participants across both conditions preferred having intervention timing determined by the automated AI fatigue detection system for ease of use, an opinion based either on lived experience of being assigned to the AD condition or on reading a description of the AD condition after having completed the study in the US condition. Yet participants also requested concurrent access to interventions on demand, the ability to pre-schedule interventions at their discretion, and the ability to “snooze” the entire system. Our findings also revealed that the AD system tested was not sufficiently intelligent for some users, due to issues like receiving intervention nudges while busy. Further, participants in the US condition reported more advancement through the stages of behavior change over the course of the study compared to those in the AD condition, with the majority of participants who advanced shifting from Stage 2: Contemplation to Stage 3: Taking action. In other words, our results suggest that, despite AD being the preferred condition, participating in the US condition may have shifted users' self-perceptions toward being individuals capable of taking action, while participating in the AD condition did not change users' self-perceptions. Overall, despite user preferences for AD interventions and the promise of intelligent adaptability and personalization in AD systems, there were benefits to user-initiated on-demand and pre-scheduled options, especially while AD system metrics are undergoing refinement. Future research should systematically test various ratios of system automation versus user control and seek to establish (a) whether user-initiated intervention engagement promotes greater advancement through behavior change stages than future iterations of AD systems with more sophisticated timing algorithms, and (b) which type of intervention engagement (user- versus system-initiated) best matches each stage of behavior change.

Our study also revealed a parallel user interest in system-selected content. Specifically, participants wanted to be provided with the “right” intervention for the given moment, i.e., an intervention they would like and that would address their momentary needs. Participants also indicated interest in accessing a wide variety of interventions, suggesting that novelty in and of itself may be an important component of user engagement and, secondarily, of intervention impact. Systems delivering digital micro-interventions should therefore be able to intelligently select interventions based on users’ momentary needs, including the need for novelty. Future research should determine how often new content should be introduced and implement content renewal at that cadence. These features will likely lead to more sustained user engagement and AI fatigue reduction over time.

Although personalized interventions have been shown to improve engagement, our findings suggest that users might not always know which intervention content most effectively reduces their AI fatigue. In our study, higher-effort interventions aimed at tackling AI overload had the most significant impact on reducing AI fatigue, but they were selected less frequently than simpler interventions and rated less favorably than those providing more immediate relief. Future systems should incorporate feedback mechanisms that allow users to track their engagement and reflect on their AI fatigue levels, helping them identify more effective strategies over time. For example, a personalized dashboard summarizing AI fatigue levels, engagement history, and past intervention effectiveness could guide users toward more informed choices and encourage effective self-experimentation with different content types. Just as the system provides feedback to users, user feedback can improve system performance. Given the diversity of user preferences in managing AI fatigue, obtaining feedback about intervention timing and content can drive system evolution. Because AI fatigue varies greatly between individuals, personalization becomes essential, and offering customizable intervention timing and content will require sophisticated adaptive models.
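One way to operationalize need- and novelty-aware selection is sketched below: candidate interventions are scored by fit to the user's current fatigue level (normalized 0..1), their past observed effect, and a recency-based novelty bonus. The data structures and weights are hypothetical assumptions for illustration, not the selection logic our study used.

```python
from dataclasses import dataclass


@dataclass
class Intervention:
    name: str
    effort: float             # 0 = low-effort distraction, 1 = high-effort reflection
    past_effect: float        # observed fatigue reduction for this user, 0..1
    sessions_since_used: int  # recency counter feeding the novelty bonus


def select_intervention(candidates: list[Intervention],
                        fatigue: float,
                        novelty_weight: float = 0.2) -> Intervention:
    """Pick the intervention with the best need + novelty score.

    Heuristic: when fatigue is high, lean toward low-effort relief the user
    will actually accept; at moderate fatigue, allow higher-effort reflective
    content, which our results associated with larger long-term reductions.
    Recently unused items receive a small novelty bonus.
    """
    def score(iv: Intervention) -> float:
        # Effort fit: high fatigue -> prefer low effort, and vice versa.
        effort_fit = 1.0 - abs((1.0 - fatigue) - iv.effort)
        novelty = novelty_weight * min(iv.sessions_since_used, 10) / 10
        return 0.5 * effort_fit + 0.5 * iv.past_effect + novelty

    return max(candidates, key=score)
```

A content-renewal system could then be layered on by periodically adding fresh items to the candidate pool, keeping the novelty term meaningful over time.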

Enabling users to actively participate in training the system’s algorithms would also help tailor interventions to each individual’s needs. For example, a dashboard could summarize trends in AI fatigue levels and content efficacy, allowing users to review and refine their choices. Future research should investigate dynamic feedback loops and how real-time preferences can be integrated to optimize the timing and content of interventions for managing AI fatigue.

This study has several limitations. Firstly, participants exhibited relatively low levels of AI fatigue at the beginning of the study. While individuals at any level of AI fatigue can benefit from interventions designed to alleviate it, such tools are likely to have the greatest impact on individuals experiencing moderate to high AI fatigue. Future research should prioritize populations with higher baseline levels of AI fatigue to evaluate the effectiveness of these interventions more comprehensively. Additionally, the sample consisted predominantly of information workers, most of whom were engineers and male-identifying, which limits the generalizability of the findings to broader or more varied groups. Secondly, the comparison between the adaptive delivery and user-scheduled conditions was complicated by both groups having access to on-demand interventions. Furthermore, US participants planned their intervention activities well in advance, whereas AD participants made decisions in the moment. Future studies should disentangle these overlapping variables and isolate the effects of each delivery method (adaptive, user-scheduled, and on-demand), while also examining the gap between when an intervention is chosen and when it is completed. Thirdly, while the intervention strategies were informed by evidence-based practices for mitigating AI fatigue, the specific content had not been independently validated before being implemented in this study. To improve future outcomes, researchers should evaluate the impact of individual intervention components separately from the timing strategies. Lastly, the AI fatigue detection system used in the AD condition was not fully optimized, which may have led to less effective delivery. For example, the metric used to assess AI fatigue counted the number of calendar events in a day but did not differentiate between personal and professional events; personal events, such as self-care activities, may actually have reduced AI fatigue yet were flagged as contributing to it. Additional system-related constraints included the lack of calendar integration for US participants, requiring manual scheduling, and technical issues that forced eight participants to disable their cameras, reducing the accuracy of AI fatigue detection for those AD participants.
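To illustrate the calendar-metric flaw and an obvious remedy, the sketch below contrasts the naive event count with a weighted estimate in which professional events add load and personal (restorative) events offset it. The event categories and weights are hypothetical assumptions for illustration, not the formula used by our detection system.

```python
from dataclasses import dataclass


@dataclass
class CalendarEvent:
    title: str
    hours: float
    is_personal: bool  # e.g., self-care, appointments


def naive_calendar_load(events: list[CalendarEvent]) -> float:
    """The flawed metric: every calendar event counts toward fatigue."""
    return float(len(events))


def weighted_calendar_load(events: list[CalendarEvent]) -> float:
    """A corrected estimate: professional events add load; personal
    (restorative) events subtract a little, floored at zero."""
    load = 0.0
    for ev in events:
        load += -0.5 * ev.hours if ev.is_personal else 1.0 * ev.hours
    return max(load, 0.0)


events = [
    CalendarEvent("Model review", 1.0, False),
    CalendarEvent("Prompt triage", 2.0, False),
    CalendarEvent("Gym", 1.0, True),
]
print(naive_calendar_load(events))     # 3.0 -- the gym session counts as fatigue
print(weighted_calendar_load(events))  # 2.5 -- the gym session offsets load
```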

Privacy concerns become paramount when designing systems to monitor and mitigate AI fatigue. The use of behavioral tracking to assess AI fatigue levels introduces sensitive issues, particularly in workplace settings where such data could inadvertently highlight individual performance or AI fatigue levels, leading to potential stigmatization. This is especially critical in environments with pre-existing pressures, where AI fatigue may already be a sensitive topic. In our approach, we employed high-level activity metrics, such as email volume or time spent in meetings, to estimate AI fatigue. While this method avoids the collection of granular or intrusive data, even such aggregated metrics require clear and transparent policies to protect user privacy. Strong regulations must govern data collection, storage, and use to ensure individuals feel secure and confident in their participation. Transparency about data usage, anonymization techniques, and control mechanisms is essential to build trust and safeguard personal information.

Ethical challenges also extend to intervention strategies. Determining when, how, and at what level to intervene, whether individually or organizationally, requires careful consideration. Interventions must respect user preferences, balancing the need for automation with opportunities for user control. For example, users should have the ability to adjust, opt out of, or customize intervention strategies to align with their comfort and needs. Additionally, systems addressing AI fatigue should strive to incorporate user feedback throughout their design and deployment. This ensures interventions are not only effective but also respectful of individual autonomy and context. Ethical review boards and user-centered design processes must oversee these developments to ensure interventions address AI fatigue responsibly, equitably, and without unintended negative consequences.
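As a sketch of the kind of coarse, consent-gated aggregation this implies, the example below derives a load signal from daily totals only (email count, meeting minutes) and withholds output entirely when the user has opted out. All names, ceilings, and weights are illustrative assumptions, not our deployed pipeline.

```python
from dataclasses import dataclass


@dataclass
class DailyActivity:
    emails_sent: int
    meeting_minutes: int


def fatigue_signal(activity: DailyActivity, consented: bool) -> float | None:
    """Estimate a coarse load signal from aggregated counts only.

    Returns None when the user has not consented: there is no silent
    fallback to raw or more granular behavioral data.
    """
    if not consented:
        return None
    # Normalize against illustrative 'heavy day' ceilings.
    email_load = min(activity.emails_sent / 50, 1.0)
    meeting_load = min(activity.meeting_minutes / 300, 1.0)
    return round(0.4 * email_load + 0.6 * meeting_load, 2)


print(fatigue_signal(DailyActivity(30, 180), consented=True))   # 0.6
print(fatigue_signal(DailyActivity(30, 180), consented=False))  # None
```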

By addressing these privacy and ethical considerations, we aim to create systems that effectively combat AI fatigue while respecting user dignity and fostering long-term trust in digital workplace solutions.

Reducing AI fatigue in workplace settings is critical for ensuring employee well-being and maintaining productivity in environments increasingly reliant on artificial intelligence systems. To inform the design of AI fatigue mitigation strategies, we conducted a four-week longitudinal study investigating the impact of digital micro-interventions, varied by delivery timing and content type, on user engagement and AI fatigue reduction among information workers at a large technology company. Our findings demonstrated that digital micro-interventions are effective in alleviating short-term AI fatigue, underscoring the value of integrating such interventions into workplace systems to yield immediate, positive outcomes. Moreover, the study highlighted the value of personalization in delivery timing, content type, and the balance between user autonomy and system-driven decision-making; personalization can enhance user engagement and lead to more substantial reductions in AI fatigue. We also found that enabling users to reflect on the impact of interventions on their fatigue levels fosters self-awareness and better alignment between perceived and actual intervention effectiveness. This iterative feedback loop can empower users to make more informed decisions about their engagement with AI fatigue-reducing tools. While significant advancements have been made in addressing AI fatigue in both academic research and industry applications, substantial opportunities for improvement remain. Our study provides a foundation for bridging gaps in adherence, empirical testing, and the tailoring of interventions.

Future work should focus on the following areas:

Advanced Personalization Tools: Develop systems capable of dynamically adapting intervention timing and content to align with user preferences and situational needs.

Reflective User Dashboards: Provide users with insights into their engagement patterns, AI fatigue levels, and intervention effectiveness to support informed decision-making (a minimal sketch follows this list).

Continuous Feedback Mechanisms: Offer users the ability to provide ongoing feedback, enabling systems to evolve based on real-world experiences and changing user needs.

Long-Term Efficacy Testing: Conduct longitudinal studies to evaluate the sustained impact of AI fatigue-reduction strategies, particularly as AI systems become more pervasive in the workplace.
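As a minimal sketch of the reflective dashboard idea referenced above, the example below rolls a user's logged sessions up into average fatigue reduction per intervention, the kind of summary that could surface which strategies actually work for them. The structures (SessionLog, summarize) are hypothetical illustrations, not a specification.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class SessionLog:
    intervention: str
    fatigue_before: float  # self-reported, 0..10
    fatigue_after: float


def summarize(logs: list[SessionLog]) -> dict[str, float]:
    """Average fatigue reduction per intervention, for user reflection."""
    deltas: dict[str, list[float]] = defaultdict(list)
    for log in logs:
        deltas[log.intervention].append(log.fatigue_before - log.fatigue_after)
    return {name: round(sum(d) / len(d), 2) for name, d in deltas.items()}


logs = [
    SessionLog("Get my mind off AI", 7, 5),
    SessionLog("Get my mind off AI", 6, 5),
    SessionLog("Think through my AI fatigue", 7, 3),
]
print(summarize(logs))
# {'Get my mind off AI': 1.5, 'Think through my AI fatigue': 4.0}
```

A summary like this would make the pattern we observed (higher-effort reflective content yielding larger reductions despite lower ratings) visible to the user directly.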
By addressing these areas, future systems can more effectively reduce AI fatigue, promote sustained engagement, and support healthier, more productive interactions with AI technologies in workplace environments.