Blog / March 21, 2026
The Body Keeps the Score on Digital Transformation: What AI Is Actually Doing to Workers

The conversation about AI and work has stayed almost exclusively on the displacement question: which jobs survive automation, which don't, what the retraining landscape looks like. That question is real and worth taking seriously, but it has functioned as a distraction from something more immediate and more diffuse that is happening inside organizations right now. Jeong and colleagues, surveying 375 South Korean workers navigating AI transitions, found that the technology's impact on employees doesn't wait for displacement to arrive; it travels through psychological channels that manifest in measurable physical symptoms long before anyone loses a job. Job stress emerged in their data as the complete mediator between AI implementation and declining physical health. In other words, the body responds to the uncertainty and identity disruption of working alongside AI before the disruption fully lands, and that response is physiological, not just psychological.
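For readers who want the mediation claim made concrete, here is a toy simulation of what "complete mediation" means statistically. The data below are entirely synthetic; the variable names and effect sizes are illustrative inventions, not Jeong and colleagues' actual model. The point is only the statistical shape: when the outcome is regressed on the predictor alone, there is a clear total effect, but once the mediator (job stress) is controlled for, the direct effect shrinks toward zero.

```python
# Toy illustration of complete (full) mediation.
# Synthetic data only -- NOT the study's data or results.
import random

random.seed(0)
n = 375  # matches the study's sample size; everything else is simulated

# Generative structure: X -> M -> Y, with no direct X -> Y path,
# i.e. true complete mediation by construction.
ai_exposure = [random.gauss(0, 1) for _ in range(n)]
job_stress = [0.6 * x + random.gauss(0, 1) for x in ai_exposure]
health_decline = [0.7 * m + random.gauss(0, 1) for m in job_stress]

def slope(x, y):
    """OLS slope of y on x (single predictor, with intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Total effect c: health_decline regressed on ai_exposure alone.
c = slope(ai_exposure, health_decline)

# Direct effect c' via Frisch-Waugh partialling: remove job_stress
# from both sides, then regress the residuals on each other.
b_xm = slope(job_stress, ai_exposure)
b_ym = slope(job_stress, health_decline)
x_resid = [x - b_xm * m for x, m in zip(ai_exposure, job_stress)]
y_resid = [y - b_ym * m for y, m in zip(health_decline, job_stress)]
c_prime = slope(x_resid, y_resid)

print(f"total effect c   = {c:.2f}")       # noticeably positive
print(f"direct effect c' = {c_prime:.2f}")  # near zero: stress carries the effect
```

Finding this pattern in observational data is what licenses the claim that the health effect travels *through* stress rather than alongside it, which is why the study's mediation result matters more than a simple correlation would.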
What makes this harder to address than the displacement question is precisely its diffuseness. Murire's systematic review of 4,200 studies on organizational AI adoption documents how companies focus intensely on efficiency metrics and productivity gains while systematically underestimating the cognitive load that accompanies working alongside systems whose logic most employees cannot fully grasp or predict — the sustained mental effort of staying relevant, of calibrating which tasks still require human judgment, of adapting continuously to interfaces and workflows that didn't exist a year ago. That effort creates a kind of chronic hypervigilance that the human nervous system wasn't designed to maintain at the pace digital transformation tends to demand, and the costs accumulate in ways that don't show up on the dashboards tracking the transformation's success.
Shang's research with construction managers in Singapore adds the organizational layer to this picture. The managers who navigated AI adoption most successfully weren't necessarily more technically capable than those who struggled; they worked in environments where leadership had invested in what the research calls organizational readiness — the buffer zones between technological change and personal disruption that allow people to adapt at a pace that doesn't require them to sacrifice their sense of professional identity in the process. That investment is not the norm. Most organizations driving AI adoption are not thinking about organizational readiness as a distinct challenge requiring distinct resources; they are thinking about implementation timelines, capability gaps, and the efficiency gains that justified the investment. Murire's broader finding is that AI transforms organizational culture whether leaders plan for it or not, and the metabolic costs of that unplanned transformation are borne by the humans caught inside it.
Hamza and Karadas complicate the picture in a way that deserves attention: even behaviors organizations typically read as counterproductive — social cyberloafing, the digital wandering that managers discourage — can serve protective functions when employees are overwhelmed by the pace and scope of digital demands. The 400 Iraqi SME employees in their study were not failing to adapt; they were developing informal coping mechanisms for conditions that formal organizational support hadn't caught up to yet. Well-being and innovation don't trade off against each other, their findings suggest, but only in environments where digital leadership creates space for human adaptation rather than demanding immediate optimization — which means the organizations that will benefit most from AI are not necessarily the ones deploying it fastest.
What Chronic Stress Actually Costs — and Why the Balance Sheet Doesn't Show It
The physiological cascade that Jeong documents — blood pressure changes, sleep disruption, the kind of persistent fatigue that makes sustainable high performance impossible — is not a side effect that appears in the cost-benefit analyses organizations conduct before AI implementation. It accumulates in the gap between deployment metrics and human reality, and it compounds in ways that are difficult to attribute to any single decision once they become visible as turnover rates, healthcare utilization, and the organizational brittleness that makes companies less capable of adapting to exactly the disruptions they implemented AI to handle.
Murire's systematic review illuminates the mechanism: employees navigating AI transitions are not simply learning new tools. They are engaged in ongoing cognitive work to stay relevant in environments where the criteria for relevance keep shifting, to maintain confidence in their judgment in contexts where algorithms are increasingly performing functions that used to anchor professional identity, and to sustain creative output while simultaneously managing the anxiety that comes with genuine uncertainty about their role. That work is cognitively expensive in ways that drain the mental bandwidth typically available for the kind of problem-solving and innovation that organizations are simultaneously expecting AI adoption to unlock.
Hamza and Karadas make this concrete in resource-constrained organizational contexts, where the luxury of gradual implementation is typically unavailable. The employees in their study of Iraqi SMEs showed measurable decreases in creative output and problem-solving capacity during AI integration periods — not because the technology impaired their abilities but because the cognitive demands of navigating the transition were consuming the resources that creative work requires. A workforce operating under chronic stress doesn't simply perform at reduced capacity; it generates the turnover costs, healthcare expenditures, and organizational fragility that represent the hidden infrastructure costs of digital transformation, costs that rarely appear in the projections that justified the investment but reliably appear in the outcomes that follow it.
What Strandt and Murnane-Rainey's cross-cultural research adds to this is a finding that complicates the assumption that the stress response is primarily about technical anxiety or change resistance. Eastern leaders in their study, who showed lower initial acceptance of AI systems, simultaneously demonstrated more sophisticated awareness of the human costs embedded in implementation — their emphasis on organizational support structures and peer encouragement reflected an intuitive understanding that sustainable AI adoption is fundamentally about people moving through a transition together, not about optimizing a technical rollout. The workers experiencing these transitions are generating signals about what the process actually requires, but those signals get filtered out by metrics designed to track efficiency rather than human sustainability.
Coaching Leadership as Organizational Infrastructure: What Actually Protects Workers
What Jeong and colleagues found operating protectively in organizations that navigated AI transitions without the predicted health deterioration was not primarily a technical intervention or a training program — it was a leadership orientation. Coaching leadership, the kind that prioritizes individual development over directive management, that asks questions rather than issuing commands and frames the transition as a collaborative learning process rather than a unilateral imposition, functioned as a buffer against the physical health consequences that AI adoption typically generates. The mechanism is less intuitive than it might initially appear: coaching leaders change how employees interpret ambiguity during the transition, and ambiguity is the primary driver of the threat responses that accumulate into chronic stress.
Instead of reading every algorithmic decision as a potential signal about their diminishing relevance, employees working with coaching leaders begin to process AI's presence as data — feedback for developing their capabilities, material for an ongoing conversation about where their role is headed and what skills that trajectory requires. Murire's systematic review provides the structural context for why this matters: the organizations that sustain employee engagement through AI adoption are those that make the transition a learning process rather than a performance evaluation, creating psychological safety not by removing uncertainty but by changing its valence. Uncertainty remains, but it no longer reads as threat.
Strandt and Murnane-Rainey's cross-cultural findings add important texture to this: the coaching leadership model requires cultural calibration to function effectively, with Western leaders achieving buffering effects through different specific practices than Eastern leaders, who need more robust organizational support structures for the approach to produce comparable results. What holds across cultural contexts is the underlying orientation — treating employee confusion and resistance as information about training needs and system design rather than as personal failings or change resistance to be managed. Shang's research in Singapore's construction industry reaches the same conclusion from a different direction: organizations with strong leadership support for AI adoption face significantly lower implementation barriers, but only when that support manifests as developmental engagement rather than performance monitoring, because the former builds the adaptive capacity the transition requires while the latter compounds the anxiety it produces.
The most striking finding from Hamza and Karadas is that this reframing can transform even the informal coping mechanisms employees develop under stress, including behaviors that organizations typically identify as counterproductive, into something organizationally valuable. In coaching leadership environments, social cyberloafing, digital wandering, and the cognitive breaks that employees take when the demands of the transition become overwhelming became not evidence of disengagement but spaces where employees mentally integrated new AI tools with existing expertise, processed the identity disruption of working in fundamentally changed conditions, and sustained the creative capacity the transition was ostensibly designed to enhance. The leadership approach doesn't eliminate the coping mechanisms; it creates conditions where they serve growth rather than just survival.
Scaling the Protection: What Organizational Culture Has to Carry
The protective effect that coaching leadership generates at the individual and team level raises the question that Murire's research points toward but doesn't fully resolve: what does it take to scale that protection across entire organizational cultures, rather than leaving it dependent on whether a particular manager happens to have developed a coaching orientation? Koldovskyi's multi-country analysis of AI intensity and business resilience offers the most direct evidence available. The organizations scoring highest on sustainability and resilience metrics shared a characteristic that most AI implementation frameworks don't measure or prioritize: their leaders had learned to treat employee health not as a byproduct of good management but as a leading indicator of technological readiness, as data about whether the organization has the adaptive capacity to make the integration durable.
The practical implication of this is not obvious, but it is significant: workforce well-being predicts successful AI integration more accurately than technical infrastructure or budget allocation, which means organizations that are monitoring the wrong variables are consistently being surprised by implementation outcomes that were legible in advance to anyone tracking the right ones. A healthy workforce adapts; a stressed workforce resists, and the resistance is not irrational — it is the appropriate response of a system under chronic load to demands that exceed its current capacity.
What Shang's research in Singapore's construction industry makes concrete is that the health outcomes organizations typically track in HR — blood pressure, sick days, turnover rates — improved measurably when leaders adopted coaching rather than directive approaches during AI transitions, and that improvement showed up in implementation success rates as well. These are not separate outcomes running on parallel tracks; they are related in the specific way that human capacity and organizational performance are always related — the technology runs on people, and people have limits that organizational enthusiasm for digital transformation has difficulty fully accounting for.
The synthesis that emerges across Jeong, Murire, Hamza, Strandt, Shang, and Koldovskyi is not a prescription for slowing down AI adoption, which is neither realistic nor necessarily desirable. It is a reframe of what the adoption actually requires — not just technical readiness, budget, and training programs, but the organizational culture, leadership development, and genuine investment in employee psychological safety that determines whether the humans implementing and working alongside AI have the capacity to do so in ways that compound rather than deplete. Koldovskyi's finding that AI intensity correlates with organizational resilience only when accompanied by adaptive leadership models is, among other things, a data point about what organizations are currently leaving on the table by treating the human infrastructure of digital transformation as secondary to the technical one.
The Ethical Dimension Isn't Separate: Why Human Flourishing Is a Strategic Variable
Murire's observation that AI transforms organizational culture whether or not leaders plan for it is most useful not as a warning but as a design constraint. The transformation is going to happen and the culture is going to change; the question is whether the change is shaped by deliberate choices about what kind of organization the transformation is meant to produce, or happens by default as the aggregate of decisions made primarily for technical and efficiency reasons. The ethical dimension of AI adoption is often framed as something separate from the strategic dimension, a set of considerations to be balanced against performance objectives, but Koldovskyi's research suggests the separation is false. Organizations that treat employee health as an afterthought to AI implementation don't simply incur ethical costs; they produce technical outcomes that are less durable, less adaptive, and less capable of compounding over time, because those outcomes run on organizational cultures simultaneously depleted by the transformation meant to strengthen them.
What Shang and colleagues found in Singapore's construction industry — that even technically capable organizations with adequate resources saw AI projects stall until leadership shifted focus from deployment metrics to employee psychological safety — is a version of the same finding at the project level. The human infrastructure deficit they describe is not a soft problem that hard-nosed implementation can push through; it is the actual constraint determining whether the technical investment produces the intended returns.
The most direct statement of this comes from Hamza and Karadas' work in Iraqi SMEs, where resource constraints made the choice between gradual, human-centered implementation and rapid technical deployment starkest. The most successful AI adoptions in their study occurred when leaders explicitly prioritized employee well-being as a precondition for technological success rather than a downstream benefit of it, with the behavioral-tech leadership framework they document operative from the beginning rather than introduced reactively once resistance or health consequences became impossible to ignore. What initially appeared as reduced productivity (the time allowed for adaptation, the space created for employee concerns, the investment in psychological support) in fact accelerated sustainable innovation by building the trust networks that make complex technology adoption possible in the first place. The organizations that experienced those costs as inefficiency rather than investment are the ones whose implementation outcomes told the more familiar story of technical sophistication running on human infrastructure too depleted to fully support it.