Blog / March 16, 2026
The Open-Source Bet: Why the Future of Enterprise AI Belongs to the Orchestrators

The numbers around open-source AI are striking enough on their own — over five million projects on GitHub, a 40% increase in a single year, more than half of enterprise deployments now running on open-source models — but what Kadıoğlu discovered when Fidelity Investments decided to build its entire AI strategy around modularity rather than proprietary models is more interesting than any headline figure. The financial giant's framework, built around twelve interoperable open-source libraries, cuts against the prevailing narrative about enterprise AI in a way that's worth taking seriously: the competitive advantage in this landscape is accruing not to the organizations with the largest proprietary models, but to the ones that have learned to orchestrate open components into systems capable of learning, adapting, and scaling without fracturing under their own complexity.
What makes this more than a technical observation is what Bakonyi found when he interviewed senior managers across large European companies about their AI implementations. Trust — the variable that ultimately determines whether AI adoption takes root or generates resistance — increased when organizations embraced transparent, modular approaches rather than black-box solutions. The paradoxes he identified (knowledge substitution, task substitution, and the conflicts that emerge when domain experts feel their expertise being absorbed rather than augmented) dissolved more readily in environments where employees could actually see how AI components fit together, where the seams between human judgment and machine pattern-recognition remained visible rather than disappearing into proprietary architecture. Fousiani's research on competitive organizational climates adds a dimension that sits underneath all of this: leaders who experience their authority as a responsibility rather than an opportunity create the conditions where modular AI flourishes, where employees encounter these systems as instruments of enhancement rather than signals of their own obsolescence.
The strategic implications extend further than most organizations have thought through. Herremans' aiSTROM framework, developed in direct response to the 34% failure rate of AI projects, is built on the recognition that successful AI strategy requires interdisciplinary teams and cultures of continuous learning — exactly the collaborative orientation that open-source development both demands and, over time, produces. Rostamzadeh and colleagues, in their analysis of AI's impact on organizational behavior, found the same pattern: the companies succeeding with AI were not uniformly the ones with the most advanced proprietary technology, but the ones that had built organizational cultures oriented toward transparency and shared learning. That Fidelity's components have been downloaded more than two million times by the broader AI community is, among other things, evidence that the strategic moat comes from orchestrating capabilities better than anyone else, not from hoarding them.
What a Real AI Strategy Actually Requires: Five Pillars Worth Building
What Kadıoğlu and his team at Fidelity arrived at wasn't primarily a technical architecture — it was a framework for how a large organization thinks differently about intelligence itself. The five pillars they identified — learning from offline data, learning from online feedback, intelligent decision-making, automated assistants, and responsible AI practices — didn't emerge from theoretical modeling but from sustained engagement with the gap between AI's promise and what actually happens when you try to deploy it at enterprise scale, where the edge cases multiply and the elegant solutions developed in controlled environments meet the resistance of institutional complexity.
The first pillar, learning from offline data, sounds more straightforward than it is. Herremans' research puts the overall failure rate of AI projects at 34%, with data quality a recurring culprit, but the more fundamental problem is conceptual: most organizations have not learned to treat their historical information as a strategic asset with its own architecture and governance requirements rather than as an operational byproduct that happens to be available. When Fidelity's engineers built systems to extract patterns from years of transaction data, the activity was simultaneously technical and organizational — they were teaching the institution to see its own past as a source of competitive intelligence, which requires a kind of cultural reorientation that the technical work alone doesn't produce. Rostamzadeh and colleagues found this pattern consistently: companies that implement AI successfully don't just automate existing processes, they reorganize how knowledge moves through their systems, and the data strategy is where that reorganization either begins honestly or doesn't begin at all.
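To make the first pillar concrete, here is a minimal sketch, in Python, of what treating historical data as a training asset looks like at its smallest scale: a model fit on past records and evaluated on records it has never seen. The synthetic transactions, feature choices, and gradient-boosting model are illustrative assumptions, not a description of Fidelity's actual pipeline.

```python
# Minimal sketch of the offline-learning pillar: fit a model on historical
# records, hold some out, and check that learned patterns generalize.
# Data and model choice are illustrative assumptions, not Fidelity's pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)

# Stand-in for years of transaction history: three numeric features and a
# label representing a past outcome (e.g., whether a case was escalated).
X = rng.normal(size=(10_000, 3))
signal = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.5, size=10_000)
y = (signal > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```

The held-out split is the technical expression of the organizational point above: the past only becomes a strategic asset when the patterns extracted from it can be shown to hold on data the model has never encountered.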
The shift to online learning, the second pillar, reveals something more consequential about how AI changes the metabolism of organizational decision-making. Traditional business intelligence operates on quarterly cycles and annual reviews, rhythms calibrated to human cognitive capacity and institutional deliberation. AI systems learning from real-time feedback compress that cycle into milliseconds while simultaneously expanding the range of what counts as actionable information, and this acceleration creates what Bakonyi calls "experience paradoxes" — situations where machine learning outpaces human ability to validate decisions, forcing organizations to develop new forms of confidence in automated judgment before the track record that would normally ground that confidence has had time to accumulate.
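The compressed cycle is easiest to see in code. Below is a minimal sketch, assuming a recent scikit-learn, of a model that updates on every incoming event rather than waiting for a scheduled retrain; the event stream and features are synthetic stand-ins, not any system described in this article.

```python
# Minimal sketch of the online-learning pillar: update the model on every
# observed outcome instead of retraining on a quarterly cycle.
# Synthetic events stand in for real-time feedback.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

for step in range(1, 1001):
    x = rng.normal(size=(1, 4))                          # one incoming event
    outcome = int(x.sum() + rng.normal(scale=0.5) > 0)   # feedback observed moments later
    model.partial_fit(x, np.array([outcome]), classes=classes)
    if step % 250 == 0:
        print(f"after {step} events, weights: {np.round(model.coef_, 2)}")
```

The experience paradox Bakonyi describes lives in the gap between this loop's speed and the pace at which humans can audit what the weights have quietly become.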
Intelligent decision-making, the third pillar, is where the psychological challenges for leadership become most acute. Fousiani's research demonstrates that AI acceptance in organizational settings depends substantially on how leaders construe their own authority — whether they frame AI decision-support as extending their capacity for responsible stewardship or as a competitive threat to their control. The difference in employee acceptance between these two framings is significant and durable, which means the third pillar is less a technical achievement than a leadership orientation that either gets cultivated deliberately or defaults to the more defensive posture that organizational anxiety tends to produce.
The fourth pillar, automated assistants, is where the boundary between AI as analytical tool and AI as collaborative partner becomes genuinely complex to navigate. What makes AI assistants work in practice isn't primarily their technical capability but the way they reshape the relationship between human expertise and machine capacity — whether that reshaping feels collaborative or competitive depends on design choices and organizational framing that are, as both Huston and Sherwood on adaptive leadership and Valeras and Cordes on organizational transformation have argued, ultimately about how change gets modeled at the leadership level rather than simply mandated from it.
Responsible AI practices, the fifth pillar, is the one most frequently treated as a compliance requirement to be satisfied separately from the real work. Kadıoğlu's framework refuses that separation, positioning responsibility as the foundation on which the other four pillars rest rather than as a constraint imposed on them from outside. Organizations that treat AI ethics as a distinct concern find themselves managing recurring crises around trust and accountability. Organizations that embed responsibility into their technical architecture from the beginning discover something that initially seems counterintuitive: ethical constraints, when they are genuine rather than performative, tend to produce more robust and generalizable solutions, because they force engagement with the full range of conditions under which the system will actually operate rather than just the conditions that make the system look good.
Modularity as Philosophy: Why Interoperability Is the Real Competitive Moat
The framework Kadıoğlu built at Fidelity rests on an insight that is deceptively simple to state and genuinely difficult to operationalize: AI systems that cannot communicate with each other are not systems in any meaningful sense — they are expensive silos that deliver the appearance of integration while producing fragmentation, and the fragmentation tends to become visible at exactly the moments when integration matters most. The twelve open-source libraries his team assembled don't just share data; they share a common language for describing what intelligence looks like as it moves between different contexts, and that linguistic interoperability turns out to be the difference between AI that compounds in value as it scales and AI that accumulates technical debt as it grows.
The modularity question cuts deeper than architecture. Bakonyi's interviews with European executives locate the breakdown of AI trust at precisely the places where systems become opaque — where handoffs between components disappear into proprietary logic that no one can inspect or interrogate. When a recommendation engine passes a decision to a risk assessment model, which passes its output to an automated assistant, the chain of reasoning either remains visible at every link or trust begins eroding somewhere in the middle, often without anyone being able to identify exactly where the erosion started. The organizations that sustain trust in their AI deployments, Bakonyi found, are those that treat each module as accountable for its piece of the larger decision — transparent about its inputs, its logic, and its limitations — rather than treating accountability as a property of the system as a whole that somehow distributes itself automatically across components.
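One way to read Bakonyi's finding is as a design requirement: every handoff carries its own account of what was decided and why. The sketch below is a hypothetical illustration of that idea in Python; the module names, fields, and scoring rules are invented for the example and are not drawn from any of the systems discussed here.

```python
# Hypothetical sketch: each module returns its output plus a trace of its
# inputs, rationale, and known limitations, so the chain of reasoning stays
# inspectable at every link of the handoff.
from dataclasses import dataclass, field

@dataclass
class Trace:
    module: str
    inputs: dict
    rationale: str
    limitations: str

@dataclass
class Decision:
    value: float
    traces: list = field(default_factory=list)

def recommend(customer: dict) -> Decision:
    score = 0.7 if customer["tenure_years"] > 2 else 0.4
    return Decision(score, [Trace("recommender", dict(customer),
                                  "tenure-based heuristic", "ignores recent activity")])

def assess_risk(decision: Decision, exposure: float) -> Decision:
    adjusted = decision.value * (0.5 if exposure > 1_000_000 else 1.0)
    traces = decision.traces + [Trace("risk", {"exposure": exposure},
                                      "halve the score above $1M exposure",
                                      "static threshold, no market context")]
    return Decision(adjusted, traces)

final = assess_risk(recommend({"tenure_years": 5}), exposure=2_000_000)
for t in final.traces:
    print(f"{t.module}: {t.rationale} (limitation: {t.limitations})")
```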
Transparency without adaptability creates its own problem, though, and this is where Herremans' contribution to the modularity conversation becomes important. The 34% of AI projects that fail don't fail primarily because individual components stop working; they fail because the systems can't adapt when business conditions shift, when regulatory requirements change, when the competitive landscape reorganizes itself around new possibilities that the original architecture didn't anticipate. The aiSTROM framework she developed addresses this brittleness directly by insisting that every AI module be designed for replacement rather than permanence — that the standard for a well-built component isn't whether it currently works but whether it can be swapped out without requiring reconstruction of the entire system around it.
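In code, designing for replacement mostly means depending on an interface rather than an implementation. The toy sketch below illustrates that principle in Python; the Scorer interface and both implementations are hypothetical, not part of aiSTROM or of Fidelity's libraries.

```python
# Toy sketch of designing a module for replacement: the pipeline depends only
# on a shared interface, so a component can be swapped without rebuilding
# anything around it. Names and thresholds are invented for illustration.
from typing import Protocol

class Scorer(Protocol):
    def score(self, event: dict) -> float: ...

class RuleScorer:
    """First-generation component: a hand-written rule."""
    def score(self, event: dict) -> float:
        return 1.0 if event.get("amount", 0) > 10_000 else 0.1

class ModelScorer:
    """Replacement component: a stand-in for a learned model."""
    def __init__(self, weight: float = 0.0001):
        self.weight = weight
    def score(self, event: dict) -> float:
        return min(1.0, self.weight * event.get("amount", 0))

def route(scorer: Scorer, event: dict) -> str:
    return "review" if scorer.score(event) > 0.5 else "approve"

event = {"amount": 12_000}
print(route(RuleScorer(), event))   # the rule-based component...
print(route(ModelScorer(), event))  # ...swapped for a learned one, route() untouched
```

The swap is trivial here precisely because nothing outside the interface knows which component it is talking to, which is the property described above as the standard for a well-built module.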
The financial services industry has become an unlikely laboratory for modular thinking precisely because the stakes of getting it wrong are so immediate and so legible. A fraud detection system needs to share pattern recognition with a customer service interface; a portfolio optimization engine needs to coordinate with compliance monitoring; each of these conversations between specialized intelligences requires standardized interfaces that only exist reliably when modularity is a design principle from the beginning rather than something layered on afterward when the scaling problems become unavoidable.
The Ethics Aren't Separate: Trust, Power, and What Actually Breaks AI Deployments
What happens when integrated systems start making decisions that matter — not in controlled environments or carefully selected pilot programs, but in the full complexity of organizational life at scale — is that the trust paradox Bakonyi identified asserts itself with a force that technical elegance cannot absorb. The most consequential of the seven trust-undermining patterns he documents, what he calls the error paradox, reveals something about AI adoption that most organizations learn too late: when an AI system makes a mistake, users don't merely lose confidence in that particular decision. They begin questioning the entire framework, and that erosion is faster and deeper than anyone building these systems typically anticipates, because it isn't really about the error itself — it's about whether the organization's relationship to the technology can survive encountering its limits.
The psychological dynamics around this become more complicated in competitive organizational climates, which are precisely the environments where AI should theoretically thrive but where the conditions for trust are most fragile. Fousiani's research with 237 employees shows that competition can drive AI adoption, but only when leaders frame their authority as responsibility rather than opportunity — a distinction that sounds like an abstraction until you see what it produces in practice. Leaders who approach AI as a mechanism for consolidating their own influence create what Fousiani describes as a threatening atmosphere, one in which employees read the technology as a weapon aimed at their expertise and autonomy rather than a tool that might expand their capacity. Leaders who approach AI as a way to better serve their teams and their organization's actual purpose see adoption rates that climb steadily over time and remain durable under pressure. The difference isn't in the technology, the training programs, or even the governance frameworks — it's in how power is wielded in the environment surrounding implementation.
Rostamzadeh and colleagues found the deeper organizational layer of this dynamic in their meta-synthesis of eighteen studies on AI's impact on organizational behavior. AI doesn't just automate tasks or improve the efficiency of decisions; it reshapes how employees think about fairness, autonomy, and their own place in the institutional order, and that reshaping happens whether or not anyone in leadership intended it. Transparent AI systems can actually enhance perceptions of organizational justice — when people can see how a decision was made, they are better positioned to evaluate whether it was made fairly — but bureaucratic organizations consistently struggle to maintain employee satisfaction during AI transitions, creating a structural tension that no technical sophistication resolves, because the tension isn't about the technology. The employees resisting aren't resisting AI; they're resisting what AI represents about their organization's values and their own future within it.
This is what makes the most consistent finding across this literature simultaneously obvious and difficult to act on: the organizations succeeding with AI ethics are not uniformly the ones with the most comprehensive policies or the most elaborate oversight mechanisms. They are the ones that recognize AI deployment as a cultural transformation with technical dimensions rather than a technical implementation with cultural side effects, and they structure their approach accordingly — treating responsible AI as one of five foundational strategic pillars, as Kadıoğlu does, or building value-based metrics into evaluation from day one, as Herremans' framework insists, rather than managing ethics as a compliance layer applied to systems that were designed without it.
Where This Is Heading: The Organizational Capacity Question
The trust paradox Bakonyi identified doesn't attenuate as organizations scale their AI deployments — it compounds, because the decisions being made by integrated modular systems at scale are more consequential, more opaque in their cumulative effects, and more difficult to attribute to any single component or choice. What Kadıoğlu's work at Fidelity previews about where enterprise AI is heading isn't primarily a picture of more sophisticated algorithms or more extensive automation; it's a picture of organizations that have developed genuine capacity to continuously reconfigure both their technical and human architectures, treating the two as interdependent rather than as parallel tracks that occasionally need to be coordinated.
The modular future he maps — twelve open-source libraries working in concert, components that can evolve independently without destabilizing the system, capabilities shared openly with a broader community rather than hoarded as proprietary advantage — requires exactly the kind of adaptive leadership that Valeras and Huston describe as essential in disrupted environments: the capacity to model change rather than merely mandate it, to remain oriented toward purpose while the specific means of pursuing that purpose shift continuously beneath organizational feet. When an AI strategy is capable of evolving at the speed of open-source development, the leadership approach has to develop comparable flexibility, which is a significantly more demanding requirement than most leadership development frameworks are currently designed to produce.
Herremans' finding that 34% of AI projects still fail despite all available technical sophistication is, in this light, less a statement about AI and more a statement about organizations — about the gap between the capacity to deploy intelligent systems and the capacity to integrate them meaningfully into the human decision-making processes they are meant to support. The aiSTROM framework's insistence on starting with value-based metrics rather than technical benchmarks is an attempt to close that gap from the beginning rather than discovering it after deployment, but it assumes an organizational stability and leadership clarity that remain genuinely rare. What separates the organizations that merely implement AI from those that transform through it is not, in the end, a technical question. It is a question about whether the humans leading these institutions have developed the capacity to work alongside systems that are themselves continuously learning — and whether they have built the organizational cultures that make that kind of collaborative, ongoing adaptation not just possible but normal.