Why Integrators Belong Nowhere
Integration expertise exists in universities and policy work, so why can’t institutions recognize or resource it?
The Integrator’s Paradox
My job title is “institute manager” at KU Leuven. In practice, I bring 50+ professors together — from engineering, sociology, politics, economics, and governance — and have them collaborate on the energy transition.
Not “collaborate” in the proposal-writing sense. I mean: translating between disciplinary languages, sequencing conversations so that conflicts surface productively, designing processes that allow a political scientist and a grid engineer to recognize that they’re describing the same bottleneck in different vocabularies.
There are 22 of us doing this work at KU Leuven alone. Yet institutionally, we’re classified as technical staff. Support staff. Not academic staff, and certainly not researchers. Our work enables mission-driven research, yet the system lacks a category for it.
Then I found two papers in Nature's Humanities and Social Sciences Communications that described what we actually do.
Hoffmann et al. (2022) studied integration experts in academia and documented their career challenges — including one participant who described pursuing this path as “academic suicide.”
Mennes (2025) studied integration expertise outside universities and found it everywhere: in policy work, urban projects, civic coalitions.
The pattern is unsettling: integration expertise exists and is practiced daily, yet the systems that depend on it most have no means of recognizing it.
What follows is not a personal account, but a structural comparison across academic and extra-academic systems, and what it reveals about institutional design.
What Integration Expertise Actually Is (and Isn’t)
Integration expertise is not the same as being interdisciplinary. It’s not facilitation. It’s not “having good people skills.”
Integration expertise is the practiced capacity to make heterogeneous forms of knowledge, values, constraints, and interests cohere enough that groups can decide and act. This builds on recent scholarship examining the factors that enable cross-domain integration in both academic and nonacademic settings.¹ ²
This involves specific, learnable skills: translation across epistemic and professional languages; managing tensions at cognitive, social, and emotional levels; designing processes and structures that enable collaboration; and knowing when synthesis is possible versus when trade-offs must be made explicit.
This is learned, practiced expertise. It develops through repeated engagement with cross-domain problems. And like any expertise, it requires institutional support to thrive.
Integration expertise is a real, learnable, and indispensable capability — but our institutions are structured as if it were incidental.
Academia’s Integration Problem Isn’t Cultural — It’s Structural
Universities increasingly position themselves as problem-solvers. They create research institutes, interdisciplinary centers, and mission-driven programs. Integration across disciplines is framed as essential and “core” to the mission.
Yet academic career structures remain organized around fundamentally different principles: disciplinary excellence as the coin of the realm, individual authorship determining recognition, and narrow metrics (h-index, grant capture, PhD supervision) driving evaluation.
This creates a structural misalignment.
Integration experts in academia are expected to lead complex, collaborative processes; absorb cognitive and social friction among team members; ensure coherence across projects; translate between disciplinary worlds; design integrative frameworks; and still perform as disciplinary scholars using traditional metrics.
At KU Leuven, my 21 fellow institute managers and I do work that makes inter- and transdisciplinary research operationally possible. We identify where a sustainability economist and a power systems engineer are unknowingly working on connected problems. We design joint workshops where philosophical assumptions about “transition pathways” can surface and be negotiated. We create frameworks that allow legal scholars and technical designers to contribute to shared outputs. We manage the emotional labor when collaboration gets difficult.
This is intellectual work. It’s knowledge work. But it’s classified as support work.
The consequences are predictable: role ambiguity (are we researchers or service providers?), responsibility without authority (accountable for outcomes without formal decision-making power), evaluation invisibility (our work doesn’t produce traditional outputs), and burnout and exit (skilled integrators leaving after years of “molding their own mold”) — patterns documented across institutions.¹
I want to be clear: this isn’t a moral failure of academia. It’s an institutional design failure. Mission statements say integration matters. Incentive structures say it doesn’t.
Integration in the Wild: Where Failure Is Visible and Costly
Before joining KU Leuven, I spent a decade as a consultant working on policy for public authorities and sector organizations — primarily on the energy transition. In that world, integration expertise was everywhere. It just wasn’t called that.
I worked as a policy broker, coordinator, and mediator, shaping societal policy by integrating many practical knowledge domains: engineering (grid capacity, heat pumps, insulation), legal (building codes, subsidy frameworks), financial (investment models, payback periods), social (tenant-owner dynamics, vulnerable households), and governance (municipal vs. regional authority).
Consider neighborhood renovations — deep energy retrofits of entire streets or blocks. This requires technical experts who understand building physics, financial experts who can structure long-term funding, legal experts who can navigate ownership complexities, social workers who understand vulnerable populations, governance experts who can coordinate across administrative levels, and community organizers who can build resident trust.
No single discipline can solve this. But putting all these people in a room doesn’t automatically produce integration.
Someone needs to map where different actors’ definitions of “success” overlap and diverge, design a decision process that doesn’t privilege one form of expertise over others, translate between technical feasibility (“thermodynamically possible”) and political feasibility (“acceptable to stakeholders”), identify which conflicts are solvable through creative synthesis and which require explicit trade-offs, and build structures that maintain momentum across multi-year implementation.
That “someone” is the integrator. In policy contexts, the role is more legible than in academia, even if it goes by different names: mediator, program manager, policy broker, or intendant.
The key difference: In extra-academic settings, integration failures have immediate, visible consequences. Projects stall. Budgets are wasted. Political capital evaporates. Communities lose trust. So integration expertise gets recognized instrumentally, even if it’s not fully professionalized.
In practice, integration requires capacities that academic discussions often overlook. There’s translational expertise: not just understanding different domains but actively translating between them in real time, often while managing competing interests. There’s system design, which means creating structures (working groups, decision forums, feedback loops), not just processes. There’s know-who expertise: identifying who has the right knowledge, authority, and legitimacy to contribute, and when. There’s normative expertise: making values and political constraints explicit early, because pretending they don’t exist only leads to later implosion. And there’s negotiation capacity that distinguishes between integration as synthesis (creating something new) and integration as reconciliation (managing persistent tensions).²
The neighborhood renovation work I focused on illustrates both success and failure. When a skilled integrator coordinates a pilot neighborhood, bringing together all the domains, the results can be remarkable: deep retrofits, tenant protections, community buy-in, and replicable models. When policymakers try to scale without investing in integration capacity, initiatives fragment. Technical programs don’t connect to social programs. Financial instruments don’t match legal structures. Implementation stalls.
How Integrators Survive in Systems Not Built for Them
Integrators survive today by adopting one of three strategies: portfolio survival (juggling multiple roles, funding sources, and professional identities), camouflage (presenting integrative work as something else, such as disciplinary research or project management), or exit (leaving academic structures for consulting, policy organizations, or NGOs where the work is more explicitly valued).¹
I’ve used all three strategies. So have most integrators I know.
But here’s the pattern: The capability is real and indispensable. The institutional categories to recognize it simply don’t exist.
When a skilled engineer moves to a new position, their expertise is legible. Their credentials, publications, and technical contributions travel with them. When a skilled integrator moves, what travels? How do you evaluate someone whose impact is that 50 professors started collaborating more effectively? How do you assess someone whose contribution is that a policy initiative didn’t collapse?
Integration expertise exists. The metrics to recognize it don’t.¹
This is fundamentally a category error in how institutions classify and evaluate contributions.
Institutions classify integrators as “support” because they evaluate contribution through individualized output metrics: papers authored, grants won, and students supervised. Integration, by contrast, produces relational, systemic, and counterfactual value: “this collaboration produced genuine synthesis rather than parallel monologues,” “this conflict didn’t explode,” “this project didn’t fragment.”
The mismatch isn’t about merit. It’s about what the measurement system can observe.
Traditional academic metrics capture disciplinary knowledge production. They’re blind to the work of making that knowledge coherent across domains. Policy metrics capture delivered projects and policy adoption. They’re blind to the integration work that prevented failure.
Until institutions develop evaluative categories for relational and systemic contributions, integration expertise will remain simultaneously indispensable and invisible.
But let’s be clear about what’s actually happening: Institutions currently benefit from integration expertise while externalizing its costs onto individuals. The system works because integrators absorb risk, ambiguity, and emotional labor without formal recognition or compensation commensurate with impact. They bear the burden of ensuring collaboration succeeds, while others receive credit for the outputs. This is not accidental. It is a stable equilibrium — functional for institutions, extractive for individuals — and it persists until enough people leave.
One institutional response to this invisibility is particularly revealing. In many universities, integrators are encouraged—sometimes implicitly, sometimes explicitly—to seek external support to survive: from philanthropic foundations, endowed chairs, or corporate partners willing to fund integrative roles. At my own university, I regularly encounter the message that support for mission-driven integration should be secured externally rather than justified institutionally. We’re told to find companies to sponsor academic chairs, to court foundations that might fund coordination capacity, and to seek donors who understand the value of what the institution itself cannot seem to afford.
This is not a critique of philanthropy or industry engagement. Both can enable valuable work. This is a diagnosis. When an institution relies on integration expertise for its core missions—climate action, health transformation, urban sustainability—but can resource it only through exceptional external funding, that expertise has effectively been deemed optional. The result is a fragile ecosystem in which integration survives through patronage rather than being embedded as a normal organizational capability.
This financial externalization mirrors the human externalization described above. Institutions extract integrative labor from precarious individuals while extracting integrative funding from external actors. The pattern is consistent: integration is treated as mission-critical in rhetoric, but extraneous in resource allocation. That discrepancy is not an oversight. It is the structural expression of a category error that treats integration as enhancement rather than infrastructure.
Why Integration Failures Are Getting More Expensive
This isn’t just about career fairness. It’s about institutional competence in an era of complex, urgent challenges.
Consider what universities and governments are claiming to tackle: the energy transition and climate adaptation under tightening timelines; health system transformation amid legitimacy crises; urban development facing housing emergencies; food system redesign; digital governance; and AI policy.
These aren’t disciplinary problems. They’re integration problems.
Across sectors, the failure mode is the same: knowledge exists, but coherence does not. Integration failures show up as research collaborations that produce parallel outputs rather than synthesis, policy initiatives that fragment across domains, and strategies that are technically sound but socially or politically infeasible.
And these failures are expensive. Failed integration shows up as multi-year project delays, duplicated research efforts, abandoned pilot programs, stakeholder backlash, and erosion of public trust. A neighborhood renovation initiative that fragments across technical, social, and financial domains doesn’t just “underperform”; it wastes millions in public funds and burns community goodwill that takes years to rebuild. A research consortium that produces parallel disciplinary reports rather than integrated frameworks doesn’t just miss the mark; it squanders the very funding intended to address complex problems. These costs rarely appear on institutional balance sheets, but they are real, cumulative, and avoidable.
Mission-oriented funding is rising. Horizon Europe, the European Green Deal, and national climate strategies — all demand cross-domain integration. Yet the integration capacity required to deliver on these missions remains an unfunded mandate — essential to success, invisible in budgets, and borne by the most precarious people in the system.
Public legitimacy is fragile. Growing distrust in expert systems and institutions makes the quality of integration—how we bring together knowledge, values, and interests—more critical than ever.
Systems that depend on integration but refuse to professionalize it systematically underperform. The expertise to address these failures exists. We’re choosing not to invest in it.
Why Integration Can’t Remain an Unfunded Mandate
What would it look like if institutions were actually designed for integration expertise? Not a utopia — just institutional design that aligns with stated missions.
Consider what’s already emerging in fragments: some universities have created integration specialist positions (though often temporary), some funders now require integration plans (though rarely fund them adequately), some networks connect integration experts across sectors (though they remain marginal).
The challenge isn’t invention. It’s legitimation and alignment.
Integration could be a recognized role family, just as universities have teaching roles, research roles, and technical roles — with clear job descriptions, evaluation criteria, and career paths. Funding could explicitly resource integration work rather than treating it as the “unfunded mandate” of interdisciplinary projects. Evaluation could assess integrative quality alongside traditional outputs: Did the collaboration produce genuine synthesis or parallel monologues? Did the framework enable action or create another document?
We could design shared responsibility models rather than dumping all integration onto one person (often junior, often precariously employed). Integration could be distributed across team members with different specialized roles—facilitation, translation, synthesis, and evaluation. Cross-sector communities of practice could connect integration experts in universities with those in policy, design, urban planning, and civic organizations. The expertise is adjacent; institutional silos keep it fragmented.¹
None of this is radical. Much of it already exists in fragments. What’s missing is not innovation—it’s institutional will to legitimize what already functions informally.
Seeing the Pattern (and Deciding What to Do About It)
I opened with the paradox of my role: essential but invisible, valued but unrecognized, depended upon but unsupported.
That paradox isn’t unique to me. It’s structural. And it shows up everywhere integration expertise is practiced without institutional recognition.
Integration expertise already exists. The question is whether our institutions will continue to rely on it informally—burning through talented people as they exit for more legible careers — or finally design for it deliberately.
This essay is an invitation — not just to share stories, but to recognize patterns:
If you’re an academic who feels “in-between”: You’re not failing to fit existing categories. The categories fail to capture genuine expertise.
If you’re a practitioner who recognizes this work instantly: Your expertise is not “soft” or “supportive.” It’s strategic and intellectually demanding.
If you’re a leader who relies on integrators: The people making your mission-driven initiatives possible need more than acknowledgment. Continuing to rely on their vocational commitment as a substitute for institutional recognition is a choice — one with predictable consequences.
If integration work exists in the gaps of your institution’s organizational chart, under a different title, or distributed across informal roles, I want to hear about it. Not just individual experiences, but the structural patterns you’re seeing.
Where does integration expertise live in your context? What do you call it? How is it recognized, or not? What evaluation challenges does it face?
Let’s stop treating integration as invisible glue and start recognizing it as the expertise it is.
Comments open. Let’s map this territory together.
References & Notes
This essay draws on two recent papers published in Nature’s Humanities and Social Sciences Communications:
¹ Hoffmann, S., Deutsch, L., Klein, J.T., & O’Rourke, M. (2022). Integrate the integrators! A call for establishing academic careers for integration experts. Humanities and Social Sciences Communications, 9, 147. https://doi.org/10.1057/s41599-022-01146-4
² Mennes, J. (2025). Not all who integrate are academics: zooming in on extra-academic integrative expertise. Humanities and Social Sciences Communications, 12, 333. https://doi.org/10.1057/s41599-025-01046-9
Key concepts referenced:
“Responsibility without authority”: Hendren, S., & Ku, M. (2019), cited in Hoffmann et al.
“Molding their own mold”: Lyall, C. (2019), cited in Hoffmann et al.
Integration expertise types (translational, system design, know-who, normative, negotiation): Mennes (2025)
Oosterweel Link case study: Mennes (2025)
The institutional design recommendations in Section 7 adapt suggestions from Hoffmann et al. (2022). The analysis of extra-academic integration draws on Mennes (2025), particularly her study of the Oosterweel Link urban development project in Antwerp, Belgium.
Personal reflections on policy work and the KU Leuven context are based on my experience working on energy transition policy for public authorities and sector organizations in Belgium, and on my current role as an institute manager coordinating research by 50+ faculty members across engineering, social sciences, governance, and economics.