"AI systems are being deployed in vulnerable communities by actors with minimal accountability. Digital infrastructure is reshaping power in ways governance frameworks can't capture. The gap between where technology gets developed and where its consequences land keeps widening".
If you're reading this feeling professionally illegible—too technical for policy work, too political for technical roles, too critical for implementation, too practical for research—you might be positioned to do something valuable.
The sector's real challenges sit in the spaces between professional specializations.
"Truth is a matter of who you're speaking to and who's financing what."
A journalist in Bangui, Central African Republic, said this during a briefing where I was presenting counter-disinformation strategies. I had arrived with frameworks developed in Brussels—sophisticated tools for detecting false information and building institutional resilience.
He wasn't saying truth doesn't exist. He was pointing out something I'd missed: communities in conflict don't distrust information because they can't spot lies. They distrust it because they've learned that whoever controls the money and institutions also controls what counts as "truth." I was treating disinformation as a technical problem while ignoring that information itself is a tool of power.
This hit differently than typical feedback. It wasn't "your approach needs cultural adaptation"—it was "your entire framing misses how power actually works here."
I made similar mistakes in India. I believed expanding internet access and social media would strengthen democratic participation and civil society. More connectivity meant more information flow, which would help communities organize and make better decisions.
Instead, I watched the same platforms enable sophisticated radicalization networks—coordinated campaigns weaponizing connectivity to mobilize violence and accelerate polarization. The infrastructure I'd championed as democratizing was simultaneously being used to tear communities apart.
The pattern was clear: technologies aren't neutral tools that work the same way everywhere. They enter existing power structures and get shaped by them. The question isn't whether something works in theory, but whose interests it serves in practice and what happens beyond what you intended.
These weren't implementation failures that better project management could fix. The problem was deeper: arriving with predefined solutions to problems defined through external frameworks, then measuring success through metrics that reflected external priorities.
This is the default mode for most international development, diplomatic engagement, and tech-for-good work. The people closest to problems rarely get to define what those problems are, let alone design the solutions.
My current work in AI governance grew from wrestling with this dynamic. When powerful actors deploy algorithmic systems in vulnerable communities—for predictive policing, resource allocation, content moderation—they're not just introducing new tools. They're making decisions about how society should be organized, whose knowledge matters, and which outcomes deserve optimization. And those decisions get encoded into systems that are hard to challenge or change.
Read on to find out what Marine calls the "jobs that don't have names yet": the roles she sees as most relevant to the future of AI governance and policy in this sector.