On the Ethics of Deploying AI in Public Institutions
Over the past year, I've been asked this question several times — by students, by journalists, and by a government office thinking about using machine learning for a welfare-allocation problem. The question is always some version of the same thing: how much should we trust the algorithm?
It sounds like a technical question. It isn't. It's a question about accountability — about who is responsible when a system gets it wrong, and who can meaningfully contest a decision made by a model no one in the room fully understands.
The efficiency trap
Public institutions are attracted to AI for understandable reasons. Algorithmic systems can process far more information than a human caseworker, apply rules consistently, and in principle eliminate some forms of bias that creep into human judgment. These are real benefits. But they come with a particular kind of risk that is easy to underestimate: opacity.
When a human caseworker denies a benefit, there is a face. There is a reason that can be challenged. When a model does it, there is a score.
This is not just a philosophical problem. In several countries, courts have found that administrative decisions made by algorithmic systems failed to meet basic due process requirements — not because the decisions were wrong, but because no one could explain them. The Dutch SyRI ruling is one example: in 2020, a court in The Hague halted a welfare-fraud detection system in part because of its opacity.
What I think accountability actually requires
I don't think the answer is "don't use AI." I think accountability requires three things that are rarely discussed together: explainability (can a decision be narrated to the person affected?), contestability (is there a real path to challenge it?), and auditability (can external parties verify that the system is doing what it claims?).
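To make the three requirements concrete, here is a minimal sketch of what a decision record might look like if a system were designed around them. Everything in it — the class name, the fields, the appeal-window logic — is a hypothetical illustration, not a description of any deployed system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """Hypothetical record attached to every automated decision."""
    applicant_id: str
    outcome: str            # e.g. "denied"
    narrated_reason: str    # explainability: human-readable grounds
    appeal_deadline: date   # contestability: a real window to challenge
    model_version: str      # auditability: which system produced this
    inputs_hash: str        # auditability: verifiable record of the inputs

    def is_contestable(self, today: date) -> bool:
        # A decision is contestable only while its appeal window is
        # open and there is a stated reason that can be challenged.
        return bool(self.narrated_reason) and today <= self.appeal_deadline
```

The point of the sketch is that all three properties live in the artifact the institution keeps, not in the model: a score alone satisfies none of them.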
These are hard engineering and governance problems simultaneously, and they can't be solved by data scientists alone. This is why I've become increasingly convinced that the most important research frontier in AI isn't capability — it's the design of institutions that can govern it.
More on this in future notes. I'm working on a paper about this with colleagues, and I expect the thinking here will evolve.