Everyone agrees AI will change insurance. The conference panels have moved on from whether to when. But most of the conversation still orbits around the wrong question.
The question isn't when AI replaces underwriters. It's where AI actually earns its keep.
There's a persistent fantasy — popular at conferences, irresistible to investors — that AI will waltz in and automate the hard stuff: pricing, claims adjudication, risk selection. Entire job categories dissolved overnight. It makes for great slide decks. It doesn't match how insurance works.
The durable wins won't come from replacing judgement. They'll come from clearing away the busywork that surrounds it.
Insurance runs on trust. A policy is a promise with financial teeth, and when something goes wrong, you can't hand a regulator a confidence score and call it a day. The whole chain — who decided what, based on which information, under what authority — has to hold up under scrutiny. Any AI that obscures that chain is a liability, not an asset.
So you start where the risk is low and the friction is high.
Intake is the obvious entry point, and for good reason.
Insurance still runs on an absurd volume of messy input. Submissions arrive as PDFs, spreadsheets, email threads, scanned documents — sometimes all of the above for a single risk. Someone has to open each one, figure out what's relevant, re-key it into the system and flag what's missing. It's tedious, error-prone, and it eats hours that experienced people shouldn't be spending on data entry.
AI can compress that dramatically. A system that ingests a broker submission, extracts the key fields, structures them cleanly and surfaces gaps before a human ever touches it — that's not a moonshot. We can build that today. And the downstream effect is real: your underwriters spend their time underwriting instead of decoding attachments.
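To make the "extract, structure, surface gaps" shape concrete, here is a minimal sketch. The field names and the `structure_submission` function are illustrative assumptions, not a real product schema; in practice the raw dict would come from an upstream extraction step over the PDFs and spreadsheets.

```python
from dataclasses import dataclass, field

# Hypothetical set of fields an underwriter needs before a risk is workable.
REQUIRED_FIELDS = ["insured_name", "inception_date", "limit", "territory"]

@dataclass
class IntakeResult:
    fields: dict                               # cleanly structured values
    missing: list = field(default_factory=list)  # gaps to chase before review

def structure_submission(raw: dict) -> IntakeResult:
    """Normalise whatever extraction produced and flag what's missing."""
    extracted = {k: raw.get(k) for k in REQUIRED_FIELDS if raw.get(k)}
    missing = [k for k in REQUIRED_FIELDS if k not in extracted]
    return IntakeResult(fields=extracted, missing=missing)

# A partial broker submission: two fields present, two gaps surfaced.
result = structure_submission({"insured_name": "Acme Ltd", "limit": "5M"})
print(result.missing)  # ['inception_date', 'territory']
```

The point is not the extraction itself but the contract around it: by the time a human opens the file, the gaps are already a checklist rather than a hunt.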
From there, the work moves into triage.
Routing, prioritisation, chasing missing information, flagging anomalies — all the coordination overhead that keeps an operation moving but doesn't require deep expertise. AI handles this well because the decisions are structured and the stakes of any single action are low. Get it wrong and you've mis-prioritised a queue, not mis-priced a catastrophe layer. And the cumulative time savings across a team are enormous.
Decision support is where it gets genuinely interesting — and where people get sloppy in their thinking.
The replacement narrative is seductive here, and wrong. An underwriter evaluating a complex risk doesn't need a model to make the call for them. They need the relevant information surfaced fast, comparable cases pulled up without a twenty-minute search, and a clean view of what they're actually deciding on. AI as a research assistant with perfect recall and no ego — that's the near-term picture, and it's plenty valuable. The human still owns the decision. The human should still own the decision.
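The "research assistant with perfect recall" framing can be sketched as simple retrieval over prior cases. The similarity measure and case fields below are toy assumptions — real systems would use richer matching — but the shape is the point: the system ranks and surfaces, and the underwriter decides.

```python
# Rank prior cases by attribute overlap with the risk under review.
def similarity(risk: dict, case: dict) -> float:
    keys = set(risk) & set(case)
    if not keys:
        return 0.0
    return sum(risk[k] == case[k] for k in keys) / len(keys)

def comparable_cases(risk: dict, history: list, top_n: int = 3) -> list:
    """Surface the closest precedents; make no decision."""
    return sorted(history, key=lambda c: similarity(risk, c), reverse=True)[:top_n]

risk = {"territory": "UK", "class": "property"}
history = [
    {"territory": "UK", "class": "marine"},
    {"territory": "UK", "class": "property"},
    {"territory": "US", "class": "aviation"},
]
print(comparable_cases(risk, history, top_n=2))  # exact match first
```

Nothing here prices the risk. It replaces the twenty-minute search, not the judgement.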
Documentation is the sleeper.
Insurance generates mountains of explanatory output: endorsements, summaries, coverage confirmations, claim narratives, bordereaux notes. Most of it follows predictable patterns. AI can draft, adapt and translate this content inside a governed workflow — emphasis on governed. Unconstrained generation is a parlour trick. Generation inside a system with proper guardrails, templates and human review? That's operational leverage.
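What "generation inside a governed workflow" means in practice: the model can only fill slots in an approved template, and nothing ships without human sign-off. The template text and status flag below are illustrative assumptions.

```python
from string import Template

# Approved wording: generation is constrained to these slots only.
CONFIRMATION = Template(
    "Coverage confirmed for $insured under policy $policy_ref, "
    "limit $limit, effective $effective_date."
)

def draft_confirmation(facts: dict) -> dict:
    """Draft within the template; always gate behind human review."""
    text = CONFIRMATION.substitute(facts)
    return {"draft": text, "status": "pending_human_review"}

doc = draft_confirmation({
    "insured": "Acme Ltd", "policy_ref": "P-001",
    "limit": "GBP 5m", "effective_date": "2025-01-01",
})
print(doc["status"])  # pending_human_review
```

The guardrails do the heavy lifting: free-form output is the parlour trick, while slot-filling inside approved wording, with a mandatory review state, is the operational leverage.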
And governance is where most AI-in-insurance pitches fall apart.
The systems that survive in this industry won't be the ones with the best demo. They'll be the ones deployed with clear boundaries around what the model can and can't do, genuine human accountability at every decision point, and an honest assessment of where errors are tolerable versus catastrophic. If you can't explain your AI's role to a Lloyd's auditor in plain English, you're not ready to ship it.
AI is only as good as the system it sits inside.
A model plugged into a messy operating environment will amplify the mess. Scattered data, unclear states, weak workflow boundaries — add AI to that and you get confident-sounding chaos. But give a model clean infrastructure to work with — structured data, well-defined process boundaries, clear system states — and it becomes a genuine multiplier. The bottleneck in most insurance organisations isn't the model. It's the plumbing.
Adoption will be uneven, and that's fine. High-volume, low-ambiguity tasks — intake processing, document generation, routine triage — are natural starting points. High-stakes, low-frequency judgement calls need more caution and tighter controls. Anyone telling you it all happens at once is selling something.
The biggest AI contributions in insurance probably won't make headlines. No chatbot, no flashy feature. Just less manual handling, cleaner information flow, faster preparation, more consistent execution on routine work. Boring gains that compound quietly until the operation looks nothing like it did three years ago.
The real opportunity is simple: help serious operators do serious work with less friction around it.
Respect how hard the work actually is, and use better tools to strip away everything around it that doesn't need to be.