Benchmarking begins with MVP 0.1
It’s the fifth week of the alpha and we’ve reached an important point: the Extract 0.1 MVP is in the hands of users, starting with our first cohort of local planning authorities (LPAs). A small step for Extract, but a massive leap for product validation. If you want the backstory, the previous weeknotes cover it. Here’s what happened this week.
MVP 0.1 release and testing
Finally, after weeks of anticipation and preparation, we’ve put Extract into the hands of our users with the release of the 0.1 MVP. Earlier in the week we began our first research sessions, looking at how LPAs currently digitise planning documents without Extract so we can benchmark timings for the existing process. We’re also running sessions with a stripped-back feature set so we can validate our core features and prioritise what comes next.
3’s the magic number
The team also began looking into the newly released Gemini 3 and Segment Anything 3 models, which we think could bring meaningful improvements throughout the extraction process.
We’re experimenting with how to integrate them into Extract as soon as possible and get the most out of them. Initial experiments suggest a significant boost in accuracy, as well as a faster end-to-end process.
Below are examples of how precise the new models have been. The current version of Extract struggles to produce precise results from these harder maps, so these images show how the new models can push our accuracy even further.



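We haven’t gone into the technical detail of the integration in these weeknotes, but for readers curious about what a segmentation step of this kind looks like, here is a minimal sketch. It uses the original open-source segment-anything library’s public API as a stand-in; Segment Anything 3’s interface, and the way we actually wire it into Extract, may well differ, and the model checkpoint and file paths below are placeholders.

```python
# Rough illustration only: Extract's real pipeline isn't described in these weeknotes.
# This uses the original segment-anything library as a stand-in; SAM 3's interface
# and our integration may differ. Checkpoint and image paths are placeholders.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a scanned planning map as an RGB array (path is a placeholder).
image = cv2.cvtColor(cv2.imread("scanned_planning_map.png"), cv2.COLOR_BGR2RGB)

# Load a SAM checkpoint and generate candidate masks for every region it finds.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(image)

# Each mask includes a binary segmentation, bounding box and quality score;
# downstream steps would turn the relevant masks into site boundary polygons.
for m in sorted(masks, key=lambda m: m["area"], reverse=True)[:5]:
    print(m["bbox"], round(m["predicted_iou"], 3))
```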
Collaboration and new ways of working
As part of our alpha ways of working, the MHCLG and i.AI teams spent time refining how we plan and deliver together. We’re shaping a process that lets both teams test and research independently while staying aligned on priorities, including shared release planning, clearer and more focused agile ceremonies, and a coordinated research, delivery and release plan.
Next steps
We’re still in alpha, and much of our work continues to focus on the benefits model and understanding the problem space. Testing the 0.1 MVP is already giving us insight into what LPAs need, and will help us prioritise the backlog and identify the core functionality for a 1.0 release next year.
Our focus next week is preparing the 0.2 MVP release and deepening our experiments with the new models. We’ll also continue research sessions with LPAs to validate what’s working in the 0.1 MVP, what isn’t, and what this means for our priorities as we build towards 1.0.