The starting point: arriving with something already built
It's a scenario we're seeing more and more. Someone - a founder, a product person, a non-technical entrepreneur - has used AI tools to build a mobile app MVP. They've made real progress. The app exists, it has functionality, and there's genuine effort behind it.
But then they hit a wall. The design doesn't look right across devices. There are bugs they can't track down, let alone fix. And when they try to publish on the App Store or Google Play, the process feels like a completely different world with its own opaque rules. This isn't failure; it's one of the most common situations we encounter, and it has a clear way forward.
What we actually received
Not long ago, a Flutter mobile app landed on our desk. The client had built it themselves using AI tools: screens, navigation, core logic, all there. It worked. But the moment we opened the project, the accumulated debt was obvious: inconsistent spacing, components that broke on certain screen sizes, typography that had never been properly adapted for mobile. Key user flows were blocked by bugs that weren't immediately visible but surfaced the moment you pushed past the main screens.
And then there was the store submission process: certificates, provisioning profiles, metadata requirements, review guidelines, all of it entirely uncharted territory for the client. In short: a product that was 70% of the way there but stuck at the hardest 30%.
How we approached it
Diagnosis first. Before touching anything, we mapped out exactly what was there, what was missing, and what needed to change. That meant going through the codebase to understand decisions that had been made - some intentional, some inherited from AI-generated code - and identifying which ones were causing problems versus which were simply unconventional but harmless.
Design and bug fixes second. With a clear picture of the issues, we worked through the design inconsistencies systematically: layout fixes, component standardisation, responsive behaviour across devices. In parallel, we resolved the bugs blocking critical flows - not by rewriting everything, but through targeted corrections at the actual root causes.
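To make "responsive behaviour across devices" concrete, here is a minimal sketch of the pattern such fixes usually come down to, in the project's own language (Flutter/Dart). The widget name AdaptiveGrid and the 600-pixel breakpoint are illustrative, not taken from the client's codebase; the point is deriving layout from the constraints the parent actually provides instead of hard-coding sizes.

```dart
import 'package:flutter/material.dart';

// Hypothetical example: a grid that adapts its column count to the
// width it is actually given, rather than assuming one screen size.
class AdaptiveGrid extends StatelessWidget {
  const AdaptiveGrid({super.key, required this.children});

  final List<Widget> children;

  @override
  Widget build(BuildContext context) {
    return LayoutBuilder(builder: (context, constraints) {
      // Derive the column count from the available width
      // (600 is an illustrative breakpoint, not a magic constant
      // from the real project).
      final columns = constraints.maxWidth > 600 ? 2 : 1;
      return GridView.count(
        crossAxisCount: columns,
        padding: const EdgeInsets.all(16),
        children: children,
      );
    });
  }
}
```

The same pattern, LayoutBuilder plus a small set of named breakpoints, replaces most of the fixed-width layouts that break on unfamiliar screen sizes.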
Store publishing last. This phase is genuinely underestimated. Getting an app into the App Store and Google Play involves developer accounts, signing certificates, build configurations, store listings, screenshots, privacy policies, and a review process with real requirements. We handled the full pipeline from first submission to approval.
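As a rough sketch of what that pipeline looks like on the command line for a Flutter project (the flags below are the standard Flutter build commands; signing details vary per project):

```shell
# Build a signed Android App Bundle for upload to the Play Console.
# Signing is configured beforehand in android/key.properties
# and the module's build.gradle.
flutter build appbundle --release

# Build a signed iOS archive; this requires Xcode, an Apple
# Developer account, a distribution certificate, and a matching
# provisioning profile on the build machine.
flutter build ipa --release

# The resulting artifacts are then uploaded manually: the .aab via
# the Play Console, the .ipa via Apple's Transporter app or Xcode,
# alongside the store listing, screenshots, and privacy policy.
```

The commands are the easy part; most of the friction in this phase is in the accounts, certificates, and review requirements around them.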
AI in modern development: what it enables and what it doesn't guarantee
AI has changed how digital products are built, and more projects are arriving that were born with its help. That's a real shift in the industry and it's not going away.
The issue is that generating code with AI is easy. Understanding what you've generated, spotting where it fails, adapting it to Apple and Google's publishing requirements, and making sure what you have is maintainable long-term: that's a different matter. AI doesn't know whether your architecture will cause problems in six months. It doesn't know the specific requirements of the app stores. And it can't guarantee that what it produces is correct just because it works on first run.
This isn't a problem with the tool. It's a context problem: without real development experience, it's hard to know which questions to ask and how to evaluate the answers. And that has concrete consequences when a project has to go out into the world.
What you should take away from this
If you have an MVP - built with AI, with freelancers, with whatever tools you had available - and something isn't quite working, or you simply don't know how to take the next step: there's a solution that doesn't involve starting over.
You don't need to throw everything out and begin from scratch. Sometimes you just need someone who knows where to look.