I was asked this the other day. My answer was: it depends on the context; it helps to have an example. And the contextual element is probably why this remains an evergreen question. While I have a fresh example in mind I thought I’d quickly plant a stake in the ground for reference.
First, it helps to understand the similarities and differences between Minimum Viable Product (MVP) and some related concepts. Skip ahead to the example if this is insufficiently minimal.
MVP and MVP-adjacent
Wikipedia’s definition is as good as any:
A minimum viable product (MVP) is a version of a product with just enough features to be usable by early customers who can then provide feedback for future product development.
I often add, possibly redundantly, that it’s minimum, in that it’s the smallest thing you can do; viable, in that it can sustainably be used for the intended purpose; and a product, in that it meets some need or solves some problem for the people who use it.

There are also many MVP-adjacent concepts, which can muddy the waters in conversations about MVPs, as they aim to solve different problems through different methods. The following may be more or less relevant to any conversation that starts around MVPs:
- 🐀 Riskiest Assumption Test (RAT) – pick the element of the product you know least about and create an experiment to better quantify the risk. This may inform a stage gate, but it won’t result in a product.
- ⛔️ Stage gate – a go/no-go decision. Used to explore options while limiting investment in low-value opportunities.
- 📦 Prototype – An initial version of a product that demonstrates the intended form, but may be different in substance. It may be a paper prototype of an app, a manual process that will later be automated, a digital twin of a real object, etc, but it’s unlikely to be viable, especially more than once.
- 🧪 Proof of Concept (PoC) – Usually a technical demonstration that a certain algorithm, data set, integration, etc is feasible and delivers reasonable results (or the converse!). Similar to a RAT, but not necessarily testing the riskiest assumption, as we might not have tested the market.
- 🧑‍🔬 R&D project – A program of work that creates knowledge (research) and builds things (development) based on that knowledge. Would typically precede and inform an MVP. RATs, prototypes and PoCs may all be run as R&D projects, where there is a large knowledge gap to begin with.
- 🔁 Iterative and 📶 incremental approaches – explained by this product development flashcard. As a very broad generalisation, we typically iterate to an MVP and increment from there.
We talk about many of these concepts and their relationships in more detail in chapter 2 of Effective Machine Learning Teams.
MVP example
I’m using trippler – my interactive planning app for resilient EV road trips – as an MVP example. I wore customer, product, engineering and data hats on the journey, and these opinions reflect each of those perspectives. I’d love any feedback on whether you’d categorise things differently!
I’ve identified 6 major stages in the trippler journey to MVP and beyond, illustrated and described in more detail below.

Each stage was gated, in that I wouldn’t have proceeded if any stage had failed. Let’s look at the stages in more detail:

- 🧪 Proof of Concept (PoC). Engineering hat. Is it possible to plan EV road trips that are resilient to charger failures using Operations Research techniques? See the article Solving EV charger anxiety and the notebook.
- 🐀 Riskiest Assumption Test (RAT). Data hat. Will the approach work with real-world data? 🔁 Iterative R&D. What is the best approach to working with real data? See the article Data complications.
- 📦 Prototype. Product hat. Do the planning features work interactively in an app? (NB. I don’t need to prototype finding a route – this is a solved problem.) Try the trippler-lite prototype. I started the write-up A resilient charging planner in parallel with the prototype, but then updated it with the MVP releases.

- 🔋 MVP (Vic). Customer & product hat. The prototype isn’t viable, because to really see if it’s useful, I need to test my favourite or planned road trips. I can’t expect anyone to give feedback otherwise. This required further R&D to handle large numbers of chargers (resulting in 🪄 selection magic). For some time, I’ve been trying to learn more about the traditional owners of the lands I visit, so at this stage I also integrated Native Lands Digital features. The end result of this was my Minimum Viable Product. Try the trippler-vic version.
- 🔋 MVP (Aus). Customer & data hat. Of course, I immediately had feedback from a wider customer segment (Australians living outside of Victoria) that they would like to try their own road trips. A viable product for travelling in Victoria isn’t viable for travelling in other parts of Australia. This was a very quick update, simply requiring more charger data (I must have done something right!). This MVP remains the primary version of trippler, prioritising availability & stability.
- 📶 Incremental features post-MVP. Multiple hats. Finding gaps in the experience through using the app and from other people’s feedback. Addressing those in thin slices including R&D. See trickle charging, elevation, etc, in the trippler-beta version.
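To make the PoC question above concrete, here’s a toy sketch of resilience-aware routing. This is entirely my own illustration, not trippler’s actual planner, and `plan_resilient_route` is a hypothetical function: it searches for a shortest charger-to-charger route where every leg is within range, and every intermediate stop keeps at least two reachable neighbours, so if the planned next charger fails there is always an alternative (even if that means backtracking).

```python
# Toy sketch only - an illustration of resilience-aware routing,
# not trippler's actual algorithm or data model.
import heapq
from itertools import count

def plan_resilient_route(chargers, start, goal, range_km):
    """chargers: dict of name -> (x, y) position in km (straight-line
    distances). Returns a start-to-goal stop sequence, or None."""
    def dist(a, b):
        (ax, ay), (bx, by) = chargers[a], chargers[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

    def reachable(a):
        return [b for b in chargers if b != a and dist(a, b) <= range_km]

    tie = count()  # tiebreaker so the heap never compares paths
    frontier = [(0.0, next(tie), [start])]
    best = {start: 0.0}  # cheapest known cost to each charger
    while frontier:
        cost, _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        for nxt in reachable(node):
            # Resilience rule: don't commit to a stop with fewer than
            # two reachable neighbours (no fallback if one fails).
            if nxt != goal and len(reachable(nxt)) < 2:
                continue
            new_cost = cost + dist(node, nxt)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), path + [nxt]))
    return None  # no resilient route exists
```

Real data makes this far messier (road distances, elevation, charger reliability, state of charge), which is exactly why the RAT and iterative R&D stages above were needed.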
Postscript on product viability
My measure of viability is whether I can actually use trippler as part of the planning mix for EV road trips, and by that measure, it’s viable. It doesn’t cost me anything to run, and maintenance is minimal/fun.
Based on feedback from other users, however, missing charger data and imprecise geolocation might impact trippler’s viability. I’m aware these are imperfect, but they won’t affect all users, and filling the gaps may need a new approach. This is the tension between minimum and viable, and the line might shift.
Illusory progress and technical debt
As above, learning about customer needs doesn’t stop post-MVP, and we may re-assess the viability of our solution fit. We’ve also undoubtedly amassed technical debt in reaching an MVP, and in this environment, the effort to keep the lights on or make incremental changes may render the product economically unviable. (I’ve now paused new feature development on trippler to consolidate recent changes.)
A higher bar
I’d also need to set a higher bar for viability on other product and economic measures if trippler were to be more than a side project. Maybe it will be one day, provided the next RAT (do enough people find trippler useful?) passes. Watch this space!