Mistral’s Pablo Srugo: What is the Sean Ellis Test?

Emily Hayward

Published April 8th, 2022, updated April 18th, 2022


Summary - Product-market fit: Every company aims to reach this crescendo. The Sean Ellis Test is a formulaic way to measure product-market fit.

Product-market fit: Every company aspires to reach this crescendo in their journey. And the Sean Ellis Test is a formulaic way to measure if you’ve reached this inflection point. 

Pablo Srugo is a Principal at Mistral Venture Partners and the host of the Product-Market Fit Show, a podcast that shares real stories and examples of the journey to finding product-market fit. That makes Pablo the perfect guest to talk about the Sean Ellis Test on the Metric Stack podcast.

In this episode, Allan and Lauren sit down with Pablo to dive into this metric: How do you measure it? What additional metrics can you pair with it for a full picture? Is there a certain company size that should focus on this metric? How often should you distribute the survey?

If you’re short on time, here are a few takeaways:

  • The Sean Ellis Test measures product-market fit and is most effective for small start-ups with a customer base. It’s measured by asking one question: “How would you feel if you could no longer use our product?” with only three potential answers: Very disappointed, somewhat disappointed, and not disappointed.
  • It’s as much a measure of whether you have product-market fit as of where, and with which types of customers, you have the most product-market fit.
  • Consider how you’ll segment your audience before you ask the question. We’re in a world of survey and email overload, so how do you break through the noise, reach your active customers, and get the most value from your responses?
  • Test, test, test! The metric will change based on how your customer base evolves and how your product changes. Consider how often you ask the question.

Listen to the full episode on Apple Podcasts, Spotify, or YouTube. If you enjoyed this episode, check out all the episodes of the Metric Stack podcast.

Allan: Welcome back to the Metric Stack podcast. I'm really happy to be joined by Pablo Srugo, Principal at Mistral Venture Partners, an early stage VC. Prior to Mistral, Pablo co-founded fitness technology startup Gymtrack, and was also the founder and CEO of MyTutor.ca, which was acquired in 2014.

Pablo is also the host of the Product-Market Fit Show, a podcast where he shares real stories and examples of the journey to finding product-market fit. I'm also joined by my colleague, Lauren Thibodeau and my name is Allan Wille. Pablo, super happy to have you today. 

Pablo: Allan, Lauren, it's a pleasure to be here. Thanks for having me. 

Lauren: Thanks to you as well, Pablo, and let's get started. Today we're going to talk about the Sean Ellis Test. Before we dive into the details, can you set the stage for us? What context should we have in mind as we're listening to this conversation?

Pablo: For sure. At Mistral, we're seed stage investors, we work with companies that are pre-product-market fit, all the way from idea, pre-seed, seed. They tend to be companies that have 5, 10, 20 employees. They tend to have anywhere from no revenue to maybe a million dollars in revenue.

But I think the most important thing is that they're pre-product-market fit, because if they weren't, they'd be able to raise a Series A. We tend to lead these $3 million rounds. And so the metric we'll talk about today is really in that context: think of a 10-or-so-person startup that has customers, but doesn't have product-market fit yet, or doesn't know if it does, and wants to measure how much product-market fit it has. That's where this comes in. 

Allan: So the story is, I mean, you guys see a lot of pitches all the time. I can't even imagine what a seed-stage VC's inbox looks like. But you know, founders are coming and presenting things with lots of confidence.

So you need some sort of a formulaic way of understanding if an idea has some fit in the market. I know that the metric we're talking about is called the Sean Ellis Test. Sean Ellis is actually kind of a big deal, and maybe we can talk about him as well, but describe to me, Pablo, what is this metric?

Pablo: Yeah, definitely. Maybe just backing up: we see probably a thousand companies a year and meet with 200 or so founders. The founders we meet with, again, don't have product-market fit, so it'd be great if they tracked this metric, but they probably don't. The founders I'd argue should really track it are the ones just past our stage, in between seed and Series A, whose whole goal is finding product-market fit. Before I dive into the metric, think about the stages of a company, from inception to pre-seed. I like to think of it in terms of fundraising, even though of course you can bootstrap a company, because it sets some clear milestones. At pre-seed, you've got a team, a small co-founding team.

You've got a market and an audience that you're going after, and you're starting to build a product. At Series A, you have a clear product in a large or really fast-growing market. The in-between, seed, is what we like to think of as "no proven value prop yet." The team's there, the product's there, and you have this value prop you had a hypothesis around delivering.

You're starting to deliver that value prop. Now you need to go from "I've got some customers and they're getting value, so I'm delivering on the value prop" to "I have clear product-market fit," the symptoms of which are extremely fast growth, extremely great retention, et cetera.

“I have clear product-market fit. The symptoms of which are extremely fast growth and extremely great retention.”

The question is: how do you measure product-market fit? And that's where the Sean Ellis Test comes in. It's really, really simple. I think the best-known use case of it was Superhuman, the new kind of email client, and they wrote a whole piece about it. At the most basic level, it's a one-question survey that you ask your customers. It's really good for direct-to-consumer and long-tail B2B SaaS, because the reality is, if you're enterprise, or even mid-market, at the stage we're talking about you have one customer, you know, four customers. It's a really small dataset. So it's a better fit for companies that are more long tail: even though you only do 10, 20, 40K MRR, you have hundreds of customers.

So the question is very simply, “How would you feel if you could no longer use our product?” and there are only three potential answers: very disappointed, somewhat disappointed, and not disappointed. Of course this is not an exact science, but we kind of think you have product-market fit when more than 40% of respondents say they'd be very disappointed. The higher, the better, of course.
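The 40% rule Pablo describes is just a share calculation. Here is a minimal sketch in Python; the response counts are made up for illustration, not from the episode:

```python
# Hypothetical survey responses; in practice these would come from
# whatever survey tool you use to ask the one Sean Ellis question.
responses = (
    ["very disappointed"] * 45
    + ["somewhat disappointed"] * 40
    + ["not disappointed"] * 15
)

def sean_ellis_score(responses):
    """Share of respondents who would be 'very disappointed'
    if they could no longer use the product."""
    very = sum(1 for r in responses if r == "very disappointed")
    return very / len(responses)

score = sean_ellis_score(responses)
print(f"{score:.0%} very disappointed")            # 45% very disappointed
print("product-market fit signal:", score > 0.40)  # product-market fit signal: True
```

With 45 of 100 respondents saying "very disappointed," this made-up product clears the 40% bar.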

Allan: I remember that Superhuman article. That's probably the first time we learned about this concept, and I actually didn't even know it was called the Sean Ellis Test. But that article, I think it was 2018, got a lot of buzz, and I think a lot of people started thinking about this really simple test. Now, I just want to pause on a few things you said, because they're actually pretty important. You do need to be beyond that really initial stage. You need to have customers. And as you said, preferably an environment where you have many customers, as opposed to a few very highly paying customers.

So we need some statistical relevance as we're asking that question. I suspect that this is something that as you guys start making investments early on, you're saying, “Okay, make sure that you guys are running the Sean Ellis test to gain valuable insight.”

Pablo: Well, that's exactly right. And it depends on what sort of company. I mean, we've done companies that are enterprise software that will raise a Series A with two or three customers, because they just signed a deal that's worth seven figures. That's product-market fit, right? And you really only need three or four more of those to raise a B. So this is not the right test for that situation.

We have other companies, you know, a direct-to-consumer company in our portfolio right now that we just doubled down on. They're still early, just seed stage, but they have 150 customers. Once you have a hundred or so active customers, plus obviously some that were active before and aren't now, you've got enough to start running this. SMB is another good place, where you've got companies paying you a few hundred dollars a month.

Again, it's not that uncommon that you'll have hundreds of those. So those are good places to think about using this. 

Lauren: I know as we were prepping and reading more about the Sean Ellis Test, there are a couple of other criteria that I think it's really important we understand: that customers have used the product recently, that they've gone in a couple of times. Can you talk about that a little and why it's important? 

Pablo: Absolutely. So the test is really simple. How you execute it is where it gets more nuanced. And the thing is you're trying to figure out product-market fit.

And so that has to do with your product, and it has to do with the market. When you talk about who you send the survey to, you're segmenting that market. Now, especially if you're early on and you have, let's say, 200 customers, you might send it to all of them, but then you start to drill down, and you can do that in different ways.

I mean, you could send it to the subset that meets certain criteria, or you could send it to all of them and just get the data anyway. Either way, you then start segmenting and saying, okay, let's say of my 200 customers, only 20% would be very disappointed. But then I look at the set of customers that was active last month, that actually used it a few times, et cetera.

Oh, that's only 50 customers, and within that set I'm at 40%. So you have product-market fit within one bucket, and that's the key thing. I mean, if you think about it simply: if somebody is not using your product, they're probably not going to be very disappointed. And then you also have to ask, does their opinion even really matter?

I mean, that's like asking anybody, "Would you care if my product went away? By the way, here's what my product is." You want people who really are using your product. Not necessarily every day, it depends on the frequency of use of your product, but people who are interacting with the product and seemingly getting value out of it.

Genuine customers and users, where it's fresh enough in their mind that they remember it. That's really the set you're going after, and that's why it becomes pretty valid. If you've got a hundred people using it this month and only 10% would be very disappointed, something's up, right?

How come 90% don't really care if your product went away, and yet they're using it? What does that say?

Allan: I can definitely see that you would want to constrain and control that. Every time you run this test, and presumably you would want to run it more than once, you want to see how you're trending.

I think you'd probably want to be relatively precise about the group you're actually measuring, because I can see that if you include users that have signed in in the past month, as opposed to users that have signed in in the past week, or users that have signed in only once as opposed to ten times, that could skew the numbers quite a bit.

So I don't know exactly what the right answer is. It probably depends a little on the use case, as you said, but I think companies that start measuring it should be consistent every time they measure it, so that they can compare the value over time. 

Pablo: I agree. I think part of it comes down to what question you're answering, right? If the question is just "Do I have product-market fit?", that's one thing. But typically it's not so black and white, do I have product-market fit or do I not. It's more: where and with which types of customers do I have the most product-market fit? Where's the pull really coming from?

“Part of it comes down to what question you’re answering. If the question is just, do I have product-market fit? That’s one thing, but typically it’s not so black and white. Where and with which types of customers do I have the most product-market fit?”


So take the example of a company selling into vertical X. They have some SMBs, some mid-market, some large mid-market, then different geographies, and then you have demographic-type stuff. But then you also have how people actually engage with the product, right?

People at companies that are truly using it every day, companies where it's really distributed to the entire workforce, others where it's just the managers using it, and so on and so forth. So again, you might send it to all of them and then start dicing. Part of the exercise is not so much "Do I have it?" but "Where do I have it?" 

And then you find out: okay, actually it's this segment of the market, in this geography, that's using it in this way, at least however many times per day or per week, and that's where my "very disappointed" is up at 50, 60%. Well, that's interesting. How do I get more of those?

Or how do I get that for everybody else? These are the sorts of questions you start to ask.
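The slicing Pablo walks through, an overall score versus the score within an activity segment, can be sketched in a few lines of Python. The segment names and counts below are made up to mirror his 200-customer example (20% very disappointed overall, 40% among last-month actives):

```python
from collections import defaultdict

# Hypothetical records of (segment, answer) pairs. In practice the
# segment would come from joining survey responses to usage data.
records = (
    [("active_last_month", "very disappointed")] * 20
    + [("active_last_month", "somewhat disappointed")] * 30
    + [("inactive", "very disappointed")] * 20
    + [("inactive", "somewhat disappointed")] * 60
    + [("inactive", "not disappointed")] * 70
)

def score_by_segment(records):
    """Sean Ellis score (share of 'very disappointed') per segment."""
    totals, very = defaultdict(int), defaultdict(int)
    for segment, answer in records:
        totals[segment] += 1
        if answer == "very disappointed":
            very[segment] += 1
    return {seg: very[seg] / totals[seg] for seg in totals}

overall = sum(1 for _, a in records if a == "very disappointed") / len(records)
print(f"overall: {overall:.0%}")          # overall: 20%
for seg, s in score_by_segment(records).items():
    print(f"{seg}: {s:.0%}")              # active_last_month: 40%, inactive: 13%
```

The overall number misses the 40% bar, but the active-last-month bucket clears it, which is exactly the "where do I have it?" signal described above.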

Allan: So right away I started thinking that this is really the segue to the positioning exercise, right? Because if you're coaching your companies to really segment, you want to find that group where 80%, 90% would be very disappointed. It may be a small group, but maybe it's the tip of an iceberg that really has massive product-market fit.

They're the best fit customers. You start positioning, messaging and getting more and more of those. So I think being able to segment that is almost as important as getting the number in the first place. 

Pablo: That's totally right. I mean, if you look at the Superhuman example, it's exactly what they did.

Even to this day, if you sign up for Superhuman, you go through pretty heavy onboarding, and they start asking all these qualifying questions: how many emails do you get per day, how do you use your email technology today, what problems do you have, and so on. They have a pretty clear objective: they want to be the premium email platform for this type of use case, and the idea is that there are enough of those users to build a massive business on top of. But if you're not in that use case, they don't want to start dealing with you on the customer success side, on the product side, and so on and so forth.

So it is that part of the exercise that this sort of question leads you to and helps you with. 

Lauren: I want to pick up on that and maybe a bit of a zig away from this discussion just into the mechanics. This is a survey based metric and people get a lot of surveys these days.

Are you seeing any change, either increase, decrease, staying flat in response rate to asking a question like that? And do you have any thoughts around, should we always ask this question in isolation? Should it be part of a sequence of questions? What are your thoughts on that? Are people getting surveyed out?

Pablo: I think people are getting surveyed out, emailed out, everything out, right? There's so much noise out there. In an ideal world with no constraints, you might as well send it to everyone all the time; you'd have a lot of data and could slice and dice it however you want. In the real world with constraints, you end up sending less, and to those who are most into it. Somebody who hasn't logged into the product in a month gets an email from you with this question and probably ignores it, right? Somebody who was in the product today and loves it gets this question and thinks, okay, I'll answer, because I'm getting value from them. 

And so that also starts to hone you in on who you can realistically go after: your customers that are active, that are using it, that get the value. As for other questions: in general, the fewer, the better. To the extent that you're getting data by actually asking your customers to do something, ask yourself, what is the one question, or three questions, that I need to ask them?

Allan: We've experimented; we do surveys via email and we do surveys in-app. As you said, there's so much noise out there. I think it's about finding somewhere where that survey journey is super easy and in context, and then applying that consistently, because for every email you send, there may be a different email you want to send down the road, and that fatigue is something real. Then again, I've also read articles that say the opposite, send more email, right? So who knows. 

Pablo: That's right, you have to test stuff. But this is the kind of metric that, I would argue, you don't need to measure every day or even every week. It changes as your customer base changes and as you make meaningful product changes, most of which happen, I think, at most on a quarterly basis. 

Allan: Yeah, and I think what's really nice about it is it's quick. It's super, super simple. How disappointed would you be if this product wasn't around anymore? That's it. It's not a multi-page form. This is super simple, and I think that always wins.

Lauren: Is there a point at which you would recommend companies stop asking that? Or is it a valid question throughout the customer lifecycle? 

Pablo: I don't know that I would necessarily stop. I think you might readjust who you're asking it to. I mean, you obviously want to ask questions for a reason, right?

And once you're scaling to the point where you have product-market fit, there may be other situations where you're, for example, going after a new vertical. So: do I have product-market fit in that new vertical? In that new geography?

“I don’t know that I would stop asking the question. Adjust who you’re asking it to. And once you’re scaling to the point where you have product-market fit, there may be other situations: you’re going after a new vertical, new geography, new persona or even major product releases.”


With that new persona or use case? And even major new product releases, right? You launch a new product and you kind of want to get a read on it now. The beauty of it is that the more customers you have, the more you can just sample down; you don't need to send it to everybody and bother everyone.

I think it is definitely most critical when you're searching for initial product-market fit, but I think you can keep leveraging it throughout, because product-market fit isn't a one-and-done sort of thing. As things evolve, markets change, and the product changes, you may or may not have product-market fit in different parts of your business.

Allan: I wonder how effective it would be for major feature releases as well. So you've introduced a new feature and you want to find out, not really the product-market fit, but the value of that feature, I guess. Could you ask that exact same question of your existing users and freemium customers: if this feature didn't exist, how disappointed would you be?

And I don't know if people are doing that, or if they're largely thinking about this as a more holistic approach to product-market fit, but I wonder how specific you could start getting with this strategy.

Pablo: I think you could get specific. I do think one of the values of this, and we'll talk about different metrics later,

is that it's a really non-hypothetical question, or even if it is hypothetical, it's very easy to wrap your head around. As a user, if you just think about yourself, and you use a product like Gmail or Google, let's say Google, and you ask how disappointed you would be if Google went away, the answer comes right away. You don't have to think about it. 

Unlike NPS, where it's a zero-to-ten scale and you have to think about how likely you are to recommend Google. You're like, maybe a seven, maybe a nine. And that thinking, in my opinion, loses some of the validity of the question, right?

The fact that it's a gut reaction, a gut answer, gives it more credibility. And so the more specific you get about a particular feature, the more careful you have to be about who you're asking that question to. If you launch a new feature, maybe it's the set of users that's really heavily using that feature.

So the question makes sense to them and has a gut-reaction answer, versus thinking, "Oh right, that feature. Yeah, I guess I would be very disappointed." It should have just come to you right away. 

Allan: So, let's dive into those related metrics. What would you say are the ones that provide context, that people should be looking at as well, to really understand the Sean Ellis Test?

Pablo: So I think the obvious one to talk about, which you just alluded to, is Net Promoter Score, NPS. That's on a range of zero to ten: how likely are you to recommend this product? You have promoters, who answer nine or ten, and detractors, who answer six or below. You subtract your detractors from your promoters, and that's your NPS, anywhere from minus 100 to 100.
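The NPS arithmetic Pablo describes can be sketched in a few lines; the ratings below are made up for illustration:

```python
# Hypothetical 0-10 "how likely are you to recommend us?" ratings.
# Promoters answer 9-10, detractors 0-6; passives (7-8) are ignored.
ratings = [10, 9, 9, 8, 8, 7, 7, 6, 5, 3]

def nps(ratings):
    """Net Promoter Score: % promoters minus % detractors (-100 to 100)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(nps(ratings))  # prints 0.0: 30% promoters minus 30% detractors
```

Note that passives pull the score toward zero without counting on either side, which is part of why NPS and the Sean Ellis score can diverge.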

And that question is fine. I think what you're really getting at there is not so much product-market fit as a word-of-mouth thing, which is also important and related. You would want both of those to trend the same way; generally speaking, as more people would be very disappointed, your NPS goes up. But they diverge on a few points. 

And this is why I think, even though NPS is probably better known to an early-stage founder, the Sean Ellis Test is more important, because product-market fit is more important than word of mouth, even though they're both important. They diverge in a few places, right?

Take NPS: where would you have high NPS but not product-market fit, just as a thought experiment? Well, one thing that affects NPS a lot is customer success, customer service. So imagine you fly with an airline, let's say WestJet just to put a name on it, and they treat you super well.

Then you walk out and you get an email: "Hey, how likely would you be to recommend WestJet?" Nine or ten, because they just treated me super well. But the reality is, there's no product-market fit there. Next time you go and pick a flight, you're going to pick whichever one wins on time and money, and that's the one you're going to go with.

If it went away, we probably wouldn't be that disappointed. But they just gave you great customer service, so you would recommend it.

Allan: That's a really good example. Now, both of those are survey-based. What about behavioral metrics? Somebody comes to the website, they read your positioning and your messaging, and there's a ratio there of how many people then start a trial or an account. Is that something that should indicate whether there's initial interest in the promise of what you're talking about? And then you go further down the funnel and look at activation, then engagement, and even conversion. Are you also looking at those things, or at retention and expansion? Are those also valuable, or does that come later in the journey?

Pablo: I think in terms of data that you don't have to survey for, churn is the obvious one that we often think about in the context of product-market fit. Are the customers staying? Because churn takes the marketing side out of it to an extent, right? Once people have the product and they're paying you and using it, how strong is that fit?

“In terms of data [collection] that you don’t have to survey out, churn is the obvious one that we often think about in the context of product-market fit.” 


How strong is that need, really? There are symptoms: typically, if you have high product-market fit, you will get a lot of inbound, because you have so many people desperate for a solution to that pain. But churn is an obvious one, so those should very much be correlated.

I mean, it would be very weird for churn and the Sean Ellis score not to line up, and if they don't, you have to understand why. Maybe you're selling to the wrong audience, right? That's where the segmenting we talked about comes in: you say, okay, these are the people where my fit is strongest, and you should also see retention be strongest within that cohort.

So that's a good one. Then the top of the funnel is interesting as well, even something simple, because these are small companies, right, and your tracking ability is probably limited by the number of customers. There is a mismatch there to think about: you have a really great Sean Ellis score, and yet the number of customers isn't increasing fast. Or the reverse: the number of customers is increasing fast, and yet you're failing the Sean Ellis Test. I don't know the answer, it depends on the use case, but that's something interesting to dig into.

Lauren: Really fascinating. We're coming up on wrapping up, but before we do, can you think of a specific example, maybe from one of your portfolio companies or a company you've seen, where the data they got from the Sean Ellis Test enabled them to pivot, to change direction, or to double down on a segment and actually find product-market fit faster or better?

Pablo: Well, there's an example of a company right now, without naming it. They have been growing. They're a post-seed company and they've been, I would say, doubling every year, but they're still small, still in that 1-2 million ARR range. They sell B2B SaaS, so they have a few hundred customers. And the big question has been: we're mainly outbound-led, we convert at decent ratios, we grow every single month, we have pretty high retention, but we have not found a way to really take off, to really explode.

Doubling is good, but we want to triple or 4x per year, just because we're still small. That's where this is interesting. They have a lot of users and, let's say, two or three years of history selling into this same vertical, and they're starting to use this metric to figure out: what do we do here?

Should we move upmarket, for example? Is there a segment of customers where the fit is strongest that we should go after? Or are we just failing on product-market fit, and do we need to make changes to the product to really get to that very-disappointed zone, in order to go from doubling to tripling and really feel like there's crazy pull from the market? 

Allan: Pablo, thank you so much. This has been a journey of fit and wisdom and segmenting. I think this is something everybody should take a deep look at, and likely multiple times over the course of their product's evolution and maturity. So thank you very much, and everybody, check out the Sean Ellis Test.
