How to Make It as a First-Time Entrepreneur

Archive for the ‘data analysis’ Category

Vinicius Vacanti is co-founder and CEO of Yipit. Next posts: how to acquire users for free and how to raise a Series A. Don’t miss them by subscribing via email or via Twitter.

One of the hardest decisions you have to make, as an entrepreneur, is deciding when to give up on your current struggling project. It’s made especially difficult because you always seem to have an exciting new idea rattling in the back of your head.

The difficulty of the decision is further exacerbated by:

  • Conflicting accounts from previous startups. On the one hand, you hear about how AirBnB struggled for years before finally making it. But then you also hear how the founders behind Stickybits, after struggling for almost a year, dropped the project and built Turntable.fm.
  • Sunk cost. You’ve spent so much time on your current project, can you really walk away now? Do you want to tell your friends and family that you’re, yet again, starting on a new project? Do you want to admit you “failed” again?
  • New ideas seem better than they are. Your new idea hasn’t hit the “informed pessimism” stage yet. It probably has all sorts of complications you haven’t thought about.

What if you quit right before your startup was about to take off? What if that other idea in your head is your big break?

You Need a Framework

With such an emotional decision, it’s best to try to be as systematic and rational about it as possible.

The key to making this decision comes down to two things: (1) quickly iterating the product based on what you learn and (2) consistently measuring each iteration against a defined success metric.

Since frameworks are easier to follow with an example, I’m going to imagine I had the idea for TaskRabbit, the app where you can post tasks for people to do at a price, and that I was just getting started with the idea and struggling.

Defining a Success Metric

There are many possible success metrics; the right one depends on your business.

One of the more popular generic success metrics is the net promoter score. Basically, the score tells you how likely your users are to recommend your app. If the majority of your users aren’t “promoters”, you’re not going to make it. For more on this metric, see Eric Ries’s excellent post.
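To make the metric concrete, here’s a minimal NPS sketch in Python, using the standard 0–10 survey scale (promoters answer 9–10, detractors 0–6, and the score is the percentage of promoters minus the percentage of detractors):

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'how likely are you to recommend us?' ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count toward the
    total but neither side. The score ranges from -100 to +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```

A positive score means promoters outnumber detractors, which is the bar the rest of this post uses.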

Here’s the net promoter score of some well-known apps:


But, you can use almost any metric like:

  • Conversion rate of people who click on a Facebook ad, demo your product, and then create an account
  • Percentage of people who invite friends at your “invite friends” step after they’ve used your product
  • Percentage of people who return to the site a week after signing up
  • If you are selling something to consumers or businesses, the percentage of people who agree to pay for the product
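All of these boil down to the same computation: out of the users who reached some step, what fraction did the thing you wanted? Here’s a minimal sketch; the event-log format and step names are made up for illustration:

```python
def conversion_rate(events, from_step, to_step):
    """Fraction of users who reached `from_step` that also reached `to_step`.

    `events` is a list of (user_id, step) tuples, e.g. pulled from your
    app's logs. Step names here are hypothetical.
    """
    reached_from = {u for u, s in events if s == from_step}
    reached_to = {u for u, s in events if s == to_step}
    if not reached_from:
        return 0.0
    return len(reached_from & reached_to) / len(reached_from)

events = [
    (1, "clicked_ad"), (1, "created_account"),
    (2, "clicked_ad"),
    (3, "clicked_ad"), (3, "created_account"),
    (4, "clicked_ad"),
]
print(conversion_rate(events, "clicked_ad", "created_account"))  # 0.5
```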

For my fictitious case study (TaskRabbit), I would go to Craigslist, find people asking for help, and tell them they should put their request on my new site, TaskRabbit. My success metric would be the percentage of people who added the task. Perhaps, after they put the task up, I would also give them a net promoter score survey.

With my success metric defined, I’d be ready to start iterating.

Why You Need to Iterate

Some entrepreneurs stumble because they are never willing to iterate on their product. They have an idea, they put it out, people don’t like it, and they throw in the towel. It’s incredibly hard to get it right the first time. You have to iterate.

Other entrepreneurs believe their product will magically start working after a while. Unless they are iterating, it’s very unlikely to just take off. You have to iterate, you have to do so quickly, and you have to make sure the iterations are significant.

The changes you make must address the fact that your product isn’t delivering enough value to the user for the cost you are asking of them. For example: your emails aren’t valuable enough for them to bother opening. They don’t like the product enough to recommend it to their friends. They didn’t like it enough to take time out of their day to check it again a week later. In my fictitious TaskRabbit case, it could be that only a few people on Craigslist are actually putting up their tasks on TaskRabbit and my net promoter score is negative.

At this point, people tend to think they have a marketing problem: not enough people know about it. But it’s almost certain that you have a product problem.


Before you go trying to fix your product, you should diligently note where your product currently stands on your chosen success metric. Make sure you get a big enough sample size for your stats to be accurate, which can be hard to do by just talking to people in person.
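On the sample-size point, a quick way to see how noisy your numbers are is a confidence interval on the measured proportion. This sketch uses the textbook normal approximation, which is only reasonable once you have at least a few dozen data points:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a success-metric proportion,
    using the normal approximation (decent once n is a few dozen or more)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# 12 of 20 Craigslist posters added their task: the interval is huge...
print(proportion_ci(12, 20))
# ...while 120 of 200 gives a much tighter read on the same 60% rate.
print(proportion_ci(120, 200))
```

If the intervals of two iterations overlap heavily, you can’t really tell whether the new version is better; you need more users before drawing a conclusion.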

Once you’ve noted your metrics, your goal is to find out why you have a product problem. You need to dig deep into the user psyche and pull out that reason.

You can give people online surveys, you can pay people $10 to get on the phone with you and, even better, you can bring people in to talk to you in person (recruit via Craigslist, buy their coffee at Starbucks, etc.).

In my fictitious case, I finally muster up the courage to find out why TaskRabbit isn’t working. It turns out that potential users don’t trust the people who will be doing the tasks.

So, I go back to my apartment, implement a fix to the problem, and release it again.

Compare Success Metrics for Each Iteration

Whatever methodology you used previously, you should do it again with new users and measure where your new product ranks on your chosen success metric.

You may find that your success metric is hugely improved and the product is taking off.

More likely, you’ll find that your success metric has improved a little bit, stayed flat, or, depressingly, gone down.

Regardless, you need to get in touch with these new users and find out why they still don’t really want/need your product. You may find that you didn’t actually solve the problem you thought you were solving.

In the TaskRabbit example, I could have added a small bio next to each TaskRabbit to make them seem more trustworthy. But if the metrics didn’t improve, then I was wrong: I either didn’t actually address users’ concerns or there were other concerns I didn’t know about. So I could try something more dramatic, like stating that all TaskRabbits have undergone a thorough background check.

Back to the apartment to iterate again.

Target States

With your various iterations, you’re trying to get to two potential states:

  • Success metric takes off. Your product will take off along with it. For net promoter score, it becomes positive.
  • Your success metric improvement stalls. You’ve iterated several times and, while they slightly improved your metrics, it hasn’t been enough. You’ve also run out of ideas on how to continue iterating. It may be that people don’t really have the problem you’re trying to solve. Or, the way you’re solving the problem is too demanding on the user and you don’t know how to make it easier for them.

The key takeaway is that if every iteration is improving your success metric, keep iterating. You only stop when you can’t seem to improve it any further.

It’s Okay to Start Something New

Hopefully your various iterations will take your project to where it needs to be to take off with your audience.

However, if you are iterating, your metrics aren’t improving, and you’ve run out of ideas on how to address your users’ concerns about the product, it’s okay to throw in the towel and start working on a new project. You’ll have learned a tremendous amount from your first go-around and will be in a much better position to make your next idea successful.

See discussion on Hacker News.

Searches For “mcdonalds jobs” Alarmingly Accelerate

November 13, 2008 | data analysis

As if there weren’t enough signs the economy is hurting, Google Insights shows us that the number of searches for “mcdonalds jobs” is accelerating alarmingly.

Google Searches for "mcdonalds jobs"

In fact, November 9th, the last data point recorded, represents the highest number of searches Google has seen for “mcdonalds jobs” in the last 5 years (which is as far back as the data goes).

Which states are doing this search the most? Ohio, Illinois, Wisconsin, Michigan, and Florida. I guess the auto industry’s bailout isn’t coming quickly enough.

Map of google searches for "mcdonalds jobs"

Facebook Users and Sarah Palin Are No Longer Listed in a Relationship

October 22, 2008 | data analysis

Facebook members used to think pretty positively about the Republican Vice Presidential nominee, Sarah Palin.  However, over the last few weeks, they have started thinking 50% more negatively.  How do I know?  Facebook started previewing the next generation of its obscure Lexicon service.

Essentially, the service combs through wall posts by Facebook members and performs aggregate analysis on the data, similar to Google Trends.  One of the more interesting analyses Facebook performs is “sentiment” tracking: whenever a certain term is used in a wall post, it identifies whether the usage was positive or negative.

The preview version only allows you to review a small set of topics; fortunately, Sarah Palin is one of them.  On August 29th, wall posts mentioning Sarah Palin were 80% positive (20% negative).  Now, 30% of wall posts are negative, a 50% increase in negativity.

Are Early Movie Reviews Rigged?

October 1, 2008 | data analysis

Based on an analysis of review data for 26 movies currently in theaters, early movie reviews are 25% more likely to be positive than later ones.  All movie critics see the same movie, so their opinions shouldn’t depend on when they see it.  That means something smells rotten in movie review land.

The analysis shows that 78% of these “early movie reviews” were positive, while movie reviews overall were just 62% positive.  I based my analysis on review data for 26 movies from RottenTomatoes, defining “early” as a review published at least a week before the movie was released.  The difference may not seem large, but it’s the difference between Batman Begins and Hulk, or The Royal Tenenbaums and Sisterhood of the Traveling Pants 2.
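For anyone who wants to reproduce this kind of analysis, the computation is straightforward. The review dates and verdicts below are invented for illustration; real data would come from a site like RottenTomatoes:

```python
from datetime import date

def early_vs_total_positive(reviews, release, early_days=7):
    """Compare the positive share of 'early' reviews (published at least
    `early_days` before release) against the positive share of all reviews.

    `reviews` is a list of (publish_date, is_positive) tuples.
    """
    early = [r for r in reviews if (release - r[0]).days >= early_days]
    pct = lambda rs: 100.0 * sum(1 for _, positive in rs if positive) / len(rs)
    return pct(early), pct(reviews)

reviews = [
    (date(2008, 9, 1), True), (date(2008, 9, 2), True),      # early, positive
    (date(2008, 9, 3), True), (date(2008, 9, 4), False),     # early
    (date(2008, 9, 12), False), (date(2008, 9, 13), False),  # later
]
release = date(2008, 9, 15)
print(early_vs_total_positive(reviews, release))  # (75.0, 50.0)
```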

Why are early reviews coming out positive?  There’s a classic conflict of interest:

  • Early movie reviews get many more readers, which means more money for critics’ sites and publications
  • Movie studios and their PR agencies, who want positive reviews, decide which critics review the movie first
  • If a critic dishes out a negative review, they are probably much less likely to get picked next time around

I hope I’m wrong, but the data is troubling.  RottenTomatoes and MetaCritic should both perform this analysis on their full data set.  They will be able to confirm my findings and potentially identify specific movie critics that may be giving biased opinions.

Your Twitter Followers Aren’t Real

September 18, 2008 | data analysis, technology

Based on a random-sample analysis of Twitter accounts I conducted, 6 out of 10 Twitter followers aren’t actually following you.  That would imply that Barack Obama, who has the most Twitter followers at 80K, really only has about 30K “real” followers.

I decided to take a closer look at the top three Twitter tech-heavyweight (figuratively speaking) bloggers based on Twitterholic’s top 100: Mahalo’s Jason Calacanis (#7, 34K followers), Scobleizer’s Robert Scoble (#8, 34K followers), and TechCrunch’s Michael Arrington (#13, 25K followers).  Even though Calacanis has a slight edge on Scoble in raw followers, their “real” followers tell a completely different story: Robert Scoble has significantly more “real” Twitter followers (13.6K) than Arrington (8.6K) or Calacanis (7.5K).  On average, they were reaching 68% fewer Twitter accounts than their follower counts indicated.  This isn’t a comment about them; they are fantastic.  It’s a comment about how misleading Twitter follower numbers are.

Twitter users are pretty proud of their follower counts, and they put them on their blogs next to their RSS reader counts.  I’m pretty proud of my Twitter account, and I only have 57 followers.  Twitterholic even puts up a leaderboard of the top 100.  But the not-surprising truth is that, like RSS reader counts, follower counts overstate how many people are actually reading what you tweet.

As Twitter continues its impressive expansion and Twitter accounts start to become businesses, it will be important to have a more accurate view of the reach of specific accounts.

Several services are making progress on this front (Twitter Grader, Twitterholic) but there’s a lot more to do.

Note:  For the purposes of this sampling, I defined a “real follower” as someone who follows fewer than 300 Twitter accounts and is active, as measured by having submitted a status update in the last 3 days.  It’s definitely not a perfect definition, but I hope it’s good enough for the purposes of this demonstration.
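In code, the sampling filter looks something like this. The field names are illustrative, not from any actual Twitter API:

```python
from datetime import datetime, timedelta

def is_real_follower(following_count, last_update, now=None,
                     max_following=300, max_idle_days=3):
    """Apply the post's 'real follower' definition: follows fewer than 300
    accounts AND has posted a status update within the last 3 days."""
    now = now or datetime.utcnow()
    active = (now - last_update) <= timedelta(days=max_idle_days)
    return following_count < max_following and active

now = datetime(2008, 9, 18)
print(is_real_follower(57, datetime(2008, 9, 17), now=now))    # True
print(is_real_follower(1200, datetime(2008, 9, 17), now=now))  # False: follows too many
print(is_real_follower(57, datetime(2008, 9, 1), now=now))     # False: inactive
```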

Shake Shack Without the Wait

September 9, 2008 | data analysis, new york city

For those not in the know, New Yorkers have been relaying Shake Shack line lengths on Twitter through a shake shack account, a.k.a. the Shake Shack Flash Mob.  I decided to take the data created over the last four months and try to answer the question: when should we try to grab lunch at Shake Shack?  After all, no one wants to be in a 60-minute or “third tree” line behind 50 tourists.

Shake Shack Lunch Time

As a quick disclaimer:  While the mob is active, the sample size is too small to time it down to the minute, but the data does seem to point towards the following conclusions:

  • Pre-noon lunch:  Hit-or-miss.  You’d think by scrambling down there before noon you’d be okay, but there’s no guarantee.  Make sure to check the Shake Shack webcam
  • Post-noon lunch:  Not much data collected, but not surprising.  The flash mob knows better than to insult the Shake Shack gods by irreverently trying to grab lunch during lunch hour
  • Post-3 pm lunch:  This is the ticket.  Either starve yourself till then or get a job that lets you wake up at 11 am
  • General tip:  If it’s raining, cold, or really hot, the line will be shorter than usual, but don’t be surprised to still find people braving the elements
  • Funniest Shake Shack tweet goes to ceonyc:  “line very short….swarm! Swarm!”

Who’s graciously tweeting away line lengths?

Here’s the full list of shake shack tweeters:

Shake Shack Twitter Contribution

Update: Eater reblogged this post and pitched the Shake Shack flash mob.  Nice.

Update 2: Thrillist reblogged the post, referencing the Shake Shack flash mob as “vigilante nerds”.

Hurricane Tracking FAIL

September 4, 2008 | data analysis, humor

XTRP is one of the models used for hurricane tracking.  While I agree that projecting a hurricane’s future path is complicated, it seems like XTRP has thrown in the towel and just started drawing a line up and to the left.  (XTRP is the model with the black triangles and dotted lines.)

Most recent models for Hannah:

Hannah Models

Most recent models for Ike:

Ike Models

Doing some further research, it appears that XTRP just takes the hurricane’s most recent movement and projects it forward. In other words, it’s a very simple model that takes no future variables into account.
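If that’s right, the whole model fits in a few lines: take the last observed displacement and repeat it forward, ignoring steering currents and everything else. This is my guess at the logic, not XTRP’s actual implementation:

```python
def xtrp_style_forecast(track, steps):
    """Dead-reckoning forecast: repeat the storm's last observed
    displacement `steps` times, with no future variables considered.

    `track` is a list of (lat, lon) fixes, most recent last.
    """
    (lat1, lon1), (lat2, lon2) = track[-2], track[-1]
    dlat, dlon = lat2 - lat1, lon2 - lon1  # most recent movement per step
    forecast = []
    lat, lon = lat2, lon2
    for _ in range(steps):
        lat, lon = lat + dlat, lon + dlon
        forecast.append((lat, lon))
    return forecast

# A storm moving steadily northwest just keeps going northwest forever:
print(xtrp_style_forecast([(25.0, -80.0), (25.5, -80.5)], 3))
# [(26.0, -81.0), (26.5, -81.5), (27.0, -82.0)]
```

Which is exactly the straight dotted line you see in the model plots above.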