aka: Screw you Joel Spolsky, We're Rewriting It From Scratch!
This is a guest post by Dan Milstein (@danmil), co-founder of Hut 8 Labs.
Disclosure: Joel Spolsky is a friend and I'm an investor in his company, Stack Exchange (which powers the awesome Stack Overflow) -Dharmesh
So, you know Joel Spolsky's essay Things You Should Never Do, Part I? In which he urgently recommends that, no matter what, please god listen to me, don't rewrite your product from scratch? And lists a bunch of dramatic failures when companies have tried to do so?
First off, he's totally right. Developers tend to spectacularly underestimate the effort involved in such a rewrite (more on that below), and spectacularly overestimate the value generated (more on that below, as well).
But sometimes, on certain rare occasions, you're going to be justified in rewriting a major part of your product (you'll notice I've shifted to saying you're merely rewriting a part, instead of the whole product. Please do that. If you really are committed to just rewriting the entire thing from scratch, I don't know what to tell you).
If you're considering launching a major rewrite, or find yourself as the tech lead on such a project in flight, or are merely toiling in the trenches of such a project, hoping against hope that it will someday end... this post is for you.
Hello, My Name is Dan, and I've Done Some Rewrites
A few years back, I joined a rapidly growing startup named HubSpot, where I ended up working for a good solid while (which was a marvelous experience, btw -- you should all have Yoav Shapira as a boss at some point). In my first year there, I was one of the tech leads on a small team that rewrote the Marketing Analytics system (one of the key features of the HubSpot product), totally from scratch. We rewrote the back end (moving from storing raw hit data in SQLServer to processing hits with Hadoop and storing aggregate reports in MySQL); we rewrote the front end (moving from C#/ASP.Net to Java/Tomcat); we got into the guts of a dozen applications which had come to rely on that store of every-hit-ever, and found a way to make them work with the data that was now available. (Note: HubSpot is now primarily powered by MySQL/Hadoop/HBase. Check out the HubSpot dev blog).
It took a loooong time. Much, much longer than we expected.
But it generated a ton of value for HubSpot. Very Important People were, ultimately, very happy about that project. After it wrapped up, 'Analytics 2.0', as it was known, somehow went from 'that project that was dragging on forever', to 'that major rewrite that worked out really well'.
Then, after the Analytics Rewrite wrapped up, in my role as 5 Whys Facilitator, I led the post-mortem on another ambitious rewrite which hadn't fared quite so well. I'll call it The Unhappy Rewrite.
From all that, some fairly clear lessons emerged.
First, I'm going to talk about why these projects are so tricky. Then I'll pass on some of those hard-won lessons on how to survive.
Prepare Yourself For This Project To Never Fucking End
The first, absolutely critical thing to understand about launching a major rewrite is that it's going to take insanely longer than you expect. Even when you try to discount for the usual developer optimism. Here's why:
- Migrating the data sucks beyond all belief
I'm assuming your existing system has a bunch of valuable data locked up in it (if it doesn't, congrats, but I just never, ever run into this situation). You think, we're going to set up a new db structure (or move it all to some NoSQL store, or whatever), and we'll, I dunno, write some scripts to copy the data over, no problem.
Problem 1: there's this endless series of weird crap encoded in the data in surprising ways. E.g. "The use_conf field is 1 if we should use the auto-generated configs... but only if the spec_version field is greater than 3. Oh, and for a few months, there was this bug, and use_conf was left blank. It's almost always safe to assume it should be 1 when it's blank. Except for customers who bought the Express product, then we should treat it as 2". You have to migrate all your data over, checksum the living hell out of it, display it back to your users, and then figure out why it's not what they expect. You end up poring over commit histories, email exchanges with developers who have long since left the company, and line after line of cryptic legacy code. (In prep for writing this, when I mentioned this problem to developers, every single time they cut me off to eagerly explain some specific, awful experience they've had on this front -- it's really that bad)
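To make that concrete, here's a sketch of what decoding one such field might look like in migration code. The rules are the ones quoted above; the function name, record shape, and the behavior for old spec versions are hypothetical, invented for illustration:

```python
def effective_use_conf(row):
    """Decode the legacy use_conf field, quirks and all (illustrative only)."""
    use_conf = row.get("use_conf")
    # A bug once left use_conf blank; blank almost always meant 1...
    if use_conf is None or use_conf == "":
        # ...except for customers on the Express product, where blank meant 2.
        return 2 if row.get("product") == "Express" else 1
    # The field only applies when spec_version is greater than 3.
    if row.get("spec_version", 0) > 3:
        return use_conf
    return 0  # assumed: older spec versions ignore auto-generated configs
```

Every legacy field with a history ends up owning a little function like this, and every branch in it is an email thread with a long-departed developer.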
Problem 2: But, wait, it gets worse: because you have a lot of data, it often takes days to migrate it all. So, as you struggle to figure out each of the above weird, persnickety issues with converting the data over, you end up waiting for days to see if your fixes work. And then to find the next issue and start over again. I have vivid, painful memories of watching my friend Stephen (a prototypical Smart Young Engineer), who was a tech lead on the Unhappy Rewrite, working, like, hour 70 of an 80 hour week, babysitting a slow-moving data export/import as it failed over and over and over again. I really can't communicate how long this takes.
- It's brutally hard to reduce scope
With a greenfield (non-rewrite) project, there is always (always) a severe reduction in scope as you get closer to launch. You start off, expecting to do A, B, C & D, but when you launch, you do part of A. But, often, people are thrilled. (And, crucially, they forget that they had once considered all the other imagined features as absolutely necessary)
With a rewrite, that fails. People are really unhappy if you tell them: hey, we rewrote your favorite part of the product, the code is a lot cleaner now, but we took away half the functionality.
You'll end up spending this awful series of months implementing all these odd edge cases that you didn't realize even existed. And backfilling support for features that you've been told no one uses any more, but you find out at the last minute some Important Person or Customer does. And, and, and...
- There turn out to be these other systems that use "your" data
You always think: oh, yeah, there are these four screens, I see how to serve those from the new system. But then it turns out that a half-dozen cron jobs read data directly from "your" db. And there's an initialization step for new customers where something is stored in that db and read back later. And some other screen makes a side call to obtain a count of your data. Etc, etc. Basically, you try turning off the old system briefly, and a flurry of bug reports show up on your desk, for features written a long time ago, by people who have left the company, but which customers still depend on. And fixing all of this takes forever, all over again.
Okay, I'm Sufficiently Scared Now, What Should I Do?
You have to totally own the business value.
First off, before you start, you must define the business value of this rewrite. I mean, you should always understand the big picture value of what you do (see: Rands Test). But with rewrites, it's often the tech lead, or the developers in general, who are pushing for the rewrite -- and then it's absolutely critical that you understand the value. Because you're going to discover unexpected problems, and have to make compromises, and the whole thing is going to drag on forever. And if, at the end of all that, the Important People who sign your checks don't see much value, it's not going to be a happy day for you.
One thing: be very, very careful if the primary business value is some (possibly disguised) version of "The new system will be much easier for developers to work on." I'm not saying that's not a nice bit of value, but if that's your only or main value... you're going to be trying to explain to your CEO in six months why nothing seems to have gotten done in development in the last half year.
The key to fixing the "developers will cry less" thing is to identify, specifically, what the current, crappy system is holding you back from doing. E.g. are you not able to pass a security audit? Does the website routinely fall over in a way that customers notice? Is there some sexy new feature you just can't add because the system is too hard to work with? Identifying that kind of specific problem both means you're talking about something observable by the rest of the business, and also that you're in a position to make smart tradeoffs when things blow up (as they will).
As an example, for our big Analytics rewrite, the developers involved sat down with Dan Dunn, the (truly excellent) product guy on our team, and worked out a list of business-visible wins we hoped to achieve. In rough priority order, those were:
Cut cost of storing each hit by an order of magnitude
Create new reports that weren't possible in the old system
Serve all reports faster
Serve near-real-time (instead of cached daily) reports
And you should know: that first one loomed really, really large. HubSpot was growing very quickly, and storing all that hit data as individual rows in SQLServer had all sorts of extra costs. The experts on Windows ops were constantly trying to get new SQLServer clusters set up ahead of demand (which was risky and complex and ended up touching a lot of the rest of the codebase). Sales people were told to not sell to prospects with really high traffic, because if they installed our tracking code, it might knock over those key databases (and that restriction injected friction into the sales process). Etc, etc.
Solving the "no more hits in SQLServer" problem is the Hard kind for a rewrite -- you only get the value when every single trace of the old system is gone. The other ones, lower down the list, those you'd see some value as individual reports were moved over. That's a crucial distinction to understand. If at all possible, you want to make sure that you're not only solving that kind of Hard Problem -- find some wins on the way.
For the Unhappy Rewrite, the biz value wasn't perfectly clear. And, thus, as often happens in that case, everyone assumed that, in the bright, shiny world of the New System, all their own personal pet peeves would be addressed. The new system would be faster! It would scale better! The front end would be beautiful and clever and new! It would bring our customers coffee in bed and read them the paper.
As the developers involved slogged through all the unexpected issues which arose, and had to keep pushing out their release date, they gradually realized how disappointed everyone was going to be when they saw the actual results (because all the awesome, dreamed-of stuff had gotten thrown overboard to try to get the damn thing out the door). This is a crappy, crappy place to be -- stressed because people are hounding you to get something long-overdue finished, and equally stressed because you know that thing is a mess.
Okay, so how do you avoid getting trapped in this particular hell?
Worship at the Altar of Incrementalism
Over my career, I've come to place a really strong value on figuring out how to break big changes into small, safe, value-generating pieces. It's a sort of meta-design -- designing the process of gradual, safe change.
Kent Beck calls this Succession, and describes it as:
"Design changes are usually most efficiently implemented as a series of safe steps. Succession is the art of taking a single conceptual change, breaking it into safe steps, and then finding an order for those steps that optimizes safety, feedback, and efficiency."
I love that he calls it an "art" -- that feels exactly right to me. It doesn't happen by accident. You have to consciously work at it, talk out alternatives with your team, get some sort of product owner or manager involved to make sure the early value you're surfacing matters to customers. It's a creative act.
And now, let me say, in an angry Old Testament prophet voice: Beware the false incrementalism!
False incrementalism is breaking a large change up into a set of small steps, but where none of those steps generate any value on their own. E.g. you first write an entire new back end (but don't hook it up to anything), and then write an entire new front end (but don't launch it, because the back end doesn't have the legacy data yet), and then migrate all the legacy data. It's only after all of those steps are finished that you have anything of any value at all.
Fortunately, there's a very simple test to determine if you're falling prey to false incrementalism: if, after each increment, an Important Person were to ask your team to drop the project right at that moment, would the business have seen some value? That is the gold standard.
Going back to my running example: our existing analytics system supported a few thousand customers, and served something like a half dozen key reports. We made an early decision to: a) rewrite all the existing reports before writing new ones, and b) rewrite each report completely, push it through to production, migrate any existing data for that report, and switch all our customers over. And only then move on to the next report.
Here's how that completely saved us: 3 months into a rewrite which we had estimated would take 3-5 months, we had completely converted a single report. Because we had focused on getting all the way through to production, and on migrating all the old data, we had been forced to face up to how complex the overall process was going to be. We sat down, and produced a new estimate: it would take more like 8 months to finish everything up, and get fully off SQLServer.
At this point, Dan Dunn, who is a Truly Excellent Product Guy because he is unafraid to face a hard tradeoff, said, "I'd like to shift our priorities -- I want to build the Sexy New Reports now, and not wait until we're fully off SQLServer." We said, "Even if it makes the overall rewrite take longer, and we won't get off SQLServer this year, and we'll have to build that one new cluster we were hoping to avoid having to set up?" And he said "Yes." And we said, "Okay, then."
That's the kind of choice you want to offer the rest of your larger team: an economic tradeoff where they can choose among options for what they see, and when. You really, really don't want to say: we don't have anything yet, we're not sure when we will, your only choices are to keep waiting, or to cancel this project and kiss your sunk costs goodbye.
Side note: Dan made 100% the right call (see: Excellent). The Sexy New Reports were a huge, runaway hit. Getting them out sooner than later made a big economic impact on the business. Which was good, because the project dragged on past the one year mark before we could finally kill off SQLServer and fully retire the old system.
For you product dev flow geeks out there, one interesting piece of value we generated early was simply a better understanding of how long the project was going to take. I believe that is what Beck means by "feedback". It's real value to the business. If we hadn't pushed a single report all the way through, we would likely have had, 3-4 months in, a whole bunch of data (for all reports) in some partially built new system, and no better understanding of the full challenge of cutting even one report over. You can see the value the feedback gave us--it let Dan make a much better economic choice. I will make my once-per-blog-post pitch that you should go read Donald Reinertsen's Principles of Product Development Flow to learn more about how reducing uncertainty generates value for a business.
For the Unhappy Rewrite, they didn't work out a careful plan for this kind of incremental delivery. Some Totally Awesome Things would happen/be possible when they finished. But they kept on not finishing, and not finishing, and then discovering more ways that the various pieces they were building didn't quite fit together. In the Post-Mortem, someone summarized it as: "We somehow turned this into a Waterfall project, without ever meaning to."
But, I Have to Cut Over All at Once, Because the Data is Always Changing
One of the reasons people bail on incrementalism is that they realize that, to make it work, there's going to be an extended period where every update to a piece of data has to go to both systems (old and new). And that's going to be a major pain in the ass to engineer. People will think (and even say out loud), "We can't do that, it'll add a month to the project to insert a dual-write layer. It will slow us down too much."
Here's what I'm going to say: always insert that dual-write layer. Always. It's a minor, generally somewhat fixed cost that buys you an incredible amount of insurance. It allows you, as we did above, to gradually switch over from one system to another. It allows you to back out at any time if you discover major problems with the way the data was migrated (which you will, over and over again). It means your migration of data can take a week, and that's not a problem, because you don't have to freeze writes to both systems during that time. And, as a bonus, it surfaces a bunch of those weird situations where "other" systems are writing directly to your old database.
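A minimal sketch of what such a dual-write layer can look like. The store interfaces here are hypothetical, and a production version needs retries, metrics, and feature flags; the point is only the shape of the thing:

```python
class DualWriteStore:
    """Write every update to both stores; read from the old store (still
    the source of truth) until the migration has been verified."""

    def __init__(self, old_store, new_store, on_error=None):
        self.old = old_store
        self.new = new_store
        # Shadow-write failures get logged, never surfaced to the request.
        self.on_error = on_error or (lambda exc: None)

    def write(self, key, value):
        self.old.write(key, value)      # the old store stays authoritative
        try:
            self.new.write(key, value)  # best-effort shadow write
        except Exception as exc:
            self.on_error(exc)

    def read(self, key):
        return self.old.read(key)       # flip this only after verification
```

Because the old store remains authoritative, you can turn the shadow writes on and off, re-run the bulk migration, and back out at any point without data loss.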
Again, I'll quote Kent Beck, writing about how they do this at Facebook:
"We frequently migrate large amounts of data from one data store to another, to improve performance or reliability. These migrations are an example of succession, because there is no safe way to wave a wand and migrate the data in an instant. The succession we use is:
Convert data fetching and mutating to a DataType, an abstraction that hides where the data is stored.
Modify the DataType to begin writing the data to the new store as well as the old store.
Bulk migrate existing data.
Modify the DataType to read from both stores, checking that the same data is fetched and logging any differences.
When the results match closely enough, return data from the new store and eliminate the old store.
You could theoretically do this faster as a single step, but it would never work. There is just too much hidden coupling in our system. Something would go wrong with one of the steps, leading to a potentially disastrous situation of lost or corrupted data."
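Step 4 of that succession -- read from both stores, compare, log the differences -- might be sketched like this. The store interface is hypothetical, and this is emphatically not Facebook's actual DataType, just the idea:

```python
import logging

class MigratingDataType:
    """Fetch from both stores, log mismatches, keep returning the old
    store's answer until the new store has proven itself."""

    def __init__(self, old_store, new_store):
        self.old = old_store
        self.new = new_store
        self.mismatches = 0  # graph this; cut over when it stays at zero

    def fetch(self, key):
        old_val = self.old.read(key)
        try:
            new_val = self.new.read(key)
        except Exception:
            self.mismatches += 1          # new store erroring counts too
            return old_val
        if new_val != old_val:
            logging.warning("store mismatch for %r: old=%r new=%r",
                            key, old_val, new_val)
            self.mismatches += 1
        return old_val                    # old store still wins, for now
```

The mismatch counter is the whole point: it turns "is the new store right yet?" from a matter of opinion into a number you can watch go to zero.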
Abandoning the Project Should Always Be on the Table
If a 3-month rewrite is economically rational, but a 13-month one is a giant loss, you'll generate a lot of value by realizing which of those two you're actually facing. Unfortunately, the longer you soldier on, the harder it is for people to avoid the Fallacy of Sunk Costs. The solution: if you have any uncertainty about how long it's going to take, sequence your work to reduce that uncertainty right away, and give people some "finished" thing that will let them walk away. One month in, you can still say: we've decided to only rewrite the front end. Or: we're just going to insert an API layer for now. Or, even: this turned out to be a bad idea, we're walking away. Six months in, with no end in sight, that's incredibly hard to do (even if it's still the right choice, economically).
Some Specific Tactics
Shrink Ray FTW
This is an excellent idea, courtesy of Kellan Elliot-McCrea, CTO of Etsy. He describes it as follows:
"We have a pattern we call shrink ray. It's a graph of how much the old system is still in place. Most of these run as cron jobs that grep the codebase for a key signature. Sometimes usage is from wire monitoring of a component. Sometimes there are leaderboards. There is always a party when it goes to zero. A big party.
Gives a good sense of progress and scope, especially as the project is rolling, and a good historical record of how long this shit takes."
I've just started using Shrink Ray on a rewrite I'm tackling right now, and I will say: it's fairly awesome. Not only does it give you the wins above, but it also forces you to have an early discussion about what you are shrinking, and who in the business cares. If you make the right graph, Important People will be excited to see it moving down. This is crazy valuable.
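The counter behind a shrink-ray graph can be dead simple: walk the repo and count references to some signature of the old system. The signature and file extensions below are made up; pick whatever uniquely marks your legacy code:

```python
import os

LEGACY_SIGNATURE = "SqlServerHitStore"  # hypothetical marker for the old system

def shrink_ray_count(repo_root, signature=LEGACY_SIGNATURE,
                     exts=(".py", ".java")):
    """Count lines still referencing the old system. Run this from cron,
    chart it over time, and throw the party when it hits zero."""
    hits = 0
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        for name in filenames:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    hits += sum(1 for line in f if signature in line)
            except OSError:
                pass  # unreadable file; skip it
    return hits
```

Wire the output into whatever dashboard the Important People already look at, and the graph does the status reporting for you.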
Engineer The Living Hell Out Of Your Migration Scripts
It's very easy to think of the code that moves data from the old system to the new as a collection of one-off scripts. You write them quickly, don't comment them too carefully, don't write unit tests, etc. All of which are generally valid tradeoffs for code which you're only going to run once.
But, see above, you're going to run your migrations over and over to get them right. Plus, you're converting and summing up and copying over data, so you really, really want some unit tests to find any errors you can early on (because "data" is, to a first approximation, "a bunch of opaque numbers which don't mean anything to you, but which people will be super pissed off about if they're wrong"). And this thing is going to happen, where someone will accidentally hit ctrl-c, and kill your 36 hour migration at hour 34. Thus, taking the extra time to make the entire process strongly idempotent will pay off over and over (by strongly idempotent, I mean, e.g. you can restart after a failed partial run and it will pick up most of the existing work).
Basically, treat your migration code as a first class citizen. It will save you a lot of time in the long run.
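One sketch of what "strongly idempotent" can mean in practice: an append-only checkpoint file, so a re-run skips everything already copied. The names and record shapes are illustrative, not any particular system's API:

```python
import os

class FileCheckpoint:
    """Append-only progress log: cheap to write, trivial to reload."""

    def __init__(self, path):
        self.path = path

    def load(self):
        if not os.path.exists(self.path):
            return set()
        with open(self.path) as f:
            return {line.strip() for line in f}

    def mark(self, row_id):
        with open(self.path, "a") as f:
            f.write(f"{row_id}\n")

def migrate(rows, write_dest, checkpoint):
    """Skip anything already done, record progress as we go, so a ctrl-c
    at hour 34 costs you minutes on restart, not days."""
    done = checkpoint.load()
    migrated = 0
    for row in rows:
        if row["id"] in done:
            continue                 # safe to re-run: already copied
        write_dest(row)
        checkpoint.mark(row["id"])   # persist progress before moving on
        migrated += 1
    return migrated
```

Run it twice and the second pass is a no-op -- which is exactly the property you want when someone kills your 36-hour job at hour 34.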
If Your Data Doesn't Look Weird, You're Not Looking Hard Enough
What's best is if you can get yourself to think about the problem of building confidence in your data as a real, exciting engineering challenge. Put one of your very best devs to work attacking both the old and the new data, writing tools to analyze it all, discover interesting invariants and checksums.
A good rule of thumb for migrating and checksumming data: until you've found a half-dozen bizarre inconsistencies in the old data, you're not done. For the Analytics Rewrite, we created a page on our internal wiki called "Data Infelicities". It got to be really, really long.
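A sketch of the kind of checksum tooling that surfaces those infelicities: aggregate the same field per key in both systems and diff the totals. The field and key names here are invented:

```python
from collections import defaultdict

def checksum(rows, key, field):
    """Sum one field per key -- a cheap per-customer fingerprint."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[key]] += row[field]
    return dict(totals)

def infelicities(old_rows, new_rows, key="customer_id", field="hits"):
    """Every key whose totals disagree between the two systems;
    each entry is a line for the 'Data Infelicities' wiki page."""
    old_t = checksum(old_rows, key, field)
    new_t = checksum(new_rows, key, field)
    return {k: (old_t.get(k), new_t.get(k))
            for k in set(old_t) | set(new_t)
            if old_t.get(k) != new_t.get(k)}
```

An empty result doesn't prove the migration is right, but a non-empty one proves it's wrong -- and per the rule of thumb above, expect it to be non-empty for a good long while.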
With Great Incrementalism Comes Great Power
I want to wrap up by flipping this all around -- if you learn to approach your rewrites with this kind of ferocious, incremental discipline, you can tackle incredibly hard problems without fear. Which is a tremendous capability to offer your business. You can gradually rewrite that unbelievably horky system that the whole company depends on. You can move huge chunks of data to new data stores. You can pick up messy, half-functional open source projects and gradually build new products around them.
It's a great feeling.
What's your take? Care to share any lessons learned from an epic rewrite?
Ben Yoskovitz is the co-author of Lean Analytics, a new book on how to use analytics successfully in your business. Ben is currently VP Product at GoInstant, which was acquired by Salesforce in 2012. He blogs regularly at Instigator Blog and can be followed @byosko.
We all know metrics are important. They help report progress and guide our decision making. Used properly, metrics can provide key insights into our businesses that make the difference between success and failure. But as our capacity to track everything increases, and the tools to do so become easier and more prevalent, the question remains: what is a worthwhile metric to track?
Before you can really figure that out it's important to understand the basics of metrics. There are in fact good numbers and bad numbers. There are numbers that don't help and numbers that might save the day.
First, here's how we define analytics: Analytics is the measurement of movement towards your business goals.
The two key concepts are "movement" and "business goals". Analytics isn't about reporting for the sake of reporting, it's about tracking progress. And not just aimless progress, but progress towards something you're trying to accomplish. If you don't know where you're going, metrics aren't going to be particularly helpful.
With that definition in mind, here's how we define a "good metric".
A good metric is:
Comparative
Understandable
A ratio or a rate
Something that changes the way you behave
A good metric is comparative. Being able to compare a metric across time periods, groups of users, or competitors helps you understand which way things are moving. For example, "Increased conversion by 10% from last week" is more meaningful than "We're at 2% conversion." Using comparative metrics speaks clearly to our definition of "movement towards business goals".
A good metric is understandable. Take the numbers you're tracking now--the ones you think are the most important--and show those to outsiders. If they don't instantly understand your business and what you're trying to do, then the numbers you're tracking are probably too complex. And internally, if people can't remember the numbers you're focused on and discuss them effectively, it becomes much harder to turn a change in the data into a change in the culture. Try fitting your key metrics on a single TV screen (and don’t cheat with a super small font either!)
A good metric is a ratio or a rate. Ratios and rates are inherently comparative. For example, if you compare a daily metric to the same metric over a month, you'll see whether you're looking at a sudden spike or a long-term trend. Ratios and rates (unlike absolute numbers) give you a more realistic "health check" for your business and as a result they're easier to act on. This speaks to our definition above about "business goals"--ratios and rates help you understand if you're heading towards those goals or away from them.
A good metric changes the way you behave. This is by far the most important criterion for a metric: what will you do differently based on changes in the number? If you don't know, it's a bad metric. This doesn't mean you don't track it--we generally suggest that you track everything but only focus on one thing at a time because you never know when a metric you're tracking becomes useful. But when looking at the key numbers you're focused on today, ask yourself if you really know what you'd do if those numbers go up, down or stay the same. If you don't, put those metrics aside and look for better ones to track right now.
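As a tiny illustration of the "ratio or rate" point above, here's the arithmetic in code (the numbers are invented):

```python
def conversion_rate(signups, visitors):
    """The rate itself: comparable across weeks of very different traffic."""
    return signups / visitors

def week_over_week_change(this_week, last_week):
    """Relative change in a rate -- the comparative number you'd actually
    report ('conversion up 10% from last week'), not the raw count."""
    return (this_week - last_week) / last_week
```

"We got 22 signups this week" is an absolute number; "conversion went from 2.0% to 2.2%, a 10% improvement" tells you which way you're moving, regardless of how traffic fluctuated.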
Now that we've defined a "good" metric let's look at five things you should keep in mind when choosing the right metrics to track:
Qualitative versus quantitative metrics
Vanity versus actionable metrics
Exploratory versus reporting metrics
Leading versus lagging metrics
Correlated versus causal metrics
1. Qualitative versus Quantitative metrics
Quantitative data is easy to understand. It's the numbers we track and measure--for example, sports scores and movie ratings. As soon as something is ranked, counted, or put on a scale, it's quantified. Quantitative data is nice and scientific, and (assuming you do the math right) you can aggregate it, extrapolate it, and put it into a spreadsheet. Quantitative data doesn't lie, although it can certainly be misinterpreted. It's also not enough for starting a business. To start something, to genuinely find a problem worth solving, you need qualitative input.
Qualitative data is messy, subjective, and imprecise. It's the stuff of interviews and debates. It's hard to quantify. You can't measure qualitative data easily. If quantitative data answers "what" and "how much," qualitative data answers "why." Quantitative data abhors emotion; qualitative data marinates in it.
When you first get started with an idea, assuming you're following the core principles around Lean Startup, you'll be looking for qualitative data through problem interviews. You're speaking to people--specifically, to people you think are potential customers in the right target market. You're exploring. You're getting out of the building.
Collecting good qualitative data takes preparation. You need to ask specific questions without leading potential customers or skewing their answers. You have to avoid letting your enthusiasm and reality distortion rub off on your interview subjects. Unprepared interviews yield misleading or meaningless results. We cover how to interview people in Lean Analytics, but many others have done so as well. Ash Maurya’s book Running Lean provides a great, prescriptive approach to interviewing. I also recommend Laura Klein’s writing on the subject.
Sidebar: In writing Lean Analytics, we proposed the idea of scoring problem interviews. The basic concept is to take the qualitative data you collect during interviews and codify it enough to give you (hopefully!) new insight into the results. The goal of scoring problem interviews is to reduce your own bias and ensure a healthy dose of intellectual honesty in your efforts. Not everyone agrees with the approach, but I hope you'll take a look and try it out for yourself.
2. Vanity versus Actionable metrics
I won't spend a lot of time on vanity metrics, because I think most people reading OnStartups understand these. As mentioned above, if you have a piece of data that can't be acted upon (you don't know how movement in the metric will change your behavior) then it's a vanity metric and you should ignore it.
It is important to note that actionable metrics don't automatically hold the answers. They're not magic. They give you an indication that something fundamental and important is going on, and identify areas where you should focus, but they don't provide the answers. For example, if "percent of active users" drops, what do you do? Well, it's a good indication that something is wrong, but you'll have to dig further into your business to figure it out. Actionable metrics are often the starting point for this type of exploration and problem solving.
3. Exploratory versus Reporting metrics
Reporting metrics are straightforward--they report on what's going on in your startup. We think of these as "accounting metrics", for example, "How many widgets did we sell today?" Or, "Did the green or the red widget sell more?" Reporting metrics can be the results of experiments (and therefore actionable), but they don't necessarily lead to those "eureka!" moments that can change your business forever.
Exploratory metrics are those you go looking for. You're sifting through data looking for threads of information that are worth pursuing. You're exploring in order to generate ideas to experiment on. This fits what Steve Blank says a startup should spend its time doing: searching for a scalable, repeatable business model.
A great example of using exploratory metrics is from Mike Greenfield, co-founder of Circle of Moms. Originally, Circle of Moms was Circle of Friends (think: Google Circles inside Facebook). Circle of Friends grew very quickly in 2007-2008 to 10 million users, thanks in part to Facebook's open platform. But there was a problem--user engagement was terrible. Circle of Friends had great virality and tons of users, but not enough people were really using the product.
So Mike went digging.
And what Mike found was incredible. It turns out that moms, by every imaginable metric, were insanely engaged compared to everyone else. Their messages were longer, they invited more people, they attached more photos, and so on. So Mike and his team pivoted from Circle of Friends to Circle of Moms. They essentially abandoned millions of users to focus on a group of users that were actually engaged and getting value from their product. From the outside looking in this might have been surprising or confusing. You might find yourself at a decision point like Mike and worry about what investors will think, or other external influencers. But if you find a key insight in your data that’s incredibly compelling, you owe it to yourself to act on it, even if it looks crazy from the outside. For Mike and Circle of Moms, it was the right decision. The company grew their user base back up to 4 million users and eventually sold to Sugar Inc.
4. Leading versus Lagging metrics
Leading and lagging metrics are both useful, but they serve different purposes. Most startups start by measuring lagging metrics (or "lagging indicators") because they don't have enough data to do anything else. And that's OK. But it's important to recognize that a lagging metric is reporting the past; by the time you know what the number is, whatever you’re tracking has already happened. A great example of this is churn. Churn tells you what percentage of customers (or users) abandon your service over time. But once a customer has churned out they're not likely to come back. Measuring churn is important, and if it's too high, you'll absolutely want to address the issue and try to fix your leaky bucket, but it lags behind reality.
A leading metric, on the other hand, tries to predict the future. It gives you an indication of what is likely to happen, and as a result you can act on a leading metric more quickly to change outcomes going forward. For example, customer complaints are often a leading indicator of churn. If complaints are going up, you can expect that customers will abandon the product and churn will also go up. But instead of responding to something that's already happened, you can dive into the complaints immediately, figure out what's going on, resolve the issues and hopefully minimize the future impact on churn.
Ultimately, you need to decide whether the thing you're tracking helps you make better decisions sooner. Remember: a real metric has to be actionable. Lagging and leading metrics can both be actionable, but leading indicators show you what will happen, reducing your cycle time and making you leaner.
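To make the distinction concrete, here's a minimal Python sketch contrasting churn (lagging) with a complaint rate that can serve as a leading indicator. The numbers are hypothetical, purely for illustration -- the post doesn't give any:

```python
# A lagging metric (monthly churn) vs. a leading one (complaint rate).
# All figures below are made up for illustration.

def churn_rate(customers_at_start, customers_lost):
    """Lagging: fraction of customers who left during the period.
    By the time you compute this, the customers are already gone."""
    return customers_lost / customers_at_start

def complaint_rate(complaints, active_customers):
    """Leading: complaints per active customer. A rise here often
    precedes a rise in churn, so you can act before customers leave."""
    return complaints / active_customers

# January: 1,000 customers, 50 leave, 80 complaints logged.
jan_churn = churn_rate(1000, 50)           # 0.05 -> 5% churn (already happened)
jan_complaints = complaint_rate(80, 1000)  # 0.08 -> the dial to watch instead
```

The churn number tells you what you already lost; the complaint rate is the dial to watch if you want to intervene before next month's churn shows up.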
5. Correlated versus Causal metrics
A correlation is a seeming relationship between two metrics that change together, but are often changing as a result of something else. Take ice cream consumption and drowning. If you plotted these over a year, you'd see that they're correlated--they both go up and down at the same time. The more ice cream that's consumed, the more people drown. But no one would suggest that we reduce ice cream consumption as a way of preventing drowning deaths. That's because the numbers are correlated, and not causal. One isn't affecting the other. The factor that affects them both is actually the time of year--when it's summer, people eat more ice cream and they also drown more.
Finding a correlation between two metrics is a good thing. Correlations can help you predict what will happen. But finding the cause of something means you can change it. Usually, causation isn't a simple one-to-one relationship--there are lots of factors at play--but even a degree of causality is valuable.
You prove causality by finding a correlation, then running experiments where you control the other variables and measure the difference. It's hard to do, but causality is really an analytics superpower--it gives you the power to hack the future.
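The ice cream/drowning example can be shown in a few lines of Python. The data below is made up purely for illustration: both series are driven by temperature (the confounder), so they correlate perfectly even though neither causes the other:

```python
import statistics

# Hypothetical monthly data: both series are linear functions of
# temperature (the shared seasonal factor), not of each other.
temperature = [2, 4, 9, 14, 19, 24, 27, 26, 21, 14, 8, 3]  # avg deg C
ice_cream = [t * 10 + 50 for t in temperature]             # units sold
drownings = [t * 2 + 5 for t in temperature]               # incidents

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Near-perfect correlation -- yet neither causes the other; the season does.
r = pearson(ice_cream, drownings)
```

A correlation this strong would still tell you nothing about what to change; only a controlled experiment (holding temperature fixed, say) would.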
So what metrics are you tracking?
We’ve covered some fundamentals about analytics and picking good metrics. It's not the whole story (to learn more see our presentations and workshops on Lean Analytics), but I'd encourage you to take a look at what you're tracking and see if the numbers you care the most about meet the criteria defined in this post. Are the metrics ratios/rates? Are they actionable? Are you looking at leading or lagging metrics? Have you identified any correlations? Could you experiment your way to discovering causality?
And remember: analytics is about measuring progress towards goals. It's not about endless reports. It's not about numbers that go constantly "up and to the right" to impress the press, investors or anyone else. Good analytics is about speeding up and making better decisions, and developing key insights that become cornerstones of your startup.
If someone had told me a few months ago that I'd be spending more hours in PowerPoint than PyCharm (an IDE for programming in Python) I'd have laughed at them (not out loud though). Sure, I've been known to create some slides — and I do some occasional public speaking, but I don't usually spend crazy amounts of time on a slide deck.
Except this time. I've now clocked well over 200 hours on a single deck (including thinking/discussion time). It's the HubSpot Culture Code deck (available for your viewing pleasure below, or http://CultureDeck.com).
I've been reading, thinking and talking a lot about culture lately. A couple of years ago, I started a simple document for use within my startup, HubSpot, that talked a bit about culture. The document described the “people patterns” of HubSpot — what kinds of people were likely to do well at the company. Said differently, if I were to write a grading algorithm to predict the likelihood of success of a given employee, what would the parameters of that function be? We identified things like being humble and analytical (2 of the 7 things). That document turned out to be relatively useful — and well worth the time. We've used it during the interview process, and we use it during reviews.
I continued to get feedback from the HubSpot team that the original culture deck at HubSpot was starting to get a little dated — and it didn't go far enough. It talked about the kind of people that were a match — but it didn't talk at all about beliefs or behaviors. Meanwhile, the company is growing like wildfire. We're 460 people now and adding 25+ people every month.
So, I thought to myself: “Self, maybe it's time to update the deck…” I set out on a quest to talk to a bunch of folks, run some surveys, get some feedback, read a bunch of stuff, etc. One thing led to another…and another…and another, and here I am.
If one needs 200 hours and 150+ slides on culture, is something wrong?
Maybe. But this is likely more a function of my neuroses than a reflection on HubSpot. And, all things said and done, I don't really regret having spent the time. The result, I think, is really good. People I trust to tell me the truth, and whom I respect immensely, have told me the deck is good. It's not at the level of the Netflix culture deck, but it's not terrible.
Speaking of the Netflix culture deck, you've read it right? Right? If you have time to read only one 100+ slide deck about company culture, you should read the Netflix culture deck (convenient URL: Netflix.CultureDeck.com). If you have time to read two 100+ slide decks on company culture — read the Netflix deck twice. It's that good.
Why I'm thankful to Jason Fried and 37signals
Jason is a brilliant thinker and a brilliant writer. He's got some great posts on culture, like “You Don't Create a Culture”. Which is why I was a little worried when I sent him a preview (private beta) of the deck I was working on to get his reaction. I was fearful. My thought was “He's going to think I'm an idiot. Or worse, clueless.” Turns out, he was gracious. He acknowledged that 37signals and HubSpot are different companies, pursuing different paths. I could have been brave and dug into his comments a bit more, but I decided not to push my luck because it would have been somewhat crushing.
I also enjoyed “When Culture Turns Into Policy” by Mig Reyes of 37signals. He's right. But I feel like I'm on the correct side of truth and justice on this particular front.
The more sobering article was “What Your Culture Really Says” (not by 37signals, but by @shanley — someone I don't know). Well written and biting in its criticism of what she calls “Silicon Valley Culture”, it was something I read a couple of times and circulated around to a few folks on my team. I recommend it. It's dark, but worth reading.
Why I think it was worth it, and why I'd do it again.
1. Culture is super-duper important, and it's worth spending time on. Check out my recent post “Culture Code: Creating a Company YOU Love”. I think it makes a pretty good case.
2. Already, the deck is being used internally within HubSpot. I've gotten both physical high fives and virtual high fives from people on the team. That makes me happy.
3. Even before posting the HubSpot culture code deck to public beta today, I had already started sending it to people that I was trying to recruit to HubSpot. Though ideally, I'd get to meet everyone and tell them about our culture code in person, that's just not possible.
4. Going through the exercise was one of the most challenging and revealing things I've ever done since starting the company 6+ years ago.
5. Working on our culture code project caused me to talk to a bunch of people that I didn't otherwise know and would probably not have been able to connect with. People like Patty McCord (co-author of the Netflix culture deck).
6. It's been therapeutic. Now, if HubSpot ends up going down in crashing, burning flames (which is totally not the plan) — at least I'll know that we tried to design and defend our culture.
7. We're about 4 hours into the public beta release of the HubSpot culture code deck. It's already gotten 16,000 hits and is going strong. This is gratifying. My hope is that a few of those people found the deck useful. (And maybe a few of them will join our merry band of misfits at HubSpot someday).
If you're getting started, spend 20 hours, not 200.
One of the common questions I get from my startup friends is how much time they should be spending on culture — given everything else going on (like you know, building a business). I'm not sure what the optimal number is — but I can say with confidence that the number is not zero. I'd suggest 20 hours. Just enough time to think about it, talk to your team, read some stuff and describe it. You don't need to put posters up on the wall. Just something — even if it's a one-pager that captures your current thinking on the kind of company you want to be.
Quick hint: You want to build a company that you love working for. The rest will work itself out.
What do you think? Have you scanned through the deck? Was it useful? Lame? Interesting? Would love to hear your thoughts. I think of the deck as being in “public beta”, so I'll be iterating on it and updating it regularly.
This is a guest post by Alex Turnbull. Alex is a serial SaaS entrepreneur and the CEO of Groove, a customer support software platform for startups and small businesses. Alex was previously a co-founder of Bantam Live, acquired by Constant Contact in 2011.
After many, many months of long hours, take-out and cheap beer, launch day is finally here.
Your eyes are sore from not having looked up from your computer in what seems like ages, and every part of your body is screaming at you to get some sleep, but you’re too hopped up on coffee and adrenaline to listen.
This is it. This is what we’ve been working our asses off for. To reveal ourselves to the world in all of our disruptive glory. Silicon Valley will kneel before us.
It’s like the slow, painstaking ride to the top of the first drop on a roller coaster; you just know it’s going to be absolutely exhilarating, but first you have to trudge all the way to the peak of a steep climb. Tired of waiting but itching with anticipation, you finally reach the top, and then…
Not a damn thing.
Scoble isn’t billing you as the next Instagram. You’re not showing up on Techmeme with a dozen stories about your launch. And the traffic. That sweet, traction-building traffic that you’ve been awaiting — the traffic that was going to prove that people were interested. That they wanted you. It never comes.
Who’s to blame for all of this?
That’s easy. TechCrunch. Those bastards.
If only they had read your press release, they would’ve seen that your story needs to be told! Your product is unique and compelling, dammit! How could they do this to you? How could they crush your dreams of a successful launch by totally ignoring your pitch?
Of course, you’re a startup. Bouncing back is in your DNA, and you get right back to work. But the experience is discouraging, and I've seen this story play out way too many times with friends and founders I’ve spoken to. And know that I’m speaking from experience: I've absolutely made this mistake before, too.
Here’s the reality: pitching TechCrunch is not a launch strategy.
It seems obvious, but it takes more than one hand for me to count the number of times a founder has told me that they expect their launch traction to come from getting picked up by TC (or Mashable, or VentureBeat, or AllThingsD, or any one of a number of similar outlets).
What every single hopeful founder with a similar plan doesn't realize (or doesn't take seriously enough) is that there are hundreds of other founders doing the exact same thing, and hitting the exact same “Tips” email account with their pitches.
Don’t get me wrong, here. Press is good, startup bloggers tell important stories and press outreach should be a part of your launch strategy. But it’s not enough.
So what’s a startup to do?
Let’s get this out of the way: a lot of folks will tell you that the first thing you should be focused on is building a great product that improves people’s lives. And they’re absolutely right. Nobody wants to hear about a crappy product, and more importantly, nobody wants to share your crappy product with their friends.
But let’s assume you've got something amazing. How do you get the world to notice?
First of all, shift your thinking. F*ck the world. It’s “tell everyone” approaches like this that lead to launch strategies like the one above. You don’t need the world to notice. You need highly qualified potential users to notice, and there’s a huge difference.
At Groove, we spent twelve months in beta, rigorously testing and iterating our HelpDesk and LiveChat apps to get them ready to launch.
But here’s something else we did, that you can do, too: we spent that time rabidly collecting email addresses of potential users. We asked our most engaged beta users to share our website (and lead collection portal) with their networks, we blogged about topics that were interesting to a customer support audience, and we wrote content for external outlets that brought value to readers, and loads of inbound leads to us.
When launch day came, we were ready: press release, pitch list, product video, blog post, email blast, the works. Here’s how it played out:
We pitched our press list.
The good people at TheNextWeb covered our beta launch a year ago, so they were interested in how far we'd come. They wrote a great piece about us, and the inbound traffic got us a few hundred signups. It was awesome.
Like everyone else, we also wanted to get Crunched. Or Mashed. Or Beaten.
But what hurt even more is that, like almost everyone else, we didn't get covered by any of them.
I have no doubt that a barrage of press coverage would've gotten us even more new users, but we knew that the odds were against us, so we planned for it.
Taking our carefully nurtured list of email addresses, we sent out an announcement about our launch, with clear calls to action to sign up and get in on the fun.
Double the signups, at nearly four times the conversion rate of visitors coming from the TNW piece.
Note that we didn't email this list cold: we had spent months giving away content for free, nurturing the relationships, before asking for anything. I can’t stress the importance of this enough.
We also sent an email out to beta users, announcing the launch and asking them to share Groove with friends who might find it useful. That email netted us another 120 users, at a conversion rate nearly double that of the TNW traffic.
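If you want to run the same comparison for your own launch, the arithmetic is simple. The numbers below are hypothetical (Groove didn't publish exact figures), but they show why conversion rate, not raw traffic, is the metric to compare across channels:

```python
# Hypothetical launch-day numbers per channel -- not Groove's actual figures.

def conversion_rate(signups, visitors):
    """Fraction of visitors from a channel who signed up."""
    return signups / visitors

channels = {
    "press (TNW)":   {"visitors": 10_000, "signups": 300},
    "nurtured list": {"visitors": 5_000,  "signups": 600},
}

rates = {name: conversion_rate(c["signups"], c["visitors"])
         for name, c in channels.items()}
# press converts at 3%; the nurtured email list at 12% --
# fewer visitors, but each one is far more likely to become a user.
```

The press channel can win on raw visitors and still lose badly on the number that matters: how many of those visitors become users.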
It shouldn't be surprising that the most valuable traffic we got came from qualified leads we had already nurtured. But the problem is that most startups won’t make the effort to build that audience until after launch. I know, because as I've mentioned, I've made that mistake, too.
Look, I know that as an early-stage team, the chances that you have a full-time content person are nonexistent. But the chances that someone on your team has a modicum of writing chops are pretty damn good, and getting them to invest a couple of hours a week in this exercise can pay off in spades when the time comes.
At a loss for what to write about? Every startup should know how their customers think, and knowing what's interesting to them is a major part of that. It's absolutely okay to ask your customers what they'd like to read about from you. Email them, survey them, chat with them. They'll appreciate it. Trust me.
In the meantime, here are a few ideas:
- Write about your startup experiences - be honest and transparent (check out Balsamiq-founder Peldi’s blog, where he captures this masterfully)
- Stir the pot. Share your thoughts on controversial topics with your audience.
- Offer best practices for your space.
- You’re probably an expert in whatever it is that you do — share your knowledge.
- Everyone likes a success story. Or one about failure. Tell yours.
- Show off case studies and interviews with your customers. This clues your audience in to what others using your product are doing well, and makes the featured customers feel good about themselves (and their relationship with your company).
Summary: Getting Crunched is not a launch strategy, and you shouldn't bet on it to make your startup blow up. Reach out to the press, but diversify your launch plan to reach qualified leads that you've already been nurturing. Invest in content. Profit. The end.
This article is available at http://CultureCode.com -- the slides and content will be updated periodically. I'm working on a really big project on the topic of culture. Follow me on twitter (@dharmesh) to get an update on March 20th when it comes out in public beta.
This article represents the notes and slides related to a talk I'm about to give (in less than 60 minutes) at the #LeanStartup event at #SxSW 2013.
Here are my notes on the talk. Note: I'm writing these roughly 90 minutes before I go on stage, so they're a bit rough.
1. Posted the historical recurring revenue numbers of HubSpot. Rationale: Transparency is one of our core cultural values at HubSpot, so every year we post our financial deck with details.
2. Entrepreneurs don't spend many calories thinking about or working on culture. There are several common reasons for this:
a) Culture? We don't need no stinkin' culture! We're putting a dent in the universe. That's our f*!#ing culture!
b) Culture? Relax. We got this one covered. We have free beer and a ping pong table.
c) Culture? You can't really create that. It has to be built organically. It just comes from the behaviors and example of the founders.
All of those are reasonable positions to take. They're misguided, but they're reasonable.
a) Most of the startups that did end up putting a dent in the universe didn't really know that they were going to succeed at it. And, one of the few common characteristics of super-successful companies is that they have a distinct culture. Google. Facebook. Zappos. Netflix. The list goes on and on.
b) Maybe you can't create a culture -- but you can certainly destroy it through neglect. The 2nd Law of Thermodynamics applies here. Left alone, most things degrade to crap. In the early days, it's OK to rely on the behavior of the founders and early team to set the culture. That works great. The problem with this model is that as you start to grow, there's a fair amount lost in translation.
3. Convention over Configuration. Yes, you could just let people make decisions organically based on their best interpretation of whatever they think the right model/framework is. But, I generally favor convention over configuration. Why not just have a convention (i.e. culture) that takes care of the large body of easy decisions and makes the small body of hard decisions easier? The result is more efficient and more consistent decision-making.
product : marketing :: culture : recruiting
product is to marketing as culture is to recruiting. Yes, you might be able to do amazing marketing -- but it's not going to matter if the product isn't amazing. It's a tough slog. Similarly, if you're looking to recruit amazing people (who isn't), you're going to need a great culture. The kind of culture that will appeal to the right kinds of people and get them to self-select.
5. The interest on culture debt is really high.
You've heard about technology debt. That's when you take short-cuts today, because you *need* to get something out the door. You willingly take these short-cuts, because time is super-valuable (just like cash is valuable when you take on financial debt). But, you understand that there will be a time to pay off that debt. And, the debt carries an interest rate. Culture debt is when you take a short-cut -- hire someone now because they have the skills you need and you're *hurting* for people -- but they're not a good culture fit. You lower the "culture bar". You might do this for logical reasons. For the same reason you might incur technology debt or financial debt.
I'm going to posit that the effective interest rate on the culture debt you take on is often higher than that of technology debt. That is, when it comes time to pay off the debt, a lot of damage has been done. There are a couple of reasons for this: 1) When you incur technology debt (like not adding sharding to your database), you generally will start feeling pain at some point, and you'll then decide to pay off that debt. It's a *known* problem, and when you solve it, you'll sort of know you did. That's not the case with culture debt. Culture debt is insidious. It creeps in slowly. It's hard to measure. 2) Technology debt is often "forgiven". This happens when a short-cut you took ends up not being a bad thing anyway. An example might be that you hacked together an MVF (minimum viable feature) for something in the app. The code is crap. You're not proud of it. Then later, you decide to abandon that particular feature. Guess what: your tech debt on that feature was just forgiven. That almost never happens with culture debt. If you bring on people that aren't a fit, they'll infect other parts of the organization, and it will be really hard to get back to where you want to be.
6. Create the culture you want, not the one you think you should have.
There's a lot of content out there regarding "winning" startup cultures. Some will advocate for an open/transparent culture. Some for a design-focused culture. Some for a service and customer-centric culture. Fact is, any of these will likely work. The key is to understand what it is that defines your culture (and importantly, what makes it different from other companies) -- and to build alignment around that culture. And, in order for the culture to survive long-term, you need to love it. You need to believe in it. If you simply try to tweak the culture based on what you think the right answer is, you'll lose steam and lose conviction. Game over.
Summary: You can nudge your culture. It's worth it. You're going to have a culture anyways -- might as well build one you want.
The following are some hypothetical classes that I'm thankful they don't teach at places like Y Combinator, TechStars and 500 Startups.
11 Classes They Shouldn't Teach Founders
1. Dress To Impress VCs: The Art of Wearing A Tie
2. Click, Drag, Extrapolate: How to Use Excel For Startup Financial Projections
3. How to Win Friends and Influence People by Writing a Business Plan.
4. My Parking Spot: A Founder's Guide To Executive Benefits
5. The Care and Feeding Of a Tradeshow Booth Babe(*)
6. How To Design Software Systems For Infinite Scale on Day Zero
7. You Win, They Lose: Brass-Knuckled Tactics To Use Against Your Team
8. Ego Marketing: How To Buy A Superbowl Ad
10. How To Be a Patent Troll For Fun and Profit
11. Selling On Stage: Hawking Your Wares To An Unsuspecting Conference Audience
* For the record, I completely detest the whole idea of a booth babe. Reprehensible.
What are some of the classes you're thankful they don't teach? Please share in the comments.
In a few weeks, I'm going to write a $25,000 check to invest in a company that currently does not exist. There is no company. There's no team. And I have no idea what the company will do or hopes to do. I'm investing almost completely blind. More on this craziness a little later in this article.
To understand why I would do something so crazy, let me first catch you up a bit on my angel investment history and “strategy” (and I use the word strategy very loosely). It's not your typical story.
I first started angel investing while I was a graduate student at MIT. I had recently sold my last company, made some money and went back to graduate school to figure out what I wanted to do next. I had promised my wife it wouldn't be another startup (startups are hard) so my plan was to do angel investing. It was a way for me to scratch my entrepreneurial itch by vicariously living through other entrepreneurs. Lots of fun, and almost no pain. Seemed like a great idea. And it was.
The first entrepreneur I invested in (not counting myself) was Brian Shin — his company was Visible Measures. He was a classmate of mine in “New Enterprises” at MIT. Brian was literally one of the smartest people I met during my time at MIT. And, he could hustle like nobody's business. So, I invested $50,000 despite not really knowing Brian, not really liking the original idea (they have since pivoted), and not really knowing what the heck I was doing. It turns out, to be an angel investor there is only one requirement: you have to be accredited (i.e. have the money to be able to afford the risk). You don't have to go to angel investment school, take any tests or otherwise prove your mettle. You just need cash and a willingness to write checks.
I continued making investments all through graduate school and then post-graduation, as I was building my own startup, HubSpot. I've now made 35+ investments. You can see most of them on my AngelList profile. What makes my approach unconventional is that I have a few “rules”:
All of the rules are based on one simple constraint: I have no time. I have no time to spend on/with startups because I'm maniacally committed and focused on my own company (HubSpot), which is doing very well. That's where all available time goes. If I didn't have these rules in place, I wouldn't be able to angel invest at all.
So here are my rules:
1. No due diligence. Seriously, almost none. In over half the deals I've done, I've never met the entrepreneurs or talked to them on the phone. Generally just exchanged an email or two. My rationale here is two-fold: I'm optimizing for my time (my biggest constraint) not magnitude of outcome. Also, I think at the very early stages, most diligence that typical investors spend time on is “undue”. There's just not that much that's knowable. Either you like the people, or you don't. You like the idea, or you don't (which is irrelevant, because the idea's likely going to change anyways).
2. No follow-on investments. This one's controversial. Many would argue that it's economically stupid for me not to “double down” on the deals where I have a right to maintain my pro-rata investment. They might be right (but I don't think they are). The reason I don't do follow-ons is that it requires spending time (which I don't have), and if I followed on selectively, I might create a signaling problem for the entrepreneurs in the deals I passed on. By unilaterally not doing any follow-on investments, all signaling issues go away. This has worked brilliantly for me so far. I take the money that I would have invested in deals I'm already in, and just channel it to new startups. In the grand scheme of things, I think this works out well for everyone.
3. No advisory board positions or official involvement. Once again, this goes back to the lack of time. I don't have time to commit, so I don't commit it. Occasionally, I'll make an email introduction, or see entrepreneurs in my portfolio for a nice dinner — but other than that, they almost never see me or hear from me.
Overall, my unconventional approach seems to be working OK. I'd put my angel investment portfolio up against that of any early-stage investor (angel or VC). When all is said and done, I'm going to make a fair amount of money. If you don't think so, just check out my portfolio.
And, to those that might criticize my unconventional approach and classify me as "part of the problem" (the problem being, the "Series A Crunch"), I have a simple response/position: There's no such thing as too many companies starting up. But, there is such a thing as not enough companies shutting down...but that's a different problem.
Important Note: If you are seeking angel investment, just about all of my investments these days are through AngelList (Disclosure: I'm not just a member of AngelList, I'm also an investor). And, I focus exclusively on Internet/software companies.
So, back to my crazy $25,000 investment. A few weeks ago, I heard about the upcoming LAUNCH Festival hosted by Jason Calacanis. Jason sent an email out announcing that as part of LAUNCH, he was putting together the best hackathon in history. Jason was going to angel invest $25,000 into the winning team. When I saw that email, I thought “that's a pretty good idea, and I've done stupider things”. So, I volunteered to match Jason's $25k with $25k of my own. Secretly, I'm a major, major believer in hackepreneurs. If I can buy into someone that manages to get into the LAUNCH hackathon and then wins -- I think it's a pretty good bet.
Hope you get a chance to attend LAUNCH. It promises to be an amazing event. And, if you're the hackepreneur type, hope you'll participate in the hackathon and take my money.
Anyone can have a killer startup idea, but in order to make that idea succeed you’ll need an unbeatable team. Crafting the perfect team is an art -- one we're constantly trying to refine at my startup, Boundless.
We’ve found that a structured process yields the best new hires. This starts with first understanding the skills we need to fill. But we don’t just try to fit anyone with the right experience into a role - we go further and search for the right personality for the position as well. Throughout the entire hiring process, we’re constantly looking for signs of the four most important startup personalities: The Beast, Lara Croft, The Architect, and The Most Interesting Man in the World.
Our initial process is probably quite similar to many other startups. First, Boundless job candidates need to have a presence online. If we can’t find you online, you don’t exist, which means we’re not going to start the interview process. Next, candidates go through a phone screen to determine basic experience and qualifications. Those that survive the phone call visit with multiple team members on-site, where they’re assessed on skill and personality.
However, the final step is a little different. Before securing a job at Boundless, everyone gives a 20-minute presentation on their personal or professional passion. We like to give the entire team a chance to see the candidate, and give the candidate an opportunity to impress the team with anything they want. We’ve seen people present on Tai Chi, cupcakes, coffee, how to build an art collection on a budget - all kinds of interesting, quirky and funny topics. And, of course, by this point in the process we have a strong idea of the type of person the candidate is.
The Four Critical Startup Personality Types
The Beast, Lara Croft, The Architect, and The Most Interesting Man in the World. When filling a role at your startup, you need to find a candidate that embodies characteristics from each of these personalities if you are going to create a culture that changes the world. I firmly believe that a large part of my company’s success is driven by employees with characteristics strongly matching these personalities.
Here’s how to identify these four startup personalities:
The startup Beast, modeled after the X-men character, possesses a “get shit done” mentality. A Beast’s raw animal output ensures they get more done in a day than even the most caffeinated worker bee. These people strive to be the very best in their profession, and doing more than seems humanly possible helps them get there. Look for people with high levels of productivity at their last positions and ridiculous amounts of drive and energy.
When hiring, look for adventurers with an entrepreneurial spirit. These Lara Croft types create goals and projects for themselves to advance the company's values or goals. People who are self-starters, who are self-motivated, and who have built things on their own time to scratch their own itch are Crofts. Their adventurous minds dream big to help inspire the team.
The Architect
The Architect, inspired by the character from The Matrix, understands the big picture and can still focus on the details. These are the people who have a productivity hack for nearly every aspect of their lives. Being productive and organized with the details helps The Architect keep the big picture in mind. You can spot Architects as people who have taken pride in a craft or know the intricate details of their previous position, and who can clearly articulate the high-level strategy.
The Most Interesting Man in the World
At any fast-growing startup, you’ll spend a lot of time collaborating and hanging out with your colleagues. To make your office lunches or happy hours more enjoyable for all involved, hire people with character and charm for your team. The Most Interesting Man in the World, seen in the Dos Equis commercials, adds depth to your company culture. And in tough times, the Interesting Man (or Woman) is the person you want fighting on your team, the one who keeps you going when things get hard. Don’t just look for goofballs -- find people who have overcome difficult challenges and kept a positive attitude.
By hiring based on these four personalities, Boundless has built a team that not only has the capacity to build the best learning platform possible, but a team that continues to attract other top-notch people to share the journey with us.
We recently had the pleasure of welcoming Healy Jones to Boundless as our new Vice President of Marketing. The Beast in Healy helped our open textbooks initiative get written up in TechCrunch, and his wine tasting team presentation won him a nod in the Most Interesting Man in the World category. He joins Boundless from OfficeDrop, where as VP of Marketing he helped grow the user base 120x in two years.
Whether you’re hiring a new team member as a VP or entry-level, remember that killer personalities help make the journey from idea to strong startup possible.
This is a guest post from Ariel Diaz. Ariel is the CEO and co-founder of Boundless, which creates free textbooks for college students.
A few minutes ago, I came across this tweet from my friend and co-founder at HubSpot, Brian Halligan.
This got me to thinking (which is often a dangerous thing), am I taking enough risks? Am I being daring enough? Am I being a hero? Answer: Not often enough.
So, here's advice to my future self and all of you: *DO* be a hero.
1. Be a hero. Go after that big, powerful incumbent that doesn't delight its customers enough.
2. Be a hero. Hire that awesome, amazing person -- even though they don't fit any of the roles you're currently looking for.
3. Be a hero. Make that sacrifice that will negatively impact your profits but completely aligns with your passions.
4. Be a hero. Make that really, really hard decision that even the smartest people you know can't seem to agree on.
5. Be a hero. Say no to that accomplished, super-successful person that your team interviewed, loved and convinced to join -- but doesn't fit your culture.
6. Be a hero. Kill that stupid company policy that nobody can recall the rationale for, but you suspect was because someone (maybe you) had a friend who knew a guy that had read about a startup that didn't have that policy and that company failed.
7. Be a hero. Launch that super-secret project you've been working on even though it's more likely to fail than succeed.
8. Be a hero. Admit that you've changed your mind on the decision you so passionately advocated for a few months ago.
9. Be a hero. Confess to your team that sometimes you take the safer path out of fear and rationalize that you're doing it for the good of the company.
The Lean Startup method strongly advocates experiments -- and for good reason. It's critically important for a startup to acquire validated learning as quickly as possible. How quickly can you get through a learning cycle? How efficiently can you get to the answers to crucial questions?
You might run experiments that will answer some of your most pressing questions:
1. Will adding this feature cause more people to start paying for the product?
2. If we increase our prices, will our overall revenue increase or decrease?
3. If we make this feature that was previously free part of our premium offering, will users be upset?
Experiments are great -- but one word of warning. Be mindful of how much data you need and how "clean" your experiment needs to be in order to yield the learning you are seeking. A mistake we often make is looking at the "early evidence" from a particular experiment -- and then, in the interests of time and/or money (both of which are in short supply), using that early evidence to make an "educated guess" and move on.
This "educated guess" based on some early evidence is often "good enough". There are lots of questions for which you don't need perfect answers. All you need is something reasonably better than random -- or something that validates a strong "instinct" you already had.
But, be careful. The rigor of your experiment should match the importance of the issue at hand. If it's a big, important decision that will shape your company for a long time, don't just rely on the "early evidence" and use it to rationalize whatever it is that you wanted to do in the first place. Take the time to let the experiment run its course. For big, important, critical issues -- the extra rigor is worth it.
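To put rough numbers on "how much data you need": a standard two-proportion sample-size formula gives a sense of how many users each variant of an experiment requires before the result means anything. The sketch below is illustrative -- the 10% baseline and 15% target conversion rates are made-up numbers, not anything from a real experiment:

```python
import math

def sample_size_per_group(p1, p2):
    """Approximate users needed per variant to detect a change in
    conversion rate from p1 to p2 with a two-proportion z-test."""
    z_alpha = 1.96  # two-sided critical value for alpha = 0.05
    z_beta = 0.84   # critical value for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Example: a pricing change you hope moves conversion from 10% to 15%.
print(sample_size_per_group(0.10, 0.15))  # 683 users per variant
```

Note how quickly the requirement grows for smaller effects -- detecting a 1-point change instead of a 5-point change takes many times more users, which is exactly why "early evidence" on small effects is so unreliable.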
Example: You want to know whether taking a particular feature *out* of your product is going to have a major impact on your users. The feature didn't work out as well as you had hoped, and it ended up being very expensive to maintain. So, you send a survey out to your 5,000 users. Of the first 500 responses that come back, 80% of the people ranked the feature as "Super-duper important, if you take it out, I'll use another product". You could just take this early evidence, extrapolate, and say: "Hey, if 80% of our users really want this feature, we should just keep it in." In reality, what might be happening is that the users who were most passionate about the feature (and worried that you might cut it) are the ones who responded to the survey first. Users who were kind of "meh" -- or didn't even know the feature was there -- might take a while to respond, if they respond at all. Basically, the early responses are not representative of your overall user base. If you let more of the evidence come in, you might find that the actual number of users who care is much smaller than the "early evidence" showed.
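The response-bias effect in that example is easy to see in a toy simulation. The specific numbers below (20% of 5,000 users are passionate, and passionate users respond roughly 10x faster) are assumptions for illustration, not real survey data:

```python
import random
random.seed(0)

# Hypothetical user base: 20% genuinely love the feature, 80% are "meh".
users = ["passionate"] * 1000 + ["meh"] * 4000

# Assumption: passionate users respond quickly; "meh" users take roughly
# 10x longer on average. Model response time as an exponential draw.
def response_time(user):
    mean_days = 1.0 if user == "passionate" else 10.0
    return random.expovariate(1.0 / mean_days)

# Order users by how soon they respond, then look at the first 500.
responses = sorted(users, key=response_time)
early = responses[:500]

early_share = early.count("passionate") / len(early)
true_share = users.count("passionate") / len(users)
print(f"early sample: {early_share:.0%} passionate (truth: {true_share:.0%})")
```

Under these assumptions, the first 500 responses look dramatically more passionate than the full user base actually is -- the "early evidence" roughly triples the apparent support for the feature.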
The Danger of the Self-Fulfilling Prophecy
There's another thing to be careful of when it comes to "early evidence": if it leaks into the organization, it will often trigger a self-fulfilling prophecy, and you'll wind up with a potentially misguided decision.
Example: You ask your sales team to start selling a new offer (could be a feature/product/promotion). Understandably, the first few attempts don't work out very well -- the sales team hasn't quite figured out yet how to position the offering. It will likely take a few weeks. In the meantime, word starts to spread that this "new thing" isn't selling all that well. As a result, the team pulls back a bit and reverts to selling the "old thing" (change is hard). This, of course, causes even fewer sales of the new thing -- and it ultimately gets abandoned. Now, that might have been the right decision. Perhaps the early evidence was right -- but you don't know for sure. What if just a couple of weeks of training and tweaking would have fixed the issue? Perhaps it would have been awesome.
In summary: Don't confuse early evidence with compelling evidence. Avoid letting early results of an experiment taint the rest of the experiment. And match the rigor of your experiment to the importance of the decision at hand.
Any examples you can think of when early evidence is misleading?