I am one of the many thousands of raving Paul Graham fans
out there. I’ve read most of his content (Paul doesn’t write
blog articles, he writes essays). He is clearly a very gifted writer. He is also very, very smart (and I
rarely use two verys). But, at least on one point, I humbly submit that
he is very wrong.
In his most recent essay, titled “The 18 Mistakes That Kill Startups”, Paul identifies (as you might expect from the title) the
common causes of startup failure.
I’d like to focus on point #7: “Choosing the Wrong Platform”.
I agree with Paul that picking a wrong platform can indeed sometimes
kill a startup, but I’m not yet convinced that this is always the case. History
is replete with startups that picked what were widely considered to be the “wrong”
platform and still survived to tell the story (and make a ton of money in the
process). One example would be MySpace and their use of ColdFusion (not
that ColdFusion is a bad platform, but most hacker-types, particularly
those that follow Paul, would likely categorize it as a sub-optimal
platform). There are other
examples of startups that succeeded (some modestly, some spectacularly),
despite having chosen the “wrong” platform. One additional example
that comes to mind is eBay’s early use of Microsoft’s platform
(an ISAPI DLL running on top of IIS).
But, this is not my primary point of contention with the
article. Little harm is done by identifying wrong platform selection as a
potential mistake that startups should try and avoid (in fact, I think it helps
to raise awareness of the importance of this decision). My issue is with
how Paul advises startup founders to go about actually picking a platform.
Paul Graham: “How do you
pick the right platforms? The usual way is to hire good programmers and let
them choose. But there is a trick you could use if you're not a programmer:
visit a top computer science department and see what they use in research projects.”
I agree with the first half. A great way to pick a
platform (if you’re not a programmer yourself) is to hire great
programmers (not just good ones) and let them choose. But, I don’t
think visiting a computer science department and seeing what they use in
research projects is an effective strategy. Here are my issues with this approach:
- As a former computer science student, I have a bit of a feel for how
platforms get picked for research projects, and the criteria rarely
coincide with how startups in the real world pick theirs. People in
academic research projects are often solving a very different problem,
with very different motivations, than a startup is.
Lots of research projects are a learning
exercise. Most startups are a building
exercise. The desired outcomes are often vastly different.
- The platform selection process is sometimes domain- and/or
user-specific. For example, though Python is a cool language (and
I’m sure there are many academics that like it), if you are seeking to
build the next big killer desktop application to run on Windows, it will
likely prove to be a fatal choice. The reason is simple: users expect a
Windows application to look and feel like a Windows application. Chances
are, your Python desktop app won’t quite feel “just right”
(the user’s dog will bark at it); there’s a small sketch of this
after the list. This is a case where users do care about the
platform choice, because it actually impacts what they experience. Similar
arguments can be made for other target areas, like mobile applications.
- There may be other dependencies (i.e., integration points) that
influence your decision. As a startup, if you are building an
application that will be an extension of an existing application (or
consume its services somehow), it often helps to pick a platform that is
conducive to that integration. For example, if you’re building an
Outlook plug-in, you probably don’t want to use Ruby for that (even
though it might support COM); see the COM sketch after the list.
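To make the desktop point concrete, here’s a minimal sketch of the look-and-feel gap, in Python since that’s the language in question. It uses only the standard-library Tkinter toolkit, and the example is my own illustration, not anything from Paul’s essay. Classic Tk widgets draw their own chrome instead of deferring to Windows theming, which is exactly the “doesn’t feel just right” problem; the themed ttk widgets narrow the gap, but not all the way.

    # Minimal sketch of the native look-and-feel gap, using only Python's
    # standard library. Run it on Windows and compare the two buttons.
    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    root.title("Platform feel demo")

    # Classic Tk widgets draw their own look; they don't pick up the
    # Windows theme, which is why the app can feel subtly "off" to users.
    tk.Button(root, text="Classic Tk button (non-native look)").pack(padx=10, pady=5)

    # Themed ttk widgets map onto the native theme engine where available,
    # closing some (but not all) of the gap.
    ttk.Button(root, text="Themed ttk button (closer to native)").pack(padx=10, pady=5)

    root.mainloop()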
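And for the integration bullet, here’s a rough sketch of what COM automation looks like from a scripting language. I’m using Python with the third-party pywin32 package as the illustration (the bullet talks about Ruby, but the same idea applies via Ruby’s win32ole). Note the distinction the sketch draws: driving Outlook from the outside is easy; an actual in-process plug-in is a different, much heavier kind of COM integration, which is where the platform choice bites.

    # Rough sketch of COM automation from Python. Assumes Windows, an
    # installed Outlook, and the third-party pywin32 package
    # (pip install pywin32). The address below is a placeholder.
    import win32com.client

    # Attach to (or launch) Outlook through its COM automation interface.
    outlook = win32com.client.Dispatch("Outlook.Application")

    # 0 is the olMailItem constant: create a new, empty mail message.
    mail = outlook.CreateItem(0)
    mail.To = "someone@example.com"
    mail.Subject = "Hello from a script"
    mail.Body = "Driving Outlook from the outside is the easy part."
    mail.Display()  # Show the draft instead of sending it.

    # What this does NOT give you is a plug-in. A real Outlook add-in runs
    # in-process, implements the IDTExtensibility2 COM interface, and is
    # registered so Outlook loads it at startup. That is the part where
    # picking a platform with weak COM support becomes painful.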
Basically, it seems that Paul thinks that all startups are
going after “change the world” strategies and don’t need to
concern themselves with user preferences, business domains or the need for
integration with existing systems. Though it would be great if this were
true, it’s really not.
What do you think? Am I off-base here? Are all
of you writing world-changing software applications that need to use the higher-end
languages and platforms from computer science research groups? Or, are at
least a few of you taking a less glamorous (but practical) approach?