programming | Advice and Insights for Entrepreneurs | OnStartups


Disagreeing With Paul Graham: How Not To Pick A Platform

Posted on Tue, Oct 17, 2006

I am one of the many thousands of raving Paul Graham fans out there.  I’ve read most of his content (Paul doesn’t write blog articles, he writes essays).  He is clearly a very gifted writer.  He is also very, very smart (and I rarely use two verys).  But, at least on one point, I humbly submit that he is very wrong.

In his most recent essay, titled “The 18 Mistakes That Kill Startups”, Paul identifies (as you might expect from the title) the common causes of startup failure.

I’d like to focus on point #17: Choosing the Wrong Platform.

I agree with Paul that picking the wrong platform can indeed sometimes kill a startup, but I’m not yet convinced that this is always the case.  History is replete with startups that picked what was widely considered to be the “wrong” platform and still survived to tell the story (and made a ton of money in the process).  One example would be MySpace and their use of ColdFusion (not that ColdFusion is a bad platform, but most hacker-types – and particularly those that follow Paul – would likely categorize it as a sub-optimal platform).  There are other examples of startups that succeeded (some modestly, some spectacularly) despite having chosen the “wrong” platform.  One additional example that comes to mind is eBay’s early use of Microsoft’s platform (an ISAPI DLL written on top of IIS).

But, this is not my primary point of contention with the article.  Little harm is done by identifying wrong platform selection as a potential mistake that startups should try to avoid (in fact, I think it helps to raise awareness of the importance of this decision).  My issue is with how Paul advises startup founders to go about actually picking a platform.

Paul Graham:   “How do you pick the right platforms? The usual way is to hire good programmers and let them choose. But there is a trick you could use if you're not a programmer: visit a top computer science department and see what they use in research projects.” 
I agree with the first half.  A great way to pick a platform (if you’re not a programmer yourself) is to hire great programmers (not just good ones) and let them choose.  But, I don’t think visiting a computer science department and seeing what they use in research projects is an effective strategy.  Here are my issues with this particular approach:
  1. As a former computer science student myself, I have a bit of a feel for how platforms get picked for research projects.  Rarely does that process resemble how startups in the real world work.  People in academic research projects are often solving a very different problem, with very different motivations, than a startup.  Lots of research projects are a learning exercise.  Most startups are a building exercise.  The desired outcomes are often vastly different.

  2. The platform selection process is sometimes domain- and/or user-specific.  For example, though Python is a cool language (and I’m sure there are many academics that like it), if you are seeking to build the next big killer desktop application to run on Windows, it will likely prove to be a fatal choice.  The reason is simple.  Users expect a Windows application to look and feel like a Windows application.  Chances are, your Python desktop app won’t quite feel “just right” (the user’s dog will bark at it).  This is a case where users do care about the platform choice, because it actually impacts what they experience.  Similar arguments can be made for other target areas like mobile applications.

  3. There may be other dependencies (i.e. integration points) that influence your decision.  As a startup, if you are building an application that will be an extension of an existing application (or consume its services somehow), it often helps to pick a platform that is conducive to that integration.  For example, if you’re building an Outlook plug-in, you probably don’t want to use Ruby (even though it might support COM).

Basically, it seems that Paul thinks that all startups are going after “change the world” strategies and don’t need to concern themselves with user preferences, business domains or the need for integration with existing systems.  Though it would be great if this were true, it’s really not.  

What do you think?  Am I off-base here?  Are all of you writing world-changing software applications that need to use the higher-end languages and platforms from computer science research groups?  Or, are at least a few of you taking a less glamorous (but practical) approach?

Article has 36 comments.

The Most Important Feature Missing In The Google Search API

Posted on Thu, Oct 05, 2006

At my startup, HubSpot, we have been working with the Google Search API to implement some of the features we think would help our customers.

The Search API is reasonably robust in that it supports the various features of the Google search engine (finding related links, approximating the number of results, etc.)

But, there is one critical feature that the brainiacs at Google either forgot to include (which is bad) or intentionally left out (which is really bad).

Outside of normal “search” type stuff, I think one of the most common reasons people would use the API is to answer one simple question:

Most common question:  For a particular search phrase, where does my site rank on Google?

The reason this question is common should not be surprising (most webmasters, bloggers and SEO consultants care about this issue).  It’s also difficult to answer via the regular search engine without manually entering the search term and paging through the results looking for a “match”.  There are web utilities out there (that let you enter your API key and run a query), but they’re just doing a brute-force iteration over the result set too.

Here are some thoughts on the topic:
  1. As it stands, there is no way to answer the above simple question without making repeated calls to the Search API (basically retrieving a page at a time and checking the results until a match is found).

  2. This is even more annoying because Google only allows you to retrieve 10 result items at a time.  So, to figure out if you are in the top 100 hits for a search phrase, you have to hit Google 10 times.

  3. This is made yet more annoying because Google limits the number of calls you can make to their API to 1,000 (with no clear way of increasing this limit – even by paying money).

  4. It seems (at least from my perspective) extremely easy to implement this feature.  All they would have to do is include a separate method call that took a search query and a site name as parameters and returned the position of the first “match”.  This way, I could figure out that when searching for “software startups”, this site is the #5 hit.
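
The brute-force work-around described above is simple enough to sketch.  This is only an illustration, not real Google API code: `fetch_results` is a hypothetical stand-in for whatever paging search call you have available (something that returns one page of result URLs starting at a given offset).

```python
def find_rank(fetch_results, query, site, max_results=100, page_size=10):
    """Return the 1-based rank of `site` for `query`, or None if it is
    not found within `max_results`.

    `fetch_results(query, start, count)` is a placeholder for your
    search API call; it should return a list of result URLs starting
    at offset `start`.
    """
    for start in range(0, max_results, page_size):
        urls = fetch_results(query, start, page_size)
        for offset, url in enumerate(urls):
            if site in url:
                return start + offset + 1  # ranks are 1-based
        if len(urls) < page_size:
            break  # ran out of results before finding a match
    return None
```

Note that in the worst case this makes `max_results / page_size` API calls for a single question – exactly the quota problem described above, and exactly why a single "what is my rank?" method call would be so valuable.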

Given how smart the Google folks are and how common this particular need likely is, I have only two theories about why they left this feature out:
  1. Google intentionally left this feature out for some “strategic” reason.

  2. Google doesn’t realize how important this missing feature is.

For the Google API experts out there:  Am I missing something simple?  Is there a work-around to this, or have I stumbled into something that is already widely known and has already been discussed to death?  If you have insight, please leave a comment.  All help is appreciated.

Article has 10 comments.

Roadmapping: How Your Product Finds Its Way

Posted on Tue, Sep 26, 2006

Note:  This is a guest article written by Andy Singleton.  Andy is a career software professional and knows a thing or two about building and launching successful software products.  He is currently the president of Assembla which brings open source processes and applications to the world of enterprise software.

A startup will often live or die based on its first product release.  Did it get released?  Did people find it useful?  Good roadmapping dramatically improves your chance of getting to “Yes!” on these critical questions.

A roadmap is just a list of the features that you want to build, sorted in priority order.  It might be easy for one person to make a list like this.  However, it gets a lot harder if you are trying to get an organization, even a small one, to agree on the roadmap.  It gets even harder when the stakes are high – when you have only one chance to release a version 1.0 of your product, and you need to make the right decision about what goes in, and what doesn’t go in.  You can’t risk bloat, delay, mistakes, cloak-and-dagger politics, or bloody civil war.

I have developed some roadmapping techniques to get through the process smoothly and make the right decisions.  I have used these techniques with individual entrepreneurs, on my own products, at venture-funded startups with 50 people, and at big companies with separate divisions for product development and marketing.

I believe in releasing early and often.  This allows you to minimize risk, and to collect customer suggestions for rapidly improving the product.  My goal is to find a small set of the most important features to go into the next release.  If we have the discipline to do that, our chance of success is high.  If it is a new product, I am looking for the minimum useful release, which will start a process of incremental improvement.  

In order to be free to apply this much discipline, you need to be politically aware, and you need to make sure that everyone feels that his/her request will eventually be accommodated.  If people believe they will get what they want eventually, they will let you cut down the next release to get the most gain in the shortest time.

I recommend three stages:

1) Brainstorming.  The goal of this stage is to collect all of the outstanding requests, develop ideas, and make sure that no one feels left out.  In this stage, we expand our list as much as possible.

2) Categorizing, voting, and estimating.  In this stage, you try to get some consensus on what you should do and can do.  Put a time bound around this – for instance, one hour of discussion, or two days for emailed comments.

3) Sorting by priority.  You may need a benevolent dictator to make the final sort.  Then, go down the list and draw a line under the minimum set of features that will make a useful next release.  If this is a first release, you want to make it lean.  Make sure that everything you have above the line really is needed for the product to be useful.  If there are any complicated features near the top of the list, try to break them apart into steps.  In this stage, you apply discipline to shrink the next release.

Voila!  Your shining prize, your next release, is now at the top of the list.

If you are making software, you now have a roadmap that you can drop directly into the ticketing system for an agile development process, complete with milestones or “Iterations”.

You can create a more complete roadmap by drawing more lines, representing additional future releases.  It’s a good idea to assign dates to the releases.  Once you pick the dates or the release frequency, you should always release on time.  This fixed-time, variable-feature strategy is an important part of agile methodologies.  It reduces risk and makes it easier to gather feedback that improves the product.  If you can’t complete everything that is scheduled, just move the bottom items to the following release.

* Break features down into smaller components.  Often you can grab something simple out of what seems to be a more complex feature, include it in your first release, and move the less obvious stuff to a later release.  That is a big time-to-market win.

* Consider specific “use cases”, or “scenarios” – really specific stories about how the product will be used, with the names and jobs of actual people you know.  This will be most helpful in expanding the list of features.

* Define and prioritize themes, which are groups of related features.  It’s easier to get consensus on themes than on individual features, and it helps everyone focus on the big picture.  This is especially helpful if you are working on a mature product that already has a long list of customer requests.

* Use voting, and give everyone 10 votes.  A voter can use all 10 votes on one feature, 1 vote on each of 10 features, or any other allocation of 10 votes.

ALWAYS schedule a follow-on release shortly after the upcoming release.  It’s much easier to bump things out of the coming release and get it out on time if you know the things that are getting bumped will be in another release soon after.
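
The voting-and-sorting stages described above are mechanical enough to sketch in code.  This is just an illustration of the process (the function, feature names, effort estimates and ten-vote ballots are all made up); the “line” is drawn by a fixed effort budget, matching the fixed-time, variable-feature strategy:

```python
def build_roadmap(estimates, ballots, capacity):
    """Tally 10-vote ballots, sort features by total votes, and draw
    the release line where the next feature no longer fits the budget.

    estimates: feature name -> estimated effort (e.g. person-days)
    ballots:   list of {feature: votes} dicts, one per voter,
               each voter allocating 10 votes however they like
    capacity:  total effort available for the next release
    Returns (next_release, follow_on), both in priority order.
    """
    totals = dict.fromkeys(estimates, 0)
    for ballot in ballots:
        for name, votes in ballot.items():
            totals[name] += votes
    ranked = sorted(estimates, key=lambda name: -totals[name])

    next_release, follow_on, used = [], [], 0
    for name in ranked:
        # Once one feature misses the cut, everything below the line
        # gets bumped to the (already scheduled) follow-on release.
        if not follow_on and used + estimates[name] <= capacity:
            next_release.append(name)
            used += estimates[name]
        else:
            follow_on.append(name)
    return next_release, follow_on
```

With estimates of `{"search": 5, "export": 3, "sso": 8}` days, two ballots, and a 10-day budget, "search" makes the release and the rest are bumped – still sorted by priority, waiting for the follow-on release.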

What are your thoughts?  Have you used product roadmaps within your startup?  If so, what worked and what didn’t?  Any tips you’d like to share with the rest of us?

Article has 8 comments.

Business Geeks: Automated Software Testing as Competitive Advantage

Posted on Wed, Sep 13, 2006

This blog’s audience can be simplistically divided into two types of people: 

1.  technology geeks (folks with a technology background, and more specifically a software development background) that have an interest in the business issues because they’ve founded or are thinking of kicking off a startup.

2.  business geeks (folks with a business/sales/strategy background) that have an interest in technology because they’ve founded a software startup.  For more on my thoughts on business geeks, read “Business Geek: Not An Oxymoron”.

A number of my articles address one group or the other (like my “Presentation Tips for the Technically Gifted”).

This one looks at the value of automated software testing from the perspective of the business-side.  The reason for the focus is that most programmers I know and respect already understand the upside to automated testing and know way more than I do.  If this is you, feel free to stop reading.  I won’t be offended.

Business Thoughts On Automated Software Testing
Automated software testing is a large and relatively complex area that takes a while to understand.  But, let’s work with a simple definition:  It is the process of using computers (instead of humans) to run repeated tests to determine whether the software does what it is supposed to do.  It is important to note that most automated software testing still involves humans in the beginning (to design and develop the tests), but it’s the repeatability that makes it so powerful.  Once the tests are developed, the payback is continuous because the costs of running the tests are near zero.

In order to better illustrate my points, I’ll use Pyramid Digital Solutions (the first software company I started).  Pyramid ran successfully for 10+ years and was recently sold, but I like to use it as an example because I actually lived a lot of these lessons and I find it helpful to have a real-world example to talk about.
  1. Build Better Software:  This one is obvious, but it is at the core of the value, so it needs to be said.  By building a library of automated tests, you are generally going to ship better software that, at a minimum, works when used in certain predictable, preconceived ways (the use cases that have been accounted for in the tests).  This is a good thing.

  2. Test Continuously:  As noted, once you have tests automated, there is very little cost to running them.  As such, once you’ve made the investment in building automated test scripts, there is no good reason not to run them frequently (and lots of good reasons to do so).  In my prior startup, we eventually got to more than 20,000 test scripts that ran for several hours.  We ran them every night.  Each night a process would fire off that would retrieve the latest source code the programmers had checked in, build our product (automated builds) and then run our test scripts.  Every morning, the results of the test scripts got emailed to management and the development team.

  3. Cheaper To Fix Bugs:  Most software has bugs.  From the business perspective, the questions are:  which bugs do you know about, when do you “find” them, and how much does it cost to fix them?  As it turns out, when you find them and how much it costs to fix them are highly correlated.  Let’s take an example from my prior (real-world) startup.  Say a programmer inadvertently makes a code change and checks it in.  The code has a bug.  In the old way we used to operate, it would often be days, weeks or months before that bug got caught (depending on what part of the product the code was in, and whether it was caught internally or made it out into the “wild” to be found by customers).  The more time that elapsed from when the code actually changed to when the bug was actually found, the more expensive the bug became to find and fix.  We’re talking a major (orders of magnitude) increase in costs.  Now, in the new world (where we had automated tests running every night), this bug might be caught by the automated test scripts.  If so, the very next morning we would know there was a problem and we could go fix it.  The reason it was so much cheaper to find and fix the bug was that the “surface area” of change was so small.  A limited number of things got changed in the prior 24 hours (since the last test run), so the bug could more easily be discovered.  I cannot emphasize enough how much money you can save by catching bugs within hours (instead of days) of their being introduced.

  4. Freedom To Change:  As software systems get bigger, it becomes harder and harder to make changes without breaking things.  Development teams refactor the ugly bits of code as time allows, but even then, a sufficiently large codebase that has been around for a while will almost always have “corners” that nobody wants to touch (but that are important).  The business risk is that customers may ask for things, or the market may shift in some way that creates the need for change (this should not come as a surprise).  If the programmers are fearful of changing core parts of the system because they might break something, you’ve got a problem.  If you’ve got a large battery of automated test scripts, it frees the programmers to do lots of cool things.  They can refactor the code (the automated testing is a “safety net”), they can add/change features, etc., with a lot less loss of sleep.  What you will find, by investing in automated testing, is that your organization actually moves faster than it did before.  You can respond to market changes quicker, you roll out features quicker and you have a stronger company.

  5. Clients Are Happier:  At Pyramid, we had quarterly meetings with our clients (and an annual conference where a bunch of clients got together).  At each of these, one of the key metrics we shared was how large our automated test suite had grown.  This gave clients some comfort.  That comfort translated into a higher likelihood that they would install newer versions of the software when they became available.  Since we were in the high-end, enterprise software space, this was a big deal.  If we could get 20% more of our customers to move to Version 5 of our software (instead of staying stuck on Version 4), we had an advantage:  lower support costs and higher retention rates.
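
For the business readers, here is a toy example of what one of these automated tests looks like in practice, using Python’s built-in unittest module.  The loan-payment function and its numbers are purely illustrative (not code from Pyramid) – the point is the shape of the thing:

```python
import unittest


def monthly_payment(principal, annual_rate, months):
    """Standard amortized-loan payment -- the kind of business logic
    a nightly test suite guards against accidental changes."""
    monthly_rate = annual_rate / 12.0
    if monthly_rate == 0:
        return principal / months  # zero-interest edge case
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)


class PaymentTests(unittest.TestCase):
    def test_known_good_value(self):
        # Pins a known-correct answer.  If a future code change breaks
        # the formula, the very next nightly run reports a failure.
        self.assertAlmostEqual(
            monthly_payment(100000, 0.06, 360), 599.55, places=2)

    def test_zero_rate_is_simple_division(self):
        self.assertAlmostEqual(monthly_payment(12000, 0.0, 12), 1000.0)
```

Once written, a test like this costs essentially nothing to re-run; the nightly process just executes the whole suite (e.g. `python -m unittest`) against the freshly built product and mails out the results, exactly as described above.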

I like to think of technology strategy in terms of technology debt (take short-cuts now, but payback comes later – with interest).  Read “Short-Cuts Are Not Free” if you’re curious about this.  Like financial debt, technology debt is often necessary, but it has a cost.  The reverse of this is technology investment (in the classic sense).  This too has an interest rate – the rate you get “paid” (i.e. ROI) on that investment.  I think investment in automated testing offers one of the best interest rates you can find.  The payback period is a little longer, but it is worth it.  If you have competition (which you likely will), you will find that a strong investment in automated testing will give you an advantage.  You’ll add features quicker, fix bugs cheaper and ship better software.

Of course, as is always the case, situations vary.  Pyramid was in a different market than my current startup HubSpot – but I’m still passionate about automated testing.  I’ll continue to share experiences as I progress.

Article has 13 comments.

Python vs. C#: Frameworks, Libraries and Ecosystems

Posted on Fri, Jul 28, 2006

This is the third in a series of articles looking at some of the tradeoffs between Python and C#, particularly for startups.  The first article is here (not necessary reading, but it provides some background).  The second article is here, and is likely worthwhile if you’re interested in the topic.

In this third installment, I’ll look at the issue from the perspective of frameworks, libraries and ecosystems for the two languages.  In response to the prior two articles, there have already been some great comments on this, so I’ll try to weave in some of those user-contributed points as well.  Apologies in advance for the length of this article; it’s a stream-of-consciousness type thing that brings in a variety of semi-connected thoughts.

First off, I think the availability and viability of existing code in the form of frameworks and libraries is likely as important as the structure and expressiveness of the language itself.  Chances are, if your startup is looking to pick a language, it has some product idea in mind (for example, a hosted web product).  In these cases, much of the code you’ll need to create your end product will be “foundation” code.  As such, most of the contemporary languages (including Python and C#) have frameworks that are oriented towards creating particular kinds of applications.  For purposes of this article, we’ll look primarily at web applications (because that happens to be what I’m interested in), but there are similar considerations if you’re building desktop applications or other types of applications.

Since I have not worked professionally with Python (yet), I will refrain from making any value judgments but will limit myself to things that I’ve heard from trusted sources and let the community respond and push back on points that I’m mistaken on.  (Note:  I like to number my thoughts, not to represent any type of linearity, but because it makes it easy to refer to specific points in the comments).
  1. You Need A Basic Web Framework:  Back in the day, when I did my first web development (1996, in C++ on Unix) hardly anything existed to ease the pain of web development.  Basically, our web applications had to do everything – including parsing the raw input stream (we used CGI back then) and building the requisite “response”.   We had our own state management system, and had to deal with things like load balancing and security.  Over time, we built a large and robust body of C++ code to abstract away much of the complexity.  This was a bit painful, but in the long run, we really understood what was going on under the hood and I think this gave us a better appreciation for how to design what we needed.  I also had the benefit of working with some really gifted C++ programmers, and the resulting proprietary framework gave us an “edge” for some time.  These days, there are web frameworks that take care of most of the basic stuff you need to do to write a web application.  For C# (and the other .Net languages), we have ASP.NET (now in version 2.0).  For Python, there are frameworks like Django.  Now, here’s the rub:  I think some Python programmers have felt the need to “roll their own” web framework.  It may be because projects like Django are relatively new.  This is troubling to me.  In my opinion, to create a robust body of foundation code for the web right now is non-trivial.  There are just a lot of “gotchas”, particularly when it comes to doing sophisticated state/session management, security and abstracting some of the UI so that you don’t have a bunch of HTML string concatenation all over the place.  I have a bias here, but I think that those that roll their own web framework code today do not have an appreciation for how hard it is to do this stuff right – regardless of what your language is.  So, if I were picking a language today, having a robust, well-tested and easy to use core web framework is essential.  
From what I know, I think Python has these frameworks now – but they came a little late.  As such, too many people are out there running on their own custom web frameworks.  

From a comment to my first article by Eric Peterson:   “For me, that answer is Ruby on Rails for most web applications. Python just doesn't have a good enough Web framework for it to really be a contender- if you used Python without a framework, you'd essentially be rolling your own, which is a lot of overhead before you can even begin to concentrate on your actual business problems. .NET probably lies somewhere in between, with the only two big things being object persistence and testing.”  Though we’re not looking at Ruby On Rails in this series, I think Eric’s comment is somewhat telling.  Earlier in the comment he posits:  “The major question is this: Which language/framework will let me concentrate more on business problems and less on technical ones?”

I can’t substantiate Eric’s claim as to whether or not Python’s web frameworks match that of RoR (or ASP.NET) and whether they’re sufficiently evolved or not.  But, the basic point is this:  If you’re building a web application in today’s world, you need a robust framework on top of which to build.  Inexperienced developers too often trivialize this effort (because fundamentally, it seems simple).  It’s not.  Also, the state of web programming evolves.  If you roll your own, you will have a harder and harder time keeping up.  That’s code your competitors are not writing, so you are at a disadvantage.
  2. You Need Other Libraries:  Most web applications (particularly business applications) will need to do things other than just process a web request and send back a web response.  Common things are interacting with databases, file systems, XML documents and server software (LDAP, SMTP, POP3, FTP, etc.).  Based on how much of this you’re going to be doing, the availability of usable third-party libraries can be a critical decision factor.  Often this kind of code can, in aggregate, match the effort required for a basic web framework.  So, it’s important to take an inventory of the kinds of things you’ll want to be doing and figure out what libraries are available to you in the languages you are looking at.  In this regard, both Python and C# likely have sufficient existing code out there.  The difference is that most of what C# provides is buried inside the large (but mostly elegant) .Net libraries.  For Python, you have a large and growing set of libraries for most things you would want to do today.  But, this does involve a “search and evaluate” process so you can pick the right ones.  There is certainly an advantage to having a choice (so you can pick the library that best meets your need), but this choice has a cost.  We’ll look at this issue further below.

  3. Libraries Should Be Easy To Use:  One of the biggest challenges back when I was using C++ was that it was very, very hard to take an existing library that was out there (and did something meaningful) and reuse it in your application.  Though C++ was cross-platform, the developers of the library had to ensure that they implemented it in a way that made it usable across platforms (like Windows and Unix).  This was not always the case.  Further, using compiled libraries (without source) was near impossible.  On Windows, we had DLLs and COM and such, but it was ugly.  Now, with languages like Python, reusing a library is simple and elegant.  That’s a great thing.  For C#, things are much better than they used to be (because the reuse model on .Net is much nicer than prior Windows technologies).  But, with C# you are somewhat limited to .Net and Windows.  Yes, I’m aware of the Mono project, but I still think that running .Net on something other than Windows requires a fair amount of rationalization.  Though at an academic level it’s nice to know that I can do it, I don’t know that I’d ever really want to in a commercial setting.

  4. Standards vs. Choice:  We looked at this in an earlier section.  There is a tradeoff between having a single, highly popular library to do X vs. having a set of different libraries that do similar things but make different tradeoffs.  The advantage of choice (i.e. multiple libraries) is that you are likely to get something that more closely matches your needs.  Python definitely has the advantage here.  Most common needs in the Python world have more than one existing library.  With C#, the community relies mostly on what’s inside the .Net framework (though certain third-party libraries exist for specific kinds of tasks).  The advantage of a “standard” like .Net is that more people are familiar with the library.  For example, if you take a reasonably experienced .Net programmer (in whatever language), chances are they have used the built-in libraries for most common things (serialization, database access, XML processing, etc.).  There’s a bit of an advantage to this, because libraries have a learning curve.  Finding programmers that already know the core libraries is probably easier than finding programmers that know the particular library that you picked.  I compare this loosely to the splintering that happened back in the old Unix (before Linux) days.  We had Sun’s Solaris, IBM’s AIX and HP-UX.  All of these were certainly similar, but people became experts in one or the other.  This essentially divided the technical resources into camps.  At a smaller scale, this kind of thing also happens with libraries and frameworks.  If there are four different web frameworks, you’ll have four pools of developers.  Though there’s certainly conceptual overlap, there’s usually some advantage for the developers that have worked with a particular framework before.  So, this one is a case of balancing the tradeoff between standards – which give you a larger pool of people that likely know the body of existing code and how to use it – and choice, which gives you the advantage of picking the library that’s most suitable for the purpose.  Choice also gives you, at some level, competition – which is a good thing.

  1. Ecosystems:  It’s going to be hard to fit all my thoughts on development platform ecosystems into one paragraph, so I’ll try and hit the hi-lights.  I would define an ecosystem (in this context) as all the resources that surround a core language and platform.  This includes developers that know and understand the technology, book authors that write about it, library developers that create reusable code for others, trainers that help people get up to speed, and tools companies that offer add-ons to make developing on the platform more efficient.  I cannot stress enough the importance of having a vibrant, growing and sustainable ecosystem for a programming language.  If you’re a startup, chances are that if you’re building a sufficiently sophisticated application, you’ll be needing one or more of these resources to create your application.  Further, unless you are “building to flip” (which I don’t recommend), you’ll need to support your application for years to come.  In fact, your margins will likely get higher over time as you have to write less and less code to meet new market needs.  As such, if the ecosystem surrounding your chosen language/platform declines significantly or dies, you are at a severe disadvantage.  Now, the issue is that nobody consciously chooses a language/platform with a dying ecosystem. It just turns out that way (usually 3+ years out from when the decision is made).  Thought it’s difficult to predict which ecosystems will endure and which ones won’t, we have enough history now to at least have a sense for some patterns.  Ecosystems that rely on a single company are vulnerable.  Hence the slow decline of so many custom development languages (Easel, PowerBuilder, Delphi, etc.)  
[Note:  I have nothing against any of those languages, and I’m sure there’s a large pool of successful Delphi programmers still out there, but I will continue to maintain that the ecosystems for these languages are likely in decline – of varying degrees of severity].  Of course, C# is also, for the most part, dependent on Microsoft (despite the language itself having been standardized through ECMA).  But Microsoft seems to be a special case.  Given its volume of resources and its conviction about the importance of strong development platforms for its operating systems, it is unlikely that C# and .Net will fail to maintain a vibrant ecosystem.  The evidence is already there in the number of developers, books and third-party vendors creating technology on the platform.  Python, meanwhile, has the power of a strong and vibrant open source community, so it does not face the same vulnerability of a single company “owner”.  This is a good thing.  Given the scale of Python use already, it is unlikely that we’ll see a sharp decline in the vibrancy of the community in the next several years.  Another reason ecosystems decline is that a language was really good at a special kind of application, and a technical shift makes that kind of application less relevant.  For example, PowerBuilder was an immensely popular platform for creating client/server applications, but when the shift to the web happened, there wasn’t enough advantage anymore and other languages took over.  That’s why I like general purpose languages (C++, C#, Python, etc.): they’re more likely to endure the inevitable shifts in the technology landscape.  In short, ecosystems are important and should influence where you place your bets.  Both sides have pros and cons here: Microsoft has the resources and the incentive to make .Net work, and the Python community has the resources and the incentive (in aggregate) to keep Python evolving.  

Clearly, this is a complicated topic, and I think I used way more words than should be allowed for so little concrete thought.  Apologies for that, but since I have not worked extensively in Python, I am not equipped to take a hard stand on certain points.  As such, I’m simply commenting on patterns I see and identifying some of the tradeoffs.  If the past articles are any indication, we should see a robust discussion on some of these points by way of comments from my readership (which, frankly, is the motivation for writing this series in the first place).

Thanks to everyone who takes the time to comment and share their viewpoints.  In the next (and last) part of the series, I’d like to integrate some of the reader comments themselves and get a sense of what people have said.  There will likely be 100+ comments across the various articles, so this may take a few days to pull together.

Look forward to reading everyone’s thoughts.

Article has 23 comments.

Python vs. C#: Business and Technology Tradeoffs

Posted by on Tue, Jul 25, 2006

This is the second in a series of articles looking at the merits of Python vs. C# for startups.  The first article is here (though it is not necessary to read the prior article to get value out of this one).

Let’s jump right into a series of arguments (and counter-arguments) for both sides.  Note that some of these arguments are not your “standard fare” programming debates.  I’m actually looking at the choice from the perspective of real startups (mine and others).  This is not an academic exercise for me.  Though it’s unlikely I’m going to make a platform “switch” (because of a vested interest already), this issue continues to come up with startups that I’m advising.  Also, this list is by no means an exhaustive treatment of the issues at hand.  Just the ones I found particularly interesting or relevant.

Python vs. C#: Understanding The Tradeoffs
  1. Open Source Is Good:  Python has the advantage of being open source.  If the same language had been offered by some closed company (without the resources of Microsoft), my decision would already be made.  I’m a strong believer that successful languages and platforms require an ecosystem for long-term viability.  And long-term viability is important for software product companies because the value being created often won’t be realized until years 3-5.  Ecosystems require either a very large player with the resources and conviction to see it through (like Microsoft), or a passionate community that is motivated to keep the technology supported and evolving.  C# has the former, Python has the latter.  If I had to choose based solely on this criterion, I’d pick Python.

  1. Dynamic vs. Static Languages:  This topic has been debated to death in the blogosphere, so I’m not going to repeat the debate here.  Suffice it to say that Python is a more dynamic language than C# (and C# is more dynamic than something like C++).  From a pure language design perspective, it doesn’t matter much to me whether a language is dynamic or static; I’m more interested in the implications of that tradeoff (see below).  Having worked in dynamic languages before (though nowhere near the power and expressiveness of Python), I can see the appeal.  In my limited experience, I have found that some of the dynamic languages require a “higher grade” of developer to construct robust, scalable and sustainable applications.  It can be done, but it’s easier to shoot yourself in the foot precisely because things are so easy and fast.  
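To make the foot-gun concrete, here’s a minimal sketch (the class and attribute names are invented for illustration) of one way dynamism bites: Python happily lets you assign to a misspelled attribute, creating a new one instead of raising an error, so the bug surfaces far from where it was introduced.

```python
class Order:
    def __init__(self):
        self.quantity = 1

o = Order()

# A typo: we meant 'quantity', but Python silently creates a brand-new
# attribute called 'quanity' -- no error, no warning.
o.quanity = 5

print(o.quantity)  # 1 -- the "update" went to the wrong place
```

A statically checked language would reject the misspelled member at compile time; in Python this class of mistake is typically caught by tests (or in production).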

  1. The Cost Of Compiling:  C#, being a static language, generally has a build/compile step.  Some people hate this; others don’t.  In my opinion, though the build process adds a small micro-step to the development process (and impacts productivity), I kind of like being able to let the compiler do its thing.  It catches a large body of stupid syntax mistakes.  Many argue that one should have a litany of unit tests anyway, and that these unit tests are much better at catching actual errors in the code than a compiler.  This is true: it is important to have unit tests, and they will catch more errors than a compiler.  But what the compiler gives me (by way of testing) is essentially “free” (or near free).  Simply by working within the constraints of the language, the compiler tells me things that are actually useful, and I don’t have to write unit tests to get there.  Don’t get me wrong, I love unit tests (I really do), but having a compiler find some stupid stuff is a helpful thing.  
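A small sketch of that tradeoff, using Python’s own built-in compile() to illustrate (the code strings are invented examples): a pure syntax error is caught without executing anything, but a misspelled name compiles fine and only fails at runtime – exactly the class of mistake a C# build step would flag for “free”.

```python
# A pure syntax error is caught at "compile time" without running anything.
syntax_error_caught = False
try:
    compile("def f(:\n    pass", "<src>", "exec")
except SyntaxError:
    syntax_error_caught = True

# But a misspelled name sails through compilation...
code = compile("result = mesage.upper()", "<src>", "exec")  # compiles fine

# ...and only blows up when the code path actually executes.
runtime_error_caught = False
try:
    exec(code, {"message": "hi"})  # 'mesage' is a typo for 'message'
except NameError:
    runtime_error_caught = True

print(syntax_error_caught, runtime_error_caught)  # True True
```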

  1. Building Binaries:  Call me old school, but I have been writing software long enough to appreciate the ability to “package” my code in a form that is inconvenient for the casual user to tinker with and reverse-engineer.  Yes, I realize that in .Net it takes a good obfuscator to accomplish this, but that’s OK.  As a builder of commercial software, “naked code” platforms like Python bother me a bit because my source code is essentially exposed.  For a software company, this source code is a core asset and something I’ve made considerable investment in.  Even as a hosted software company, I may elect to take parts of my platform and have it run elsewhere.  The ability to build binaries lets me distribute my IP with some protection.  That’s not to say I may not need to provide source anyway, but at least I have a choice.  If I had to choose between the two, I prefer being able to build binaries over having to distribute naked source code.
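For what it’s worth, Python can produce a binary-ish artifact: bytecode (.pyc files) via the stdlib py_compile module. A minimal sketch (the module and function are invented stand-ins for “core IP”); the caveat is that bytecode is trivially decompiled back to readable source, so the protection is thinner than even an unobfuscated .Net assembly:

```python
import pathlib
import py_compile
import tempfile

# Hypothetical module standing in for source code we'd like to ship
# without handing out the .py file itself.
src = pathlib.Path(tempfile.mkdtemp()) / "secret_sauce.py"
src.write_text("def margin(price, cost):\n    return price - cost\n")

# Byte-compile it to a distributable .pyc artifact. Note: .pyc files are
# easily decompiled, so this is obscurity, not real protection.
pyc_path = pathlib.Path(py_compile.compile(str(src), cfile=str(src.with_suffix(".pyc"))))
print(pyc_path.suffix)  # .pyc
```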

  1. Performance:  For most of the companies I’m involved in, performance is not the dominant criterion.  Though C# may have a slight edge in this respect, it is not an important consideration for the kinds of applications I’m talking about (consumer and business applications on the web).  What C# may provide in performance, it takes away in the cost of infrastructure software (since it can’t run on open source operating systems).  Said differently, C# may take fewer server resources for the same volume of concurrent users, but you’re paying for Windows on those servers, so in many cases it likely comes out to a wash.  Not something I lose sleep over.
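The “wash” argument is just arithmetic. A back-of-envelope sketch; every number below is a made-up assumption purely to show the shape of the calculation, not a measurement:

```python
import math

# Hypothetical assumptions (not benchmarks).
users = 100_000
users_per_csharp_server = 5_000   # assume C# handles more users per box
users_per_python_server = 4_000
server_cost = 3_000               # assumed hardware cost per server
windows_license = 1_000           # assumed per-server OS cost
linux_license = 0

csharp_servers = math.ceil(users / users_per_csharp_server)  # 20 servers
python_servers = math.ceil(users / users_per_python_server)  # 25 servers

csharp_total = csharp_servers * (server_cost + windows_license)  # 80,000
python_total = python_servers * (server_cost + linux_license)    # 75,000

print(csharp_total, python_total)  # comparable totals -- roughly a wash
```

Under these (assumed) numbers the efficiency edge is mostly eaten by licensing, which is the point being made above.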

  1. Does Mainstream = Mediocrity?  This is one of the most troubling aspects of the topic.  I have read claims that one of the reasons to pick Python over C# (which is more “mainstream”) is that it provides competitive advantage – that by picking something like C#, I’m lining up with the masses and doing the same thing as everyone else.  I’d like to push back on this a bit.  Simply picking the popular choice does not necessarily cause mediocrity (though the danger is certainly there).  Further arguments suggest that if I were to pick Python, I’d be able to attract a better class of programmer to the company.  Maybe.  I would argue that the best programmers are those who can understand the merits and tradeoffs of technical choices and can appreciate that there is no clear answer here.  If they’re going to pick a startup simply because it happened to choose Python (and that’s their pet language), I would question the judgment and/or experience of the programmer.  From a startup’s perspective, value is created from a combination of exceptional technical skills (i.e. “the better programmer”) and some business instincts and customer intuition.  Some of this also has to do with the type of startup.  If I were doing really, really complicated things (like security/encryption, artificial intelligence, etc.), then the better technical skills win out.  But I’m building software for people, so the other stuff is important too.  As for the mediocrity argument, I’m not yet convinced that simply choosing a mainstream language means you’re going to end up with mediocre people on the team or mediocre solutions.  I just don’t see evidence of that.

  1. Programmer Productivity:  I have heard strong (and convincing) arguments that Python is a more productive environment for programmers.  This results from a combination of the dynamic nature of the language, the elegance of the syntax and the availability of a large set of libraries of pre-written code that can be reused relatively easily.  I also believe that the sheer volume of code needed to solve a similar problem is lower in Python than in C# (primarily because of the syntax decoration and boilerplate code required in C#).  Can’t argue with that.  Programmers are likely more productive in Python, all things being equal.  The learning curve is also not quite as steep as C#’s.  On the flip side, I worry a little about whether languages like C# lend themselves better to team-based development and to projects that are large and span multiple years.  Though I can’t (and won’t) make the argument that C# is better for bigger, longer projects, I think the potential for this to be true is certainly out there.  But if programmer productivity were the sole decision criterion, I’d pick Python.
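A tiny sketch of the “less code” claim (the text is an invented example): counting word frequency is a couple of lines in Python, while the C# of this era (pre-LINQ) needs a class, a Main method, explicit typing and a hand-rolled dictionary loop – the syntax decoration and boilerplate mentioned above.

```python
from collections import Counter

# Word-frequency count in two lines of Python.
text = "the quick brown fox jumps over the lazy dog the end"
counts = Counter(text.split())

print(counts.most_common(1))  # [('the', 3)]
```

Whether that conciseness holds up on a multi-year, multi-developer codebase is the open question raised above.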

  1. Popularity and Precedence:  One of the key factors I look at when making decisions like this is precedence: faced with similar decisions and similar needs, what have other (smart) people done?  I also weigh startups that have succeeded more heavily when looking for precedence.  My informal reading on this topic indicates that a lot of successful startups (successful in the sense that they built products that worked, were bought by companies or have meaningful revenues) picked Java, C++, C# and PHP.  Ruby on Rails shows up a lot in conversations with new startups, but not enough time has passed to really know the outcome of that choice yet.  I just don’t see that many “household names” (in the startup world) being built on Python.  This worries me.  The language has been around long enough to have gained traction and is clearly immensely popular in many contexts – but just not in the context I care about most: startup software companies building a commercial product that hope to be acquired or go public some day.  Based on what I have seen, if I were picking solely on this criterion (and this is a subjective assessment), I would pick C#.  (Note:  If you’re reading this and know of a startup that has used Python successfully, feel free to leave a comment.)

  1. Valuation Impact:  As unfortunate as it is, the choice of development platform/language can and does impact the valuation of a startup.  The reason is that potential acquirers care about this stuff – as well they should.  Not because there are right or wrong answers, but because the choices you make affect the “integration” cost for them.  If they are a big Java shop and you’ve written everything in Python (or C#, for that matter), and they are buying you for your IP, they’ll factor the cost of this assimilation into the purchase price.  If the cost/risk is high enough, it may actually prevent you from getting an acquisition offer in the first place.  On a related note, if you’ve built everything in C# and the acquirer hates Microsoft (or competes fiercely with it), chances are this will impact your odds of being acquired.  This is where it becomes a bit of a numbers game.  If I were playing the odds, I’d pick a more mainstream language, as the odds of it negatively impacting valuation are lower.  Of course, it’s also important to remember that the biggest driver of valuation is whether or not you have created a useful product that makes money.  For purposes of this discussion, I’m assuming an equally gifted development team would be just as likely to create a working product in C# as in Python.

I’m going to cut this off here, as there’s a lot to think about and talk about.  In the next installment in the series, we’ll look at surrounding components (like web frameworks and third-party libraries) and how they might influence the decision.  As always, if you have thoughts or ideas on any of the above, please comment.  The purpose of this exercise is not to defend one side or the other, but to take enough of a stand to spark some discussion.

I also plan to synthesize some of the exceptional comments I’ve been getting into a fourth (and final) article.  Hopefully you’ll find this series helpful.

Article has 42 comments.

Python vs. C#: Understanding Personal Bias

Posted by on Mon, Jul 24, 2006

Warning:  This is likely going to be a series of somewhat lengthy articles, because the topic is complicated and needs to weave in a number of conceptual threads, some technical and others strategic.  It will also include a bit of history to set some of the context for my decision making.  Feel free to ignore these parts if you choose.

This article captures some of my thinking on the choice of language/platform for startups.  The current discussion was sparked by the introduction of a new member to our development team, who has had positive experience with Python as a language (doing work similar to some of what we’ve been doing at my current startup, HubSpot).  The deeper I dug into the rabbit hole, the more interesting the discussion got.  I thought I’d bring it to the blogosphere as a way to both refine my thinking and solicit input from the OnStartups community of readers.

Working For A Software Company
Before we delve into the issues themselves, I’d like to share a personal anecdote.  In my first real job as a programmer, I was writing code for U.S. Steel.  At the time, I was still working on my undergraduate degree at Purdue (major: Computer Science).  One thing I learned relatively quickly at U.S. Steel is that anyone at the company who was not making steel, moving steel or selling steel was “overhead”.  The programming group was seen as little different from accounting, finance or HR.  I quickly decided to leave the steel industry, as I didn’t want to be “overhead”.  I further decided that the only place I wouldn’t be overhead was a software/technology company where the product being sold was software.  So I found my next job at a large software company, figuring I’d learn much more and be able to contribute much more value to an organization whose revenues were generated primarily from software products.  I could not have been more right.  Most of my early training and some of my best lessons came from this first job as a real programmer at a real software company.

Lesson 1:  If you’re a “career programmer” and really passionate about software development,  I encourage you to go work for a software company.  You’ll learn a lot and won’t regret it.

At this new company, the product was a large financial application (written in COBOL).  I was hired to work on some “next generation” technology: building a Windows front-end for the company’s large legacy application.  Before I was hired, the company had picked Easel (a RAD development platform specifically created to make it easy to build front-ends for large, mainframe-based legacy applications).  Most of you will likely never have heard of Easel, but it was reasonably popular at the time.  Now, this company had a brilliant individual who had been with it since its inception and had acted as “chief software architect” ever since.  The flagship product was his brainchild.  We’ll call him Warren (because that’s his name).  He was the programmer’s programmer.  He got it.  He understood the tradeoffs and constraints, he understood the business, and he had built an application that was the company’s competitive advantage in the early days.  Over the long haul, I am convinced the company survived (and thrived) in the face of stiff competition primarily because the software was designed for ease of change as customer needs got more complicated and the market shifted.  The design wasn’t perfect, but it was much better than the competition’s.  Of course, the application was still written in COBOL, so there were limits to what they could do.  One of the issues was that Warren was so good with COBOL development that he could do almost anything within it.  I remember some great technical discussions about OOP vs. structured programming, and he did some really exceptional refactoring of the core COBOL code that was “inspired” by OOP.  But the fact remains that for the kinds of things we now needed to do, COBOL just wasn’t a good choice.  The complexity of developing rich GUI applications almost mandated OOP (along with an MVC design pattern and other things).  Trying to replicate OOP inside COBOL (which the COBOL vendors were in the process of doing) just never really got there.  

One thing I quickly learned about Easel (it wasn’t a hard language to learn) was that what it provided in short-term productivity, it lacked in long-term viability.  After working in object-oriented languages at school, it also seemed archaic and limiting.  My favorite development language at school had been Turbo Pascal from Borland.  So I did some research and tried to figure out how other smart companies were solving this problem.  I discovered that C++ was a widely used, general purpose programming language for desktop applications.  It had OOP, large third-party support and an entire ecosystem around it – in stark contrast to Easel, which was constrained on just about all fronts.  So I pushed for us to abandon Easel and use C++ instead (I hadn’t written a line of C++ code at the time).  To prove my point, I took a laptop with me on vacation and decided to learn the language and build a new front-end product in my off-time.  The learning curve was a little steep, but not that steep.  

At the end of about six weeks, I had a semi-working application.  I had built it all in my spare time, and I demoed it to management.  They loved it.  They productized it.  They started selling it to customers.  That’s how C++ got introduced into the company.  Easel (both the company and the product) is now pretty much dead.  There’s nothing at the website, and I’d be very, very surprised if it’s still around (a quick Google search doesn’t turn up anything useful).

Lesson 2:  Closed languages owned by a single company without the resources to build out an ecosystem can and do fail eventually.  Sometimes partially.  Sometimes completely.  (But open source is a different ball game.)

Fast forward about nine months.  I ultimately decided to leave the company and go off on my own to kick off my first startup.  Not because I didn’t like the company (I did) or because I wasn’t learning enough (I was); it just “felt right”.  I’ll spare you the details of that decision, since it’s not the focus of this article.  I picked C++ as the primary language for my new startup.  The reasons were obvious: I already knew it, it was an expressive enough language, and I liked the growing support within the community.  Lots of people were doing interesting things with C++, so I figured the language was not likely to die anytime soon (I was right).  C++ continued to serve us well; my startup built several successful commercial products (web and desktop), and we ultimately sold the company last year.  It was a happy ending.  At the time of the acquisition, our choice of C++ easily removed what could have been one point of discussion and contention.  I think it helped that we had used a mainstream language and that the acquirer already had programmers who used and understood it.

Lesson 3:  If you’re a startup looking to be acquired someday, your best case scenario (when it comes to technical platform choice) is not convincing the acquirer that you made a good language/platform choice – it’s never having to have the discussion in the first place.  Also, it is unwise to assume that the acquirer has the same passion for technology that your startup does, or the same willingness to “evolve” to your line of thinking.

In my most recent startup, I originally decided to use C++ (on Microsoft’s .Net platform).  I felt this would give me the most “flexibility” (as I could do just about anything I needed).  Within a few months, I switched over to C# because of all the merits of a “contemporary” language that had things like garbage collection, reflection and a clean component model that actually worked.  So far, I’ve been pretty happy with my choice.  But, am I missing something?  If I could do it all over again, should I have picked Python?  That’s what we’re going to get into next.

Current Concern #1:  When looking at the C# vs. Python question, I am trying to get a handle on whether my bias towards C# is driven by my familiarity (and success) with C++, or by the true merits (both technical and business) of the choice.  Further, if a given language/platform choice, even a less popular one, delivers bottom-line results to the company, isn’t that going to raise the value of the company much more than picking the “safe” choice?  I don’t know; we’ll take a look at this.

That’s it for this installment.  Next time, we’ll take a look at some of the arguments and counter-arguments on both sides.  I promise it will be more interesting than this article, which just established some context.

Article has 18 comments.

The Thin Client, Thick Client Cycle

Posted by Dharmesh Shah on Sun, May 28, 2006

One of the repeated cycles I have seen in my 15+ years in the software industry is that we constantly go through this “thin client / thick client” cycle.
In the 1980s, there was still a lot of software being developed for the mainframe.  These were basically “thin client” applications: most of the processing was done on the server, in a centralized computing model with “dumb terminals” acting as the primary interface.  These dumb terminals were indeed pretty dumb (not much processing power, and character-mode interfaces).  But there were certainly advantages to this centralized model.  The software could be updated on a single server, security was simpler, and viruses and other malware were not a big issue.  An important point to note is that these applications were mostly “stateless” (we called it pseudo-conversational in those days).  This let a user believe they were interacting with their application directly when, in fact, 99% of the time the server was simply waiting for another request.  This model allowed a very large volume of users to be served, because idle users did not consume server resources (CPU, memory, etc.) when they were not active.
Then came the “thick client” wave, in the form of client-server.  An important thing to note here is that in the client-server model, we did not simply transfer all of the power back to the desktop – but we did transfer a lot of it.  The reason for this shift to thick-client applications was simple: there was a lot of horsepower in the PC, and it could be leveraged to create better (and more usable) applications.  It seemed a waste to let all that horsepower sit idle and use the PC as just another dumb terminal.  So, with this shift to client-server, we saw a rethinking of how applications were designed, built and deployed.  The server in most of these applications was a database server that did nothing more than act as a persistence layer to store and manage data.  Along with the shift came new tools and technologies to make it easier to build applications for the new paradigm (one that comes to mind is PowerBuilder).
In the late 1990s, we saw another shift to “thin client”.  This time the client was a much smarter “dumb terminal”: the web browser.  The trend was fuelled by a number of things:
1.  It was painful to manage desktop applications across hundreds or thousands of desktops
2.  There were classes of applications where the server horsepower and data storage required exceeded the power of most PCs
3.  Internet standards made it relatively easy to build applications that would work across a variety of hardware platforms and operating systems
So, everyone started creating web applications.  Interestingly, and not surprisingly, these web applications too were “stateless”, so a web application could serve tens of thousands of users using the same “pseudo-conversational” model made popular in the mainframe days.  Instead of CICS or IMS-DC, we now had HTML.  This made the user interface better than the mainframe terminals (we now had color, UI widgets, etc.), but it was still a huge step backwards compared to all the progress made in the user interface arena in the client-server days.  But this new thin-client model solved a lot of problems with thick-client apps.  You could leverage the resources of huge servers, do things you simply couldn’t do on your desktop, and get a nice, consistent set of behaviors from thousands of applications that were just a browser-click away.
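The pseudo-conversational idea above can be sketched in a few lines. This is a toy model, not real CICS or web-server code; the request shape and field names are invented for illustration. The key property is that the handler is a pure function of the request, so the server holds no per-user state between requests and idle users cost nothing:

```python
def handle_request(request: dict) -> dict:
    """Stateless handler: all 'conversation state' (here, a page cursor)
    travels inside the request itself, never in server memory."""
    page = int(request.get("page", 0))
    return {"status": 200, "body": f"items for page {page}", "next_page": page + 1}

# Two requests from the "same user" -- the server remembers nothing between
# them; the client carries the cursor forward.
r1 = handle_request({"page": 0})
r2 = handle_request({"page": r1["next_page"]})
print(r2["body"])  # items for page 1
```

Because any server can answer any request, this model scales to large user volumes exactly as described for both the mainframe and the stateless web eras.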
My thesis now is that we are due for another cycle.  Why?  For the same reason we had the prior cycles: because there are still problems with the current model.  User interfaces for true “thin client” applications basically suck.  Yes, I know about AJAX and Flex and Laszlo and ActiveX and Java applets and any number of other band-aids that could individually make the user experience much nicer – but the problem is much, much deeper than that.  The problem goes back to the platform (or lack thereof).  What drives these technology cycles as much as user experience is the developer experience.  Reproducing the user experience of even a relatively trivial desktop application on the web today is hard.  Very hard.  Unnecessarily hard.  And the end result, though orders of magnitude better than a pure thin-client application in terms of user experience, still seems a little fragile to me.  Sure, folks like Google can spend the money to make apps like Google Maps feel stable and polished.  But for mere mortals (like me), this is non-trivial.  Technologies like AJAX are non-standard (there are over a dozen ways to do AJAX, each with its own APIs and approach).  Technologies like Flex and Laszlo are too proprietary (even though Laszlo is now open source and supports AJAX).  
So, my theory is that we will see another repeat of history (or at least elements of the historical pattern).  We’ll keep the best of what we have and bring back the best of what we gave up:
  • Nobody wants to go back to the days of updating desktop applications manually on millions of desktops, so the new “rich client” (or smart client or whatever) applications will be self-updating over the Internet.
  • We’ll be able to reuse the component metaphor for UI much like we did back in the old client-server days (sometimes, you simply need a really powerful grid).  We’ll see new UI widgets and vendors that provide these new widgets so a large pool of developers can create really cool apps. In fact, we’re already seeing this movement on platforms like ASP.NET.
  • Next-generation clients will use both a combination of “local storage” and server-side storage (at the option of the user). 
  • Applications will now use the Internet for both data and services

Much of what I’m saying is not particularly controversial (or at least isn’t intended to be); it just seems to make sense.  Many of you will argue that technologies like AJAX will allow us to stay with the current browser-based model and not revert to a thick client.  Though that’s certainly possible, I think it will depend on how standards evolve and how easy it is for the average developer to build non-trivial applications.  From a startup’s perspective, I’d advise picking platforms and languages that are likely to cross over well if and when the shift back to a thick-client model happens.  
Note:  This article was originally written and posted to my personal site in October, 2005 but has been edited slightly and posted here.

Article has 23 comments.

Selecting A Platform: Part 4

Posted by on Sat, Nov 05, 2005

This is part four in the series on selecting a technology platform.

Startups generally have more flexibility in picking a platform since many of the factors that would normally drive the decision (existing talent within the organization, existing products that need to be integrated with, existing code that has to be reused, etc.) are not as significant.  Startups may also use their choice of platform as a differentiator (though this is dangerous and should only be pursued if you know what you’re doing).  

A case in point is Ruby, or more specifically, Ruby On Rails (RoR).  I’ve played around with Ruby a bit.  It’s a great language and has a certain expressive elegance that is quite appealing.  The runtime engine is also available for a variety of platforms.  In this regard, Ruby’s a lot like Java.  But in one very important respect, it’s very different: it’s new and unproven.  Though many groups are using Ruby to create very cool applications (particularly some of the “Web 2.0” companies), the language and the framework have not been around long enough to really know where things are headed.  Though customers are unlikely to care what language an application is written in (as long as it meets a need and is reasonably usable), picking a niche language/framework/platform like this can have major consequences.  Startups that choose this kind of path are assuming two things:

  1. That the platform will continue to grow in popularity and be supported over time (by supported, I mean that updates will be available, new people will be learning the language, books will be written, online forums will be active, etc.)
  2. That the opinions of future partners, acquirers and others who do care about platform choice are not going to matter.

This last point is the one that worries me the most.  I can’t tell you how many times I’ve come across a promising startup with “cool” technology that I thought could have been a great addition to an existing product, but the overhead of trying to integrate it was simply too high.  Examples I’ve encountered include Tcl, Python, Perl, PHP, etc.  These are all great languages (well, maybe except Tcl), and yes, all of them have ways to be integrated using web services, but my response is that such integration is not deep enough and has consequences of its own.  There is nothing like taking a bunch of C++ (or Java) code and reusing it at the “build” level (instead of trying to do some type of arms-length integration).
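To make the distinction concrete, here is a minimal Python sketch (the function and data are hypothetical, not from any product mentioned above) contrasting build-level reuse, where the component is an ordinary in-process call, with arms-length integration, where every request must cross a serialization boundary:

```python
import json

# Hypothetical component we'd like to reuse in a larger product.
def price_quote(sku: str, qty: int) -> float:
    unit_prices = {"widget": 2.50, "gadget": 7.00}
    return unit_prices[sku] * qty

# Build-level reuse: a direct call in one address space.
# Shared types, no marshalling, errors surface as ordinary exceptions.
total = price_quote("widget", 4)

# Arms-length integration: the same request must be serialized and
# deserialized (simulated here with JSON), which adds latency,
# versioning headaches, and failure modes that in-process reuse avoids.
request = json.dumps({"sku": "widget", "qty": 4})
payload = json.loads(request)
remote_total = price_quote(payload["sku"], payload["qty"])

assert total == remote_total == 10.0
```

In a real web-services setup the JSON would travel over a network and the two sides would be separate processes (possibly in different languages), which is exactly the overhead the paragraph above is pointing at.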

So, my advice is:  unless you really know what you’re doing (and have some immensely compelling reason), try to stick with “mainstream” platforms.  I can almost guarantee that you’ll be glad you did (5-7 years later, when your chosen platform is still alive and well).  This is one of those cases where there truly is safety in numbers.  If you’ve got some product that simply must be written in a niche language/platform, then by all means, give it a shot.  But, for most of us, mainstream platforms are generally the wiser choice.


Selecting A Platform: Part 3

Posted on Sat, Nov 05, 2005

This is the third part in a series of articles related to platform selection for software startups.

In this article, we’ll look at how target customers and users might affect the choice of platform.  As always, there are no easy answers, but I find it helpful to look at some broad generalities that might sway the decision towards one platform (or the other).

Let’s assume we can divide customers into three broad categories:  large enterprises, small/medium enterprises, and consumers.

Today, it seems that many large enterprises (particularly financial services institutions) have a leaning towards J2EE (WebSphere/WebLogic).  Without going too deeply into the rationale for this leaning, suffice it to say that this is a fine choice, and these types of customers have a lot at stake when picking a platform.  It’s less a matter of individual products (and how they exploit a platform) and more a matter of how their “portfolio” of products can be made to work together and how they can leverage resources (hardware, people, infrastructure).  Large enterprises are concerned with TCO (total cost of ownership) and will generally lean towards the platform they believe will minimize it.  In many shops, this means J2EE – the platform is already proven, they have internal resources that can support it, and they often have large relationships with application server providers (IBM, BEA, Oracle, etc.) that they can leverage.

Small/medium enterprises have a different set of challenges (and needs).  Often, these customers value simplicity (and short-term convenience) over any kind of perceived long-term benefit.  These customers also seem to have a larger volume of client/server and desktop applications (as they have not made the investment in switching everything over to the web).  As a result, Microsoft and its related technologies (Windows and .Net) seem prevalent here.  Many of the applications these customers buy also need to “integrate” with existing applications they own (like Microsoft Office).  As such, there seems to be a leaning towards .Net (and even classic Win32) for these shops.  They need something that the “IT guy” (possibly the nephew of the CEO) can install on a weekend.

On the consumer front, things get a little trickier, and a lot depends on the delivery mechanism (hosted software vs. “packaged” software).  If you are planning on delivering a hosted application (some type of web application), then the platform choice is not impacted as much by the customer – other variables will more strongly influence the decision.  If you are delivering a desktop application, the choice comes down to picking which operating system you want to support.  If it’s Windows (which in a large majority of cases it will be), then Microsoft .Net is the clear path.  If it’s exclusively Apple’s Mac (which has a smaller, but more loyal, set of users), then you would use any of the great Mac development tools.  If you’re looking to support both Windows and Mac OS, you should look at frameworks/libraries with elegant cross-platform capabilities (like Qt).  Be careful here.  If you build a cross-OS product using emulation or virtual machines (like Java), then though in theory you should be satisfying both user bases, you will often serve neither.  The Windows fanatics will sense there’s something “not quite right” about the interface, and the Mac fanatics will find that it’s not really a Mac application (and their dogs will bark at the screen, indicating there’s something wrong).

Note:  If you are building an application that will be sold to large enterprise customers, but the users of the application will be departmental folks running the application on the desktop, and a rich-client is called for, the decision looks more like the consumer platform choice.

In the next installment, we’ll look at how the stage of the company might influence the platform decision.


