Saturday, January 30, 2010

How to figure out what those VC terms mean for your equity

My friend Chris Dixon wrote about what everyone should know about their equity grants. Following up on that, I wrote a simple Python program that helps you simulate what your stock would be worth in the event someone buys your company. The reason it's not as simple as purchase_price * your_percent_ownership is that VC deals often include things like preferences and anti-dilution provisions. These are basically mechanisms whereby the VCs may get more than their percent ownership.

Even though I'm currently working on my third VC backed company I found I still had to spend a lot of time looking up the definitions of terms and thinking through how they affected various outcomes. This was a really good exercise for me and I highly recommend it for anyone else raising VC funding. As a side note, I find that writing a program to simulate something is the best way to see if I really understand something.

The code is available, along with some extremely basic documentation. I know that I haven't gotten all the scenarios exactly right, so contributions or improvements are definitely welcome.

Preferred Stock Background

The key point that everyone in a VC backed company should understand is the difference between the stock VCs buy (called preferred stock) and the stock you and I get (common stock). Preferred stock is called that because it gets preferential treatment over common stock. As far as dividing up the proceeds from the sale of a company, that preferential treatment usually falls into two broad categories: preferences and anti-dilution provisions.

If there are multiple rounds of financing, each new round of preferred stock sold is called a "series," with the most recent series being "senior" to the older ones. Debt is usually the most senior claim in a company's capital structure. Higher-seniority stockholders get paid before more junior stockholders, and common stock is the most junior of all.
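The seniority ordering above is easy to see in code. Here's a minimal sketch of paying out sale proceeds down the stack; the claim names and dollar amounts are made up for illustration:

```python
def pay_by_seniority(proceeds, claims):
    """claims: list of (name, amount) pairs, ordered most senior first.
    Each claim is paid in full before anything more junior sees a dollar."""
    payouts = {}
    for name, amount in claims:
        paid = min(proceeds, amount)   # pay as much of this claim as possible
        payouts[name] = paid
        proceeds -= paid
    payouts["common"] = proceeds       # common gets whatever is left over
    return payouts

# Hypothetical $10M sale with $2M of debt and $9M of preferences:
print(pay_by_seniority(10_000_000, [
    ("debt", 2_000_000),
    ("series B preference", 5_000_000),
    ("series A preference", 4_000_000),
]))
```

Note that in this illustrative scenario the common stockholders get nothing at all, even though the company sold for $10M.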

Preferred stock almost always has the right to convert to common stock. Conversion causes the investor to lose any special privileges that the preferred stock holds. The preferred shares may not necessarily convert 1-to-1 into common stock depending on anti-dilution provisions. Typically, investor A will have the right to convert each preferred share into more than one common share if subsequent investor B paid less per share than A did.

There are several standard ways to calculate how A's conversion ratio from preferred to common should be adjusted when B invests at a lower price. The two most common forms are called broad-based weighted average and full ratchet. The former adjusts A's conversion price to a weighted average of A's and B's prices, while the latter drops A's price all the way down to B's price.
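Here's a sketch of both adjustments. The broad-based formula shown is the standard one (NCP = OCP × (A + C) / (A + D)), but the share counts and prices are invented for illustration:

```python
def full_ratchet(old_price, new_price):
    """A's conversion price simply drops to B's (lower) price."""
    return min(old_price, new_price)

def broad_based_weighted_average(old_price, new_price,
                                 shares_outstanding, new_shares):
    """NCP = OCP * (A + C) / (A + D), where
    A = fully diluted shares before the new round,
    C = shares the new money would have bought at the old price,
    D = shares actually issued in the new round."""
    new_money = new_price * new_shares
    c = new_money / old_price
    return old_price * (shares_outstanding + c) / (shares_outstanding + new_shares)

# Suppose A paid $2.00/share, then B buys 2M shares at $1.00
# into a company with 10M fully diluted shares:
ncp = broad_based_weighted_average(2.00, 1.00, 10_000_000, 2_000_000)
print(f"adjusted price: ${ncp:.4f}, conversion ratio: {2.00 / ncp:.4f}")
```

The conversion ratio is just the original price divided by the adjusted price, so in this example A converts into a bit more than one common share per preferred share, versus two common shares per preferred share under a full ratchet.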

When people talk about percent ownership, they're really talking about "as converted to common" percent ownership. This is the hypothetical total of common shares you'd get by adding together all the preferred shares as if they had converted into common, all the stock options and warrants issued, plus any other common stock granted.
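The "as converted" calculation is just a sum and a division. Here's a toy cap table (the holders and share counts are illustrative, not from any real deal):

```python
# Hypothetical fully diluted cap table, all counts "as converted to common".
cap_table = {
    "series A preferred (as converted)": 3_000_000,
    "series B preferred (as converted)": 2_000_000,
    "common": 4_000_000,
    "option pool": 1_000_000,
}
fully_diluted = sum(cap_table.values())
for holder, shares in cap_table.items():
    print(f"{holder}: {shares / fully_diluted:.1%}")
```

Keep in mind these percentages shift whenever an anti-dilution adjustment changes a conversion ratio, which is exactly why the simulation is worth doing.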

Preferences give the preferred stockholder the right to get some multiple of their investment off the top, without regard to the percent ownership that stockholder has. Participating preferred gives the preferred holder the right to take their preference off the top of the deal AND then still get a cut of what's left based on their percent ownership. Non-participating preferred means the preferred stockholder can EITHER take their preference off the top OR convert to common and participate on a percent ownership basis only. Capped participating preferred limits how much a preferred holder can make through their preferences.
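Those three variants are easiest to compare side by side in code. This is a simplified sketch (single investor, 1x multiple, round numbers I made up), not the full simulator:

```python
def non_participating(invested, multiple, pct, sale_price):
    """Take the better of the preference or converting to common."""
    return max(multiple * invested, pct * sale_price)

def participating(invested, multiple, pct, sale_price, cap=None):
    """Preference off the top, then pct of what's left; an optional cap
    limits the total, but the holder can always convert to common instead."""
    pref = multiple * invested
    take = pref + pct * max(0, sale_price - pref)
    if cap is not None:
        take = min(take, cap * invested)
        take = max(take, pct * sale_price)  # conversion is always available
    return take

# A VC invests $5M for 25% with a 1x preference; the company sells for $40M:
print(non_participating(5e6, 1, 0.25, 40e6))   # better of $5M or $10M
print(participating(5e6, 1, 0.25, 40e6))       # $5M plus 25% of the rest
print(participating(5e6, 1, 0.25, 40e6, cap=2))  # capped at 2x invested
```

Run the numbers at a few different sale prices and you can see why the common stockholders care a great deal about which variant is in the term sheet, especially in low and mid-range outcomes.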

Wednesday, January 27, 2010

Short milestones mean small surprises

I like trying to build things through a series of very short milestones. Mostly this is because the longer the milestone, the longer it takes before you can realize that you're going to miss it. It's hard to tell in the first few weeks that a month-long milestone isn't going to work out. It's usually in the last couple of weeks that you realize things are taking longer than planned, the technology doesn't work, or the product UI is clumsy.

I personally like one day milestones. It just feels long enough to get something measurable done but short enough to quickly realize if things are going off the tracks. It's also motivating for everyone to see daily progress when working toward big long term goals.

Thursday, January 14, 2010

Thoughts on securing against the attacks on Google

The recent attempt to hack into Google mirrors other successful attacks, like the Twitter attack and the one against a few years ago. Roughly, the attacker either guesses the victim's password or sends email to the victim that either phishes passwords or installs spyware which can then steal passwords or other information.

I find it incredible that here we are in 2010 and we still mostly use passwords to authenticate ourselves to websites, file servers and whatnot. We've been doing much better with ATMs since the 1970s by requiring a card AND a password (as well as having a camera that in theory can be used to assess blame after thefts). There are those little dongles from RSA that solve this, but they're an incredible pain to use and most things don't work out of the box with them. I don't know that this is a problem someone is going to make money solving, but it sure seems like an important problem to solve.

I touched on the second half of the solution in a prior blog post about how I'd like to see a more Apple App Store-like model of application control brought to consumer computers. There's no reason for my laptop to be running unknown code that got injected through some web page I just visited.

Sunday, January 10, 2010

Use it or lose it

When I worry about IT things that could go disastrously wrong at my company, I first usually worry about losing our database of users and all their personal information, followed by the prospect of just losing our database to hardware failure, corruption, etc. Joel on Software has a good post about how people need to worry less about backups and more about restores. I think this is great advice.

The primary way we make sure db restores work is to constantly do restores and use the data. Every one of our developers and QA engineers has a development environment that is a full copy of our entire website (website, batch jobs, db etc). Every night a production backup is restored into their personal development database. If there's a problem with the backup / restore or the code, we know immediately. Better to find out something is broken before you really need that restore to work!

We store every night's backup going back a week, then store every week's backups going back a month, and so on. This helps protect us against subtle database corruption issues that could ruin the last few nights' backups. It also gives us a good way to go back in time and try to figure out when data problems first happened.
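A retention scheme like this can be sketched in a few lines. The window sizes below (7 dailies, 4 weeklies, 6 monthlies) are illustrative, not our exact policy:

```python
from datetime import date, timedelta

def backups_to_keep(today, dailies=7, weeklies=4, monthlies=6):
    """Return the set of backup dates a daily/weekly/monthly
    retention policy would keep as of `today`."""
    keep = set()
    for i in range(dailies):                       # every night, last week
        keep.add(today - timedelta(days=i))
    for i in range(weeklies):                      # one per week, last month
        d = today - timedelta(weeks=i)
        keep.add(d - timedelta(days=d.weekday()))  # that week's Monday
    for i in range(monthlies):                     # one per month beyond that
        y, m = divmod(today.year * 12 + today.month - 1 - i, 12)
        keep.add(date(y, m + 1, 1))                # first of the month
    return keep

print(sorted(backups_to_keep(date(2010, 1, 10))))
```

Anything not in the returned set is eligible for deletion, so total storage stays roughly constant while the history still reaches back months.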

This advice goes for other stuff beyond backups and restores. I don't trust fail-over servers to work when the primary server goes down unless I've tried failing over live traffic every week. Things rot. Anything you're not doing constantly probably doesn't work.

Wednesday, January 6, 2010

Betting against commodity systems and losing

When I was in grad school I was working on building operating systems to run applications 10x faster on commodity hardware. I left part way through to co-found a startup that took some of that technology and built Windows Media video servers that were about 10x faster than what was possible running on Windows on the same hardware.

Our sales pitch was that we'd save you from having to buy 10 servers, their associated power/cooling, Ethernet and storage switch ports, Windows licenses, etc. Video over the internet was growing rapidly and seemed like the next big thing, and so we were talking about service providers needing to buy serious numbers of servers.

Unfortunately, Windows and Windows Media Server improved faster than we did. We started off at 10x better, and after a couple of years we were 5x better. At 10x we had enough of an advantage to overcome the inertia of getting something new into service providers. At 5x we didn't.

The other problem we faced was that video consumption didn't grow as fast as Moore's law grew computing power. When we started our company everyone seemed to agree that just racking-and-stacking more servers to handle the increased video load wouldn't scale for long. Unfortunately, every year turned out to require fewer and fewer new servers to handle the incremental bandwidth service providers had to deliver.
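The erosion from 10x to 5x is simple compounding. Here's a back-of-the-envelope sketch; the 20%/year improvement on our side and the doubling-every-two-years commodity curve are assumed rates for illustration, not our actual numbers:

```python
# How a fixed performance edge erodes when the commodity baseline
# compounds faster than you do.
edge = 10.0                   # start out 10x faster than commodity
commodity_growth = 2 ** 0.5   # ~41%/year (doubling every two years)
our_growth = 1.20             # 20%/year improvement on our side
for year in range(1, 5):
    edge *= our_growth / commodity_growth
    print(f"year {year}: {edge:.1f}x advantage")
```

Under those assumed rates the edge falls to roughly 5x in about four years, which matches what we lived through.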

Many supercomputing startups in the 80s and 90s fell into a similar trap. Other industries have been able to compete and win -- graphics chip vendors like nVidia come to mind. Rendering a frame of a video game is still too hard to do on general purpose CPUs and probably will remain so for the near future.

Commodity hardware and software can justify huge R&D given that everyone uses them. If your business is predicated on performance gains relative to commodity hardware, make sure your advantage will still be there in five years.