When I was in grad school I was working on building operating systems to run applications 10x faster on commodity hardware. I left partway through to co-found a startup that took some of that technology and built Windows Media video servers that were about 10x faster than what was possible running on Windows on the same hardware.
Our sales pitch was that we'd save you from having to buy 10 servers, along with their associated power and cooling, Ethernet and storage switch ports, Windows licenses, etc. Video over the internet was growing rapidly and seemed like the next big thing, so we were talking to service providers who expected to buy serious numbers of servers.
Unfortunately, Windows and Windows Media Server improved faster than we could. We started off 10x better, and after a couple of years we were only 5x better. At 10x we had enough of an advantage to overcome the inertia of getting something new into service providers. At 5x we didn't.
The other problem we faced was that video consumption didn't grow as fast as Moore's law grew computing power. When we started our company, everyone seemed to agree that just racking-and-stacking more servers to handle the increased video load wouldn't scale for long. Unfortunately, each year service providers turned out to need fewer and fewer new servers to handle their incremental bandwidth.
Many supercomputing startups in the 80s and 90s fell into a similar trap. Some specialized-hardware businesses have been able to compete and win against commodity hardware -- graphics chip vendors like Nvidia come to mind. Rendering a frame of a video game is still too hard to do on general-purpose CPUs and probably will remain so for the near future.
Commodity hardware and software can justify huge R&D budgets because everyone uses them. If your business is predicated on a performance advantage over commodity hardware, make sure that advantage will still be there in five years.