I came across an excellent (and old) talk by Greg Wilson today, covering what we believe about software development. It’s a good talk about how little science we use when deciding and reviewing how we plan software, and it highlights how many bad or guesstimated numbers computer scientists and engineers throw around when talking about their profession. Watch the video.
The best developers are 28x more effective than the worst
Beyond the bad statistics behind it, this claim isn’t backed by good data. The studies were done long ago with small sample sets, and they really don’t apply to modern conditions. Common sense? Not especially. There is also the statistical problem of *ever* comparing the best to the worst: it’s a useless measure, since the ‘worst’ could be someone taught to code an hour ago. Comparing against the mean, median, or mode is probably a lot smarter.
SCRUM and Sprints keep software from being late
Another good point. As far as I’m concerned, working in a sprint system is what got MakerWare out the door on such a tight schedule: iterative design, culling features, and allowing for error and error correction during the process. But the plural of anecdote isn’t data; the plural of anecdote is rumor. Is there really smarter planning going on? Smarter throwing away of features? What is the actual process that makes SCRUM seem, or actually be, better than planning up front?
Until I watched the video, I hadn’t recognized how many software process decisions we as a culture make based on anecdotal suggestions or arguments over beers. If you have time, watch the talk, or add his blog to your RSS reader.