Are We Really Bad at Estimating?

The topic of estimating can be a contentious one. Teams may be reluctant to estimate for fear of creating something stakeholders will use against them. Clients and stakeholders will always want to know what will be delivered and when, but how do you predict this if the work is new or unfamiliar? And if you work in an environment where estimates are usually wrong, should you skip estimating altogether and get on with building something?

I believe a lot of the frustration and many of the issues surrounding estimates stem from the belief that, as humans, we’re just bad at estimating.

But I don’t think that’s true.

We’re Actually Pretty Good at Estimating (Some Things)

Don’t get me wrong—we’re definitely bad at estimating some things. But others we’re quite adept at.

For example, later today, I plan to write another blog post. I estimate it will take two hours to complete the first draft of that. I’m pretty sure it won’t take exactly two hours, but it will probably take between one-and-a-half and three hours. To plan my afternoon, that’s a good estimate.

When I was teaching in-person Certified ScrumMaster® courses, I would set up the room the day before. I would put a lot of supplies out for each person in the class. I had to hang some posters on the wall. And so on. From experience, I’d estimate that a typical room set-up would take 45 minutes. I’ve set up for so many Certified ScrumMaster courses that I feel fairly confident in that estimate.

There are probably myriad similar tasks that you find yourself estimating (successfully) most days—whether it’s fixing dinner, driving to a friend’s house, or going grocery shopping.

We’re pretty good at estimating these things because we have a certain level of familiarity with them. We’re not as good at estimating things we aren’t familiar with.

Data supports my claim that we’re not really that bad at estimating. In a review of the existing research on estimates, Magne Jørgensen, a University of Oslo professor and Chief Scientist at the Simula Research Laboratory, found most estimates to be within 20 to 30% of actuals. And on software projects, he did not find an overall tendency for estimates to be too low:

The large number of time prediction failures throughout history may give the impression that our time prediction ability is very poor and that failures are much more common than the few successes that come to mind. This is, we think, an unfair evaluation. The human ability to predict time usage is generally highly impressive. It has enabled us to succeed with a variety of important goals, from controlling complex construction work to coordinating family parties. There is no doubt that the human capacity for time prediction is amazingly good and extremely useful. Unfortunately, it sometimes fails us. –Magne Jørgensen

But if we’re not too bad at estimating, why is there a common perception that we are?

One Answer Lies in the Projects We Never Start

Imagine a boss who describes a new product to a team. The boss wants an estimate before approving or rejecting work on the project. Let’s suppose the project, if played out, would actually take 1,000 hours. Of course, we don’t know that yet, since the team is just now being asked to provide an estimate.

For this example, let’s imagine the team estimates the project will take 500 hours.

The boss is happy with this and approves the project.

But…in the end, it takes 1,000 hours of work to complete. It comes in late, and everyone involved is left with a vivid memory of how late it was.

Let us now imagine another scenario playing out in a parallel universe. The boss approaches the team for an estimate of the same project. The team estimates it will take 1,500 hours.

(Remember, you and I know this project is actually going to take 1,000 hours, but the team doesn’t know that yet.)

So what happens?

Does the team deliver early and celebrate?

No. Because when the boss hears that the project will take 1,500 hours, she decides not to do it. This project never sees the light of day and no one ever knows that the team overestimated.
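This selection effect can be made concrete with a toy simulation (my own sketch, not from any study cited here). Assume every project truly takes 1,000 hours, estimates carry a symmetric error of up to ±50%, and the boss only approves projects estimated under a hypothetical 1,200-hour budget. Even though the estimation errors are perfectly balanced, most of the projects that actually get started run late:

```python
import random

random.seed(1)

BUDGET = 1200          # hypothetical approval threshold
N = 10_000             # number of simulated projects

approved = 0
approved_late = 0

for _ in range(N):
    actual = 1000                            # every project truly takes 1,000 hours
    # symmetric error: underestimates and overestimates are equally likely
    estimate = actual * random.uniform(0.5, 1.5)
    if estimate <= BUDGET:                   # high estimates are rejected and never remembered
        approved += 1
        if actual > estimate:                # approved project runs over its estimate
            approved_late += 1

print(f"projects approved: {approved / N:.0%}")
print(f"of approved projects, ran late: {approved_late / approved:.0%}")
```

Roughly 70% of projects get approved, and about 70% of those run late—not because estimators skew low, but because the overestimated projects were filtered out before anyone could see the team beat its estimate.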

On top of this selection effect, several studies suggest we are overconfident when forecasting.

Can We Get Better? The Data Suggests It’s Possible (with Feedback)

When people are presented with evidence of their overconfidence, there’s data to show that individual estimators do improve.

In one study of software development work, programmers gave correct estimates 64% of the time on the first ten items they estimated.

When provided with that feedback, they improved to 70% correct on the second set of ten items. And then to 81% on the third set, after additional feedback on their accuracy.

Getting estimators to realize that excessive confidence in their own estimating abilities is misplaced helps encourage a collaborative approach to estimating.

If someone is absolutely convinced of their own estimates’ infallibility, that person won’t engage constructively in debates about the right estimate to provide.

Want More Help With Story Points? (Coming Soon)

Very soon, I’ll be releasing some free video training to help solve some common story points problems. If you want to be the first to find out when it’s available, register now to join the waitlist.

What Do You Think?

Do you think the perception that we’re bad at estimating is unfair? Have you and your team been able to improve estimates as you work together? How did you do it? Let me know in the comments.

Rob Broadhead

Rob is a founder of, and frequent contributor to, Develpreneur, including the Building Better Developers podcast. He is also a longtime student of technology as a developer, designer, and manager of software solutions. Rob is a founder and principal of RB Consulting and has managed to author a book about his family experiences. In his free time, he stays busy raising five children (although a few have grown into adults). When he has a chance to breathe, he is on the ice playing hockey to relax.
