This podcast season is a great excuse to highlight new anti-patterns. In this episode, we look at one we call statistical bigotry. It is an emotionally charged term, and it names a common challenge across many areas of problem-solving and debate. This anti-pattern appears when we give our data more weight than it deserves and misuse statistics along the way.
Statistical Bigotry Defined
We regularly use data and facts to support our decisions. However, we cannot use data blindly. We need to ensure our decisions rest on overall reality rather than on a small subset of it. When we improperly treat a small subset as representative of the whole, we are committing statistical bigotry. The issue may arise from anecdotal data or any other situation where the information is given more weight than it deserves.
We see this in cultural discussions all the time. A common example is when someone says that everyone, or no one, they know does or believes X. That may be entirely true. However, it is also utterly irrelevant to the broader discussion.
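A minimal sketch in Python makes the "everyone I know" trap concrete. The population size, the 20% rate, and the circle of acquaintances are all invented for illustration; the point is only how far a convenient sample can drift from the whole.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 people, of whom only 20% believe X.
population = [random.random() < 0.20 for _ in range(10_000)]

# The "everyone I know" sample: nine like-minded acquaintances and one holdout.
my_circle = [True] * 9 + [False]

population_rate = sum(population) / len(population)
circle_rate = sum(my_circle) / len(my_circle)

print(f"Population rate: {population_rate:.0%}")  # roughly 20%
print(f"My-circle rate:  {circle_rate:.0%}")      # 90%
```

Deciding from the small, self-selected sample would misjudge reality by more than a factor of four, and nothing about the sample itself warns us of that.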
The Anti-Pattern In Action
Statistical bigotry is a focus problem when crafting solutions. We spend too much time or too many resources on something that appears essential but is not, so the solution ends up imbalanced in its approach. Unfortunately, the problem is not always immediately apparent. For example, we may think a particular feature needs to be highly tuned because of how often we expect it to be used. Then, when it goes out to real users, we find that the highly tuned functionality is rarely touched, while another feature we ignored is the one users actually need, as the numbers below illustrate.
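Here is a hypothetical comparison of where tuning effort went versus where real usage landed. The feature names and all figures are invented; any real telemetry would come from your own analytics.

```python
# Invented figures: hours of tuning effort vs. observed production usage.
tuning_hours = {"advanced_search": 120, "bulk_export": 5, "report_view": 10}
uses_per_week = {"advanced_search": 40, "bulk_export": 3500, "report_view": 900}

for feature, hours in tuning_hours.items():
    print(f"{feature:<15} {hours:>4}h of tuning, {uses_per_week[feature]:>5} uses/week")
```

The feature that received the most attention sees the least use, while the one we skipped is the one users lean on most heavily.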
This situation can also arise when we start to see real data in a system. Row counts and result sets in production differ from what we saw during development, and performance bottlenecks appear. The issue is not that we failed to tune our solution; we simply misplaced the emphasis.
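A short sketch shows how this bites. The function names and row counts are assumptions for the demo: a linear scan that looks instant against dev-sized data becomes a visible bottleneck once production row counts arrive, while a one-time index makes the same lookup cheap at any scale.

```python
import time

def orphan_orders_scan(order_ids, customer_ids):
    # O(n * m): each order scans the entire customer list.
    return [o for o in order_ids if o not in customer_ids]

def orphan_orders_indexed(order_ids, customer_ids):
    # O(n + m): index the customers once, then do constant-time lookups.
    known = set(customer_ids)
    return [o for o in order_ids if o not in known]

for rows in (100, 10_000):  # dev-sized data vs. a still-modest production table
    orders = list(range(rows))
    customers = list(range(0, rows, 2))
    for fn in (orphan_orders_scan, orphan_orders_indexed):
        start = time.perf_counter()
        fn(orders, customers)
        print(f"{rows:>6} rows, {fn.__name__}: {time.perf_counter() - start:.4f}s")
```

Both versions pass every test against the hundred-row dev dataset; only the production-sized run reveals which one we should have spent our tuning time on.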