I was thinking about the following scenario the other day:
Give an experienced cook a dull knife. The blade slips and he cuts himself. Give an inexperienced cook a sharp knife. He doesn’t hold it right and he cuts himself.
It occurred to me that there’s no way to tell the difference between the two from the outside. All we see is a cook with a bloody hand, cursing the knife. In the same way, there’s no way to tell the difference between programmers using tools too sharp or too dull for their skills.
The scary part is this: often there’s no way to tell the difference from the inside either. As programmers we tend to be pretty smart people, or at least we think we are. But smart people suffer from several crippling fallacies, two being:
- because they are experts in one field, they think they are experts in others, and
- they underestimate the value of experience and the effort it takes to become really good at something.
Together these imply that when forced to use a new tool, we expect to be as productive as before a lot sooner than is realistic. We try to use it in ways that are unnatural and unsafe, get hurt, and then we blame the tool. We may even claim that the knife is dull, not knowing any better. Re-appropriating Paul Graham: “That Java project was bound to fail. How could it not? Java doesn’t even have X.”
We think knowing all sorts of things about programming languages means we don’t have to know how databases work. “All they do is push and pull bytes from the disk, right?” Or that knowing everything there is to know about pointers and recursion means we can ignore OOAD. (News flash! 99% of code written today doesn’t use recursion or pointers. Well, not directly, anyway.)
Next time, try holding the sharp edge downwards. There are no bad programming languages, only bad programmers.
Update: Commentary on Hacker News.