Then teach us. Advocate for us. Help us improve and understand.
A very large part of the problem is that the people who are knowledgeable are often the ones that bought into the whole lone wolf coder shtick.
Most junior people I work with are interested and want to learn, but between high demands, no time set aside for it, and senior devs who focus only on their own problems, it’s very hard to know how to learn and improve.
We can and need to solve this, but it requires that we work together and actually sit down to bridge the knowledge gap.
A very large part of the problem is that the people who are knowledgeable are often the ones that bought into the whole lone wolf coder shtick.
I’d add that a large part of the problem is that we have people complaining about perceived problems without being able to present any kind of solution.
I think a part of it is how we look for information in the first place. If you search/ask “How do I do (task) in (environment)?”, you’re going to find out about various libraries/frameworks/whatever that abstract everything away for you. But if you instead look for information on “How do I do (task)?”, you’ll probably get more generalized information that you can take and use to write your own stuff from scratch. Look for help tied to your specific environment/language only when you have a specific implementation issue, like how to access a file or get user input.
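To make that concrete: here’s a minimal sketch (assuming Node in an ESM context; the file name is made up) showing that both of those tasks need nothing beyond the standard library:

```typescript
// Reading a file and getting user input in plain Node/TypeScript,
// no third-party libraries involved.
import { readFileSync } from "node:fs";
import { createInterface } from "node:readline/promises";

// "settings.txt" is a hypothetical file name for this example.
const config = readFileSync("settings.txt", "utf8");

const rl = createInterface({ input: process.stdin, output: process.stdout });
const name = await rl.question("Your name? "); // top-level await: ESM only
rl.close();

console.log(`Hello, ${name}. Loaded ${config.length} characters of settings.`);
```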
We also need a willingness to learn how things actually work. I see quite a few folks who seem to be so worried that they’ll never be able to understand some task that they unwittingly spend almost as much or even more time and effort learning all the ins and outs of someone else’s codebase as a way to avoid what they see as the scarier unknown.
Fortunately, I’ve seen an increase in the last year or two of people deliberately giving answers or writing tutorials that are “no-/low-library”, for people who want to know what’s actually going on in their programs.
I would never say to avoid all libraries or frameworks, because many of them are well-written (small, modular, stable) and can save us a lot of boilerplate coding. But there are at least as many libraries which suffer from “kitchen-sinkism”, where the authors want so much for their library to become the pre-eminent choice that it becomes a bloated tangle, trying to be all things to all people. This can be compounded by less-experienced coders including multiple huge libraries in one program, using only a fraction of each library’s features without realizing that there’s almost complete overlap. The cherry on top is when the end developer uses one of these libraries to do just one or two small tasks that could’ve been done in less than a dozen lines of standard code, if only someone had told them how, instead of sending them off to install yet another library.
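To make the “dozen lines of standard code” point concrete, take debounce, a classic one-function task that routinely pulls in an entire utility library. A minimal sketch in plain TypeScript, no imports at all:

```typescript
// A hand-rolled debounce: delay calling fn until waitMs have passed
// since the last invocation.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer); // no-op if timer is undefined
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: fire a search only after the user stops typing for 300 ms.
const onInput = debounce((query: string) => console.log("search:", query), 300);
onInput("a"); onInput("ab"); onInput("abc"); // logs once: "search: abc"
```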
Yes, let’s all go back to coding in assembly!
― Sarcastic comment by arrogant developer
I like this bit because it really is a common answer whenever someone complains about how maddening/inefficient some tooling is nowadays. Like, why the fuck is this [OS EXCLUSIVE] application made with Electron and running its own Node server? It’s what “they know”, and fuck it if there are alternatives that could do a much better job.
About half a year ago I stumbled upon some front-end web developers who did not know that you can create a website without a deployment tool and that you don’t need any JavaScript at all, even when the website takes payment.
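For the skeptics: here’s a minimal sketch of a checkout flow with zero client-side JavaScript, just a plain HTML form posting to the server, which redirects to the payment provider. The endpoint and provider URL are hypothetical:

```typescript
// A payment page that ships no client-side JavaScript at all.
// The browser submits an ordinary form; the server answers with a redirect.
import { createServer } from "node:http";

const page = `<!doctype html>
<form method="POST" action="/create-checkout">
  <label>Quantity <input type="number" name="qty" value="1" min="1"></label>
  <button type="submit">Pay now</button>
</form>`;

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/create-checkout") {
    // Server-side: create a session with your payment provider here,
    // then send the browser to it with a plain 303 redirect.
    res.writeHead(303, { Location: "https://pay.example.com/session/abc123" });
    res.end();
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(page);
  }
}).listen(3000);
```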
Said front-end devs are probably too young to have perused the 2004-and-earlier internet. JavaScript already existed, but it was more of an afterthought. When a site wanted to be flashy and visual, it used Flash, but I don’t think any halfway decent site was crazy enough to put the payment flow inside a Flash page.
I like this bit because it really is a common answer whenever someone complains about how maddening/inefficient some tooling is nowadays.
I don’t think this is a valid take. What we see in these vague complaints about levels of abstraction is actually an entirely different problem: people complaining that they don’t understand things, and that they feel the cognitive load of specific aspects is too much for them to handle.
If the existing layers of abstraction were actually a problem and they solved nothing, and if removing them would solve everything, it would be trivial to remove them and replace them with the simpler solutions these critics idealize.
Except that never happens. Why is that, exactly?
I think that it’s because a) the abstraction does solve a problem, and b) the idealized solutions aren’t actually all that simple.
But I still agree with the article because I also think that a) the problem solved by the added abstraction isn’t practical, but emotional, and b) the idealized solutions aren’t all that complex, either.
It seems to me that many devs reach immediately for a tool or library, rather than looking into how to create their own solution, due more to fear of the unknown than a real drive for efficiency. And while learning the actual nuts and bolts of the task is rarely going to be the faster or easier option, it’s frequently (IMO) not going to be much slower or more difficult than learning how to integrate someone else’s solution. But at the end of it you’ll have learned a lot more than you would’ve by using a tool or library.
Another problem in the commercial world is accountability to management.
Many decades ago there used to be a saying in tech: “No one ever got fired for buying IBM.” What that meant was that even if IBM’s solution was completely beaten by something offered by one of their competitors, you personally may still be better off overall going with IBM. The reason being: if you went with the competitor and everything worked out, the less tech-savvy managers were just as likely to pat you on the back as to assert that the IBM solution would’ve been even better. If the competitor’s solution didn’t meet expectations, you’d be hauled over the coals for going with some cowboy outfit instead of good old reliable IBM. Conversely, if you went with IBM and everything worked, everyone would be happy. But if you chose IBM and the project failed, it’d be, “Well, it’s not your fault. Who could’ve predicted that IBM wouldn’t come through?”
In the modern era, replace “IBM” with the current tool-of-the-month, and your manager will be demanding to know why you’re wasting time reinventing the wheel on the company’s dime.
I think that it’s because a) the abstraction does solve a problem, and b) the idealized solutions aren’t actually all that simple.
I’d go a step further and state quite bluntly that these critics do not even understand the problem that the abstraction solves, and that their belief is based on a poor and limited understanding of the problem space.
Everyone can come up with simpler alternatives if they throw most requirements out of the window. That’s basically the ages-old problem with major rewrites: they’re expected to fail once the unknowns start to emerge.
But I still agree with the article because I also think that a) the problem solved by the added abstraction isn’t practical, but emotional, and b) the idealized solutions aren’t all that complex, either.
Hard disagree.
There is not a single technical argument refuting these abstraction layers; only ignorance of the problems they solve. It’s easy to come up with simpler solutions if you leave out whole sets of hard requirements.
The idealized solution never leaves the conceptual stage because it is never thought all the way through and the key requirements are never gathered. That’s when the problems solved by the abstraction layers rear their heads, forcing these critics to face the fact that their proposed solution inconveniently converges on the real-world solution they are complaining about, and that they are reinventing the wheel poorly.
I’m of two minds. On the one hand, there is far too much reliance on black-box libraries to do trivial things.
On the other, this complaint is decades old. Back in the late ’80s there was a software developer for the Apple IIGS called FTA, which stood for Free Tools Association. They claimed that the tools in the OS were too slow and that you should code to the raw hardware.
I’m a graying developer and nothing makes me more get-off-my-lawn than too many levels of abstraction. :)
A big percentage of so-called experts today only know how to configure some kind of hype tool, but they understand nothing about how things work at a deeper level. This is a real challenge and a big problem for the future.
“Don’t worry about it. Large Language Models are going to fix it.” - Some CEO, probably.
Edit: This is the bit that so few people outside the profession understand: I’m not being paid to write it; I’m being paid to try to understand it enough to change it safely.
Most of the time I don’t understand it quite well enough*, and chaos ensues. I would worry more about that, except that it turns out my paycheck clears either way, most of the time.
* Disclaimer: I’m a genius, but I wasn’t there when their special snowflake software was written.