“All the little bits”
Algebraic notation breaks just about every rule programmers are taught about keeping their code human readable. For example:
- Variable and function names should be descriptive
- Don’t cram everything into one line
- Break up large statements
- Consistency is key
- Don’t be fancy for fancy’s sake, don’t over-optimize (this is for learning, remember?)
- Add in-line comments for lines that aren’t easily grasped
- Be explicit where possible (it’s a convention to omit the multiplication operator when multiplying variables because variables are only one letter anyway…)
And then we force kids to cram the whole stdlib (or rather its local bastardization) into their heads, or at best give them intentionally bad (uncommented) documentation during exams, while wondering why so many just don’t seem to get it, or even resent it.
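To make the contrast concrete, here is a small sketch (the function and variable names are invented purely for this comparison): the quadratic formula written the way the notation encourages, next to the way a code reviewer would ask for it.

```python
import math

# "Algebraic" style: one line, single letters, no comments.
def f(a, b, c):
    return ((-b + math.sqrt(b * b - 4 * a * c)) / (2 * a),
            (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a))

# "Readable" style: descriptive names, broken-up steps, a docstring.
def quadratic_roots(quadratic_coeff, linear_coeff, constant_term):
    """Return both real roots of quadratic_coeff*x^2 + linear_coeff*x + constant_term = 0."""
    discriminant = linear_coeff ** 2 - 4 * quadratic_coeff * constant_term
    sqrt_discriminant = math.sqrt(discriminant)  # assumes discriminant >= 0
    denominator = 2 * quadratic_coeff
    return ((-linear_coeff + sqrt_discriminant) / denominator,
            (-linear_coeff - sqrt_discriminant) / denominator)

assert f(1, -3, 2) == quadratic_roots(1, -3, 2) == (2.0, 1.0)
```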
I feel like this isn’t quite fair to math. Most of these can apply to school math (when taught in a very bad way), but not even always there, imo.
It’s true that math notation generally doesn’t give things very descriptive names, but most of the time, depending on where you are learning and what you are learning, the symbols for variables and functions do hint at what the object is supposed to be.
E.g.: when working in linear algebra, capital letters (especially A, B, C, D, as well as M) are generally matrices; v, w, u are usually vectors; and V, W are vector spaces. Along with that come conventions that are largely independent of the specific math you are doing, like n, m, k usually being integers, i or j being indices, f and g being functions, and x, y, z being unknowns.
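Carried into code, those conventions are already half the documentation. Here is a minimal numpy sketch, assuming the reader knows the conventions above (the concrete values are arbitrary):

```python
import numpy as np

# A is a matrix, v is a vector, x is the unknown, n is an integer
# dimension, i is an index, f is a function -- exactly the conventions
# described above, and that's all the naming the snippet needs.
n = 3
A = 2.0 * np.eye(n)            # matrix
v = np.arange(1.0, n + 1.0)    # vector [1., 2., 3.]
x = np.linalg.solve(A, v)      # unknown in A @ x = v

f = lambda w: A @ w            # f conventionally names a function
for i in range(n):             # i conventionally names an index
    assert np.isclose(f(x)[i], v[i])
```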
Also, math statements should be given comments too. But usually this function is served by the text around the equations or the commentary given alongside them, so it’s not a direct part of the symbolic writing itself (unlike comments, which are a direct part of source code). And when a long symbolic expression isn’t broken up or given much commentary, that is usually an implicit sign that it should be easy/quick for the reader to understand/derive based on previously learned material.
Finally, there’s also the problem of having to manipulate the symbols. In code you just write it and then the computer has to deal with it (and it doesn’t care how verbose you made a variable name). But in math you are generally expected to work with your symbolic expressions and manipulate them, and it’s very cumbersome to keep having to rewrite multi-letter names every time you manipulate an expression. Additionally, math is still generally worked out on paper first and transferred into a digital/printed format second, so you can’t just copy and paste or rely on autocompletion to move long variable names around, like you might when coding.
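A toy LaTeX sketch of that rewriting cost (the multi-letter names are invented for the comparison): the same one-step expansion done once with conventional single letters and once with descriptive names.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Same expansion twice: single-letter symbols vs. descriptive names.
% The second version has to be copied out in full at every manipulation step.
\begin{align*}
  (a + b)^2 &= a^2 + 2ab + b^2 \\[1ex]
  (\text{first\_term} + \text{second\_term})^2
    &= \text{first\_term}^2
     + 2\,\text{first\_term} \cdot \text{second\_term}
     + \text{second\_term}^2
\end{align*}
\end{document}
```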
You can’t expect to learn the science of abstraction by making it concrete. Examples are no more than examples, and if one field requires abstract theory, it is indeed mathematics.
Well, a lot of these points are really more about readability than they are about reducing the abstraction. Smaller, labeled chunks of information are easier to process than larger ones with no anatomy.
But even so, abstractions, especially in programming, are often made because a pattern was noticed between concrete examples. Teaching the abstraction first or even alone does inherently skip a lot of context for why it was made in the first place. Sometimes, you need to know what problem a function is solving before you can truly know the function.
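A hypothetical Python sketch of that direction of travel: the concrete functions come first, and the shared shape only gets a name once the repetition is noticed (all names here are made up for illustration).

```python
import math

# Two concrete functions written first, for two unrelated problems ...
def total_price_with_tax(prices, tax_rate):
    return sum(price * (1 + tax_rate) for price in prices)

def total_distance_in_km(distances_in_miles):
    return sum(miles * 1.60934 for miles in distances_in_miles)

# ... and the abstraction extracted afterwards, once the shared pattern
# ("transform each item, then sum") has been noticed in the concrete cases.
def transformed_total(items, transform):
    return sum(transform(item) for item in items)

assert math.isclose(transformed_total([10.0, 20.0], lambda p: p * 1.25),
                    total_price_with_tax([10.0, 20.0], 0.25))
assert math.isclose(transformed_total([1.0, 2.0], lambda d: d * 1.60934),
                    total_distance_in_km([1.0, 2.0]))
```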