If you think is-number can be replaced with a one-liner, you don’t have the enterprise code mindset. What if the world gets more inclusive and MMXXIV, ½ and ⠼⠁ become recognized as numbers? 𒐍𓆾 were numbers in the past but what if people start assigning numeric value to other characters? Are 🖐🔟💯🆢🂵🀌🁅 numbers of the future???
/s
I’m not even entirely kidding: regex implementations are split on whether “٣” matches \d.
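To make that concrete, here’s a minimal Python check (illustrative only; which answer you get is entirely up to the regex engine, since e.g. JavaScript’s /\d/ matches only 0-9):

    import re

    s = "٣"  # ARABIC-INDIC DIGIT THREE (U+0663)

    # Python's re treats \d as "any Unicode decimal digit" by default...
    print(bool(re.fullmatch(r"\d", s)))            # True

    # ...and as plain [0-9] once the ASCII flag is set, which is what
    # some other engines (e.g. JavaScript's /\d/) do unconditionally.
    print(bool(re.fullmatch(r"\d", s, re.ASCII)))  # False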
You may argue that writing 2024 as “MMXXIV” and not “ⅯⅯⅩⅩⅣ” is a mistake, but while typists who’d use “2OlO” for “2010” (because they grew up using cost-reduced typewriters) are dying out, you’ll never get everyone to use the appropriate Unicode for Roman numerals.
Even if they did use Unicode, any codeset, glyph or language changes over time; ultimately they emerge out of communication, not the other way round.
If some culture decides they want to use the glyph “2” to mean the word “to”, they can and will, and no codeset is going to stop them. And if they get their message to their intended audience, it doesn’t matter that somebody else’s isnumber function gets it wrong.
A person, community, standard codeset or dictionary cannot deny the accuracy or content of an encrypted communication just because they can’t decipher it.
Put another way, a more robust isnumber() should maybe take a second argument specifying the codeset in use, and maybe whether written words - in some defined language - are also to be converted.
On the other hand “1/4/12” is not a fucking date.
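For the two-argument isnumber() idea above, here’s a rough Python sketch of what that could look like; the is_number name, the codeset values and the word-list handling are all hypothetical, not any existing library’s API:

    import re
    import unicodedata

    def is_number(text: str, codeset: str = "ascii",
                  words: dict[str, float] | None = None) -> bool:
        """Hypothetical 'more robust' is_number: the caller names the codeset
        (and, optionally, a word list for a specific language) instead of the
        function guessing what should count as a number."""
        text = text.strip()
        if words and text.lower() in words:   # e.g. {"eleven": 11} or {"elf": 11}
            return True
        if codeset == "ascii":                # strict [0-9] input validation
            return bool(re.fullmatch(r"[0-9]+(\.[0-9]+)?", text))
        if codeset == "unicode":              # anything Unicode assigns a numeric value
            return bool(text) and all(
                unicodedata.numeric(ch, None) is not None for ch in text
            )
        raise ValueError(f"unknown codeset: {codeset!r}")

    # The same string passes or fails depending on what the caller asked for:
    print(is_number("٣"))                        # False -- not ASCII 0-9
    print(is_number("٣", codeset="unicode"))     # True
    print(is_number("elf", words={"elf": 11}))   # True, if you opt in to German words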
Lisp code is already like this. That’s why I keep trying to explain it to programmers. Try reading the book SICP, published decades ago by MIT computer researchers.
All junior devs should read OC’s comment and really think about this.
The issue is whether is_number() is performing a semantic language test or checking whether the text input can be converted by the program to a number type.
The former case - the semantic language test - is useful for chat-based interactions, analysis of text (and ancient text - I love the cuneiform, btw) and similar. In this mode, some applications don’t even have to be able to convert the text into, e.g., binary (a ‘gazillion’ of something quantifies it, but only vaguely).
The latter case (validating input) is useful where the input is controlled and users are supposed to enter numbers using a limited part of a standard keyboard. Clay tablets and triangular sticks are strictly excluded from this interface.
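Python’s own built-ins already illustrate the gap between the two cases: “is this text numeric in some semantic sense?” versus “can the program actually convert it?”:

    # Semantic test (str.isnumeric) vs conversion test (float()) side by side.
    for s in ["2024", "٣", "½", "Ⅻ", "gazillion"]:
        try:
            float(s)             # float() accepts Unicode decimal digits like ٣...
            convertible = True
        except ValueError:
            convertible = False  # ...but not ½, Ⅻ or number words
        print(f"{s!r}: isnumeric={s.isnumeric()}, convertible={convertible}")

    # '2024':      isnumeric=True, convertible=True
    # '٣':         isnumeric=True, convertible=True
    # '½':         isnumeric=True, convertible=False
    # 'Ⅻ':         isnumeric=True, convertible=False
    # 'gazillion': isnumeric=False, convertible=False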
Another example might be is_address(). Which of these are addresses? ‘10 Downing Street, London’, ‘193.168.1.1’, ‘Gettysburg’, ‘Sir/Madam’.
To me this highlights that code is a lot less reusable between different projects/apps than it at first appears.
So the only valid digits are Arabic numerals, but Arabic-script numerals are not valid digits? If we want programming to be inclusive, doesn’t it make sense to also include Arabic-script numerals?
So the only valid digits are Arabic numerals, but Arabic-script numerals are not valid digits?
Some people writing Regex implementations have that opinion. I’ve refrained from saying mine.
If we want programming to be inclusive, doesn’t it make sense to also include Arabic-script numerals?
Maybe. IMO, number tests should be chosen/implemented based on the project’s requirements. If you want to include every Unicode character or string pattern anyone’s ever used to convey a numeric value, that would be a long and growing list. Arguably, it’s impossible: the word “elf” means a number if interpreted as German for “eleven” but not if interpreted as English for 🧝.
Yeah, but “elf” isn’t a digit. Digits are symbols abstracted from the language itself. Do 5 and V convey different meanings in the context of digits? And yeah, I can see why they would argue about the implementation, because inclusivity is important, especially when designing a language implementation. If you design it wrong, it will be very hard to extend in the future. But for an application-level implementation, go nuts.