🙃 compression algorithms hate this one simple trick!!
This is a joke, right? This feels like a very dumb solution. I don’t know much about UTF-8 encoding, but it sounds like Roman characters can be encoded in fewer bytes than most or all others because of a shorthand that assumes Roman text. In that case, why not take that functionality and let a UTF-8 block specify which language makes up most of the text, so that you get that savings almost every time? I don’t see why one would want it to be random.
It’s a joke.
UTF-16 already exists, which doesn’t favor Roman characters as much, but UTF-8 is more popular because it is backward compatible with legacy ASCII.
UTF-32 also exists, which uses exactly the same length (four bytes) for every character.
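For a concrete feel for the difference, here’s a minimal Python sketch (the characters are arbitrary examples; the `-le` codec names just pin the byte order so Python adds no BOM):

```python
# Byte counts for one Latin and one Cyrillic letter
# in each of the three Unicode encodings.
for ch in ("a", "я"):
    sizes = {enc: len(ch.encode(enc))
             for enc in ("utf-8", "utf-16-le", "utf-32-le")}
    print(ch, sizes)

# a {'utf-8': 1, 'utf-16-le': 2, 'utf-32-le': 4}
# я {'utf-8': 2, 'utf-16-le': 2, 'utf-32-le': 4}
```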
But the thing that equalizes languages is compression.
Yes, text written in Cyrillic with UTF-8 will take more space than text in a Roman script, easily double. However, that extra space is highly redundant and is easily squeezed out by a compression algorithm like gzip.
So after compression, the two texts end up similarly sized, and much smaller than their UTF-16 or UTF-32 equivalents.
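A rough sketch of that effect with Python’s gzip module (the repeated pangrams are a crude stand-in for real prose, so treat the exact numbers as illustrative only):

```python
import gzip

# Two roughly equal-length sample texts, repeated so the
# compressor has a realistic amount of input to work with.
english = ("the quick brown fox jumps over the lazy dog. " * 500).encode("utf-8")
russian = ("съешь же ещё этих мягких французских булок. " * 500).encode("utf-8")

for label, raw in (("English", english), ("Russian", russian)):
    packed = gzip.compress(raw)
    print(f"{label}: {len(raw)} raw bytes -> {len(packed)} compressed")

# The Russian text is nearly twice as large before compression
# (most of its characters take 2 bytes in UTF-8), but the two
# compressed outputs land in the same ballpark.
```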
Besides, most text on the average computer is either in some configuration file (which tends to use Latin script) or in some SGML-derived format with a bunch of Latin characters in it. For network transmission, most things use HTML, XML or JSON with English-language property names even in countries that don’t speak English (see Yandex’s and Baidu’s APIs, for example).
No one is moving large amounts of .txt files around.
You’ve never worked in finance, then. All our systems at work do nothing but move large amounts of .txt files around.
That said, many of our clients still don’t support UTF-8, so it’s all ASCII and non-Latin alphabets are screwed. They can’t even handle characters 128–255, so even stuff like £ is unsupported.
It’ll be added when they find some free time!
You see, adding pictures of women with white canes facing right, limes, and pregnant men is a very important and time-consuming job! Standardizing the encoding for some language people actually use is just not as important!
Emoji are defined as part of Unicode, so they can be encoded alongside any other text.
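A quick Python sketch of that (the string is just a made-up example):

```python
s = "deploy finished ✅ 🎉"
data = s.encode("utf-8")

print(len(s), "code points ->", len(data), "bytes")
assert data.decode("utf-8") == s  # round-trips like any other text

# ✅ is U+2705 (3 bytes in UTF-8) and 🎉 is U+1F389 (4 bytes):
# emoji are ordinary code points, handled by the same machinery
# as letters from any script.
```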
I was not expecting the drama around it. Is the issue truly a different orthography, or is it more like a different font/ligature issue?
EDIT: forgot to add the article I found on it: https://restofworld.org/2021/tulu-unicode-script/
I immediately thought of Leeroy Jenkins in the last sentence.
You’re right, and someone else might be a part of the lucky 10,000 today.
I can’t read “what a time to be alive” without hearing Two Minute Papers in my head