Your post seemed unfinished, but I guess you are.
>Not true. Most good crypto libraries (OpenSSL, GPG, libsodium) are made by professionals who spend hundreds of hours making sure they've got it right. It's not only a waste of time to do that, it's also dangerous for a novice to try.
Anyone can implement a cryptographic procedure correctly. It's important to use a variety of implementations, if only so one vulnerability in one implementation doesn't touch billions of people.
>I wouldn't, if it weren't *the entire purpose of text encoding*
The purpose of text encoding is to encode characters.
The Unicode standard can't fit on one page, and the tables a program needs to handle it take similarly large amounts of storage. A Unicode manipulation program is inherently large.
As an example, try writing a program that displays every Unicode character and performs case conversion and whatnot, without any libraries.
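A taste of why that's hard, sketched in Python (whose runtime already ships the needed tables): even plain case conversion can't be a simple one-to-one lookup over code points, because Unicode's case mappings are one-to-many.

```python
# Case conversion is not a 1:1 table over code points.

# German sharp s uppercases to TWO characters:
assert "ß".upper() == "SS"

# Latin capital I with dot above (U+0130) lowercases to "i" plus a
# combining dot above, so the result is two code points long:
assert len("İ".lower()) == 2
```

A program doing this "without any libraries" would have to carry Unicode's special-casing tables itself, which is exactly the bulk being complained about.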
>Alright, then *don't use them*. You're going to die on this hill for no fuarrrking reason besides that you don't like big numbers.
That's not a good argument. If something exists in a standard, it will inevitably become a problem at some point for someone and require understanding.
Look at how web software is regularly attacked through Unicode characters that aren't properly escaped or normalized.
>And for incomplete reads? For corrupted files?
Is it ever desirable for a file to be read improperly? No, the system shouldn't allow it.
>How about for in-memory strings? Should you have to store a big-ass hunk of metadata for every string in memory, to make sure it works correctly if it has multiple encodings in it?
This is very specific to the type of program being written. I've merely proposed one strategy. There's no reason every program would use the same one.
>C already has the problem that dealing with unicode requires its own set of libraries, because the stdlib doesn't support it by default. Now imagine if there were a hundred different encodings, all of which must be understood by a general-purpose text processing program. Do you realize how much complexity that adds to a program?
Any text processing program, such as Emacs, must already support the many different encodings and whatnot.
It's already a mess.
>My reasons are all *very* strong. I mean, if we wanted to boil things down further, you just think that unicode is bad because it's old. X86 is bad because it's old, the Von Neumann architecture is bad because it's old. That's all bullsoykaf, it's just an appeal to novelty.
I don't like Unicode because it's complex and makes many decisions I disagree with. I don't like the idea of a character set having multiple implementations. The x86 processor is similarly full of cruft, even containing vulnerabilities baked into the silicon.
Your arguments amount to an appeal to authority. You would rather everyone else make a decision for you. It's one thing for a programming language to have a standard, if it's a good one, but it's entirely different to have a standard I don't like shoved down my throat.
So, follow standards you like and vehemently oppose those you don't.
>any reason why it will cause issues, or are you just so sure of it that it's unimaginable that it could be otherwise?
As I wrote earlier, if something exists in a standard, it will inevitably become a problem at some point for someone and require understanding.
Why are these Unicode code points these stupid, strange symbols? Oh, it was a fad in the 2010s to have a symbol for feces and other inanities.
>>19559
>So you save 25% of space by introducing huge data complexity? Good job.
It depends on the system; on constrained ones, the savings are worthwhile.
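The size difference is easy to measure. Whether the 25% figure from the quoted post applies depends on the data; for ASCII-heavy text, the gap between variable- and fixed-width encodings is larger still (a quick Python comparison):

```python
text = "plain ASCII log line, 4 bytes per char in UTF-32"

utf8 = text.encode("utf-8")
utf16 = text.encode("utf-16-le")  # -le suffix: no byte-order mark
utf32 = text.encode("utf-32-le")

# For pure ASCII, UTF-8 uses one byte per character;
# UTF-16 doubles that, UTF-32 quadruples it.
assert len(utf8) == len(text)
assert len(utf16) == 2 * len(text)
assert len(utf32) == 4 * len(text)
```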