When I analyzed the Extended Euclidean Algorithm, one thing led to another, and I ended up writing a little intro to generating functions. You can even jump to that part and ignore the rest, if you want! Other readers may instead choose to skip that section altogether, as it's self-contained.
What you wish for is more akin to coding like this:
declaring a function whose name is "max", whose arguments are "a" (of type number) and "b" (of type number), and which returns a number:
statement: if a is greater than b, the function returns a
statement: the function returns b
But programmers don't bat an eye at {}[](),.!&^| (and I just realized I used the term "function", which outsiders might wish was replaced by simpler terminology!)

// This is more readable if you're "in the know",
// even if it looks like a jumbled mess to outsiders
fn max(a: num, b: num): num => a > b ? a : b
Math uses terms of art like "group", "field", "modulo", and "multiplicative inverse", and notation like "∑", because they are short and communicate very specific (and common) things, many of which are implicit and we probably wouldn't even notice. In other words: we're not the target audience.
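For instance (my own illustration, not anything from the article): the single symbol ∑ is the same idea a programmer would spell out as a loop or a fold, just far more compact:

# What a mathematician writes as one symbol, ∑_{i=1}^{n} i², a programmer spells out:
def sum_of_squares(n):
    total = 0
    for i in range(1, n + 1):  # the bounds written under and over the ∑
        total += i * i         # the expression to the right of the ∑
    return total

print(sum_of_squares(4))  # 1 + 4 + 9 + 16 == 30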
Note that this is not only a matter of conciseness. See Ken Iverson's (of APL/J fame) "Notation as a Tool of Thought": https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.p...
I don't have anything against introducing new words. If your concept can be adequately described by existing language, that seems like a good way to allow people to learn and talk about it. As a person who has studied philosophy, the Greek alphabet is also no big hurdle to me. But it is to others. Try googling some weird sign you found in a formula. First, you don't know what it is called or how to type it; second, any sign might have been used in 100 different formulae, so even if you know how to search for it (there are applications people use to identify mathematical signs), good luck finding any meaningful answer.
I know for mathematicians these signs are arbitrary and they would say you could just use emojis as well. But then it turns out mathematicians ascribe meaning to which alphabet they are using and whether it is upper- or lowercase. Except sometimes they will break that convention for what appears to be mostly historical reasons.
I know mathematicians will get used to this just fine, but the mathematical notation system has incredibly bad UX, and the ideals embedded within it are more about density and opacity (only the genius mathematician knows what is going on) than about rigorous precision and understanding.
When I studied philosophy there were philosophers like Hegel who had to expand the German language to express their new thoughts. And there were philosophers who shall remain unnamed that would use nearly unparseable dense and complex language to express trivial thoughts. The latter always felt like an attempt to paper over their own deficiencies with the convoluted language they had learned to express themselves in.
Mathematicians can also have a degree of the latter at times. If your notation is more complex than the problem it describes, your notation sucks and you waste collective human potential by using it.
In contrast, EdDSA (which is based on Schnorr signatures) does, by construction: the public key is included in one of the hashes, which binds the signature to a particular public key.
I haven't investigated whether cryptocurrency's use of Schnorr satisfies this property or not. (Indeed, I do not care about cryptocurrency at all.) So whether it's satisfied is left as an exercise to the reader :3
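If it helps to see where that binding comes from, here's a rough sketch (hash choice and names are mine, not a spec-accurate implementation): in EdDSA the challenge hash covers the public key, whereas textbook Schnorr hashes only the nonce point and the message.

import hashlib

def schnorr_challenge(R, msg):
    # textbook Schnorr: the challenge hash covers only the nonce point R
    # and the message, not the key the signature is supposed to belong to
    return hashlib.sha512(R + msg).digest()

def eddsa_challenge(R, pubkey, msg):
    # EdDSA-style ("key-prefixed" Schnorr): the public key is hashed in too,
    # which binds the signature to that particular key
    return hashlib.sha512(R + pubkey + msg).digest()

# R, pubkey and msg are byte strings (encoded curve point, encoded key, message)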
https://www.youtube.com/live/2IpZWSWUIVE?si=-LRRbU2mJgL9LiNP...
The ed25519 issues are absolutely insane. Anywhere I can read more about that?
and talks about fields and groups
Paraphrasing 'Group' from the article to see if I've understood it:
A set of elements G, and some operation ⊕, where
(g1 ⊕ g2) is also in G. // "Type-safety"
Some g0 exists such that (gn ⊕ g0) == (g0 ⊕ gn) == gn // "Zero"
For every g, there's some inverse gi such that (g ⊕ gi) == (gi ⊕ g) == g0 // "Cancelling-out"
a ⊕ (b ⊕ c) == (a ⊕ b) ⊕ c // "Associative"
If (a ⊕ b) == (b ⊕ a) then the group is also "abelian/commutative".

I'm mentioning this, as other people in this thread are discussing "explaining symbols you use", and you're using a non-standard symbol for +. I can easily imagine a circle around + making + a different operation, and wonder if it is so?
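For concreteness (my own example, not from the article): take G = {0, 1, 2, 3, 4} and let ⊕ be addition mod 5. A brute-force check confirms each of the properties paraphrased above:

def op(a, b):          # our ⊕: addition mod 5
    return (a + b) % 5

G = range(5)

# "Type-safety": the result is always back in G
assert all(op(a, b) in G for a in G for b in G)

# "Zero": 0 is the identity element
assert all(op(g, 0) == op(0, g) == g for g in G)

# "Cancelling-out": every g has an inverse (here it's (5 - g) % 5)
assert all(any(op(g, h) == op(h, g) == 0 for h in G) for g in G)

# "Associative"
assert all(op(a, op(b, c)) == op(op(a, b), c) for a in G for b in G for c in G)

# ...and it happens to be abelian too
assert all(op(a, b) == op(b, a) for a in G for b in G)

print("addition mod 5 is a (commutative) group")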
Aspirin I've bought in the past has a + on it, and its trademark is a + within a circle. That's why I've latched on what a "common person" might view the symbol as:
https://www.brand.aspirin.com/sites/g/files/vrxlpx46831/file...
Interestingly, I have taken university-level math courses, though decades ago, and have never run into that symbol. I see it here:
It acts as a normal +, mostly. When you're dealing with modulo math, the "normal" plus becomes a bit weird, as there are rules attached to a number expressed as "(a + b) mod c", so mathematicians often use symbols like ⊕ to mean something like "+, but different". The second link you posted does the same: it acts sort of like normal addition, conceptually, except it's not done on actual numbers but on group elements.
In definitions like these, you may as well use a peace symbol or a picture of a frog; "some operation ⊕" means "there is some operation we write down like this, and it does this and that".
Another place you may find ⊕ is when it's used to represent XOR; (a + b) mod 2 is a bitwise XOR when operating on single bits (again, it means "normal addition except with weird rules", namely the mod 2 that makes you throw out anything larger than the last bit).
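A quick way to convince yourself of that last point (my own check, using Python's ^ for bitwise XOR):

# On single bits, "addition with weird rules" (mod 2) and XOR agree:
for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == a ^ b

# On whole integers the two differ, because XOR is mod-2 addition applied
# to each bit position independently, with no carries:
print(5 + 3)  # 8
print(5 ^ 3)  # 6  (0b101 XOR 0b011 == 0b110)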
I specifically didn't use an already-existing symbol because then you wouldn't know if I'm talking about that symbol, or any symbol in general.
Integer multiplication is commutative, e.g.
"3 times 9 is 9 times 3"
(3 + 9) == (9 + 3) == 27
and it has an identity element, e.g. "3 times 1 is 3"
(3 + 0) == 3

edit: What I mean is that, as a consequence, the symbols used (here + for the operation and 0 for its identity) are not really important.
We devs (take back that "just" :) ) deal with much harder stuff when we build complex APIs, so the problem must be at the syntactic level. To us devs, math may look like an antipattern, with all the short names and operator overloading.
But that's unavoidable, unfortunately. It's normal to spend hours or more on a single concept until it clicks. I'd say don't give up, but I understand one's time is valuable, and the return might not be high enough to justify the cost.
Since my exposition is constructive in nature, the proofs and other remarks are an integral part of the article, not digressions.
Not that this text was particularly bad in that regard, but I wish more math people had a heightened awareness of the fact that for many the hard part is not understanding the concept (e.g. the Fourier transform), but the foreign-looking signs mathematicians have decided to use to write it down.
That is as if someone explains the way to the next train station to you in a foreign language. The hard part isn't understanding the way; it is understanding the noises that are supposed to make up the description.
And as a programmer who from time to time has to translate maths into discrete programs (or the other way around), the hard part was always parsing the notation, and when I figured it out I was usually like: "Ohh, this is just a simple algorithm doing that."
So if you want to explain a math concept to programmers you should choose one of two routes:
(A) Stay with your notation, explain at length every character that isn't visible on a regular keyboard, and gently lead the reader into being able to read the notation, or
(B) Let go of the notation and first explain what it does and how, e.g. for our FFT example: FFT slices your list of values into frequency buckets, figures out how much of each frequency is present, and returns those strengths as numbers. And then you can work backwards from that understanding towards your notation, explaining which sign relates to which part of the concept (e.g. to the number of buckets).
I would prefer the latter, since it explains both the concept and gives the mathematician a chance to explain how and why math notation can be useful on top, e.g. to figure out certain properties of the method that may even have practical implications.
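To make route (B) concrete with the FFT example, here is a naive DFT sketch (my own code, O(n²) rather than an actual fast transform) that does literally what the plain-language description above says: one bucket per frequency, each holding the strength of that frequency.

import cmath, math

def dft(samples):
    """Naive discrete Fourier transform: for each frequency bucket k,
    measure how much of that frequency is present in the samples."""
    n = len(samples)
    buckets = []
    for k in range(n):                     # one bucket per frequency
        acc = 0j
        for t, x in enumerate(samples):    # correlate the signal with a
            acc += x * cmath.exp(-2j * cmath.pi * k * t / n)  # wave of frequency k
        buckets.append(abs(acc))           # strength of frequency k
    return buckets

# A cosine that completes 2 cycles over the 8 samples: bucket 2
# (and its mirror image, bucket 6) light up, the rest stay near zero.
signal = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
print([round(b, 3) for b in dft(signal)])  # ~[0, 0, 4.0, 0, 0, 0, 4.0, 0]

From there, the mathematician's notation can be mapped back onto the code piece by piece: the ∑ is the inner loop, the exponential is the "wave of frequency k", and so on.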