>If the idea is that the right notation will make getting insights easier, that's a futile path to go down. What really helps is looking at objects and their relationships from multiple viewpoints. This is really what one does in both mathematics and physics.
Seeing the relationships between objects is partly why math has settled on a terse notation (the other reason being that you need to write stuff over and over). This helps up to a point, but mainly IF you are writing the same things again and again. If you are not exercising your memory in such a way, it is often easier to try to make sense of more verbose names. But at all times there is tension between convenience, visual space consumed, and memory consumption.
I also don't know what historically motivated the development of this system (the Indian system). Why did the Romans not think of it? What problems were the Indians solving? What was the evolution of ideas that led to the final system that still endures today?
I don't mean to underplay the importance of notation. But good notation is backed by a meaningfully different way of looking at things.
>The notation itself doesn't really make a difference. We could call X=1, M=2, C=3, V=4 and so on.
Technically, the positional representation is part of the notation as well as the symbols used. Symbols had to evolve to be more legible. For example, you don't want to mix up 1 and 7, or some other pairs that were once easily confused.
>Why did the Romans not think of it?
I don't know. I expect that not having a symbol for zero was part of it. Place value systems would be very cumbersome without that. I think that numbers have some religious significance to the Hindus, with their so-called Vedic math, but the West had Pythagoras. I'm sure that the West would have eventually figured it out, as they figured out many impressive things even without modern numerals.
>But good notation is backed by a meaningfully different way of looking at things.
That's just one aspect of good notation. Not every different way of looking at things is equally useful. Notation should facilitate or at least not get in the way of all the things we need to do the most. The actual symbols we use are important visually. A single letter might not be self-describing, but it is exactly the right kind of symbol to express long formulas and equations with a fairly small number of quantities. You can see more "objects" in front of you at once and can mechanically operate on them without silently reading their meaning. On the other hand, a single letter symbol in a large computer program can be confusing and also makes editing the code more complicated.
I was annoyed by this in some introductory math lectures where the prof just skipped explaining the general idea and motivation of some lemmata and instead just went through the proofs line by line.
It felt a bit like being asked to use vi, without knowing what the program does, let alone knowing the key combinations - and instead of a manual, all you have is the source code.
I agree wholeheartedly.
What I want to see is mathematicians employing the same rigor as journalists do with abbreviations: define (numerically) your notation or terminology the first time you use it, then feel free to use it as notation or jargon for the remainder of the paper.
They do.
The purpose of papers is to teach working mathematicians who are already deeply into the subject something novel. So of course only novel or uncommon notation is introduced in papers.
Systematic textbooks, on the other hand, nearly always introduce a lot of notation and background knowledge that is necessary for the respective audience. As every reader of such textbooks knows, this can easily be dozens or often even hundreds of pages (the (in)famous Introduction chapter).
They already do this. That is how we all learn notation. Not sure what you mean by numerically though, a lot of concepts cannot be defined numerically.
A little off topic perhaps, but out of curiosity - how many of us here have an interest in recreational mathematics? [https://en.wikipedia.org/wiki/Recreational_mathematics]
That assumes it’s the language that makes it hard to understand serious math problems. That’s partially true (and the reason why mathematicians keep inventing new language), but IMO the complexity of truly understanding large parts of mathematics is intrinsic, not dependent on terminology.
Yes, you can say “A monad is just a monoid in the category of endofunctors” in terms that more people know of, but it would take many pages, and that would make it hard to understand, too.
Players of magic the gathering will say a creature "has flying" by which they mean "it can only be blocked by other creatures with reach or flying".
Newcomers obviously need to learn this jargon, but once they do, communication is greatly facilitated by not having to spell out the definition.
Just as in games, the definitions in mathematics are ethereal and purely formal, and it would be a pain to spell them out on every occasion. It stems more from the needs of efficient communication than from gatekeeping.
You expect the players of the game to learn the rules before they play.
I'd say the ability to take complicated definitions and not have to go through a rigorous definition every time the ideas are referenced is, in a sense, a form of abstraction, and a necessary requirement for being able to do advanced math in the first place.
> You expect the players of the game to learn the rules before they play.
TFA is literally from a 'player' who has 'learned the rules' complaining that the papers remain indecipherable.
> You expect the players of the game to learn the rules before they play.
Actually, I expect to have to teach rules to new players before they play. We are different.
I'm not fully convinced the article claims that jargon, per se, is what needs to change, nor that the use of jargon causes gatekeeping. I read it more as being about the inherent challenges of presenting complicated ideas, with or without jargon, and the pursuit of better methods, which might themselves depend on more jargon in some cases (to abstract away and offload the cognitive cost of constantly spelling out definitions). Giving a good name to something is often a really powerful way to lower the cognitive cost of arguments employing the named concept. Theory-building is in large part the hunt for good names for things and the relationships between them.
You'd be hard pressed to find a single human endeavor that does not employ jargon in some fashion. Half the point of my example was to show that you cannot escape jargon and "gatekeeping" even in something as silly and fun as a card game.
It’s not gatekeeping. It’s just hard.
The sentence I called out, independent of the article's content: "You expect the players of the game to learn the rules before they play."
is you explicitly stating that your goal is gatekeeping.
"In order to enter this gate you must know what this symbol means."
"I am unfamiliar with that symbol."
"Well, I expect you to learn what it means before I allow you to enter this gate. Now go away."
He separates conceptual understanding from notational understanding— pointing out that the interface of using math has a major impact on utility and understanding. For instance, Roman numerals inhibit understanding and utilization of multiplication.
Better notational systems can be designed, he claims.
I understand that some degree of formalism is required to enable the sharing of knowledge amongst people across a variety of languages, but sometimes I'll read a white paper and think "wow, this could be written a LOT more simply".
Statistics is a major culprit of this.
I think you're confusing "I don't understand this" with "the man is keeping me down".
All fields develop specialized language and syntax because a) they handle specialized topics and words help communicate these specialized concepts in a concise and clear way, b) syntax is problem-specific for the same reason.
See for example tensor notation, or how some cultures have many specialized terms to refer to things like snow while communicating nuances.
> "wow, this could be written a LOT more simply"
That's fine. A big part of research is to digest findings. I mean, we still see things like novel proofs of the Pythagorean theorem. If you can express things more clearly, why don't you?
I'm surprised you could arrive at this conclusion. Formalisms, esoteric language, and syntax are hard for everyone. Why would people invest in them if their only use was gatekeeping? Especially when it's the same people who publish their articles in the open for everyone to read.
A more reasonable interpretation is that those fields use those things you don't like because they're actually useful to them and to their main audience, and that if you want to actually understand those concepts they talk about, that syntax will end up being useful to you too. And that a lack of syntax would not make things easier to understand, just less precise.
OK, challenge accepted: find a way to write one of the following papers much more simply:
Fabian Hebestreit, Peter Scholze; A note on higher almost ring theory
https://arxiv.org/abs/2409.01940
Peter Scholze; Berkovich Motives
https://arxiv.org/abs/2412.03382
---
What I want to tell you with these examples (these are, of course, papers which are far above my mathematical level) is: often what you read in math papers is insanely complicated; simplifying even one of such papers is often a huge academic achievement.
If you want to understand what is going on there, what is the most effective way to build a bridge from what you know, to what is written there?
If you are in a situation where the knowledge of these papers could actually greatly help, how do you become aware of it?
I think if AI could help solve these two issues, that would be really something.
But I don’t believe it to be used as gatekeeping at all. At worst, hazing (“it was difficult for me as newcomer so it should be difficult to newcomers after me”) or intellectual status (“look at this textbook I wrote that takes great intellectual effort to penetrate”). Neither of which should be lauded in modern times.
I’m not much of a mathematician, but I’ve read some new and old textbooks, and I get the impression there is a trend towards presenting the material in a more welcoming way, not necessarily to the detriment of rigor.
What, as opposed to using ambiguous language and getting absolutely nothing done?
The saying, "What one fool can do, another can," is a motto from Silvanus P. Thompson's book Calculus Made Easy. It suggests that a task someone without great intelligence can accomplish must be relatively simple, implying that anyone can learn to do it if they put in the effort. The phrase is often used to encourage someone, demystify a complex subject, and downplay the difficulty of a task.
Also, videos are great at making people think they understand something when they actually don't.
I hope mathematicians have a better reason than "it's tradition" for making the entire field completely opaque to anyone who hasn't studied math extensively.
This is so wrong it can only come from a place of inexperience and ignorance.
Mathematics is flush with inconsistent, abbreviated, and overloaded notation.
Show a child a matrix numerically and they can understand it, show them Ax+s=b, and watch the confusion.
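To see the contrast concretely (numbers made up for illustration, not from the comment), here is the same shape of equation written out numerically and labelled with the abstract names:

$$\underbrace{\begin{pmatrix} 2 & 0 \\ 1 & 3 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}}_{x} + \underbrace{\begin{pmatrix} 1 \\ -1 \end{pmatrix}}_{s} = \underbrace{\begin{pmatrix} 5 \\ 4 \end{pmatrix}}_{b}$$

The numeric version can be checked entry by entry; Ax+s=b says exactly the same thing once you know the convention that A is a matrix and x, s, b are column vectors.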
It's good to be able to think and talk in terms of abstractions that do not force viewing analogous situations in very different terms. This is much of what math is about.
Sorry, the notation is bit confusing. The 'A' here is a matrix.
Thanks for the laughs :D
> Show a child a matrix numerically and they can understand it, show them Ax+s=b, and watch the confusion.
Show an HN misunderstood genius the Riemann zeta function as Zeta() and they think they can figure out its zeros. Show it as a Greek letter and they'll lament how impossible it is to understand.
1. Can we reinvent notation and symbology? No superscripts or subscripts or greek letters and weird symbols? Just functions with input and output? Verifiable by type systems AND human readable
2. Also, make the symbology hyperlinked i.e. if it uses a theorem or axiom that's not on the paper - hyperlink to its proof and so on..
For example, for your point 1: we could probably start there, but once you get familiar with the notation you don't want to keep writing a huge list of parameters, so you would probably come up with a more abstract, higher-level data structure to pass as an input. And then the next generation would complain that the data structure is too abstract or takes too much effort to communicate to someone new to the field, because they did not live first-hand through the problem that made you come up with that solution.
And for your point 2: where do you draw the line with your hyperlinks? If you mention the real plane, do you reference the construction of the real numbers? And of dimension? If you reason by contradiction, do you reference the axioms of logic? If you say "let {x_n} be a converging sequence", do you reference convergence, the natural numbers, and sets? Or just convergence? It's not that simple, so we settled on a minmax solution, which is what everybody does now.
Having said this, there are a lot of articles and books that are not easy to understand. But that is probably more an issue of their being written by someone who is bad at communicating than of the notation.
A large part of math notation exists to compress the writing so that you can actually fit a full equation in your field of vision.
Also, something like what you want already exists; see e.g. Lean: https://lean-lang.org/doc/reference/latest/. It is used to write mathematics in a form in which proofs can be checked automatically. No one wants to use it for actually studying math or manually proving theorems, because it looks horrible compared to conventional mathematical notation (as long as you are used to the conventional notation).
(2) Higher-level proofs use so many ideas simultaneously that doing this would be tantamount to writing Lean code from scratch: painful.
> It's a domain reserved for a few high priests inducted into the craft and completely inaccessible to everyone else.
It's a domain reserved for people who want to learn it, and there's ton of resources to learn it. Expecting to understand it without learning it does not make any sense.
The theories are learnable; making sense of all the weird symbols is what's breaking my brain. I tried to get into set theory thrice now, not happening with all the math lingo, hieroglyphs and dry ass content. Learning can be incredibly fun if it is designed to be fun. Math is a dry and slow process. Make it fun, make it readable, and people will be able to learn it more easily.
No one memorizes the Greek alphabet. We just learn it as we go because it’s useful to have different types of letters to refer to different types of objects. That’s it.
> I tried to get into set theory thrice now, not happening with all the math lingo, hieroglyphs and dry ass content.
That sounds like you’re trying to learn a specific field without actually having any of the prerequisites to learn it. I don’t know what you’re specifically referring to when you say “set theory” as that’s an incredibly wide field, and depending on what you’re trying to learn it can be quite technical.
> Learning can be incredibly fun if it is designed to be fun. Math is a dry and slow process.
This sounds like someone complaining that getting to run a marathon is tiresome and hard. Yes, teaching mathematics can always be improved and nothing is perfect, but it will still be hard work.
Polysemy vs Homonymy vs Context Dependency will always be a problem.
There are lots of areas to improve, but one of the reasons learning math is hard is that in the elementary forms we pretend that there is a singular ubiquitous language, only to change it later.
That is why books that try to be rigorous tend to dedicate so much room at the start to definitions.
Abstract algebra is what finally helped it click for me, but it is rare for people to be exposed to it.
Compare something like
equals(integral(divide(exponentiate(negate(divide(square(var),2))),sqrt(multiply(2,constant_pi))),var,negate(infinity),infinity),1)
vs
$$\int_{-\infty}^{\infty}\frac{e^{-x^2/2}}{\sqrt{2\pi}}dx = 1$$
(imagine the actual generated mathematical formula here :-/ )
it is infinitely easier to grok what is going on using symbolic notation after a minimal amount of learning.
1. It's more human-readable. The superscripts and subscripts and weird symbols permit preattentive processing of formula structures, accelerating pattern recognition.
2. It's familiar. Novel math notations face the same problem as alternative English orthographies like Shavian (https://en.wikipedia.org/wiki/Shavian_alphabet) in that, however logical they may be, the audience they'd need to appeal to consists of people who have spent 50 years restructuring their brains into specialized machines to process the conventional notation. Aim t3mpted te rait qe r3st ev q1s c0m3nt 1n mai on alterned1v i6gl1c orx2grefi http://canonical.org/~kragen/alphanumerenglish bet ai qi6k ail rez1st qe t3mpt8cen because, even though it's a much better way to spell English, nobody would understand it.
3. It's optimized for rewriting a formula many times. When you write a computer program, you only write it once, so there isn't a great burden in using a notation like (eq (deriv x (pow e y)) (mul (pow e y) (deriv x y)) 1), which takes 54 characters to say what the conventional math notation¹ says in 16 characters³. But, when you're performing algebraic transformations of a formula, you're writing the same formula over and over again in different forms, sometimes only slightly transformed; the line before that one said (eq (deriv x (pow e y)) (deriv x x) 1), for example². For this purpose, brevity is essential, and as we know from information theory, brevity is proportional to the logarithm of the number of different weird symbols you use.
We could certainly improve conventional math notation, and in fact mathematicians invent new notation all the time in order to do so, but the direction you're suggesting would not be an improvement.
People do make this suggestion all the time. I think it's prompted by this experience where they have always found math difficult, they've always found math notation difficult, and they infer that the former is because of the latter. This inference, although reasonable, is incorrect. Math is inherently difficult, as far as anybody knows (an observation famously attributed to Euclid) and the difficult notation actually makes it easier. Undergraduates routinely perform mental feats that defied Archimedes because of it.
______
¹ \frac d{dx}e^y = e^y\frac{dy}{dx} = 1
² \frac d{dx}e^y = \frac d{dx}x = 1
³ See https://nbviewer.org/url/canonical.org/~kragen/sw/dev3/logar... for a cleaned-up version of the context where I wrote this equation down on paper the other day.
It's not just "rewriting" arbitrarily either, but rewriting according to well-known rules of expression manipulation such as associativity, commutativity, distributivity of various operations, the properties of equality and order relations, etc. It's precisely when you have such strong identifiable properties that you tend to resort to operator-like notation in any formalism (including a programming language) - not least because that's where a notion of "rewriting some expression" will be at its most effective.
(This is generally true in reverse too; it's why e.g. text-like operators such as fadd() and fmul() are far better suited to the actual low-level properties of floating-point computation than FORTRAN-like symbolic expressions, which are sometimes overly misleading.)
Maybe there is an advantage for associativity, in that rewriting add(a, add(b, c)) as add(add(a, b), c) is harder than rewriting a + b + c as a + b + c. Most of the time you would have just written add(a, b, c) in the first place. That doesn't handle a + b - c (add(a, sub(b, c)) vs. sub(add(a, b), c)) but the operator syntax stops helping in that case when your expression is a - b + c instead, which is not a - (b + c) but a - (b - c).
Presumably the notorious non-associativity of floating-point addition is what you're referring to with respect to fadd() and fmul()?
I guess floating-point multiplication isn't quite commutative either, but the simplest example I could come up with was 0.0 * 603367941593515.0 * 2.9794309755910265e+293, which can be either 0 or NaN depending on how you associate it. There are also examples where you lose bits of precision to gradual underflow, like 8.329957634267304e-06 * 2.2853928075274668e-304 * 6.1924494876619e+16. But I feel like these edge cases matter fairly rarely?
On my third try I got 3.0 * 61.0 * 147659004176083.0, which isn't an edge case at all, and rounds differently depending on the order you do the multiplications in. But it's an error of about one part in 10¹⁶, and I'd think that algorithms that would be broken by such a small amount of rounding error are mostly broken in floating point anyway?
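Here is a quick check of those examples (a sketch in Python, whose floats are IEEE 754 doubles; the numbers are exactly the ones quoted above):

    # Floating-point multiplication is commutative, but regrouping the
    # same product (associativity) can change the result.
    a, b, c = 0.0, 603367941593515.0, 2.9794309755910265e+293
    print((a * b) * c)   # 0.0 -- the zero kills the product before anything overflows
    print(a * (b * c))   # nan -- b * c overflows to inf, and 0.0 * inf is nan

    x, y, z = 3.0, 61.0, 147659004176083.0
    print((x * y) * z)   # 2.7021597764223188e+16
    print(x * (y * z))   # 2.7021597764223192e+16 -- differs in the last unit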
I am pretty sure that both operators are commutative.
It's not conventional to write commutative-but-not-associative functions as infix operators, but I don't think that's due to some principled reason, but just because they're not very common; non-associative operators such as subtraction and function application are almost universally written with infix operators, even the empty-string operator in the case of function application. The most common one is probably the Sheffer stroke for NAND (although Sheffer himself used it to mean NOR in his 01913 paper: https://www.ams.org/journals/tran/1913-014-04/S0002-9947-191...).
You can go a bit further in the direction of logical manipulability, as George Spencer Brown did with "Laws of Form" (LoF): his logical connective, the "cross", is an N-ary negation function whose arguments are written under the operation symbol without separators between them, and he denotes one of the elementary boolean values as the empty string (let's call it false, making the cross NOR). ASCII isn't good at reproducing his "cross" notation, but if we use brackets instead, we can represent his two axioms as:
[][] = [] (not false or not false is not false)
[[]] = (not not false is false)
In this way Spencer Brown harnesses the free monoid on his symbols: the empty string is the identity element of the free monoid, so appending it to the arguments of a cross doesn't change them and thus can't change the cross's value. Homomorphically, false is the identity element of disjunction, which is a bounded semilattice, and thus a monoid. This allows not only the associative axiom but also the identity axiom to be simple string identity, which seems like a real notational advantage. (Too bad there isn't any equivalent for the commutative axiom.) It allows Spencer Brown to derive all of Boolean logic from those two simple axioms.
However, so far, I haven't found that the LoF notation is an actual improvement over conventional algebraic notation. Things like normalization to disjunctive normal form seem much more confusing:
a(b + c) → ab + ac (conventional notation, rewrite rule towards DNF)
[[a][bc]] → [[a][b]][[a][c]] (LoF notation)
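To make the NOR reading concrete, here is a small sanity check (my own sketch in Python, not Spencer Brown's): read the cross as N-ary NOR and the empty string as false, and the two axioms and the DNF rewrite rule all come out as boolean identities.

    from itertools import product

    def cross(*args):
        # Spencer Brown's cross read as N-ary NOR: true iff no argument is true.
        # Juxtaposition at the top level is disjunction; the empty string is false.
        return not any(args)

    assert (cross() or cross()) == cross()   # [][] = []  (condensation)
    assert cross(cross()) is False           # [[]] =     (cancellation: not not false is false)

    # [[a][bc]] -> [[a][b]][[a][c]]  corresponds to  a(b + c) -> ab + ac
    for a, b, c in product([False, True], repeat=3):
        lhs = cross(cross(a), cross(b, c))                            # a and (b or c)
        rhs = cross(cross(a), cross(b)) or cross(cross(a), cross(c))  # ab or ac
        assert lhs == rhs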
It's a little less noisy in Spencer Brown's original two-dimensional representation (note that the vertical breaks between the U+2502 BOX DRAWINGS LIGHT VERTICAL characters are not supposed to be there; possibly if you paste this into a text editor or terminal it will look better):

    ┌───── ┌────┌────
    │┌─┌── → │┌─┌─│┌─┌─
    ││a│bc ││a│b││a│c

but not, to my eye, any less confusing.

Also, the axioms I cited above are written in his notation on his gravestone: https://en.wikipedia.org/wiki/G._Spencer-Brown#/media/File:G... but I have evidently reversed left and right in my rendering of the DNF rewrite rule above. It should be:
─────┐ ────┐────┐
─┐──┐│ → ─┐─┐│─┐─┐│
a│bc││ a│b││a│c││
His first statement of the first axiom in the book is a little more general than the version I reproduced earlier and which is inscribed on his gravestone; rather than his "form of condensation"

    [][] = []

his "law of calling" is general idempotence, i.e.,

    AA = A

although the two statements are equipotent within the system he constructs. Similarly, before stating his "form of cancellation"

    [[]] =

he phrases it as the "law of crossing", which I interpret as

    [[A]] = A

1 and 2 would be
1) d/dx e^y = e^y dy/dx = 1
2) d/dx e^y = d/dx x = 1
edit: edited, first got them wrong

If they do, it seems like an error-prone way to write your math.
If they don't, it seems like it will make your math look terrible.
Supposing that the parentheses aren't necessary, as implied by your edit: how does AsciiMath determine that e^y isn't in the numerator in "e^y dy/dx", or (worse) in the denominator in "d/dx e^y"?
It seems somewhat less noisy than the LaTeX version, but not much; assuming I can insert whitespace harmlessly:
\frac d{dx}e^y = e^y\frac{dy}{dx} = 1
d/dx e^y = e^y dy/dx = 1
\frac d{dx}e^y = \frac d{dx}x = 1
d/dx e^y = d/dx x = 1

The rules are basically the same as LaTeX, with saner symbol names, support for fractions, \ is not needed before symbols, and () can be used instead of {}.
> Supposing that the parentheses aren't necessary, as implied by your edit: how does AsciiMath determine that e^y isn't in the numerator in "e^y dy/dx"
It seems to me that dx,dy,dz,dt behave like numbers, single letter variables and symbols (probably they are symbols, but not listed for some reason). Just as LaTeX doesn't need {} parentheses for numbers, single letter variables and symbols, AsciiMath allows omitting them too.
So `/` captures a single number/symbol/variable to its left, and that is `dy`. But if there were `du`, for example, it would only capture `u`, and you would need to put `du` between parentheses.
Probably figuring out how to write things in AsciiMath is more trouble than copying and pasting them from Wikipedia though. (The alt text on equation images is the LaTeX source preceded with \displaystyle.)
How do you do \bigg(\big((4x + 2)x + 1\big)x - 3\bigg)x + 5 in AsciiMath? (((4x + 2)x + 1)x - 3)x + 5 makes all the parens the same size.
One last rant point: you don't have "the manual" for math in the way you can go to your programming language's man page, so there is no single source of truth.
Everybody assumes...
Your rant would be akin to this if the sides are reversed: "It's surprising how many different ways there are to describe the same thing. Eg: see all the notations for dictionaries (hash tables? associative arrays? maps?) or lists (vectors? arrays?).
You don't have "the manual" of programming languages. "
Well, we kinda do: you can say "this Python program". The problem with a lot of math is that you can't even tell which manual to look up.
A. Grothendieck: understanding mathematical ideas often requires simply getting used to them.
For example, the Dvoretzky-Rogers theorem in isolation is hard to understand. As more applications of it appear, more generalizations of it appear, and more alternative proofs of it appear, it gets clearer. So it takes time for something to become digestible, but the effort spent gives the real insights.
Last but not least is the presentation of the theorem. Some authors are cryptic; others refactor the proof into discrete steps or find similarities with other proofs.
Yes, it is hard, but part of the work of the mathematician is to make it easier for others.
Exactly like in code: there is a lower bound on hardness, but that is not an excuse to keep it harder than that.
This fundamental truth is embedded in the common symbols of arithmetic...
+ ... one line combined with another ...linear...line wee
- ...opposite of + one line removed
x ...eXponential addition, combining groups
•/• ... exponential breaking into groups ...also hints at inherent ratio
From there it's symbols that describe different objects and how to apply the fundamental arithmetic operations; like playing over a chord in music
The interesting work is in physical science, not the notation. Math is used to capture physics that would be too verbose to describe in English or some other "human" language, which IMO should be reserved for capturing emotional context anyway, since that's where such languages originate.
Programming languages have senselessly obscured the simple and elegant reality of computation, which is really just a subset of math; the term computer originated to describe humans that manually computed. Typescript, Python, etc don't exist[1]. They are leaky abstractions that waste a lot of resources to run some electromagnetic geometry state changes.
Whether it's politics, religion or engineering, "blue" language, humans seem obsessed with notation fetishes. Imo it's all rather prosaic and boring
[1] at best they exist as ethno objects of momentary social value to those who discuss them
All of which is compounded by the desire to provide minimal "proofs from the book" and leave out the intuitions behind them.
Do you know the reason for that? The reason is that those problems are open and easy to understand. For the rest of open problems, you need an expert to even understand the problem statement.
Mathematics is old, but a lot of basic terminology is surprisingly young. Nowadays everyone agrees what an abelian group is. But if you look into some old books from 1900 you can find authors that used the word abelian for something completely different (e.g. orthogonal groups).
Reading a book that uses "abelian" to mean "orthogonal" is confusing, at least until you finally understand what is going on.
Hopefully interactive proof assistants like Lean or Rocq will help to mitigate at least this issue for anybody trying to learn a new (sub)field of mathematics.
If we are already venturing outside of scientific realm with philosophy, I'm sure fields of literature or politics are older. Especially since philosophy is just a subset of literature.
As far as anybody can tell, mathematics is way older than literature.
The oldest known proper accounting tokens are from 7000ish BCE, and show proper understanding of addition and multiplication.
The people who made the Ishango bone 25k years ago were probably aware of at least rudimentary addition.
The earliest writings are from the 3000s BCE, and are purely administrative. Literature, by definition, appeared later than writing.
That depends what you mean by "literature". If you want it to be written down, then it's very recent because writing is very recent.
But it would be normal to consider cultural products to be literature regardless of whether they're written down. Writing is a medium of transmission. You wouldn't study the epic of Gilgamesh because it's written down. You study it to see what the Sumerians thought about the topics it covers, or to see which god some iconography that you found represents, or... anything that it might plausibly tell you. But the fact that it was written down is only the reason you can study it, not the reason you want to.
That is what literature means: https://en.wiktionary.org/wiki/literature#Noun
The person who hears that poem in circulation and records it in his notes has created literature; an anthology is literature but an original work isn't.
Sure they have, by virtue of writing it down. It becomes literature when it hits the paper (or computer screen, as it were).
(Unless you mean to imply that formulating an original poem in your mind counts as "writing", in which case I guess we illustrate the overarching point of value in shared symbols and language and the waste of time in stating our original definitions for every statement we want to make)
You're close. I'm making the point that, in modern English, no other verb is available for the act of creating a poem.
Here's a quote from the fantasy novel The Way of Kings that always appealed to me:
>> "Many of our nuatoma -- this thing, it is the same as your lighteyes, only their eyes are not light--"
>> "How can you be a lighteyes without light eyes?" Teft said with a scowl.
>> "By having dark eyes," Rock said, as if it were obvious. "We do not pick our leaders this way. Is complicated. But do not interrupt story."
For an example from reality, I am forced to tell people who ask me that the English translation of 姓 is "last name", despite the fact that the 姓 comes first.
Similarly, the word for writing a poem is "write", whether this creates a written artifact or not. And the poem is literature whether a written artifact currently exists, used to exist, or never existed.
(Though you've made me curious: if the Iliad wasn't literature until someone wrote it down, do you symmetrically believe that Sophocles' Sisyphus is no longer literature because it is no longer written down?)
Once someone writes it down, it is.
This type of reasoning becomes void if, instead of "AI", we used something like "AGA" or "Artificial General Automation", which is a closer description of what we actually have (natural language as a programming language).
Increasingly capable AGA will do things that mathematicians do not like doing. Who wants to compute logarithmic tables by hand? Calculators solved that. Who wants to compute chaotic dynamical systems by hand? Computer simulations solved that. Who wants to improve a real-analysis bound over an integral by 2% to get closer to the optimal bound? AGA is very capable of doing that. We only want to do it ourselves if it actually helps us understand why, and surfaces some structure. If not, who cares whether it's you who does it or a machine that knows all of the olympiad-type tricks.
To use an example from functional programming, I could say:
- "A monad is basically a generalization of a parameterized container type that supports flatMap and newFromSingleValue."
- "A monad is a generalized list comprehension."
- Or, famously, "A monad is just a monoid in the category of endofunctors, what's the problem?"
The basic idea, once you get it, is trivial. But the context, the familiarity, the basic examples, and the relationships to other ideas take a while to sink in. And once they do, you ask "That's it?"
So the process of understanding monads usually isn't some sudden flash of insight, because there's barely anything there. It's more a situation where you work with the idea long enough and you see it in a few contexts, and all the connections become familiar.
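To pin down the first description, here is a minimal sketch (my own illustration in Python rather than a typed functional language): a parameterized container with a from-single-value constructor and a flatMap. It is essentially the identity monad, the most boring possible instance.

    class Box:
        """A trivial container holding exactly one value."""

        def __init__(self, value):
            self.value = value

        @classmethod
        def new_from_single_value(cls, value):   # often called "return" or "pure"
            return cls(value)

        def flat_map(self, f):                   # often called "bind" or ">>="
            # f takes a plain value and returns another Box; flat_map applies it
            # without ending up with a Box nested inside a Box.
            return f(self.value)

    # Chaining computations that each produce a Box:
    result = (Box.new_from_single_value(3)
              .flat_map(lambda x: Box(x + 1))
              .flat_map(lambda x: Box(x * 2)))
    print(result.value)  # 8

The monad laws (left and right identity, and associativity of flat_map) are what make this chaining well behaved, but mechanically there is nothing more to it than the above.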
(I have a long-term project to understand one of the basic things in category theory, "adjoint functors." I can read the definition just fine. But I need to find more examples that relate to things I already care about, and I need to learn why that particular abstraction is a particularly useful one. Someday, I presume I'll look at it and think, "Oh, yeah. That thing. It's why interesting things X, Y and Z are all the same thing under the hood." Everything else in category theory has been useful up until this point, so maybe this will be useful, too?)
You know the meme with the normal distribution where the far right and the far left reach the same conclusion for different reasons, and the ones in the middle have a completely different opinion?
So on the far right you have people like von Neumann, who say "In mathematics we don't understand things". On the far left you have people like you, who say "me no maths". Then in the middle you have people like me, who say "maths is interesting, let me do something I enjoy".
To date I have not met anyone who thought he summed the terms of the infinite geometric series term by term. That would take infinite time. Of course he used the expression for the sum of a geometric series.
The joke is that he missed a clever solution that does not require setting up the series, recognising it is in geometric progression, and then using the closed form.
The clever solution just finds the time needed for the trains to collide, then multiplies that by the bird's speed. No series needed.
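For concreteness, with made-up numbers (not necessarily the version von Neumann was actually asked): if the trains start 100 miles apart, each travelling at 25 mph, and the bird flies at 75 mph, then

$$t = \frac{d}{v_1 + v_2} = \frac{100}{25 + 25} = 2 \text{ hours}, \qquad \text{distance flown} = w\,t = 75 \cdot 2 = 150 \text{ miles},$$

whereas the term-by-term approach sums the lengths of infinitely many back-and-forth legs, a geometric series that converges to the same 150 miles.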
For example, there's a story that von Neumann told Shannon to call his information metric entropy, telling Shannon that "nobody really understands entropy anyway." But if you've engaged with Shannon to the point of telling him that quantity seems to be the entropy, you really do understand something about entropy.
So maybe von Neumann's worry was about really understanding math concepts fully and extremely clearly, going way beyond the point where I'd say "oh, I get it!"
Probably they are trying to romanticize something that may not sound good if told plainly.
Face it. Mathematics is one of the fields strongly affected by AI, just like programming. You need to be more straightforward about it rather than beating around the bush.
To put it simply, it appears to be a struggle over redefining the road map, survival, and adoption in the AI era.
> "few of us"
You see, if you plebs are unable to understand our genius, it's solely due to your inadequacies as a person and as an intellect, but if we are unable to understand our genius, well, that's a lamentable crisis.
To make Mathematics "understandable" simply requires the inclusion of numerical examples. A suggestion 'the mathematics community' is hostile to.
If you are unable to express numerically then I'd argue you are unable to understand.
1. Study predicate logic, then study it again, and again, and again. The better and more ingrained predicate logic becomes in your brain the easier mathematics becomes.
2. Once you become comfortable with predicate logic, look into set theory and model theory and understand both of these well. Understand the precise definition of "theory" with respect to model theory. If you do this, you'll have learned the rules that unify nearly all of mathematics, and you'll also understand how to "plug" models into theories to try to understand them better.
3. Close reading. If you've ever played magic the gathering, mathematics is the same thing--words are defined and used in the same way in which they are in games. You need to suspend all temptation to read in meanings that aren't there. You need to read slowly. I've often come upon a key insight about a particular object, and an accurate understanding, only after rereading a passage like 50 times. If the author didn't make a certain statement, they didn't make that statement; even if it seems "obvious", you need to follow the logical chain of reasoning to make sure.
4. Translate into natural English. A lot of math books will have whole sections of proofs and/or exercises with little to no corresponding natural-language "explainer" of the symbolic statements. One thing that helps me tremendously is to try to frame any proof or theorem or collection of these in terms of the linguistic names for the various definitions, and to try to summarize a body of proofs into helpful statements. For example: "groups are all about inverses and how they allow us to 'reverse' compositions of (associative) operations; this is the essence of 'solvability'." This summary statement about groups helps set up a framing for me whenever I go and read a proof involving groups. The framing helps tremendously because it can serve as a foil too, i.e. if some surprising theorem contravenes the summary ("oh, maybe groups aren't just about inversions"), that allows for an intellectual development and expansion that I find more intuitive. I sometimes think of myself as a scientist examining a world of abstract creatures (the various models (individuals) of a particular theory (species)).
5. Contextualize. Nearly all of mathematics grew out of certain lines of investigation, and often out of concrete technical needs. Understanding this history is a surprisingly effective way to make many initially mysterious aspects of a theory more obvious, more concrete, and more related to other bits of knowledge about the world, which really helps bolster understanding.
Awesome that for mathematicians notation does not matter, and every solved problem is trivial...
But for a student this is not the case yet.
Take the simple pi vs tau debate. Of course it doesn’t matter which you use once you understand them.
But if you don’t understand it yet, and learn about it for the first time, tau makes everything a lot more intuitive.
If the idea is that the right notation will make getting insights easier, that's a futile path to go down. What really helps is looking at objects and their relationships from multiple viewpoints. This is really what one does in both mathematics and physics.
Someone quoted von Neumann about getting used to mathematics. My interpretation always was that once one is immersed in a topic, it slowly becomes natural enough that one can think about it without getting thrown off by relatively superficial strangeness. As a very simple example, someone might get thrown off the first time they learn about point-set topology. It might feel very abstract coming from analysis, but after a standard semester course, almost everyone gets comfortable enough with the basic notions of topological spaces and homeomorphisms.
One thing mathematics education is really bad at is motivating the definitions. This is often done because progress is meandering and chaotic and exposing the full lineage of ideas would just take way too long. Physics education is generally far better at this. I don't know of a general solution except to pick up appropriate books that go over history (e.g. https://www.amazon.com/Genesis-Abstract-Group-Concept-Contri...)