my 1000th post

Maria2394 said:
well now, thanks for clearing that up--i feel even dumber than usual :(
Maria, if you'd like, I can present a brief story of how the notion of number evolved (so that you will feel less dumb than usual :)).

Regards,
 
Maria2394 said:
that would be nice Senna, thank you :)
Here you go (but this is an offhand account).

The small natural numbers 1 2 3 4 are treated irregularly in the Polish language. The same goes for 1 and 2 in English: one/first, two/second. This suggests that long ago people used only these small numbers, and everything beyond them was simply "many". As time passed they had to deal with larger natural numbers, and they realized that numbers go on and on, just as after each step you can take the next one. The old counting systems, though, were inferior to what we have now.

For instance, the ancient Greeks used the 24 characters of their alphabet, plus three additional characters, to represent the natural numbers from 1 to 999, and they didn't have any standard, common way to represent larger numbers. For the sake of this presentation let me replace the Greek characters with the 26 Latin characters a-z plus one more character, Ó. Thus let letters a b ... i stand for numbers 1 2 ... 9, letters j ... r for numbers 10 20 ... 90, and letters s ... z Ó for numbers 100 ... 800 900. Then you could write the number 14 as either jd or dj, and the number 926 as Ókf or Ófk or kÓf or kfÓ or fÓk or fkÓ. For them the position of a symbol was not essential; the system was not positional. Observe that our system is positional: when you borrow $981 you can't get away with returning $189.
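
Here is a toy sketch of that letter scheme in Python (the choice of Python, and the plain letter O standing in for Ó, are my own, just for illustration):

ONES     = "abcdefghi"   # a=1 ... i=9
TENS     = "jklmnopqr"   # j=10 ... r=90
HUNDREDS = "stuvwxyzO"   # s=100 ... O=900 (O is a stand-in for the 27th character)

def to_letters(n):
    """Encode 1 <= n <= 999; the order of the output letters carries no meaning."""
    if not 1 <= n <= 999:
        raise ValueError("this scheme only covers 1..999")
    h, rest = divmod(n, 100)
    t, o = divmod(rest, 10)
    return ((HUNDREDS[h - 1] if h else "") +
            (TENS[t - 1] if t else "") +
            (ONES[o - 1] if o else ""))

def from_letters(word):
    """Decode by simply adding up the letter values -- position is ignored."""
    value = {c: i + 1 for i, c in enumerate(ONES)}
    value.update({c: 10 * (i + 1) for i, c in enumerate(TENS)})
    value.update({c: 100 * (i + 1) for i, c in enumerate(HUNDREDS)})
    return sum(value[c] for c in word)

print(to_letters(14), to_letters(926))           # jd Okf
print(from_letters("Okf"), from_letters("fkO"))  # 926 926 -- any order, same number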

The Greeks didn't feel any acute need for zero. We write 901 while they would write Óa or aÓ. For the range from 1 to 999 their system was fine, but you cannot extend it to an arbitrarily large range of numbers, because you would run out of symbols. They already needed 27 symbols just for the 1-999 range.

In a positional system you can represent arbitrarily large numbers using just a few symbols. Computers use only 0 and 1, while we humans use 0 1 2 3 4 5 6 7 8 9, and that's all.
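
A tiny Python illustration of the positional idea (the example number 981 echoes the loan above):

n = 981
print(format(n, "d"))         # '981'        -- 9*100 + 8*10 + 1*1
print(format(n, "b"))         # '1111010101' -- the same number, written with only 0 and 1
print(int("1111010101", 2))   # 981 again
print(int("189"))             # 189 -- same digits as 981, different positions, different number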

The Babylonians were perhaps the first to use a positional system, then the Chinese. The Babylonians used a system based on 60, while the Chinese used the decimal system. Both of them had to use the notion of zero.

The positional system was one of the reasons for introducing zero. First you had small numbers: 1 2 3 4. Then larger numbers. (Several languages also have special, irregular names for certain numbers like 12, 15, 40, 60..., which were convenient for trade, for counting, say, eggs in batches instead of counting each single egg. That's a parenthetical digression.) So, the natural numbers were well established before zero was accepted as another number. For a long, long time zero was treated with suspicion.

Fractions, too, were introduced slowly. This again is reflected in language. You have "half" for 1/2 and "quarter" for 1/4, but no comparable special word for the fraction 1/3. This suggests that the fractions 1/2 and 1/4 appeared before 1/3. As time passed, people accepted all positive fractions A/B, where A and B are natural numbers.

Now let me stop making any historical pretenses and stick to the conceptual aspect of the evolution of numbers. When you have some money in your account, some parties owe you money, and you owe money to still other parties (credit cards, tax...), then hopefully you have a positive balance. If not, you are dealing with a negative number: after returning all the money you have to the respective parties, you may still owe more (you didn't have enough to repay all the loans). If you are short a hundred fifty and a half dollars, then in mathematics we say that you have minus one hundred fifty and a half dollars (-150.50).

Zero already felt abstract, and negative numbers may feel even more abstract. Algebraists justify zero and negative numbers on the grounds of algebraic convenience. They want a regular, predictable situation. When they have an equation, they prefer to always have a solution, rather than "here you have one, here you don't". Also, an equation describes a condition indirectly (implicitly), while solutions describe things directly, explicitly. When encountering an equation like:
x + 20 = 15
one may shrug and say "I don't know" or, preferably, get x = -5 as the answer. A negative answer may mean that someone is $5 short, or perhaps 5 miles to the left of the starting point P when the goal is to reach a point Q which is to the right of P. Just imagine the following phone conversation:

--Where are you, Mark?
-- I don't know. But one would have to drive north 20 miles to get to Q from here.

Now, you know that Q is 15 miles north of P. Thus you know that Mark is 15 - 20 = -5 miles north of P; in other words, Mark is 5 miles south of P. The point of negative numbers is that you can subtract X - Y without worrying which of X and Y is larger. Whether X > Y or Y > X, you smoothly use one and the same procedure.

Even before the development of negative numbers, the ancient Greek Pythagoras was shocked by another discovery. The ancient Greek mathematicians didn't like numbers as such; that was for common folk, like merchants, not for "philosophers". They preferred geometric intervals. They could have chosen one interval as a unit and measured all the others with respect to it, but they didn't bother with this inelegant (non-geometric) approach. When they had two intervals A and B, they looked for a third one, C, perhaps a smaller one, which would fit into A an exact number of times, say 5, and into B an exact number of times, say 3. That gave them the proportion A : B = 5 : 3, and they were happy; they knew the relative lengths of A and B, one with respect to the other.

And then one day Pythagoras, or one of his students, discovered that for the side A of a square and for its diagonal B there does NOT exist any interval C which would fit an exact number of times into both A and B. In other words, the side of a square and its diagonal are not commensurable! A non-Greek way to say it is that there does not exist any rational number x = a/b (where a and b are natural numbers) such that x*x = 2.

Pythagoras decided to keep this discovery secret. One of his disciples passed the information to the outside world, and for that he paid with his life.

Pythagoras supposedly lived something like 569 BC to 475 BC (incredibly long, especially for those times!). By the time of Eudoxus (408-355 BC), Euclid (325-265 BC) and Archimedes (287-212 BC), the structure of the Euclidean straight line, or, what is the same, of the field of real numbers (rational and irrational), was understood pretty well, though not as completely as in the 19th century, thanks to Cantor, Dedekind, Hilbert and others.

Completely or not, the real numbers were pretty well understood for a long time, or at least people had this cozy illusion. After all, the real numbers, rationals and irrationals, fill up a straight line. Thus you had not only the square root of 2 but any root of any nonnegative number. And when logarithms were invented, it was also assumed that there is such a thing as the logarithm (base 10 or any other) of any positive number; the same for the sine function (sin(x), for arbitrary x, possibly negative), etc.

Of course there is no real square root b = sqrt(a) of a negative number a, because then, by definition, b*b = a would be negative, which is nonsense, the same kind of nonsense as, long ago, trying to subtract 20 from 15. You couldn't compute 15 - 20 when you didn't have negative numbers. You had to introduce them; you had to invent them.
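
A small aside in Python (again my own illustration): the real square root function refuses a negative argument, while the complex one, previewing the next paragraph, happily returns an imaginary answer.

import math, cmath

try:
    math.sqrt(-6)                    # no real number squares to -6
except ValueError as e:
    print("real sqrt refuses:", e)   # prints: real sqrt refuses: math domain error

print(cmath.sqrt(-6))                # 2.449489742783178j, i.e. i*sqrt(6)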

So, algebraists started to use nonsensical numbers like sqrt(-6), or the simplest of them all: i = sqrt(-1). Actually, to have solutions for ALL quadratic equations:
a*x*x + b*x + c = 0
it was enough to consider i = sqrt(-1) together with the "combinations" s + i*t (where s and t are "ordinary" real numbers). You may add, subtract, multiply and even divide such combinations. Since the number sqrt(-1), as well as all the numbers i*t, were "sheer nonsense", they were called imaginary. And the numbers s + i*t became the "complex numbers" (since each is a complex of two components: the real one, s, and the imaginary one, i*t).
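
You can play with such combinations directly; here is a short sketch using Python's built-in complex type (Python spells the imaginary unit as j rather than i):

i = complex(0, 1)       # i = sqrt(-1)
print(i * i)            # (-1+0j) -- by definition i*i = -1

a = 3 + 4 * i           # s = 3, t = 4
b = 1 - 2 * i
print(a + b, a - b)     # (4+2j) (2+6j)
print(a * b)            # (11-2j)
print(a / b)            # (-1+2j) -- division works too, as long as b is not 0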

Quadratic equations had been considered since ancient times, while the early Italian algebraists of the fifteenth and sixteenth centuries (Ferro, Tartaglia and Cardano) studied equations of degree three and four. The guys in those days would challenge each other to solve particular kinds of equations. The winnings were good enough to prompt them to keep their methods secret. The secrecy didn't help the progress, but the money did. For the sake of convenience they used, with some pains, the "nonexistent", "nonsensical" complex numbers. The legalization of the complex (including imaginary) numbers was finally accomplished by Gauss, near the end of the eighteenth century. He did much more: he proved the so-called fundamental theorem of algebra:

once you add i = sqrt(-1) and its real combinations s + i*t (the complex numbers) to the field of real numbers, then algebraically you do not need to add any new numbers, meaning that an algebraic equation of any degree (1 2 3 4 5 ...), with real or complex coefficients, will have a complex solution.
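
You can see the theorem at work numerically; this little sketch assumes you have numpy installed (np.roots is just a numerical root finder, of course, not Gauss's argument):

import numpy as np

# x^5 - x + 1 = 0: degree 5, so it has 5 complex roots (counted with multiplicity)
coefficients = [1, 0, 0, 0, -1, 1]
roots = np.roots(coefficients)
print(roots)                            # one real root and two complex-conjugate pairs
print(np.polyval(coefficients, roots))  # all values are ~0, up to rounding error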

The times were ripe for this kind of result; there were also other mathematicians who could claim part of the credit, but Gauss did it best. For him it was one of a long string of achievements at the highest level. He is one of the greatest minds ever.

Not many years later, the fantastically sharp youngsters Abel and Galois showed (independently) that there are no general algebraic formulas which provide the solutions of algebraic equations.
Yes, the solutions exist, but starting with degree 5 they are in general NOT given by any formula which involves only the coefficients of the equation f(x) = 0 (of the polynomial f), the symbols of the operations + - * /, and roots of any degree.

That fundamental theorem of algebra can be considered the high point of the evolution of arithmetic. If I had to choose just one mathematical object, it would be the field of complex numbers. The continuation of the story is exciting anyway. For example, it includes Sir W. R. Hamilton's quaternions; that direction was brought to completion in the 1960s by the topologist Frank Adams. Another direction was created by Abraham Robinson: Nonstandard Analysis (including the nonstandard field of real numbers and other nonstandard models; one should mention other mathematicians here too). And you may have fun reading "On Numbers and Games" by Conway, who invented his very general kind of numbers. He also coauthored other books on games, which you can actually play for fun.

Regards,

Senna Jawa

PS. The list of the greatest, sharpest minds ever: Archimedes, Newton, Gauss, Abel, Galois, Riemann, Poincare, Hilbert, Einstein.

Then, after them, the second super-league: Eudoxus, Euclid, Copernicus, Fermat, Leibniz, Euler.

PPS. I am not going to proofread all this. Not now for sure :)
 
wow Senna, that history is fascinating!! Thank you for taking the time to write and post it. It seems we humans have come a long way from scratch marks on rocks and dirt.
I remember reading about a little Russian girl whose father was an algebra teacher, and when she was barely old enough to read, he was giving her complex equations (writing them on the walls, I think) and she could solve them. I wish I could remember her name (just for trivia purposes, of course)

I understand a good deal of the basics in algebra, but graphing and plotting numbers baffles me, and I know it must be easy; I have a mental block or something. But the quadratics and polynomials were always sort of fun... my last instructor said she thought I had some sort of spatial limitations... I never heard of that before LOL

thanks again, I know I will reread it several times :)
 
I find your history surprisingly illustrative and succinct. For the purpose you wrote it, I would have enjoyed seeing more of a tie-in with the practical applications of imaginary-number theory (as was done with negative numbers), such as Fourier transforms or the vector spaces in which nuclear spin-spin interactions reside. The world can always use concrete proof of the effects of the imaginary. If it can call Fourier transforms and spin-spin interactions "real."
 