Still though, any int can get as large as you like by default, no weird -n suffix (that I never saw in any other language -- just like most of Javascript's other recently added syntax, by the way, it's the new Perl).
I do wonder where I got this notion of Number. Is there some other language that has this?
I think stuff like wolfram language and mathematica probably have some "universal" numeric type.
However, I don't know a single mainstream application programming language that has a single numeric type that can handle: arbitrarily large integers, floating point values, and correct decimal arithmetic (0.1 + 0.2 == 0.3). I have at least a passing familiarity with probably about a dozen general purpose programming languages, and none of them can do it. If anyone knows of one, I'd be interested to learn about it.
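The claim is easy to check in Python, for instance: each of the three capabilities exists, but split across separate types rather than unified in one default numeric type. A quick sketch:

```python
from decimal import Decimal
from fractions import Fraction

# Arbitrarily large integers: Python ints handle this by default.
assert 10**100 + 1 > 10**100

# Binary floats fail the decimal-arithmetic test.
assert 0.1 + 0.2 != 0.3

# Decimal and Fraction pass it, but they are opt-in types,
# not the language's default number.
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
```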
The common name for what you call a "universal numeric type" is a "numeric tower". Most Lisp dialects have something like that. What that means is that you have classes for small integers (fixnum), arbitrary-precision integers (bignum), fractions, floats, and even complex numbers, along with the appropriate abstract base classes (e.g. integer, rational, real...), and arithmetic operations transparently use the most appropriate type for the result, i.e. the result of "1 / 10" comes out as "1/10" (of type fraction) and not as the float "0.1000...something".
Python 3 takes mostly the same approach to number types.
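To illustrate: Python's stdlib exposes the tower's abstract classes in the numbers module (which even includes an abstract numbers.Number base class), though unlike Lisp, dividing two ints yields a float rather than an exact fraction. A small sketch:

```python
import numbers
from fractions import Fraction

# The abstract tower: Number > Complex > Real > Rational > Integral.
assert isinstance(1, numbers.Integral)
assert isinstance(Fraction(1, 10), numbers.Rational)
assert isinstance(0.5, numbers.Real)
assert isinstance(1j, numbers.Complex)

# Unlike Lisp, / on two ints produces a float, not a fraction...
assert isinstance(1 / 10, float)

# ...but Fraction arithmetic stays exact.
assert Fraction(1) / 10 == Fraction(1, 10)
```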
How odd to notice that my brain really messed that number type up. I could swear Python has a type called (capitalized) Number and that this handles arbitrarily large numbers as well as decimals. Seems like that 'memory' is completely fictional.