Both ways are just notation. There’s nothing more real about 3/10 compared to 0.3.
Telling you otherwise might have worked as an educational “shorthand”, but there are no mathematical difficulties as long as you use good definitions of what you mean when you write them down.
The issues people have with 0.333… and 0.999… are due to two things: not understanding what the notation means and not understanding sequences and limits.
I agree that ultimately both are just notations. I do think the fractional notation has some definite advantages and few disadvantages, though, so it's better to regard it as the more canonical of the two.
I disagree, though, that it's necessary or even useful to think of 0.99... or 0.33... as sequences or limits. It's of course possible, but it complicates a very simple concept, in my opinion, and muddies the waters of how we should be using notions such as equality or infinity.
For example, it's generally seen as a bad idea to declare that some infinite sum is actually equal to its limit, because that only applies when the sequence of partial sums converges. It's more rigorous to say that sigma(1/2^n) for n from 1 going to infinity converges to 1, not that it is equal to 1; or to say that lim(sigma(1/2^n)) for n from 1 to infinity = 1.
So, to say that 0.xxx... = sigma(x/10^n) for n from 1 to infinity, and to show that this is equal to 1 for x = 9, muddies the waters a bit. It still gives the impression that you need to do an infinite addition to show that 0.999... is equal to 1, when it's in fact just a notation for 9/9 = 1.
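(If anyone wants to see the partial-sum picture concretely, here's a throwaway Python sketch using exact rationals; the gap to 1 after n terms is exactly 1/10^n:)

```python
from fractions import Fraction

# Partial sums of sigma(9/10^n): each one falls short of 1 by exactly 1/10^n.
partial = Fraction(0)
for n in range(1, 6):
    partial += Fraction(9, 10**n)
    print(f"n={n}: {partial} (gap to 1: {1 - partial})")
# n=1: 9/10 (gap to 1: 1/10)
# ...
# n=5: 99999/100000 (gap to 1: 1/100000)
```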
It's better in my opinion to show how to calculate the repeating decimal expansion of a fraction, and to show that the long-division procedure never produces 0.9... repeating for any fraction.
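A rough sketch of that calculation, for the curious (plain long division with remainder tracking; the function name is mine). The point about 0.9 repeating falls out of it: a tail of nothing but 9s would mean the leftover r/q equals 1, i.e. r = q, but remainders in long division are always strictly less than q.

```python
def decimal_expansion(p, q):
    """Decimal digits of p/q for 0 < p < q, repetend marked in parens.

    Plain long division: the first time a remainder repeats, the digits
    produced since that remainder form the repeating block; a remainder
    of 0 means the expansion terminates.
    """
    digits, seen, r = [], {}, p
    while r and r not in seen:
        seen[r] = len(digits)   # remember where this remainder occurred
        r *= 10
        digits.append(str(r // q))
        r %= q
    s = "".join(digits)
    if r == 0:
        return f"0.{s}"
    return f"0.{s[:seen[r]]}({s[seen[r]:]} repeating)"

for p, q in [(1, 3), (1, 7), (1, 4), (8, 9)]:
    print(f"{p}/{q} = {decimal_expansion(p, q)}")
# 1/3 = 0.(3 repeating)
# 1/7 = 0.(142857 repeating)
# 1/4 = 0.25
# 8/9 = 0.(8 repeating)
```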
> The issues people have with 0.333… and 0.999… are due to two things: not understanding what the notation means and not understanding sequences and limits.
Also a possible third thing: not enjoying working in a base that makes anything with a factor of 3 in the denominator hard to write. Thirds seem like common enough fractions "naturally", but decimal (Base-10) makes them hard to write. It's one of the reasons there are a lot of proponents of Base-12 as a better base for people, especially children: it has 3 as a factor, so thirds have nice clean duodecimal representations. (Base-60 is another fun option; it's also Babylonian-approved and how we got 60 minutes and 60 seconds as common unit sizes.)
You get the same problem with 0.44... + 0.55..., and I don't think that makes it any easier for anyone who is confused. It's more likely just that 0.33... and 0.66... are very common and simple repeating decimals, which is why they lead to this issue so often.
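(In fraction form that's 4/9 + 5/9 = 9/9 = 1; a one-line check with exact rationals, for whoever wants it:)

```python
from fractions import Fraction

# 0.44... and 0.55... are 4/9 and 5/9; their exact sum is exactly 1.
print(Fraction(4, 9) + Fraction(5, 9))  # prints: 1
```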
Sure, I was just pointing out that the base you use for your math does affect how common repeating digits are, based on the available factors in that base.
In Base-12 math, 1/3 = 0.4 and 2/3 = 0.8. The tradeoff is that 1/5 is 0.2497 repeating (the entire block 2497 carries the repeating over-bar).
Base-10 only has the two prime factors 2 and 5, so repeating expansions are much more common in decimal than in duodecimal/dozenal/Base-12, whose prime factors 2 and 3 cover more of the small denominators people actually use. (Hexadecimal/Base-16 goes the other way: its only prime factor is 2, so even 1/5 repeats there.) It's interesting that this trade-off comes directly from the prime factorization of the base we choose to express rational numbers in.
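This is easy to poke at with the same remainder-tracking long division as above, just parametrized by base; a quick sketch to check the duodecimal claims (the digit alphabet here only covers bases up to 12):

```python
def expand(p, q, base):
    """Fractional digits of p/q (0 < p < q) in the given base, repetend in parens.
    Same long-division idea as base 10, multiplying remainders by `base`."""
    digits, seen, r = [], {}, p
    while r and r not in seen:
        seen[r] = len(digits)
        r *= base
        digits.append("0123456789AB"[r // q])  # enough digits for bases <= 12
        r %= q
    s = "".join(digits)
    return f"0.{s}" if not r else f"0.{s[:seen[r]]}({s[seen[r]:]})"

print(expand(1, 3, 12))  # 0.4      thirds terminate in base 12
print(expand(2, 3, 12))  # 0.8
print(expand(1, 5, 12))  # 0.(2497) but fifths repeat
print(expand(1, 5, 10))  # 0.2      the mirror image in base 10
print(expand(1, 3, 10))  # 0.(3)
```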