I just found out about this debate and it’s patently absurd. The ISO 80000-2 standard defines ℕ as including 0, and that convention is foundational in basically all of mathematics and computer science. Excluding 0 is a fringe position and shouldn’t be taken seriously.
I could be completely wrong, but I doubt any of my (US) professors would reference an ISO definition, and they may not even know it exists. In my experience, mathematicians are far less concerned about the terminology or symbols used to describe something as long as they’re clearly defined. In fact, they’ll probably make up their own notation just because it’s slightly more convenient for their proof.
My experience (bachelor’s in math and physics, but I went into physics) is that if you want to be clear about including zero or not you add a subscript or superscript to specify. For non-negative integers you add a subscript zero (ℕ_0). For strictly positive natural numbers you can either do ℕ_1 or ℕ^+.
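For reference, a minimal sketch of how those variants are typically typeset in LaTeX (assuming the amssymb package for \mathbb; the exact spellings are just the conventions mentioned above):

```latex
\documentclass{article}
\usepackage{amssymb} % provides \mathbb for blackboard-bold letters
\begin{document}
% Non-negative integers, i.e. the naturals including zero:
$\mathbb{N}_0 = \{0, 1, 2, \dots\}$

% Strictly positive naturals, two common spellings:
$\mathbb{N}_1 = \mathbb{N}^{+} = \{1, 2, 3, \dots\}$
\end{document}
```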
From what I understand, you can pay ISO to standardise anything. So it’s only useful for interoperability.
Yeah, interoperability. Like every software implementation of natural numbers that includes 0.
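To make that concrete with one arbitrary example (Python here, though the same holds for unsigned integer types generally): the standard counting tools start at 0 by default.

```python
from itertools import count

# range(n) enumerates the first n naturals starting at 0.
assert list(range(5)) == [0, 1, 2, 3, 4]

# itertools.count() also defaults to starting at 0.
assert next(count()) == 0

# The size of an empty collection is the natural number 0.
assert len([]) == 0
```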
Ehh, among American academic mathematicians, including 0 is the fringe position. It’s not a “debate”; it’s just a different convention. There are numerous ISO standards whose conventions would be highly unusual in American academia.
FWIW I was taught that the inclusion of 0 is a French tradition.
I’m an American mathematician, and I’ve never experienced a situation where having 0 as an element of the naturals was called out. It’s less ubiquitous than I’d like it to be, but at worst the two are treated as equally viable notational conventions, or the question is simply left undecided.
I’ve always used N to indicate the naturals including 0, and that’s what was taught to me in my foundations class.
Of course they’re considered equally viable conventions; it’s just that one is prevalent among Americans and the other isn’t.
This isn’t strictly true. I went to school for math in America, and I don’t think I’ve ever encountered a zero-exclusive definition of the natural numbers.
I have yet to meet a single logician, American or otherwise, who would use the definition without 0.
That said, it seems to depend on the field. I think I’ve had this discussion with a friend working in analysis.
Well, you can naturally have zero of something. In fact, you have zero of most things right now.
But there are an infinite number of things that you don’t have any of, so if you count them all together the number is actually not zero (because zero times infinity is undefined).
There’s a limit to the number of things, unless you’re counting spatial position as a characteristic of a thing, in which case there’s no limit.
There’s no limit to the things you don’t have, because that includes all of the things that don’t exist.
The standard (set-theoretic) construction of the natural numbers starts with 0 (the empty set) and then builds up the other numbers from there, so to me it seems “natural” to include it in the set of natural numbers.
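A minimal sketch of that construction in Python, using frozensets for the von Neumann encoding (the names zero and succ are just illustrative):

```python
# Von Neumann construction: 0 is the empty set, and the successor of n
# is n ∪ {n}, so the set n ends up containing exactly n elements.
zero = frozenset()

def succ(n: frozenset) -> frozenset:
    """Successor of a von Neumann natural: n ∪ {n}."""
    return n | {n}

one = succ(zero)    # {∅}
two = succ(one)     # {∅, {∅}}
three = succ(two)   # {∅, {∅}, {∅, {∅}}}

# Each natural, viewed as a set, has exactly that many elements:
assert [len(n) for n in (zero, one, two, three)] == [0, 1, 2, 3]
```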
Counterpoint: if you say you have a number of things, you have at least two things, so maybe 1 is not a number either. (I’m going to run away and hide now)
I think if you ask any mathematician (or any academic who uses math professionally, for that matter), they’ll say 0 is a natural number.
There is nothing natural about not having an additive identity in your semiring.
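A quick illustrative spot-check of what that identity buys you (a finite Python sample, not a proof): with 0 included, addition on the naturals has an identity element; drop 0 and no remaining element can play that role.

```python
sample = range(0, 50)  # finite sample of the naturals including 0

# 0 is the additive identity on the sample: 0 + n == n == n + 0.
assert all(0 + n == n == n + 0 for n in sample)

# 0 also annihilates under multiplication: 0 * n == 0.
assert all(0 * n == 0 for n in sample)

# Without 0, no element e >= 1 satisfies e + n == n, so no identity is left.
assert not any(all(e + n == n for n in range(1, 50)) for e in range(1, 50))
```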