Hi Canario:
Your question makes sense to me, too. I'll add to Ishuda's comments.
In mathematics, definitions are not always set in stone. Definitions may change when moving from one area of mathematics to another.
Here are some examples.
In beginning algebra, mathematicians define signed numbers as positive or negative. Positive numbers lie to the right of zero on the Real number line, and negative numbers lie to the left. We define zero as neither positive nor negative. Yet, other mathematicians define zero as a signed number, and they treat zero as positive or negative depending on what's convenient. (If memory serves, treating zero as a negative number helps out in electrical engineering.)
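As an aside, if you've ever met computer arithmetic: the IEEE 754 floating-point standard (used by virtually all modern hardware) really does give zero a sign. Here's a small Python sketch of that convention; positive and negative zero compare equal, but each one still carries its own sign:

[code]
import math

pos_zero = 0.0
neg_zero = -0.0

print(pos_zero == neg_zero)          # True: the two zeros compare equal
print(math.copysign(1.0, pos_zero))  # 1.0: +0.0 carries a plus sign
print(math.copysign(1.0, neg_zero))  # -1.0: -0.0 carries a minus sign
[/code]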
In beginning algebra, mathematicians define the set of Natural numbers as the familiar numbers used for counting common objects: {1,2,3,4,5,...}. We then define the set of Whole numbers as {0,1,2,3,4,5,...}. In other words, the set of Whole numbers is what we get when we add zero to the set of Natural numbers. Yet, other mathematicians include zero as a Natural number, so (for them) the definition of Natural number is different. (Some mathematicians don't even recognize "Whole numbers" as a standard name, even though that definition appears in a gazillion classrooms and textbooks.)
In intermediate algebra, the expression 0^0 is left undefined, because applying the properties of exponents to it leads to a contradiction: 0^0 = 0^(1-1), and 0^(1-1) may be rewritten as 0^1 times 0^(-1). However, the expression 0^(-1) represents 1/0, and division by zero is undefined, so 0^0 must be undefined as well. Yet, in precalculus and calculus, the expression 0^0 is defined to be 1, despite this conflict with the properties of exponents. If we did not change the definition of 0^0 from undefined to 1, other results (e.g., the binomial expansion and the power rule for derivatives) would break down at x = 0.
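To see why the convention 0^0 = 1 is so handy, here's a short Python check (Python itself adopts this convention for its ** operator). The binomial expansion of (1 + x)^n only returns the correct value at x = 0 because its k = 0 term is 0^0, which must evaluate to 1:

[code]
from math import comb

def binomial_expand(x, n):
    # (1 + x)^n = sum of C(n, k) * x^k for k = 0 to n.
    # At x = 0, the k = 0 term is C(n, 0) * 0^0, which must equal 1
    # for the identity to hold.
    return sum(comb(n, k) * x**k for k in range(n + 1))

print(0**0)                   # 1: Python's convention
print(binomial_expand(0, 5))  # 1: matches (1 + 0)^5 = 1
[/code]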
I could go on, but we're on the Beginning Algebra board, so I'm trying to stick to introductory definitions. My point is this: Be prepared to be flexible!
Cheers