Difference between T1(N) and f(N): "Suppose T_1(N) = O(f(N)) and T_2(N) = O(f(N)). Is T_1 the same as f in this context?"

vaderboi

New member
Joined
Jan 18, 2019
Messages
47
Interesting, so it sounds like to truly answer whether each question is true or false, I probably would want to have a better understanding of proofs.
 

vaderboi
I think I finally understand this a little bit.

Is my reasoning for (a) sound?:

(a)
True. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). We know that, no matter what, the absolute values of T1(N) and T2(N) can't be greater than a constant multiplied by the absolute value of f(N). This means that if f(N) = N, for instance, then T1(N) and T2(N) can only be N at most. Since the maximum value of each of T1(N) and T2(N) is f(N), it follows that the maximum value of T1(N) + T2(N) is 2 * f(N). Since we can safely say that the maximum value of T1(N) + T2(N) is 2 * f(N), we can easily find values of C and k such that T1(N) + T2(N) = O(f(N)) holds. We would make C greater than or equal to 2 and k greater than or equal to 0, because if C is greater than or equal to 2 there is no possible way that the absolute value of T1(N) + T2(N) could be greater than C * f(N), since the greatest possible value of T1(N) + T2(N) is 2 * f(N). C being greater than or equal to 2 and k being greater than 0 would ensure that, for all possible values of T1(N) and T2(N), T1(N) + T2(N) would never be greater than C * f(N). Since there is at least one choice of C and k that makes the relationship true for every possible value of T1(N) and T2(N), the statement is true.

Note: I copied and pasted this from a Word document, so the subscript notation did not carry over. In this case, know that T1(N) stands for T_1(N) and T2(N) stands for T_2(N). Sorry if it makes this look a little messy, but I just don't have the energy to go through the entire paragraph and change all of the notation to fit the forum.
 

vaderboi
Actually, I have come up with much better reasoning:

(a) True. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). We know that neither T1(N) nor T2(N) can be greater than f(N). Because of this, it follows that there will always exist some constant C such that C * f(N) exceeds the value of abs(T1(N) + T2(N)). For example, if T1(N) = 1000N and T2(N) = 500N, then we know that there will always be a value of C that can be multiplied by f(N) to equal or exceed T1(N) + T2(N); in this case we would make C equal to or greater than 1500. For all cases, we would make k = 0, so N must be more than 0. Since neither T1(N) nor T2(N) can ever be greater than f(N) separately, we know there will always be a value of C that can be multiplied by f(N) to make C * f(N) equal to or greater than T1(N) + T2(N). Q.E.D.
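Just to convince myself numerically, here is a quick Python check of the example above (the specific functions 1000N and 500N are just my illustrative choice, not from the problem):

```python
# Sanity check for (a) with T1(N) = 1000N, T2(N) = 500N, f(N) = N,
# and the witnesses C = 1500, k = 0 suggested above.

def T1(n): return 1000 * n
def T2(n): return 500 * n
def f(n):  return n

C, k = 1500, 0

# For every sampled N > k, |T1(N) + T2(N)| should be at most C * |f(N)|.
for n in range(k + 1, 10_000):
    assert abs(T1(n) + T2(n)) <= C * abs(f(n))

print("T1(N) + T2(N) <= C * f(N) holds for all sampled N > k")
```

Of course a finite check is not a proof, but it at least shows the chosen C and k are not obviously wrong.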
 

Dr.Peterson

Elite Member
Joined
Nov 12, 2017
Messages
3,334
There are several wrong details in what you've said; you would really need to mention several constants from which you could determine C, and you'd have to mention that all this is true for N greater than some particular value, from which you would determine k (not necessarily 0), and so on. But your course is not about making proofs, so we don't need to perfect your thinking. The general ideas are more or less right. To learn more about this, you'd want to study a proof-based textbook that covers the topic.

One of the benefits of the Big-O concept is that you can use it, once you have theorems like this one, without paying attention to all the details that make a proof complicated.
 

vaderboi
Why would I need to determine several constants C? Isn't it enough to know, in this case, that no matter how I add T1(N) + T2(N), there can always be a C that I can multiply by f(N) that makes the relationship true? Isn't it enough to know that T1(N) and T2(N) are separately less than or equal to f(N), and therefore I can always find some value of C to multiply by f(N) that makes the relationship true?
 

vaderboi
For now, it seems like informally proving something is enough for my professor. She doesn't need us to write formal proofs. But if I did want to truly prove different relationships, I would definitely want to learn way more about formal proofs.
 

vaderboi
Also, is my reasoning for (b) sound? I can basically use the same logic as for (a), right, except with little-oh notation?

(b) Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). We know that neither T1(N) nor T2(N) can be greater than f(N). Because of this, it follows that there will always exist some constant C such that C * f(N) exceeds the value of abs(T1(N) - T2(N)). For example, if T1(N) = 5000N and T2(N) = 500N, then we know that in the context of T1(N) - T2(N) there will always be a value of C that can be multiplied by f(N) to equal or exceed T1(N) - T2(N); in this case we would make C greater than 5500. For all cases, we would make k = 0, so N must be more than 0. Since neither T1(N) nor T2(N) can ever be greater than f(N) separately, we know there will always be a value of C that can be multiplied by f(N) to make C * f(N) greater than T1(N) - T2(N). Q.E.D.
 

vaderboi
Actually, I realize I did (b) wrong. I misread the definition for little-oh notation. Here is my reworked answer for (b) that I think is correct:

False. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). We know that neither T1(N) nor T2(N) can be greater than f(N). For little-oh notation we only need to find one value of C that makes the relationship untrue. If T1(N) = 2N, T2(N) = N, and f(N) = N, there exists a C which makes the statement false. For example, 2N - N < 1 * N is NOT true, since N is not less than N. Since there is at least one value of C that makes the relationship untrue, the statement is false.
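Here is a quick Python check of this counterexample (my own sketch, just to verify the choice C = 1): little-oh would require the strict inequality to hold for every C, and C = 1 breaks it at every N.

```python
# Counterexample check for (b): T1(N) = 2N, T2(N) = N, f(N) = N.
# T1 - T2 = o(f) would require |T1(N) - T2(N)| < C * |f(N)| for EVERY
# C > 0 (for large enough N).  With C = 1 the inequality fails at every N >= 1.

def T1(n): return 2 * n
def T2(n): return n
def f(n):  return n

C = 1
violations = [n for n in range(1, 1000)
              if not (abs(T1(n) - T2(n)) < C * abs(f(n)))]

assert len(violations) == 999  # the strict inequality fails for every sampled N
print("C = 1 refutes little-oh: 2N - N = N is never strictly less than 1 * N")
```

No matter how far out you go in N, the ratio abs(T1(N) - T2(N)) / f(N) stays at 1, so it never drops below C = 1.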
 

Dr.Peterson
Why would I need to determine several constants C? Isn't it enough to know, in this case, that no matter how I add T1(N) + T2(N), there can always be a C that I can multiply by f(N) that makes the relationship true? Isn't it enough to know that T1(N) and T2(N) are separately less than or equal to f(N), and therefore I can always find some value of C to multiply by f(N) that makes the relationship true?
In a proof, you have to start with exactly what you know, in this case the definition of Big-O. For each given function, that means you assume a different constant multiplier and a different minimum input - each function is not less than f(N) for all N, but rather is less than its own constant times f(N) for N greater than its own minimum input.

So you state the definition as applied to each function, then use that to show that, for N greater than the larger of the two minimum inputs, |T1(N) + T2(N)| is less than the greater of the two C's times f(N), and so on. As I said, the basic idea is along the lines of what you wrote, but you have to be precise about details, not just pull a single C out of a hat without showing how to find it from the C's you know exist.
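To make the recipe concrete, here is a small Python sketch (the functions and constants below are my own illustrative choices, not part of the problem): each function is bounded by its own constant times f(N) past its own minimum input, and then C = C1 + C2 with k = max(k1, k2) bounds the sum.

```python
# Illustration of the proof recipe: suppose |T1(N)| <= C1 * |f(N)| for N > k1
# and |T2(N)| <= C2 * |f(N)| for N > k2.  Then C = C1 + C2 and k = max(k1, k2)
# witness T1 + T2 = O(f).

def T1(n): return 3 * n * n + 50   # O(n^2); C1 = 4 works once n^2 >= 50, i.e. n > 8
def T2(n): return 5 * n * n - n    # O(n^2); C2 = 5 works for all n > 0
def f(n):  return n * n

C1, k1 = 4, 8
C2, k2 = 5, 0
C, k = C1 + C2, max(k1, k2)        # the derived constants for the sum

for n in range(k + 1, 5000):
    assert abs(T1(n)) <= C1 * abs(f(n))          # T1's own bound
    assert abs(T2(n)) <= C2 * abs(f(n))          # T2's own bound
    assert abs(T1(n) + T2(n)) <= C * abs(f(n))   # bound for the sum

print("C = C1 + C2 and k = max(k1, k2) bound the sum on all sampled N")
```

The point is that C and k for the sum are *computed* from the constants you already know exist, rather than pulled out of a hat.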
 

Dr.Peterson
Actually, I realize I did (b) wrong. I misread the definition for little-oh notation. Here is my reworked answer for (b) that I think is correct:

False. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). We know that neither T1(N) nor T2(N) can be greater than f(N). For little-oh notation we only need to find one value of C that makes the relationship untrue. If T1(N) = 2N, T2(N) = N, and f(N) = N, there exists a C which makes the statement false. For example, 2N - N < 1 * N is NOT true, since N is not less than N. Since there is at least one value of C that makes the relationship untrue, the statement is false.
You got it. That's pretty much the same counterexample I had in mind myself.

There's just a little detail missing here: you haven't mentioned the minimum n that your definition includes. But that won't affect the result.

If you want to discuss this further, please copy the definition as given to you, as there may be slight variations. (There are two very different ways to state the definition, but it is clear which of the two you are using; I just am not sure of the details of how yours says it.) At the moment I'm looking at the definition given here, where what you're missing is "n0".
 

vaderboi
Here are my answers for (c) and (d). Is my reasoning correct?

c. False. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). All we need is one example where there is no C that would make the relationship true for all values of N > k. If T1(N) = N^2, T2(N) = N, and f(N) = N^2, then the relationship would read N = O(1). There is no C that could make this true for all values of N > k: eventually N will become larger than C * abs(1), no matter how large C is. Because of this, the statement is false.

d. False. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). All we need is one example where there is no C that would make the relationship true for all values of N > k. Say T1(N) = N^2, T2(N) = N, and f(N) = N^2. No matter what value of C is multiplied by abs(T2(N)), C * T2(N) will not be greater than or equal to T1(N) for all N > k. Because of this, the statement is false.
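Here is a quick Python check of both counterexamples (my own sketch, using T1(N) = N^2 and T2(N) = N as above): for any fixed C, taking N = C + 1 already breaks the would-be bound.

```python
# (c): T1(N)/T2(N) = N^2/N = N cannot be O(1), because for any fixed C
# the value N eventually exceeds C * 1.
def ratio(n):
    return (n * n) / n   # equals N

for C in (1, 10, 1000, 10**6):
    N = C + 1
    assert ratio(N) > C * 1   # N = C + 1 already violates the bound

# (d): T1 = O(T2) fails, because for any fixed C we have N^2 > C * N
# as soon as N > C.
for C in (1, 10, 1000):
    N = C + 1
    assert N * N > C * N

print("no single C works for all large N in either (c) or (d)")
```

This is the shape of both arguments: whatever C someone proposes, you can name a specific N beyond which the inequality fails.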
 