In a proof, you have to start with exactly what you know, in this case the definition of Big-O. For each given function, that means you assume a different constant multiplier and a different minimum input: each function is not less than f(N) for all N, but rather is less than its own constant times f(N) for all N greater than its own minimum input.

Why would I need to determine several constants C? Isn't it enough to know, in this case, that no matter how I add T1(N) + T2(N), there is always a C that I can multiply by f(N) to make the relationship true? Isn't it enough to know that T1(N) and T2(N) are separately O(f(N)), and that therefore I can always find some value of C to multiply by f(N) that makes the relationship true?
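Yes, and the separate constants are exactly what let you build that single C. Here is the combining step written out as a sketch, with witness names c_1, c_2, n_1, n_2 introduced here straight from the definition:

By hypothesis there are constants \(c_1, c_2 > 0\) and thresholds \(n_1, n_2\) with
\[
T_1(N) \le c_1\, f(N) \text{ for } N \ge n_1, \qquad T_2(N) \le c_2\, f(N) \text{ for } N \ge n_2 .
\]
Adding the two bounds, for every \(N \ge \max(n_1, n_2)\),
\[
T_1(N) + T_2(N) \le (c_1 + c_2)\, f(N),
\]
so \(C = c_1 + c_2\) and \(n_0 = \max(n_1, n_2)\) witness \(T_1(N) + T_2(N) = O(f(N))\).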
You got it. That's pretty much the same counterexample I had in mind myself.

Actually, I realize I did (b) wrong. I misread the definition of little-oh notation. Here is my reworked answer for (b) that I think is correct:
False. Suppose T1(N), T2(N), and f(N) are functions with T1(N) = O(f(N)) and T2(N) = O(f(N)). By definition, each of T1(N) and T2(N) is bounded above by some constant times f(N) for sufficiently large N. Little-oh is a stronger condition: T1(N) - T2(N) = o(f(N)) would require T1(N) - T2(N) < c * f(N) for every constant c > 0 once N is large enough, so to disprove it we only need to find one value of c for which the inequality fails.

Let T1(N) = 2N, T2(N) = N, and f(N) = N; these satisfy the hypothesis. Choosing c = 1, the required inequality 2N - N < 1 * N becomes N < N, which is never true. Since at least one choice of c makes the relationship fail, the statement is false.
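In symbols, a sketch of the same counterexample, using the definition that T(N) = o(f(N)) iff for every constant c > 0 there is an n_0 with T(N) < c * f(N) for all N > n_0:

\[
T_1(N) - T_2(N) = 2N - N = N .
\]
Take \(c = 1\). The definition would require \(N < 1 \cdot N\) for all sufficiently large \(N\), which never holds, so \(T_1(N) - T_2(N) \ne o(f(N))\). One failing choice of \(c\) is enough, because little-oh quantifies over every \(c > 0\).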