- Thread starter vaderboi

Is my reasoning for (a) sound?

(a)

True. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). We know that, no matter what, the absolute value of T1(N) and the absolute value of T2(N) can't be greater than a constant multiplied by the absolute value of f(N). This means that if f(N) = N, for instance, then T1(N) and T2(N) can each be at most a constant times N. Since the maximum values of T1(N) and T2(N) are each a constant times f(N), adding them together gives at most 2 * f(N) times the larger constant. Since we can safely say that the maximum value of T1(N) + T2(N) is a constant times f(N), it follows that we can easily find a value of C that makes T1(N) + T2(N) = O(f(N)) true.

Note: I copied and pasted this from a Word document, so the subscript notation did not carry over. In this case, know that T1(N) means T with a subscript 1, as a function of N.

(a) True. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). We know that neither T1(N) nor T2(N) can be greater than f(N). Because of this, it follows that there will always exist some constant C that makes the relationship true.
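The idea in the attempt above can be sanity-checked numerically with made-up functions. (T1, T2, f, and the constants c1, c2, n0 below are illustrative assumptions for the sketch, not part of the original problem.)

```python
# Hypothetical example functions illustrating part (a): if T1(N) <= c1*f(N)
# and T2(N) <= c2*f(N) for large enough N, then T1(N) + T2(N) is bounded by
# (c1 + c2)*f(N), so the sum is O(f(N)).
def T1(N): return 3 * N + 7   # O(N): bounded by 4*N once N >= 7
def T2(N): return 5 * N       # O(N): bounded by 5*N for all N >= 1
def f(N):  return N

c1, c2 = 4, 5                 # witness constants for T1 and T2
n0 = 7                        # both individual bounds hold from here on

for N in range(n0, 10_000):
    assert T1(N) <= c1 * f(N)
    assert T2(N) <= c2 * f(N)
    # The sum is bounded using the sum of the two constants:
    assert T1(N) + T2(N) <= (c1 + c2) * f(N)

print("T1 + T2 = O(f) holds with C = c1 + c2 =", c1 + c2)  # prints C = 9
```

This is only a spot check for one pair of functions, of course; the actual proof has to argue from the definition, as discussed below in the thread.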

- Joined: Nov 12, 2017
- Messages: 3,334

One of the benefits of the Big-O concept is that you can use it, once you have theorems like this one, without paying attention to all the details that make a proof complicated.




In a proof, do you have to start with exactly what you know, in this case the exact C? Isn't it enough to know that no matter how I add T1(N) + T2(N), there can always be a C that I can multiply by f(N) that makes the relationship true? Isn't it enough to know that T1(N) and T2(N) are separately less than or equal to f(N), and that therefore I can always find some value of C to multiply by f(N) that makes the relationship true?

So you state the definition as applied to each function, then use that to show that, for N greater than the larger of the two minimum inputs, |T1(N) + T2(N)| is less than or equal to the sum of the two C's times |f(N)|, and so on. As I said, the basic idea is along the lines of what you wrote, but you have to be precise about details, not just pull a single C out of a hat without showing how to find it from the C's you know exist.
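Written out with explicit constants, that argument is a short sketch (using C1, n1 and C2, n2 for the constants and minimum inputs the definition gives for each function):

```latex
% By hypothesis there exist $C_1, n_1$ with $|T_1(N)| \le C_1\,|f(N)|$
% for all $N \ge n_1$, and $C_2, n_2$ with $|T_2(N)| \le C_2\,|f(N)|$
% for all $N \ge n_2$. Then for all $N \ge n_0 = \max(n_1, n_2)$:
\[
  |T_1(N) + T_2(N)| \;\le\; |T_1(N)| + |T_2(N)|
  \;\le\; C_1\,|f(N)| + C_2\,|f(N)|
  \;=\; (C_1 + C_2)\,|f(N)|,
\]
% so $T_1(N) + T_2(N) = O(f(N))$ with witness $C = C_1 + C_2$.
```

The point is that the single C in the conclusion is constructed from the two C's the hypotheses guarantee, rather than assumed.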


You got it. That's pretty much the same counterexample I had in mind myself.

False. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). We know that neither T1(N) nor T2(N) can be greater than f(N). For little-oh notation we only need to find one example of C that makes the relationship untrue. If T1(N) = 2N, T2(N) = N, and f(N) = N, there exists a C which makes the statement false. For example, with C = 1, 2N – N < 1 * N is NOT true, since N is not less than N. Since there is at least one example of C that makes the relationship untrue, the answer is false.
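The counterexample above can be checked numerically (a quick sketch; the range bound is arbitrary):

```python
# Check of the counterexample for (b): with T1(N) = 2N, T2(N) = N, and
# f(N) = N, the difference T1(N) - T2(N) = N is never strictly below
# c*f(N) for c = 1 -- and little-oh requires the bound for EVERY c > 0,
# so one failing c is enough to disprove it.
def T1(N): return 2 * N
def T2(N): return N
def f(N):  return N

c = 1
violations = [N for N in range(1, 1000) if not (T1(N) - T2(N) < c * f(N))]
print(len(violations))  # every N in the range violates the bound
```

No matter how far out you go, the bound with c = 1 keeps failing, which is exactly why no minimum n0 can rescue the claim.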

There's just one little detail missing here: you haven't mentioned the minimum N from your definition. But that won't affect the result.

If you want to discuss this further, please copy the definition as it was given to you, as there may be slight variations. (There are two very different ways to state the definition, but it is clear which of the two you are using; I'm just not sure of the details of how yours states it.) At the moment I'm looking at the definition given here, where what you're missing is the n0.

c. False. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). All we need is one example where there is no C that makes the relationship true.

d. False. Suppose T1(N), T2(N), and f(N) are functions. By hypothesis, T1(N) = O(f(N)) and T2(N) = O(f(N)). All we need is one example where there is no C that makes the relationship true.