Prove or Disprove.

TheWrathOfMath

v1...vk is a set of linearly independent vectors.
Vector u cannot be written as a linear combination of the above set.
Then {v1...vk, u} is linearly independent.
 
Why can't u be written as a linear combination of v1 and v2?

Any vector in R^2 can be written as a linear combination of v1 and v2.

Let (a, b) be in R^2. Then (a, b) = a(1,0) + b(0,1)
 
v1...vk is a set of linearly independent vectors.
Vector u cannot be written as a linear combination of the above set.
Then {v1...vk, u} is linearly independent.

I think that this is a false statement and disproved it using the following counterexample:

Set k=2, then:
v1= (1,0)
v2= (0,1)
u= (0,0)

The set of v1 and v2 is linearly independent; u cannot be written as a linear combination of v1 and v2.
However, the set {v1, v2, u} is linearly dependent since the set contains the zero vector.

Is this a valid counterexample?
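Here is a quick numerical check of the candidate counterexample (only a rough sketch, assuming NumPy is available; v1, v2 and u are the vectors defined above):

import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
u = np.array([0.0, 0.0])

# {v1, v2} is linearly independent: the matrix with v1, v2 as columns has rank 2.
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))     # 2

# {v1, v2, u} is linearly dependent: three vectors, but the rank is still 2.
print(np.linalg.matrix_rank(np.column_stack([v1, v2, u])))  # 2

# However, u can be written as a combination of v1 and v2 after all:
# solving for the coefficients gives exactly the trivial combination 0*v1 + 0*v2.
coeffs, *_ = np.linalg.lstsq(np.column_stack([v1, v2]), u, rcond=None)
print(coeffs)  # [0. 0.]

So the enlarged set is indeed dependent, but the hypothesis that u cannot be written as a linear combination of v1 and v2 is not satisfied.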
[imath]0\cdot v_1+0\cdot v_2=?[/imath]
 
[imath]0\cdot v_1+0\cdot v_2=?[/imath]
Yes, I just realized.
Never mind.

I do not know how to approach this question, then.

Is stating that u is not in span{v1...vk} and that a1v1+...+akvk = 0 only when a1 = ... = ak = 0 a good place to start?
Why can't u be written as a linear combination of v1 and v2?

Any vector in R^2 can be written as a linear combination of v1 and v2.

Let (a, b) be in R^2. Then (a, b) = a(1,0) + b(0,1)
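In particular, the proposed [imath]u=(0,0)[/imath] is such a combination: [imath](0,0)=0\cdot(1,0)+0\cdot(0,1)[/imath], the trivial one, which is why it cannot serve as a counterexample.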
 
Suppose that [imath]\gamma_0 {\bf u}+\gamma_1 V_1+\gamma_2 V_2+\cdots+\gamma_k V_k=\bf 0[/imath]
What would it imply if [imath]\gamma_0\ne 0~\&~(\exists j\ge 1)[\gamma_j\ne 0]~?[/imath]
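In particular, if [imath]\gamma_0\ne 0[/imath], the relation could be rearranged to [imath]{\bf u}=-\frac{\gamma_1}{\gamma_0}V_1-\frac{\gamma_2}{\gamma_0}V_2-\cdots-\frac{\gamma_k}{\gamma_0}V_k[/imath], which would express u as a linear combination of [imath]V_1,\dots,V_k[/imath].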
 
v1...vk is a set of linearly independent vectors.
Vector u cannot be written as a linear combination of the above set.
Then {v1...vk, u} is linearly independent.
I would look at the contrapositive of the theorem.
We are given that v1...vk is a set of l.i. vectors
Suppose {v1...vk, u} is a linearly dependent set.
(we need to show that u can be written as a l.c. of {v1...vk}.)

Case 1: u=0.
Then u = 0v1 + ... + 0vk.

Case 2: u is not the zero vector.
Since v1...vk is a set of l.i. vectors we know that the only l.c. that sums to the zero vector is the trivial one.
Since {v1...vk, u} is a linearly dependent set we know that there exists c1, ..., ck, c(k+1), where c1,...,ck are not all 0, such that
c1v1 + c2v2 + ... + ckvk + c(k+1)u = 0

Now why can't c(k+1) = 0 and what does that tell us????
 
Meant to write...
Since {v1...vk, u} is a linearly dependent set we know that there exists c1, ..., ck, c(k+1), where c1,...,c(k+1) are not all 0
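As a concrete sanity check of the claim being discussed, here is a rough NumPy sketch (the specific vectors v1, v2 and the two sample choices of u are my own, picked only for illustration):

import numpy as np

# Two linearly independent vectors in R^3, chosen only for illustration.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

# Case A: u IS a linear combination of v1 and v2, and the enlarged set
# {v1, v2, u} becomes dependent (three vectors, but rank only 2).
u_dep = 2 * v1 - 3 * v2
print(np.linalg.matrix_rank(np.column_stack([v1, v2, u_dep])))    # 2

# Case B: u is NOT in span{v1, v2}, and {v1, v2, u} stays independent (rank 3),
# as the statement being proved asserts.
u_indep = np.array([0.0, 0.0, 1.0])
print(np.linalg.matrix_rank(np.column_stack([v1, v2, u_indep])))  # 3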
 
I would look at the contrapositive of the theorem.
We are given that v1...vk is a set of l.i. vectors
Suppose {v1...vk, u} is a linearly dependent set.
(we need to show that u can be written as a l.c. of {v1...vk}.)

Case 1: u=0.
Then u = 0v1 + ... + 0vk.


Case 2: u is not the zero vector.
Since v1...vk is a set of l.i. vectors we know that the only l.c. that sums to the zero vector is the trivial one.
Since {v1...vk, u} is a linearly dependent set we know that there exists c1, ..., ck, c(k+1), where c1,...,ck are not all 0, such that
c1v1 + c2v2 + ... + ckvk + c(k+1)u = 0

Now why can't c(k+1) = 0 and what does that tell us????
So don't I need to draw conclusions from this and disprove this case?
What should I write? That vector u cannot be equal to the zero vector, since then it could be written as a linear combination of vectors v1...vk -- contradiction?
 
I would look at the contrapositive of the theorem.
We are given that v1...vk is a set of l.i. vectors
Suppose {v1...vk, u} is a linearly dependent set.
(we need to show that u can be written as a l.c. of {v1...vk}.)

Case 1: u=0.
Then u = 0v1 + ... + 0vk.

Case 2: u is not the zero vector.
Since v1...vk is a set of l.i. vectors we know that the only l.c. that sums to the zero vector is the trivial one.
Since {v1...vk, u} is a linearly dependent set we know that there exists c1, ..., ck, c(k+1), where c1,...,ck are not all 0, such that
c1v1 + c2v2 + ... + ckvk + c(k+1)u = 0

Now why can't c(k+1) = 0 and what does that tell us????
No idea.
 
I would look at the contrapositive of the theorem.
We are given that v1...vk is a set of l.i. vectors
Suppose {v1...vk, u} is a linearly dependent set.
(we need to show that u can be written as a l.c. of {v1...vk}.)

Case 1: u=0.
Then u = 0v1 + ... + 0vk.

Case 2: u is not the zero vector.
Since v1...vk is a set of l.i. vectors we know that the only l.c. that sums to the zero vector is the trivial one.
Since {v1...vk, u} is a linearly dependent set we know that there exists c1, ..., ck, c(k+1), where c1,...,ck are not all 0, such that
c1v1 + c2v2 + ... + ckvk + c(k+1)u = 0

Now why can't c(k+1) = 0 and what does that tell us????
Perhaps c(k+1) cannot be equal to zero, since then u cannot be written as a linear combination of v1...vk, which contradicts our assumption that u can be written as a linear combination of {v1...vk}.
 
Perhaps c(k+1) cannot be equal to zero, since then u cannot be written as a linear combination of v1...vk, which contradicts our assumption that u can be written as a linear combination of {v1...vk}.
Look at post #5 again. Hint: Solve for u.

-Dan
 
Look at post #5 again. Hint: Solve for u.

-Dan
Prove or disprove:
{v1...vk} is a set of linearly independent vectors.
Vector u cannot be written as a linear combination of the above set.
Then {v1...vk, u} is linearly independent.

Like Steven G wrote:

I would look at the contrapositive of the theorem.
We are given that {v1...vk} is a set of linearly independent vectors.
Suppose {v1...vk, u} is a linearly dependent set.

Case 1: u=0.
Then u = 0v1 + ... +0vk.
This means that u can be written as a linear combination of {v1...vk} -- contradiction.


Case 2: u=/= 0
Since {v1...vk} is a set of linearly independent vectors, we know that the only linear combination that sums to the zero vector is the trivial one.
Since we assume that {v1...vk, u} is a linearly dependent set, we know that there exists c1, ..., ck, b, where c1,...,b are not all 0, such that
c1v1 + c2v2 +...+ ckvk + bu = 0.
Let b=/=0, since we know that not all scalars are equal to zero.

Then u= (-c1/b)v1+...+(-ck/b)vk
which means that u can be written as a linear combination of {v1...vk}, which means that there is a contradiction, hence {v1...vk, u} is a linearly independent set?
 
Prove or disprove:
{v1...vk} is a set of linearly independent vectors.
Vector u cannot be written as a linear combination of the above set.
Then {v1...vk, u} is linearly independent.

Like Steven G wrote:

I would look at the contrapositive of the theorem.
We are given that {v1...vk} is a set of linearly independent vectors.
Suppose {v1...vk, u} is a linearly dependent set.

Case 1: u=0.
Then u = 0v1 + ... +0vk.
This means that u can be written as a linear combination of {v1...vk} -- contradiction.


Case 2: u=/= 0
Since {v1...vk} is a set of linearly independent vectors, we know that the only linear combination that sums to the zero vector is the trivial one.
Since we assume that {v1...vk, u} is a linearly dependent set, we know that there exists c1, ..., ck, b, where c1,...,b are not all 0, such that
c1v1 + c2v2 +...+ ckvk + bu = 0.
Let b=/=0, since we know that not all scalars are equal to zero.

Then u= (-c1/b)v1+...+(-ck/b)vk
which means that u can be written as a linear combination of {v1...vk}, which means that there is a contradiction, hence {v1...vk, u} is a linearly independent set?
Looks good to me.

-Dan
 
No, you can't just say that b is not zero.
However, IF b = 0, then c1v1 + c2v2 +...+ ckvk + bu = 0 becomes c1v1 + c2v2 +...+ ckvk = 0. What does this imply? Remember that v1,...,vk is linearly independent!
 
No, you can't just say that b is not zero.
However, IF b = 0, then c1v1 + c2v2 +...+ ckvk + bu = 0 becomes c1v1 + c2v2 +...+ ckvk = 0. What does this imply? Remember that v1,...,vk is linearly independent!
I would like to say that it contradicts the fact that {v1...vk} is linearly independent; however, in order to do so, I must first show that at least one of the scalars c1...ck is not equal to zero.
 
What is the problem here? c1v1 + c2v2 +...+ ckvk = 0 implies that c1 = ... = ck = 0, since v1,...,vk is a linearly independent set. Now if b = 0 as well, we have a problem. What is that problem?
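Presumably the problem is this: if b = 0, then together with c1 = ... = ck = 0 every coefficient in c1v1 + c2v2 +...+ ckvk + bu = 0 is zero, contradicting the assumption that c1, ..., ck, b are not all 0. So b cannot be 0, which is exactly what justifies dividing by b and writing u = (-c1/b)v1 + ... + (-ck/b)vk.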
 