Matrix Proof: If det(A)=0, then A inverse does not exist

onemachine

New member
Joined
Feb 2, 2012
Messages
28
Let A be a (2x2) matrix. Prove:

If the determinant of A is zero, then A inverse does not exist.

(can't use the 2x2 inverse formula...because that is essentially what I am proving)


So far I've tried assuming for contradiction that A inverse does exist.

Then A*A inverse = I
and A inverse * A = I

I let the rows of A be [a b] and [c d] respectively.
Let the rows of A inverse be [w x] and [y z].

This allows me to come up with tons of equations, but I still can't come up with a contradiction using ad-bc=0.

Any ideas? Thanks!
 
If A inverse existed, then det(A^(-1)A) = det(I) = 1. But det(A^(-1)A) = det(A^(-1))det(A) = det(A^(-1))*0 = 0, a contradiction.
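A concrete illustration (with a matrix chosen here just for the example): take A with rows [1 2] and [2 4], so det(A) = 1*4 - 2*2 = 0. If some matrix B satisfied BA = I, the property above would give

\(\displaystyle 1 \:=\: \det(I) \:=\: \det(BA) \:=\: \det(B)\det(A) \:=\: \det(B)\cdot 0 \:=\: 0,\)

which is impossible, so no such B exists.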
 
Very nice...thank you. We haven't been taught the properties of determinants yet so I had no way of knowing that property. But I found it in our book 2 chapters ahead. Thanks again!
 
here is a proof that doesn't use the formula det(AB) = det(A)det(B).

suppose det(A) = 0, where A is the matrix:

[a b]
[c d]. then ad - bc = 0, that is, ad = bc.

if any of a, b, c or d is 0, then at least one other entry must also be 0 (because ad = bc: a zero factor on one side forces a zero factor on the other side, and that factor shares a row or a column with the first).

thus A has a zero row or a zero column. either way, rank(A) ≤ 1, so the image (column space) of A has dimension 0 or 1. this means A cannot be invertible, because there are some vectors in R^2 that cannot be written as A(x,y); but if A had an inverse, every (x,y) could be written as (x,y) = A(u,v) with (u,v) = A^-1(x,y).
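to make this case concrete (an illustrative matrix chosen here, not part of the original post): take

\(\displaystyle A \:=\:\begin{bmatrix}0&0\\3&5\end{bmatrix}.\)

then \(\displaystyle A\begin{bmatrix}x\\y\end{bmatrix} \:=\:\begin{bmatrix}0\\3x+5y\end{bmatrix}\), so the vector (1,0) is never of the form A(x,y); but if A^-1 existed, we would have (1,0) = A(A^-1(1,0)), a contradiction.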

so suppose that none of a, b, c or d is 0. then ad = bc means a/c = b/d, so (b,d) = d((b/d),1) = d((a/c),1) = (d/c)(a,c); that is, column 2 is a scalar multiple of column 1. so again we see that rank(A) < 2, and A cannot be invertible (if you prefer, you can show that a/b = c/d, so that row 2 is a scalar multiple of row 1, and row-reduce).
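a concrete instance of this second case (again, a matrix chosen here just as an illustration): for

\(\displaystyle A \:=\:\begin{bmatrix}2&4\\3&6\end{bmatrix},\)

we have ad - bc = 2*6 - 4*3 = 0, and column 2 is twice column 1. every image vector \(\displaystyle A\begin{bmatrix}x\\y\end{bmatrix} \:=\: x\begin{bmatrix}2\\3\end{bmatrix} + y\begin{bmatrix}4\\6\end{bmatrix} \:=\:(x+2y)\begin{bmatrix}2\\3\end{bmatrix}\) is a multiple of (2,3), so for example (1,0) is not in the image, and A cannot be invertible.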

****

this result generalizes to larger matrices as follows: if A is an nxn matrix and rank(A) < n, then A is not invertible (and det(A) = 0). put another way: A^-1 exists iff rref(A) = I.
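to see the rref criterion on a specific singular matrix (chosen here only to illustrate): row-reducing

\(\displaystyle \begin{bmatrix}1&2\\2&4\end{bmatrix} \;\xrightarrow{\;R_2 \to R_2 - 2R_1\;}\; \begin{bmatrix}1&2\\0&0\end{bmatrix} \:\neq\: I,\)

so this matrix is not invertible, and indeed its determinant 1*4 - 2*2 is 0.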

the proof that det(AB) = det(A)det(B) is not very pretty to wade through (although it is a very useful result), and some texts omit it.
 
For this case, it is not tedious at all. If A is degenerate then obviously so is BA. So det(A)=0 implies det(BA)=0=det(B)det(A). Replacing B with A^(-1) gives the result.
 
Hello, onemachine!

Let \(\displaystyle A\) be a \(\displaystyle 2 \times 2\) matrix.

Prove: .If the determinant of \(\displaystyle A\) is zero, then \(\displaystyle A^{\text{-}1}\) does not exist.


So far, I've tried assuming for contradiction that A inverse does exist.

Then: .\(\displaystyle A\!\cdot\!A^{\text{-}1} \:=\:A^{\text{-}1}\!\cdot\!A \:=\:I\)

I let: .\(\displaystyle A \:=\:\begin{bmatrix}a & b \\ c & d \end{bmatrix}\,\text{ and }\,A^{\text{-}1} \:=\:\begin{bmatrix}w&x \\ y&z\end{bmatrix}\)

This allows me to come up with tons of equations,
but I still can't come up with a contradiction using \(\displaystyle ad-bc=0.\) . You can't?

We have: .\(\displaystyle \begin{bmatrix}a&b \\ c&d\end{bmatrix}\,\begin{bmatrix}w&x \\ y&z \end{bmatrix} \;=\;\begin{bmatrix}1&0 \\ 0&1 \end{bmatrix}\)

Then: .\(\displaystyle \begin{bmatrix}aw + by & ax + bz \\ cw + dy & cx + dz\end{bmatrix} \;=\;\begin{bmatrix}1&0 \\ 0 & 1\end{bmatrix}\)

Hence: . \(\displaystyle \begin{Bmatrix}aw + by \:=\:1 & [1] && ax + bz \:=\:0 & [2] \\ cw + dy \:=\:0 & [3] && cx + dz \:=\:1 & [4] \end{Bmatrix}\)


\(\displaystyle \begin{array}{cccccccc}\text{Multiply [1] by }d\!: & adw + bdy \:=\:d \\
\text{Multiply [3] by }b\!: & bcw + bdy \:=\:0 \\ \text{Subtract:} & adw - bcw \:=\:d\end{array}\)

Therefore: .\(\displaystyle (ad-bc)w \:=\:d \quad\Rightarrow\quad w \:=\:\dfrac{d}{ad-bc}\)


But if \(\displaystyle ad-bc = 0\), then \(\displaystyle w\) is undefined.

Therefore, \(\displaystyle A^{\text{-}1}\) does not exist.
 

That implication is false. 0*1 = 0 is true, but it does not imply 1 = 0/0. Writing w = d/(ad-bc) already assumes that ad-bc is not zero.
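(For completeness, a sketch added here, not from the thread, of how the elimination can be finished without dividing by ad-bc: the same pairing of equations that gave \(\displaystyle (ad-bc)w \:=\:d\) also gives

\(\displaystyle (ad-bc)x \:=\:-b, \qquad (ad-bc)y \:=\:-c, \qquad (ad-bc)z \:=\:a.\)

If ad - bc = 0, these four identities force a = b = c = d = 0, so A is the zero matrix and \(\displaystyle A\,A^{-1} \:=\:0 \:\neq\: I\), which is the desired contradiction.)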
 
for a 2x2 matrix, one can prove det(A)det(B) = det(AB) explicitly easily enough (well, it's not TOO hard). proving it for a 3x3 matrix is...ugly. proving it for a 4x4 matrix is either an exercise in self-discipline, or torture, i'm not sure which.
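for the record, here is that explicit 2x2 computation, written out as a sketch with generic entries \(\displaystyle B \:=\:\begin{bmatrix}p&q\\r&s\end{bmatrix}\):

\(\displaystyle AB \:=\:\begin{bmatrix}ap+br & aq+bs\\ cp+dr & cq+ds\end{bmatrix},\)

\(\displaystyle \det(AB) \:=\:(ap+br)(cq+ds)-(aq+bs)(cp+dr) \:=\: adps + bcqr - adqr - bcps \:=\:(ad-bc)(ps-qr) \:=\:\det(A)\det(B).\)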

the equivalence of the statements "A is singular" and "det(A) = 0", is also non-trivial.
 
That all depends on what is known about them. Nearly everything is non-trivial if one back-tracks to definitions.
 