Elementary matrices: find E such that EA = B for....

sigma

Junior Member
Joined
Feb 19, 2006
Messages
106
Having a hard time understanding how to find elementary matrices in an equation. Here's the question: Find the elementary matrix E such that EA=B

a. \(\displaystyle
A = \left[ {\begin{array}{cc}
1 & 2 \\
3 & 4 \\
\end{array}} \right]\qquad B = \left[ {\begin{array}{cc}
3 & 4 \\
1 & 2 \\
\end{array}} \right]
\)

Answer: \(\displaystyle
\left[ {\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}} \right]
\)

b. \(\displaystyle
A = \left[ {\begin{array}{ccc}
1 & 2 & 3 \\
2 & 3 & 5 \\
3 & 7 & 4 \\
\end{array}} \right]\qquad B = \left[ {\begin{array}{ccc}
1 & 2 & 3 \\
3 & 5 & 8 \\
3 & 7 & 4 \\
\end{array}} \right]
\)

Answer: \(\displaystyle
\left[ {\begin{array}{ccc}
1 & 0 & 0 \\
1 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}} \right]
\)

It's easy to find (a) because it's a 2x2 matrix, so I can just set it up algebraically and solve for E, but with the 3x3 matrix in (b) you would have to write a book to do all the calculations algebraically. I tried isolating E with \(\displaystyle E = BA^{-1}\), so taking the inverse of A and then multiplying it by B to find E, but I couldn't get that to work. I would show the work I have done, but it would take hours to type. How do I solve for E with a 3x3 matrix?
 
I would have multiplied on the right by A<sup>-1</sup>. I'm surprised that this didn't work. What did you get when you tried this?

Thank you.

Eliz.
 
Part (b) is easy if you just take time to think about what is happening.
We operate on A with an elementary matrix E.
What does it do to A? Well, it leaves the first and third rows unchanged!
But it adds the first row to the second row and puts the result in the second row.
Look at \(\displaystyle \left[ \begin{array}{ccc}
1 & 0 & 0 \\
1 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right]\) do you see how that matrix will do exactly that?
 
Retry \(\displaystyle BA^{-1}\)

Remember, \(\displaystyle A^{-1}B\) isn't the same thing.

\(\displaystyle A^{-1}=\begin{bmatrix}-2&1\\\frac{3}{2}&\frac{-1}{2}\end{bmatrix}\)

\(\displaystyle \begin{bmatrix}3&4\\1&2\end{bmatrix}\cdot\begin{bmatrix}-2&1\\\frac{3}{2}&\frac{-1}{2}\end{bmatrix}=\begin{bmatrix}0&1\\1&0\end{bmatrix}\)

Also, \(\displaystyle AB^{-1}\) gives the same matrix here, but only because this particular E is its own inverse.
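Both products for part (a) can be checked with exact rational arithmetic. This is my own sketch, not part of the thread (`matmul` and `inv2` are helper names I made up):

```python
from fractions import Fraction as F

def matmul(X, Y):
    # Plain matrix product over lists of lists.
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def inv2(M):
    # 2x2 inverse by the adjugate formula: swap the diagonal,
    # negate the off-diagonal, divide by the determinant.
    a, b = M[0]
    c, d = M[1]
    det = a*d - b*c
    return [[ d/det, -b/det],
            [-c/det,  a/det]]

A = [[F(1), F(2)], [F(3), F(4)]]
B = [[F(3), F(4)], [F(1), F(2)]]
swap = [[F(0), F(1)], [F(1), F(0)]]

assert matmul(B, inv2(A)) == swap   # E = BA^{-1}
assert matmul(A, inv2(B)) == swap   # coincides here only because E^{-1} = E
```

For a non-involutive E (say, one that scales a row by 2), the second assertion would fail; only \(BA^{-1}\) is correct in general.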
 
To expand on pka's reply: you should not need to take an inverse. There are only 3 elementary row operations:
1. swap rows,
2. multiply a row by p,
3. add to a row p times another row.

So using a process of elimination in that order, you should be able to figure out with some simple calculations what the row operation was. To get E, just perform that same operation on the unit matrix.
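The process of elimination described above can be sketched in code. This is my own illustration, not from the thread (the function name `find_elementary` is made up); it assumes A and B differ by exactly one elementary row operation:

```python
from fractions import Fraction

def find_elementary(A, B):
    """Identify the single row operation turning A into B, apply it to the
    identity matrix, and return the resulting elementary matrix E."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    B = [[Fraction(x) for x in row] for row in B]
    E = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    changed = [i for i in range(n) if A[i] != B[i]]
    if len(changed) == 2:                    # op 1: two rows were swapped
        i, j = changed
        E[i], E[j] = E[j], E[i]
        return E
    if len(changed) == 1:
        i = changed[0]
        # op 2: row i was multiplied by p
        k = next((k for k in range(n) if A[i][k] != 0), None)
        if k is not None:
            p = B[i][k] / A[i][k]
            if [p * x for x in A[i]] == B[i]:
                E[i][i] = p
                return E
        # op 3: p times another row j was added to row i
        extra = [b - a for a, b in zip(A[i], B[i])]   # should equal p * A[j]
        for j in range(n):
            if j == i or all(x == 0 for x in A[j]):
                continue
            k = next(k for k in range(n) if A[j][k] != 0)
            p = extra[k] / A[j][k]
            if [p * x for x in A[j]] == extra:
                E[i][j] = p
                return E
    return None                              # not a single elementary operation
```

For part (b), `find_elementary([[1,2,3],[2,3,5],[3,7,4]], [[1,2,3],[3,5,8],[3,7,4]])` returns the identity with an extra 1 in position (2,1), i.e. the book's answer.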
 
JakeD said:
To expand on pka's reply: you should not need to take an inverse. There are only 3 elementary row operations (swap rows, multiply a row by p, add to a row p times another row). So using a process of elimination, you should be able to figure out just by looking at the matrices what the row operation was. To get E, just perform that same operation on the unit matrix.
I can almost guarantee you that that sort of problem is designed to prepare you for several applications of this technique. For example we use these ideas to prove that two simple graphs are isomorphic.
 
I just don't understand. Taking the inverse for (a) works, but when I try it in (b) it's just a disaster. Here's what I got for \(\displaystyle A^{-1}\) for (b)

\(\displaystyle
\left[ {\begin{array}{ccc}
-\frac{7}{4} & \frac{7}{4} & -\frac{1}{4} \\
\frac{13}{4} & -\frac{5}{4} & -\frac{1}{4} \\
-\frac{5}{4} & \frac{1}{4} & \frac{1}{4} \\
\end{array}} \right]
\)

Ugly, isn't it? Just way too much work to go this way. When I multiply B by this \(\displaystyle A^{-1}\), I get

\(\displaystyle
\left[ {\begin{array}{ccc}
1 & 0 & 0 \\
1 & 1 & 0 \\
\frac{25}{2} & -\frac{5}{2} & -\frac{3}{2} \\
\end{array}} \right]
\)

Which does not equal the correct answer. What confuses me more is that the first 2 rows of my answer match the book's answer, but the last row is completely different. This does not make sense: if the 1st and 2nd rows check out, the 3rd row should too. It takes too long to do it this way anyway.

JakeD said:
To expand on pka's reply: you should not need to take an inverse. There are only 3 elementary row operations:
1. swap rows,
2. multiply a row by p,
3. add to a row p times another row.

So using a process of elimination in that order, you should be able to figure out with some simple calculations what the row operation was. To get E, just perform that same operation on the unit matrix.

Please bear with me, but I do not understand this. I need to see some sort of visual example that my text fails to provide. I know that whatever row operation is done to A, you do that same row operation to the identity matrix and that becomes E. But does that mean I'm just row reducing matrix A, and when I get to the last step before A is completely row reduced, I do that step to the unit matrix and that becomes E? Without some sort of visual example, this is really hard for me to grasp. It's hard to explain/understand stuff like this in all words.
 
O.K. Let’s take an example.
Using \(\displaystyle \left[ {\begin{array}{ccc}
1 & 2 & 3 \\
2 & 3 & 5 \\
3 & 7 & 4 \\
\end{array}} \right]\) we want to make a new matrix where we have swapped rows 2 & 3 and replaced the first row with the sum of rows 2 & 3.

That would be \(\displaystyle \left[ {\begin{array}{ccc}
5 & 10 & 9 \\
3 & 7 & 4 \\
2 & 3 & 5 \\
\end{array}} \right].\) Is that correct?

The matrix \(\displaystyle \left[ {\begin{array}{ccc}
0 & 1 & 1 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}} \right]\) times the matrix \(\displaystyle \left[ {\begin{array}{ccc}
1 & 2 & 3 \\
2 & 3 & 5 \\
3 & 7 & 4 \\
\end{array}} \right]\) gives \(\displaystyle \left[ {\begin{array}{ccc}
5 & 10 & 9 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}} \right].\)

The matrix \(\displaystyle \left[ {\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}} \right]\) times the matrix \(\displaystyle \left[ {\begin{array}{ccc}
1 & 2 & 3 \\
2 & 3 & 5 \\
3 & 7 & 4 \\
\end{array}} \right]\) gives \(\displaystyle \left[ {\begin{array}{ccc}
0 & 0 & 0 \\
3 & 7 & 4 \\
2 & 3 & 5 \\
\end{array}} \right].\)

Therefore the matrix we want is the matrix \(\displaystyle \left[ {\begin{array}{ccc}
0 & 1 & 1 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}} \right].\)
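The claims above can be checked by carrying out the multiplications; this quick verification is my own addition, not part of the thread:

```python
def matmul(X, Y):
    # Plain matrix product over lists of lists.
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[1, 2, 3],
     [2, 3, 5],
     [3, 7, 4]]

# The combined matrix: row 1 picks out (row 2 + row 3) of A,
# rows 2 and 3 pick out rows 3 and 2 of A respectively.
E = [[0, 1, 1],
     [0, 0, 1],
     [0, 1, 0]]

assert matmul(E, A) == [[5, 10, 9],
                        [3, 7, 4],
                        [2, 3, 5]]
```

Note this combined E performs two row operations at once, so it is a product of elementary matrices rather than a single elementary matrix; pka's example shows how each row of E selects a combination of rows of A.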
 
What they were getting at is what Jake was saying.

Note your matrix A in part b:

\(\displaystyle \begin{bmatrix}1&2&3\\2&3&5\\3&7&4\end{bmatrix}\)

Note your B matrix in part b:

\(\displaystyle \begin{bmatrix}1&2&3\\3&5&8\\3&7&4\end{bmatrix}\)

See the difference between them? The 2nd row. To get B, they have added the 1st row of A to the 2nd and replaced the 2nd with the result.
See? 1+2=3, 2+3=5, 3+5=8. There's the new row 2 in B: 3 5 8.

Now, do the same operation on the identity matrix.

\(\displaystyle \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}\)

Add the 1st row to the 2nd row and replace the 2nd row with the result:

\(\displaystyle \begin{bmatrix}1&0&0\\1&1&0\\0&0&1\end{bmatrix}\)

See?
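A direct check (my own addition, not from the thread) that this E really sends A to B in part (b):

```python
def matmul(X, Y):
    # Plain matrix product over lists of lists.
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

E = [[1, 0, 0],
     [1, 1, 0],
     [0, 0, 1]]
A = [[1, 2, 3], [2, 3, 5], [3, 7, 4]]
B = [[1, 2, 3], [3, 5, 8], [3, 7, 4]]

assert matmul(E, A) == B   # E performs "add row 1 to row 2" on A
```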

Do the inverse thing too. It'll work if done correctly. This is easiest, though, huh?

Elementary row ops are pretty cool once you wrap your brain around them. You can use them to find the inverse of your matrix. Did you do that by hand or with a calculator?
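To see that the inverse route does work for part (b) when the arithmetic is exact, here is a sketch (my own code, not from the thread) using rational arithmetic so nothing gets mangled by rounding. Note that det(A) = 6 for this A, so the entries of \(A^{-1}\) come out in sixths, not quarters:

```python
from fractions import Fraction as F

def inverse(M):
    """Invert M by Gauss-Jordan elimination on the augmented matrix [M | I],
    using exact rational arithmetic throughout."""
    n = len(M)
    aug = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        # Find a nonzero pivot at or below the diagonal and swap it up.
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        # Scale the pivot row, then clear the column in every other row.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f*y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def matmul(X, Y):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[1, 2, 3], [2, 3, 5], [3, 7, 4]]
B = [[1, 2, 3], [3, 5, 8], [3, 7, 4]]
E = matmul(B, inverse(A))
assert E == [[1, 0, 0], [1, 1, 0], [0, 0, 1]]   # the book's answer
```

So \(E = BA^{-1}\) does reproduce the answer; the disaster in the earlier post came from a miscomputed \(A^{-1}\), not from the method.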
 