Section 1.11 Homework

'''7.''' Show that two 2-dimensional subspaces of a 3-dimensional vector space must have nontrivial intersection.<br />
<br />
{| class="mw-collapsible mw-collapsed" style = "text-align:left;"
 +
!Proof:
 +
|-
 +
|(by contradiction) Suppose <math>M,N</math> are both 2-dimensional subspaces of a 3-dimension vector space <math>V</math> and assume that <math>M,N</math> have trivial intersection. Then <math>M+N</math> is also a subspace of <math>V</math>, and since <math>M,N</math> have a trivial intersection <math>M+N = M \oplus N</math>. But then:<br />
 
<math>\dim (M+ N) = \dim M + \dim N = 2 + 2 = 4</math>. However, a subspace can have dimension at most that of the whole vector space, and <math>4 > 3</math>. This is a contradiction, and so <math>M,N</math> must have nontrivial intersection.<br />
|}
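<br />
For a concrete illustration, take <math>V = \mathbb{R}^3</math> and let <math>M</math> be the <math>xy</math>-plane and <math>N</math> the <math>xz</math>-plane. Then<br />
<math>M \cap N = \{(x,y,0)\} \cap \{(x,0,z)\} = \{(x,0,0)\}</math>,<br />
the <math>x</math>-axis: the two planes meet in a 1-dimensional subspace, which is exactly the nontrivial intersection the statement guarantees.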
 
<br />
'''8.''' Let <math>M_1,M_2 \subset V</math> be subspaces of a finite-dimensional vector space <math>V</math>. Show that <math>\dim (M_1 \cap M_2) + \dim (M_1 + M_2) = \dim M_1 + \dim M_2</math>.<br />
<br />
{| class="mw-collapsible mw-collapsed" style = "text-align:left;"
 +
!Proof:
 +
|-
 +
|Define the linear map <math>L: M_1 \times M_2 \to V</math> by <math>L(x_1,x_2) = x_1 - x_2</math>. Then by dimension formula <math>\dim(M_1 \times M_2) = \dim \ker(L) + \dim \text{im}(K)</math> First note that in general <math>\dim (V \times W) = \dim V + \dim W</math>. This fact I won’t prove here but is why <math>\dim \mathbb{R}^2 = 1+1 = 2</math>. Now <math>\ker(L) = \{(x_1,x_2): L(x_1,x_2) = 0\}</math>. That is, <math>(x_1,x_2) \in \ker(L)</math> iff <math>x_1 - x_2 = 0 \Rightarrow x_1 = x_2</math>. But since <math>x_1 \in M_1</math> and <math>x_2 \in M_2</math> and they are actually the same vector, <math>x_1 = x_2</math>, then we must have <math>x_1 = x_2 \in M_1 \cap M_2</math>. That says that the elements of the kernel are ordered pairs where the first and second component are equal and must be in <math>M_1 \cap M_2</math>. Then we can write <math>\ker(L) = \{ (x,x) : x \in M_1 \cap M_2\}</math>. I claim that this is isomorphic to <math>M_1 \cap M_2</math>. To prove this consider the function <math>\phi: M_1 \cap M_2 \to \ker(L)</math> as <math>\phi(x) = (x,x)</math>. This map <math>\phi</math> is an isomorphism which you can check. Since we have an isomorphism, the dimensions must equal and so <math>\dim(M_1 \cap M_2) = \dim(\ker(L))</math>. Finally let us examine <math>\text{im}(L) = \{x_1 - x_2: x_1 \in M_1, x_2 \in M_2\}</math>. I claim that <math>\text{im}(L) = M_1 + M_2</math>. Note, this is equal and not just isomorphic. To see this, we note that if <math>x_2 \in M_2</math> then <math>-x_2 \in M_2</math> by subspace property. So then any <math>x_1 + x_2 \in M_1 + M_2</math> is also equal to <math>x_1 - (-x_2) \in \text{im}(L)</math>. So these sets do indeed contain the exact same elements. That means <math>\dim (M_1 + M_2) = \dim \text{im}(L)</math>. Putting this all together gives:<br />
 
<math>\dim M_1 + \dim M_2 = \dim(M_1 \times M_2) = \dim \ker(L) + \dim \text{im}(L) = \dim (M_1 \cap M_2) + \dim(M_1 + M_2)</math>.<br />
|}
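<br />
As a sanity check of the formula, take <math>M_1 = \{(x,y,0)\}</math> and <math>M_2 = \{(x,0,z)\}</math> in <math>\mathbb{R}^3</math>, the two planes from Problem 7. Then <math>M_1 \cap M_2</math> is the <math>x</math>-axis and <math>M_1 + M_2 = \mathbb{R}^3</math>, so<br />
<math>\dim (M_1 \cap M_2) + \dim(M_1 + M_2) = 1 + 3 = 2 + 2 = \dim M_1 + \dim M_2</math>.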
 
<br />
'''16.''' Show that the matrix<br />
<math>\begin{bmatrix} 0 & 1 \\ 0 & 0\end{bmatrix}</math>
 
as a linear map satisfies <math>\ker(L) = \text{im}(L)</math>.<br />
 
<br />
{| class="mw-collapsible mw-collapsed" style = "text-align:left;"
 +
!Proof:
 +
|-
 +
|The matrix is already in eschelon form and has one pivot in the second column. That means that a basis for the column space which is the same as the image would be the second column. In other words, <math>\text{im}(L) = \text{Span} \left (\begin{bmatrix} 1 \\ 0 \end{bmatrix} \right )</math>. Now for the kernel space. Writing out the equation <math>Lx = 0</math> reads <math>0x_1 + 1x_2 = 0</math> or in other words <math>x_2 = 0</math>. Then an arbitrary element of the kernel <math>\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_2 \begin{bmatrix} 1 \\ 0 \end{bmatrix}</math>. So again <math>\ker(L) = \text{Span} \left (\begin{bmatrix} 1 \\ 0 \end{bmatrix} \right )</math>. In other words, <math>\ker(L) = \text{im}(L)</math>.<br />
|}
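<br />
Another way to see the containment <math>\text{im}(L) \subset \ker(L)</math>: the map acts by <math>L \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ 0 \end{bmatrix}</math>, so <math>L^2 = 0</math> and <math>L</math> kills everything in its own image. Since <math>\dim \text{im}(L) = \dim \ker(L) = 1</math> by the dimension formula, the containment is in fact an equality.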
 
<br />
'''17.''' Show that<br />
 
<math>\begin{bmatrix} 0 & 0 \\ \alpha & 1\end{bmatrix}</math>
 
defines a projection for all <math>\alpha \in \mathbb{F}</math>. Compute the kernel and image.<br />
 
<br />
{| class="mw-collapsible mw-collapsed" style = "text-align:left;"
 +
!Proof:
 +
|-
 +
|First I will deal with the case <math>\alpha = 0</math>. In this case the matrix is <math>\begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}</math> and we see by the procedure in the last problem that: <math>\text{im} (L) = \text{Span} \left (\begin{bmatrix} 0 \\ 1 \end{bmatrix} \right )</math> and <math>\ker(L) = \text{Span} \left ( \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right )</math>.<br />
 
<br />
 
Now for the case <math>\alpha \ne 0</math>. We still have only one pivot, and either column can form a basis for the image; using the second column looks nicer and matches the previous case: <math>\text{im} (L) = \text{Span} \left (\begin{bmatrix} 0 \\ 1 \end{bmatrix} \right )</math>. The difference is that when we write out the equation <math>Lx = 0</math> to find the kernel, we get <math>\alpha x_1 + x_2 = 0</math>. With <math>x_2</math> as our free variable this means <math>x_1 = -\frac{1}{\alpha} x_2</math>, so that <math>\ker(L) = \text{Span} \left ( \begin{bmatrix} -\frac{1}{\alpha} \\ 1 \end{bmatrix} \right )</math>.
|}
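<br />
For instance, with <math>\alpha = 2</math> the kernel vector above indeed gets sent to zero: <math>\begin{bmatrix} 0 & 0 \\ 2 & 1\end{bmatrix} \begin{bmatrix} -\frac{1}{2} \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \cdot (-\frac{1}{2}) + 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}</math>. Note also that the kernel and image basis vectors are linearly independent, so <math>\mathbb{F}^2 = \ker(L) \oplus \text{im}(L)</math>, as expected for a projection.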
