Inner direct sum – Serlo

Derivation and definition

We have already learned about sums of two subspaces. If $U_1$ and $U_2$ are two subspaces of a vector space $V$, then the sum of $U_1$ and $U_2$ is again a subspace $U_1 + U_2 \subseteq V$. So for each vector $v \in U_1 + U_2$ we may find two vectors $u_1 \in U_1$ and $u_2 \in U_2$ such that $v = u_1 + u_2$. Now the question arises: are there several ways to write $v$ as such a combination?

The answer is yes, there can be several possibilities. As an example, let's look at the vector space $\mathbb{R}^3$. This space can be viewed as the sum of the $x$-$y$-plane and the $y$-$z$-plane. This means that if $v \in \mathbb{R}^3$, then there are actually several ways to represent $v$ as the sum of a vector from the $x$-$y$-plane and a vector from the $y$-$z$-plane. For the vector $(1,1,1)^T$, for example, we have $(1,1,1)^T = (1,1,0)^T + (0,0,1)^T = (1,0,0)^T + (0,1,1)^T$.
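
This non-uniqueness is easy to verify concretely. The following minimal Python sketch checks the two decompositions given above (the coordinate tests encode membership in the two planes):

```python
import numpy as np

v = np.array([1, 1, 1])

# Two different decompositions of v into an x-y-plane part plus a y-z-plane part
u1, w1 = np.array([1, 1, 0]), np.array([0, 0, 1])
u2, w2 = np.array([1, 0, 0]), np.array([0, 1, 1])

assert np.array_equal(u1 + w1, v) and np.array_equal(u2 + w2, v)
assert u1[2] == 0 and u2[2] == 0   # third coordinate zero: u1, u2 lie in the x-y-plane
assert w1[0] == 0 and w2[0] == 0   # first coordinate zero: w1, w2 lie in the y-z-plane
print("two different decompositions of", v)
```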

Such representations are therefore generally not unique. We now want to find a criterion for uniqueness.

Suppose we have two different representations of $v$, i.e. $v = u_1 + u_2$ and $v = u_1' + u_2'$ with $u_1 \neq u_1'$ and $u_2 \neq u_2'$ (if one of them is the same, then so is the other). In particular, we know that $u_1, u_1' \in U_1$ and $u_2, u_2' \in U_2$. If we now rearrange the equation $u_1 + u_2 = u_1' + u_2'$, we get $u_1 - u_1' = u_2' - u_2$. Because the left-hand side is in $U_1$ and the right-hand side is in $U_2$, this is an element in $U_1 \cap U_2$ which is at the same time not the zero vector. So $U_1 \cap U_2$ is not just $\{0\}$. (The zero vector is in the intersection because $U_1$ and $U_2$ are both subspaces.) This means that if the representation is not unique, then the intersection $U_1 \cap U_2$ does not only contain the zero vector.

Conversely, if the intersection is not $\{0\}$, we do not have a unique representation: let $w \in U_1 \cap U_2$ with $w \neq 0$. Then there are two representations of $w$, namely $w = w + 0 = 0 + w$ (on the one hand $w + 0$ with $w \in U_1$ and $0 \in U_2$, and on the other hand $0 + w$ with $0 \in U_1$ and $w \in U_2$). Because of $w \neq 0$, these representations are different from each other.

We can therefore conclude an equivalence: the intersection $U_1 \cap U_2$ is exactly $\{0\}$ if and only if the representation of every vector in $U_1 + U_2$ is unique.

In this case, we give the sum a special name: we call the sum of $U_1$ and $U_2$, in the case $U_1 \cap U_2 = \{0\}$, the direct sum of $U_1$ and $U_2$ and write $U_1 \oplus U_2$.

Definition (Direct sum)

Let $U_1$ and $U_2$ be two subspaces of a vector space $V$. We call the sum $U_1 + U_2$ direct if $U_1 \cap U_2 = \{0\}$ holds. The subspace $U_1 + U_2$ is then called the direct sum of $U_1$ and $U_2$, and we write $U_1 \oplus U_2$.

Examples

Sum of two lines in ℝ²

 
[Figure: the lines $U_1$ and $U_2$]

We consider the following two lines in $\mathbb{R}^2$:

$$U_1 = \{ (x, 0)^T \mid x \in \mathbb{R} \} \quad \text{and} \quad U_2 = \{ (y, y)^T \mid y \in \mathbb{R} \}$$

So $U_1$ is the $x$-axis and $U_2$ is the line that runs through the origin and the point $(1,1)^T$. Their sum is $U_1 + U_2 = \mathbb{R}^2$.

Question: Why do we have $U_1 + U_2 = \mathbb{R}^2$?

By the definition $U_1 + U_2 = \{ u_1 + u_2 \mid u_1 \in U_1,\, u_2 \in U_2 \}$, we can describe the set $U_1 + U_2$ as

$$U_1 + U_2 = \{ (x, 0)^T + (y, y)^T \mid x, y \in \mathbb{R} \} = \{ (x + y, y)^T \mid x, y \in \mathbb{R} \}$$

We can write each vector in $U_1 + U_2$ as $a \cdot (1,0)^T + b \cdot (1,1)^T$ with matching $a, b \in \mathbb{R}$. Specifically, for each vector $(x, y)^T \in \mathbb{R}^2$ we can find scalars $a$ and $b$ such that $(x, y)^T = a \cdot (1,0)^T + b \cdot (1,1)^T$, namely $a = x - y$ and $b = y$. We conclude $U_1 + U_2 = \mathbb{R}^2$.

Intuitively, you can immediately see that $U_1 + U_2 = \mathbb{R}^2$: the sum $U_1 + U_2$ is a subspace of $\mathbb{R}^2$ which contains the lines $U_1$ and $U_2$. The only subspaces of $\mathbb{R}^2$ are the null space, the lines through the origin, and $\mathbb{R}^2$ itself. As the lines $U_1$ and $U_2$ do not coincide but are different, $U_1 + U_2$ cannot be a line. Therefore, we must have $U_1 + U_2 = \mathbb{R}^2$.

Let us now investigate whether this sum is direct. To do so, we need to determine $U_1 \cap U_2$. If $v = (v_1, v_2)^T \in U_1 \cap U_2$, then we know the following: because $v \in U_1$, we have $v_2 = 0$. And because $v \in U_2$, we have $v_1 = v_2$. Therefore $v_1 = v_2 = 0$, and we get $U_1 \cap U_2 \subseteq \{0\}$. Because $U_1 \cap U_2$ also contains $0$, we get $U_1 \cap U_2 = \{0\}$. This means that the sum of $U_1$ and $U_2$ is direct and we can write $\mathbb{R}^2 = U_1 \oplus U_2$.
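
The decomposition derived here is constructive, so it can be checked numerically. A minimal Python sketch, where the helper `decompose` and the test vector are chosen purely for illustration:

```python
import numpy as np

def decompose(v):
    """Split v in R^2 into its U1-part (x-axis) and its U2-part (diagonal)."""
    b = v[1]          # coefficient of (1, 1): forced by the second coordinate
    a = v[0] - v[1]   # coefficient of (1, 0): the remainder in the first coordinate
    return a * np.array([1.0, 0.0]), b * np.array([1.0, 1.0])

v = np.array([3.0, -2.0])
u1, u2 = decompose(v)
assert np.allclose(u1 + u2, v)
print(u1, u2)   # [5. 0.] [-2. -2.]
```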

Sum of two lines in ℝ³

 
[Figure: the lines $U_1$ and $U_2$]

We have the following lines in $\mathbb{R}^3$:

$$U_1 = \{ \lambda \cdot (1,0,0)^T \mid \lambda \in \mathbb{R} \} \quad \text{and} \quad U_2 = \{ \mu \cdot (0,1,0)^T \mid \mu \in \mathbb{R} \}$$

Then $U_1$ is a line in $\mathbb{R}^3$ that runs through the origin and the point $(1,0,0)^T$, and $U_2$ is a line that runs through the origin and $(0,1,0)^T$. The sum $U_1 + U_2$ is a plane spanned by the vectors $(1,0,0)^T$ and $(0,1,0)^T$, i.e.

$$U_1 + U_2 = \{ (x, y, 0)^T \mid x, y \in \mathbb{R} \}$$

Question: Why is this the sum?

$$U_1 + U_2 = \{ u_1 + u_2 \mid u_1 \in U_1,\, u_2 \in U_2 \} = \{ \lambda (1,0,0)^T + \mu (0,1,0)^T \mid \lambda, \mu \in \mathbb{R} \} = \{ (\lambda, \mu, 0)^T \mid \lambda, \mu \in \mathbb{R} \}$$

So $U_1 + U_2$ is a plane that is spanned by the vectors $(1,0,0)^T$ and $(0,1,0)^T$.

Also here, we want to determine whether the sum is direct. To do so, we consider a vector $v \in U_1 \cap U_2$. Then, because $v \in U_1$, we have $v_2 = v_3 = 0$. And because $v \in U_2$, we obtain $v_1 = v_3 = 0$. Therefore, $v = 0$ and the sum is direct. This means that we can write $U_1 \oplus U_2 = \{ (x, y, 0)^T \mid x, y \in \mathbb{R} \}$.
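
Whether the sum of two distinct lines through the origin is direct can also be tested mechanically: the lines meet only in $0$ exactly when their direction vectors are linearly independent. A minimal NumPy sketch with the direction vectors from above:

```python
import numpy as np

d1 = np.array([1, 0, 0])   # direction vector of U1
d2 = np.array([0, 1, 0])   # direction vector of U2

# The sum of the two lines is direct iff d1, d2 are linearly independent,
# i.e. the matrix with columns d1, d2 has rank 2.
rank = np.linalg.matrix_rank(np.column_stack([d1, d2]))
print("sum is direct:", rank == 2)   # True
```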

Sum of a line and a plane in ℝ³

 
[Figure: the line $U_1$ and the plane $U_2$]

We consider the following subspaces $U_1$ and $U_2$ of $\mathbb{R}^3$:

$$U_1 = \{ \lambda \cdot (1,0,0)^T \mid \lambda \in \mathbb{R} \} \quad \text{and} \quad U_2 = \{ (0, y, z)^T \mid y, z \in \mathbb{R} \}$$

The subspace $U_1$ is the line through the origin and the point $(1,0,0)^T$, while $U_2$ represents the $y$-$z$-plane. Together, $U_1$ and $U_2$ span the entire $\mathbb{R}^3$, i.e. $U_1 + U_2 = \mathbb{R}^3$.

Question: Why is the sum of $U_1$ and $U_2$ the entire space $\mathbb{R}^3$?

Since $U_1$ and $U_2$ are subspaces of $\mathbb{R}^3$, the sum $U_1 + U_2$ is also a subspace of $\mathbb{R}^3$. We still have to show that $\mathbb{R}^3$ is contained in $U_1 + U_2$. To do so, we prove that any vector $v = (v_1, v_2, v_3)^T \in \mathbb{R}^3$ lies in $U_1 + U_2$. To accomplish this, we show that there is a $u \in U_1$ and a $w \in U_2$ with $v = u + w$.

We choose $u = (v_1, 0, 0)^T$ and $w = (0, v_2, v_3)^T$. Then $u + w = (v_1, v_2, v_3)^T = v$. In addition, $u \in U_1$ and $w \in U_2$ hold.

Therefore, the entire $\mathbb{R}^3$ is contained in $U_1 + U_2$. Thus $U_1 + U_2 = \mathbb{R}^3$.

One may now ask whether the sum $U_1 + U_2$ is direct. To check this, we need to analyze the intersection $U_1 \cap U_2$. If $U_1 \cap U_2$ only contains the zero vector $0$, then the sum is direct.

Let $v$ be a vector in $U_1 \cap U_2$. Since $v \in U_1$, we have $v_2 = v_3 = 0$. Consequently, we can write $v$ as $(v_1, 0, 0)^T$. Furthermore, $v \in U_2$, which implies $v_1 = 0$. We have therefore shown that $v = 0$.

It follows that $U_1 \cap U_2 = \{0\}$. Since the intersection only contains the zero vector, the sum $U_1 + U_2$ is direct. Therefore, we can conclude $\mathbb{R}^3 = U_1 \oplus U_2$.
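
The proof is constructive and translates directly into code. A minimal Python sketch (the helper name `decompose` is chosen for illustration):

```python
import numpy as np

def decompose(v):
    """Split v in R^3 into its U1-part (x-axis) and its U2-part (y-z-plane)."""
    u = np.array([v[0], 0.0, 0.0])   # component along (1, 0, 0), lies in U1
    w = np.array([0.0, v[1], v[2]])  # remainder with first coordinate 0, lies in U2
    return u, w

v = np.array([2.0, -1.0, 4.0])
u, w = decompose(v)
assert np.allclose(u + w, v)
assert u[1] == u[2] == 0.0 and w[0] == 0.0   # membership in U1 and U2
print(u, w)   # [2. 0. 0.] [ 0. -1.  4.]
```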

Sum of even and odd polynomials

We will now look at an example of a direct sum in the vector space $\mathbb{R}[x]$ of real polynomials. Let us consider the following subspaces $U_1$ and $U_2$ of $\mathbb{R}[x]$: $U_1$ consists of all odd polynomials over $\mathbb{R}$, while $U_2$ is the space of the even polynomials over $\mathbb{R}$. In formulas, this is

$$U_1 = \Big\{ \textstyle\sum_{k=0}^n a_k x^k \in \mathbb{R}[x] \;\Big|\; a_k = 0 \text{ for all even } k \Big\}, \qquad U_2 = \Big\{ \textstyle\sum_{k=0}^n a_k x^k \in \mathbb{R}[x] \;\Big|\; a_k = 0 \text{ for all odd } k \Big\}$$

The odd polynomials in $U_1$ only contain monomials with odd exponents, while the even polynomials in $U_2$ only contain monomials with even exponents. For example, $x^4 + 2x^2 + 1$ is an even polynomial, while $x^3 + x^2$ is neither even nor odd. We now show that the even and odd polynomials together generate the entire polynomial space $\mathbb{R}[x]$. Expressed in formulas: $U_1 + U_2 = \mathbb{R}[x]$.

To show this, we need to prove that every polynomial in $\mathbb{R}[x]$ can be written as the sum of an odd and an even polynomial. To do so, we consider any polynomial $p = \sum_{k=0}^n a_k x^k$ from $\mathbb{R}[x]$. We must write $p$ as the sum of an odd and an even polynomial:

$$p = \sum_{k=0}^n a_k x^k = \underbrace{\sum_{\substack{0 \le k \le n \\ k \text{ odd}}} a_k x^k}_{\in U_1} + \underbrace{\sum_{\substack{0 \le k \le n \\ k \text{ even}}} a_k x^k}_{\in U_2}$$

Therefore, $p$ is contained in the sum $U_1 + U_2$.

Now we want to check whether the sum $U_1 + U_2$ is direct. That is, we need to check whether the intersection $U_1 \cap U_2$ only contains the zero vector, i.e. the zero polynomial. Let $p$ be a polynomial in the intersection $U_1 \cap U_2$. Then $p$ lies both in $U_1$ and in $U_2$. We can write $p$ as $p = \sum_{k=0}^n a_k x^k$. Since $p$ lies in $U_1$, it only consists of odd monomials. Therefore, the prefactors of the even monomials must be equal to $0$: we have $a_k = 0$ for all even $k$. Since $p$ lies in $U_2$, it only consists of even monomials, so $a_k = 0$ for all odd $k$. This means that all coefficients $a_k$ are equal to zero and $p$ is therefore the zero polynomial. Thus $U_1 \cap U_2 = \{0\}$, and the sum of $U_1$ and $U_2$ is direct.

We have seen that $\mathbb{R}[x] = U_1 \oplus U_2$. In other words, the polynomial space $\mathbb{R}[x]$ can be written as the direct sum of the subspaces $U_1$ and $U_2$, where $U_1$ is the subspace of odd polynomials and $U_2$ is the subspace of even polynomials.
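
The splitting used in the proof is easy to carry out on coefficient lists. A minimal Python sketch, representing a polynomial by the list of its coefficients, where `coeffs[k]` is the coefficient of $x^k$:

```python
def split_odd_even(coeffs):
    """Split a polynomial into its odd part (in U1) and its even part (in U2)."""
    odd = [c if k % 2 == 1 else 0 for k, c in enumerate(coeffs)]
    even = [c if k % 2 == 0 else 0 for k, c in enumerate(coeffs)]
    return odd, even

# p(x) = 1 + 2x + 3x^2 + 4x^3
odd, even = split_odd_even([1, 2, 3, 4])
print(odd)   # [0, 2, 0, 4]  ->  2x + 4x^3, an odd polynomial
print(even)  # [1, 0, 3, 0]  ->  1 + 3x^2, an even polynomial
```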

Counterexamples

Two planes in ℝ³

 
[Figure: the planes $U_1$ and $U_2$]

We consider the following two planes:

$$U_1 = \{ (x, y, 0)^T \mid x, y \in \mathbb{R} \} \quad \text{and} \quad U_2 = \{ (0, y, z)^T \mid y, z \in \mathbb{R} \}$$

The two planes together span all of $\mathbb{R}^3$. However, the sum is not direct, as the intersection is a line and therefore does not only contain the zero vector. That is, $U_1 \cap U_2 \neq \{0\}$.

We want to check this mathematically. This requires looking for a vector in the intersection of $U_1$ and $U_2$ which is not zero. We consider a vector $v = (v_1, v_2, v_3)^T$ that lies in the intersection $U_1 \cap U_2$. Because this vector lies in $U_1$, there are $x, y \in \mathbb{R}$ so that $v = (x, y, 0)^T$. In addition, there must be $y', z' \in \mathbb{R}$ so that $v = (0, y', z')^T$, since $v \in U_2$.

We now look for suitable values of $x, y, y', z'$ that fulfill both conditions. From $v_1 = x$ and $v_1 = 0$, we get $x = 0$. Because $v_3 = 0$, we also have $z' = 0$. Furthermore, $y = y'$ results from $v_2 = y = y'$. Finally, we conclude $v = (0, y, 0)^T$.

One possible solution is $x = 0$, $y = y' = 1$ and $z' = 0$. The vector $(0, 1, 0)^T$ therefore lies in the intersection of $U_1$ and $U_2$. Hence, $U_1 \cap U_2 \neq \{0\}$ and the sum $U_1 + U_2$ is not direct.
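
The intersection can also be computed mechanically: a vector lies in both planes exactly when it satisfies the two describing equations $v_3 = 0$ and $v_1 = 0$. A minimal SymPy sketch:

```python
import sympy as sp

# Rows encode the conditions v3 = 0 (membership in U1) and v1 = 0 (membership in U2)
A = sp.Matrix([[0, 0, 1],
               [1, 0, 0]])

print(A.nullspace())   # [Matrix([[0], [1], [0]])] -> the intersection is the y-axis
```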

Various polynomials in polynomial space

Let $K$ be a field. We consider two subspaces in the polynomial space $K[x]$: let $U_1 = \{ p \in K[x] \mid \deg(p) \le 2 \}$ be the space of polynomials of degree less than or equal to two, and let

$$U_2 = \Big\{ \textstyle\sum_{k=0}^n a_k x^k \in K[x] \;\Big|\; \sum_{k=0}^n a_k = 0 \Big\}$$

be the space of polynomials whose sum of coefficients is $0$. We want to investigate whether the sum $U_1 + U_2$ is direct. To find this out, we need to decide whether $U_1 \cap U_2 = \{0\}$.

An element $p \in U_1 \cap U_2$ is a polynomial which has a maximum degree of $2$ and whose coefficients sum to $0$. Because the polynomial has degree at most two, we can write $p = a_0 + a_1 x + a_2 x^2$. Therefore, we get $a_0 + a_1 + a_2 = 0$. This means $U_1 \cap U_2$ consists of all polynomials $a_0 + a_1 x + a_2 x^2$ for which $a_0 + a_1 + a_2 = 0$. Thus, we can find a non-zero element of $U_1 \cap U_2$ if we solve the equation

$$a_0 + a_1 + a_2 = 0$$

with non-trivial $a_0, a_1, a_2$. One possibility for this is $a_0 = 1$, $a_1 = -1$, $a_2 = 0$, i.e. $p = 1 - x$. So the intersection of $U_1$ and $U_2$ is not zero, and the sum $U_1 + U_2$ is therefore not direct.
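
A quick SymPy check confirms that this candidate lies in both subspaces (the polynomial $1 - x$ is just one possible choice):

```python
import sympy as sp

x = sp.symbols("x")
p = sp.Poly(1 - x, x)            # candidate element of the intersection

print(p.degree() <= 2)           # True: p lies in U1 (degree at most two)
print(sum(p.all_coeffs()) == 0)  # True: p lies in U2 (coefficient sum is zero)
```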

Unique decomposition of vectors

We have already argued in the derivation above that the decomposition of vectors is unique for a direct sum. We will now prove this result rigorously.

Theorem (Equivalent characterizations of the direct sum)

Let $U_1, U_2 \subseteq V$ be subspaces of $V$. Then the following statements are equivalent:

  1. The sum of $U_1$ and $U_2$ is direct (that is, we may write it as $U_1 \oplus U_2$).
  2. $U_1$ and $U_2$ have trivial intersection (that is, $U_1 \cap U_2 = \{0\}$ is the trivial subspace).
  3. The representation of all elements of $U_1 + U_2$ is unique (that is, if $u_1 + u_2 = u_1' + u_2'$ with $u_1, u_1' \in U_1$ and $u_2, u_2' \in U_2$, then we already have $u_1 = u_1'$ and $u_2 = u_2'$).
  4. The representation of the zero is unique (that is, if $0 = u_1 + u_2$ with $u_1 \in U_1$ and $u_2 \in U_2$, then we already have $u_1 = u_2 = 0$).

Proof (Equivalent characterizations of the direct sum)

The definition of the inner direct sum is exactly the equivalence $1. \Leftrightarrow 2.$ We now show the implications $2. \Rightarrow 3. \Rightarrow 4. \Rightarrow 2.$ The statement then readily follows.

Proof step: $2. \Rightarrow 3.$

Let $v \in U_1 + U_2$. We must prove that $v$ can be written uniquely as the sum of an element of $U_1$ and an element of $U_2$.

Let $u_1, u_1' \in U_1$ and $u_2, u_2' \in U_2$ with the property that $v = u_1 + u_2 = u_1' + u_2'$. In order to prove uniqueness, we must show that these two representations of $v$ are equal. "Equal" means that $u_1 = u_1'$ and $u_2 = u_2'$.

Because $u_1 + u_2 = u_1' + u_2'$, we have $u_1 - u_1' = u_2' - u_2$. This element lies in $U_1$ (because of the representation on the left of "$=$") and in $U_2$ (because of the representation on the right of "$=$"). So $u_1 - u_1'$ lies in the intersection $U_1 \cap U_2$. According to the prerequisite, $U_1 \cap U_2 = \{0\}$. This means $u_1 - u_1' = u_2' - u_2 = 0$. So $u_1 = u_1'$ and $u_2 = u_2'$. This is exactly what we wanted to show.

Proof step: $3. \Rightarrow 4.$

Let $u_1 \in U_1$ and $u_2 \in U_2$ with $0 = u_1 + u_2$. This is a representation of the zero vector $0 \in U_1 + U_2$.

On the other hand, $0 = 0 + 0$ with $0 \in U_1$ and $0 \in U_2$ is also a representation of $0$.

Since representations are unique by assumption, we conclude $u_1 = 0$ and $u_2 = 0$.

Proof step: $4. \Rightarrow 2.$

Let $u \in U_1 \cap U_2$. Then, of course, $u \in U_1$ and $u \in U_2$. Since $U_2$ is a subspace, for each element $u \in U_2$ also its additive inverse element must be in this space, i.e., $-u \in U_2$. Therefore, $0 = u + (-u)$ with $u \in U_1$ and $-u \in U_2$.

This gives us a representation of the zero. From the uniqueness of the representation of the zero, we conclude $u = 0$. Since $u \in U_1 \cap U_2$ was arbitrary, the intersection is therefore trivial, i.e. $U_1 \cap U_2 = \{0\}$.

Inner direct sum and disjoint union of sets

We can imagine the sum of two subspaces as a structure-preserving union: forming the sum is "structure-preserving" because the result is again a subspace. This means that the vector space structure is preserved when forming the sum. We can also think of this construction as a union because the sum contains both subspaces: $U_1$ and $U_2$ are subsets of the sum $U_1 + U_2$. The sum $U_1 + U_2$ is the smallest subspace that contains the two subspaces $U_1$ and $U_2$. In this sense, forming the sum of subspaces works just like forming the union of sets.

The direct sum is a special case of the sum of subspaces. This means that every direct sum is also a structure-preserving union. "Being direct" is a property of a sum of subspaces. We now want to see whether there is a property of the union of sets that corresponds to the directness of a sum.

Direct sums are characterized by the fact that the decomposition of the vectors in the sum is unique. If we have a vector $v \in U_1 \oplus U_2$ with $v = u_1 + u_2$, where $u_1 \in U_1$ and $u_2 \in U_2$, then the vectors $u_1$ and $u_2$ are unique. For a union $M_1 \cup M_2$ of sets $M_1$ and $M_2$, each element $m \in M_1 \cup M_2$ lies in $M_1$ or in $M_2$. The element can also lie in both, which means that we generally do not know unambiguously where it lies: we cannot assign $m$ unambiguously if $m \in M_1 \cap M_2$, i.e. if it lies in the intersection. This means that the assignment of the elements $m \in M_1 \cup M_2$ is unique if the intersection $M_1 \cap M_2$ is empty. In fact, this criterion corresponds exactly to the criterion for a sum to be direct: we want $U_1 \cap U_2 = \{0\}$, which is the smallest possible vector space, so the intersection contains nothing more from $U_1$ and $U_2$ (except the zero vector, which it must contain anyway as a vector space). This is exactly the definition of a disjoint union. In other words, the direct sum of subspaces intuitively corresponds to the disjoint union of sets.

Basis and dimension

We have seen that the direct sum is a special case of a sum of subspaces. So we can transfer everything we know about the vector space sum to the direct sum. We have already seen that the union of bases of $U_1$ and $U_2$ is a generating system of $U_1 + U_2$. This means if $B_1$ is a basis of $U_1$ and if $B_2$ is a basis of $U_2$, then $B_1 \cup B_2$ is a generating system of $U_1 + U_2$. If $U_1$ and $U_2$ are finite-dimensional, we can use the dimension formula:

$$\dim(U_1 + U_2) = \dim(U_1) + \dim(U_2) - \dim(U_1 \cap U_2)$$

If the sum $U_1 + U_2$ is direct, i.e. if $U_1 \cap U_2 = \{0\}$, then we even have $\dim(U_1 \cap U_2) = \dim(\{0\}) = 0$. Since $\dim(U_1 \cap U_2) = 0$, the following sum formula applies in the finite-dimensional case:

$$\dim(U_1 \oplus U_2) = \dim(U_1) + \dim(U_2)$$

So the dimension of the sum space $U_1 \oplus U_2$ is exactly the sum of the dimensions of $U_1$ and $U_2$. If $B_1$ is a basis of $U_1$ and if $B_2$ is a basis of $U_2$, then we can conclude

$$|B_1| + |B_2| = \dim(U_1) + \dim(U_2) = \dim(U_1 \oplus U_2)$$

Since $U_1 \cap U_2 = \{0\}$ and a basis never contains the zero vector, the union of the bases of $U_1$ and $U_2$ is disjoint, i.e. $B_1 \cap B_2 = \emptyset$. Therefore, we get $|B_1 \cup B_2| = |B_1| + |B_2| = \dim(U_1 \oplus U_2)$. Because $B_1 \cup B_2$ is a generating system of $U_1 \oplus U_2$ with exactly $\dim(U_1 \oplus U_2)$ elements, we conclude that $B_1 \cup B_2$ is a basis of $U_1 \oplus U_2$.
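
In coordinates, the dimension of a sum of subspaces is the rank of the matrix whose columns are the chosen basis vectors, so the sum formula can be verified numerically. A minimal NumPy sketch with an arbitrarily chosen line and plane in $\mathbb{R}^3$:

```python
import numpy as np

B1 = [np.array([1, 1, 0])]                       # basis of a line U1
B2 = [np.array([0, 1, 1]), np.array([0, 0, 1])]  # basis of a plane U2

dim_U1 = np.linalg.matrix_rank(np.column_stack(B1))
dim_U2 = np.linalg.matrix_rank(np.column_stack(B2))
dim_sum = np.linalg.matrix_rank(np.column_stack(B1 + B2))

print(dim_U1, dim_U2, dim_sum)      # 1 2 3
print(dim_sum == dim_U1 + dim_U2)   # True, so the sum is direct
```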

We have thus seen that in finite dimensions, the union of the bases of $U_1$ and $U_2$ is a basis of $U_1 \oplus U_2$. This also applies in general:

Theorem (Basis of the direct sum)

Let $U_1$ and $U_2$ be two subspaces of a $K$-vector space $V$. Assume that the sum of $U_1$ and $U_2$ is direct, that is, we can write $U_1 \oplus U_2$. Let $B_1$ be a basis of $U_1$ and $B_2$ a basis of $U_2$. Then the union of $B_1$ and $B_2$ is disjoint and $B_1 \cup B_2$ is a basis of $U_1 \oplus U_2$.

Proof (Basis of the direct sum)

We have already seen that $B_1 \cup B_2$ is a generating system of $U_1 \oplus U_2$. Therefore, we only have to show that the union $B_1 \cup B_2$ is disjoint and linearly independent.

Proof step: $B_1 \cap B_2 = \emptyset$

Suppose we have $b \in B_1 \cap B_2$. Then $b \in U_1 \cap U_2 = \{0\}$, so $b = 0$. However, this is a contradiction to $b \in B_1$ and $b \in B_2$, as a basis cannot contain the zero vector. Therefore, there can be no $b \in B_1 \cap B_2$, i.e. $B_1 \cap B_2 = \emptyset$.

Proof step: $B_1 \cup B_2$ is linearly independent

Let

$$\sum_{i=1}^{n} \lambda_i b_i + \sum_{j=1}^{m} \mu_j c_j = 0$$

for any $n, m \in \mathbb{N}$, coefficients $\lambda_1, \dots, \lambda_n, \mu_1, \dots, \mu_m \in K$, and $b_1, \dots, b_n \in B_1$ pairwise different, as well as $c_1, \dots, c_m \in B_2$ pairwise different. We must show that all $\lambda_i$ and $\mu_j$ are equal to $0$. This corresponds exactly to the definition of the linear independence of $B_1 \cup B_2$.

From

$$\sum_{i=1}^{n} \lambda_i b_i + \sum_{j=1}^{m} \mu_j c_j = 0$$

we conclude

$$\sum_{i=1}^{n} \lambda_i b_i = -\sum_{j=1}^{m} \mu_j c_j$$

This term is in $U_1$ (as a linear combination of elements in $B_1$) as well as in $U_2$ (as a linear combination of elements in $B_2$). Since the sum of $U_1$ and $U_2$ is direct, we have $U_1 \cap U_2 = \{0\}$ and obtain

$$\sum_{i=1}^{n} \lambda_i b_i = -\sum_{j=1}^{m} \mu_j c_j = 0$$

From the linear independence of $B_1$ we conclude $\lambda_i = 0$ for all $i$, and from the linear independence of $B_2$ we conclude $\mu_j = 0$ for all $j$.

From this theorem, we may immediately conclude that

$$\dim(U_1 \oplus U_2) = \dim(U_1) + \dim(U_2)$$

Exercises

Exercise

Let $V = \mathbb{R}^3$ and let $v = (1, 2, 3)^T$. Consider the two subspaces $U_1 = \operatorname{span}\big((1,1,0)^T\big)$ and $U_2 = \operatorname{span}\big((0,1,1)^T, (0,0,1)^T\big)$. Show that $V = U_1 \oplus U_2$ and determine $u_1 \in U_1$ and $u_2 \in U_2$ so that $v = u_1 + u_2$ holds.

Solution

To show $\mathbb{R}^3 = U_1 \oplus U_2$, we need to prove two things: first, that the sum of $U_1$ and $U_2$ is direct, i.e. $U_1 \cap U_2 = \{0\}$. Secondly, we must show that the sum of $U_1$ and $U_2$ equals $\mathbb{R}^3$, i.e. $U_1 + U_2 = \mathbb{R}^3$.

Proof step: $U_1 \cap U_2 = \{0\}$

Because $U_1$ and $U_2$ contain the zero vector as subspaces, $\{0\} \subseteq U_1 \cap U_2$ is obvious. For the proof of the reverse inclusion, let $v \in U_1 \cap U_2$ be arbitrary. Then

$$v = \lambda \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \mu \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} + \nu \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

for certain $\lambda, \mu, \nu \in \mathbb{R}$. From the first line of the vectors we get $\lambda = 0$. So $v = \lambda \cdot (1,1,0)^T = 0$ and therefore $U_1 \cap U_2 = \{0\}$.

Proof step: $U_1 + U_2 = \mathbb{R}^3$

By definition, $U_1 + U_2 \subseteq \mathbb{R}^3$. The two vectors that span $U_2$ are obviously linearly independent, so $\dim(U_2) = 2$. Furthermore, $\dim(U_1) = 1$ and $\dim(U_1 \cap U_2) = 0$. The dimension formula for subspaces then renders

$$\dim(U_1 + U_2) = \dim(U_1) + \dim(U_2) - \dim(U_1 \cap U_2) = 1 + 2 - 0 = 3 = \dim(\mathbb{R}^3)$$

The dimensions of the two spaces $U_1 + U_2$ and $\mathbb{R}^3$ are therefore equal, and $U_1 + U_2 = \mathbb{R}^3$ follows from $U_1 + U_2 \subseteq \mathbb{R}^3$.

Alternatively, you could prove the equality $U_1 + U_2 = \mathbb{R}^3$ by showing that every $w \in \mathbb{R}^3$ can be written as the sum of a $u_1 \in U_1$ and a $u_2 \in U_2$.

We want to write $v = (1, 2, 3)^T$ as the sum of a vector in $U_1$ and a vector in $U_2$. That means we are looking for $\lambda, \mu, \nu \in \mathbb{R}$ with

$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = \lambda \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \mu \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} + \nu \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

We may write this as a linear system:

$$\begin{aligned} \lambda &= 1 \\ \lambda + \mu &= 2 \\ \mu + \nu &= 3 \end{aligned}$$

From the first line we conclude $\lambda = 1$. Plugging this into the second line gives $\mu = 1$. Again plugging this into the third line finally yields $\nu = 2$. Therefore, $v = u_1 + u_2$ holds with

$$u_1 = \lambda \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \in U_1 \quad \text{and} \quad u_2 = \mu \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} + \nu \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix} \in U_2$$
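
The same linear system can also be solved with SymPy. A minimal sketch with the data from this exercise:

```python
import sympy as sp

lam, mu, nu = sp.symbols("lam mu nu")
a = sp.Matrix([1, 1, 0])                           # spans U1
b, c = sp.Matrix([0, 1, 1]), sp.Matrix([0, 0, 1])  # span U2
v = sp.Matrix([1, 2, 3])

sol = sp.solve(lam * a + mu * b + nu * c - v, [lam, mu, nu])
print(sol)                                              # {lam: 1, mu: 1, nu: 2}
print((sol[lam] * a).T, (sol[mu] * b + sol[nu] * c).T)  # u1 and u2
```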

For the following two exercises, you should know what a linear map is.

Exercise (Self-inverse linear maps and subspaces)

Let $V$ be an $\mathbb{R}$-vector space and $f \colon V \to V$ a linear map.

  1. Show that the subsets $V_1 := \{ v \in V \mid f(v) = v \}$ and $V_2 := \{ v \in V \mid f(v) = -v \}$ are subspaces of $V$.
  2. Let additionally $f \circ f = \operatorname{id}_V$, where $\operatorname{id}_V$ denotes the identity map on $V$. (A linear map with this property is called self-inverse.) Show that then $V = V_1 \oplus V_2$ holds for the two subspaces from the first exercise part.

Solution (Self-inverse linear maps and subspaces)

Solution sub-exercise 1:

We use the subspace criterion and show that $V_1$ and $V_2$ are non-empty subsets of $V$ that are closed under linear combinations. We only provide the proof for $V_1$. The proof for $V_2$ works in the same way; you just have to replace all equations of the form "$f(v) = v$" with "$f(v) = -v$".

Proof step: $V_1 \subseteq V$

This holds by definition of $V_1$.

Proof step: $V_1$ is nonempty.

Since $f(0) = 0$, we have $0 \in V_1$. So $V_1$ is nonempty.

Proof step: $V_1$ is closed under linear combinations.

Let $v, w \in V_1$ and $\lambda, \mu \in \mathbb{R}$ be arbitrary. Then

$$f(\lambda v + \mu w) = \lambda f(v) + \mu f(w) = \lambda v + \mu w$$

So the linear combination $\lambda v + \mu w$ also lies in $V_1$.

Solution sub-exercise 2:

In order to show $V = V_1 \oplus V_2$, we need to prove two things: first, that the sum of $V_1$ and $V_2$ is direct, i.e. $V_1 \cap V_2 = \{0\}$. Secondly, we must show that the sum of $V_1$ and $V_2$ equals $V$, i.e. that every vector $v \in V$ can be written as the sum of a vector $v_1 \in V_1$ and a vector $v_2 \in V_2$.

Proof step: $V_1 \cap V_2 = \{0\}$

Because $V_1$ and $V_2$ contain the zero vector as subspaces, $\{0\} \subseteq V_1 \cap V_2$ is obvious. For the proof of the reverse inclusion, let $v \in V_1 \cap V_2$ be arbitrary. Then

$$v = f(v) = -v$$

i.e., $2v = 0$, so $v = 0$. Because $v \in V_1 \cap V_2$ was arbitrary, we have thus shown that $V_1 \cap V_2 \subseteq \{0\}$.

Proof step: $V_1 + V_2 = V$

Because $V_1$ and $V_2$ are subsets of $V$, $V_1 + V_2 \subseteq V$ is obvious. For the reverse inclusion, let $v \in V$ be arbitrary. The following decomposition applies:

$$v = \frac{1}{2}\big(v + f(v)\big) + \frac{1}{2}\big(v - f(v)\big)$$

Because $f \circ f = \operatorname{id}_V$, it follows from the linearity of $f$ that

$$f\Big(\frac{1}{2}\big(v + f(v)\big)\Big) = \frac{1}{2}\big(f(v) + f(f(v))\big) = \frac{1}{2}\big(f(v) + v\big) = \frac{1}{2}\big(v + f(v)\big)$$

So $\frac{1}{2}(v + f(v)) \in V_1$. Analogously, one shows $\frac{1}{2}(v - f(v)) \in V_2$:

$$f\Big(\frac{1}{2}\big(v - f(v)\big)\Big) = \frac{1}{2}\big(f(v) - f(f(v))\big) = \frac{1}{2}\big(f(v) - v\big) = -\frac{1}{2}\big(v - f(v)\big)$$

So $v = \frac{1}{2}(v + f(v)) + \frac{1}{2}(v - f(v))$ is a sum of a vector from $V_1$ and a vector from $V_2$. Since $v \in V$ was arbitrary, we conclude $V_1 + V_2 = V$.
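
A concrete instance in $\mathbb{R}^2$: swapping the two coordinates is a linear and self-inverse map, and the decomposition from the proof splits a vector into a part on the diagonal (in $V_1$) and a part that $f$ negates (in $V_2$). A minimal Python sketch:

```python
import numpy as np

def f(v):
    """Self-inverse linear map on R^2: swap the two coordinates."""
    return np.array([v[1], v[0]])

v = np.array([3.0, 1.0])
v1 = (v + f(v)) / 2   # in V1: f(v1) = v1 (points on the diagonal)
v2 = (v - f(v)) / 2   # in V2: f(v2) = -v2

assert np.allclose(f(v1), v1) and np.allclose(f(v2), -v2)
assert np.allclose(v1 + v2, v)
print(v1, v2)   # [2. 2.] [ 1. -1.]
```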

For this exercise, you need to know what the kernel and the image of a linear map are.

Exercise (Idempotent mappings)

Let $f \colon V \to V$ be a linear map with $f \circ f = f$. (A linear map with this property is called idempotent or a projection.) Show: $V = \ker(f) \oplus \operatorname{im}(f)$.

Solution (Idempotent mappings)

We show that $V = \ker(f) + \operatorname{im}(f)$ and $\ker(f) \cap \operatorname{im}(f) = \{0\}$. By definition of the direct sum, the sum of $\ker(f)$ and $\operatorname{im}(f)$ is therefore indeed direct.

Proof step: $V = \ker(f) + \operatorname{im}(f)$

Since both the kernel and the image of $f$ are subspaces of $V$, we immediately get $\ker(f) + \operatorname{im}(f) \subseteq V$. Let us now show the reverse inclusion $V \subseteq \ker(f) + \operatorname{im}(f)$.

Let $v \in V$ be arbitrary. From the condition $f \circ f = f$, we get $f(f(v)) = f(v)$, or in other words $f(f(v)) - f(v) = 0$. Due to the linearity of $f$, we conclude $f(v - f(v)) = 0$. Therefore, the element $v - f(v)$ lies in the kernel of $f$. Furthermore, $f(v)$ lies in the image of $f$ by definition. Thus

$$v = \big(v - f(v)\big) + f(v)$$

is the sum of an element from $\ker(f)$ and an element from $\operatorname{im}(f)$. So $v$ is in $\ker(f) + \operatorname{im}(f)$. Because $v$ was arbitrary, we have shown $V \subseteq \ker(f) + \operatorname{im}(f)$.

Proof step: $\ker(f) \cap \operatorname{im}(f) = \{0\}$

Because $\ker(f)$ and $\operatorname{im}(f)$ contain the zero vector as subspaces, $\{0\} \subseteq \ker(f) \cap \operatorname{im}(f)$ is obvious. For the proof of the reverse inclusion, let $v \in \ker(f) \cap \operatorname{im}(f)$ be arbitrary. Then $v$ is an element of the kernel of $f$, so $f(v) = 0$ applies. Because $v$ is also in the image of $f$, there is a $w \in V$ so that $v = f(w)$. Because $f \circ f = f$, we have

$$v = f(w) = f(f(w)) = f(v) = 0$$

Since $v$ was arbitrary, we have thus shown $\ker(f) \cap \operatorname{im}(f) \subseteq \{0\}$.

In $\mathbb{R}^2$, we can illustrate the statement from the previous exercise:

Example (Projections in $\mathbb{R}^2$)

Let $f \colon \mathbb{R}^2 \to \mathbb{R}^2$ be $f\big((x, y)^T\big) = (x, x)^T$. Then $f$ is linear. In addition, $f \circ f = f$ applies: for each vector $(x, y)^T \in \mathbb{R}^2$, we have

$$f\Big(f\big((x, y)^T\big)\Big) = f\big((x, x)^T\big) = (x, x)^T = f\big((x, y)^T\big)$$

The map $f$ is therefore a projection. Clearly, $f$ projects vectors in $\mathbb{R}^2$ along the $y$-axis onto the first angle bisector $\{ (x, x)^T \mid x \in \mathbb{R} \}$. In particular, $\operatorname{im}(f) = \{ (x, x)^T \mid x \in \mathbb{R} \}$. Further, $f$ maps the $y$-axis to the zero vector, i.e. $\ker(f) = \{ (0, y)^T \mid y \in \mathbb{R} \}$. Therefore, we indeed have $\mathbb{R}^2 = \ker(f) \oplus \operatorname{im}(f)$, as proven in the exercise.
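
The same check works in coordinates. A minimal NumPy sketch, where the matrix `F` represents the projection $f$ from the example:

```python
import numpy as np

# Matrix of f((x, y)) = (x, x): projection onto the diagonal along the y-axis
F = np.array([[1, 0],
              [1, 0]])

assert np.array_equal(F @ F, F)     # idempotent: f(f(v)) = f(v) for all v

v = np.array([3.0, 5.0])
ker_part = v - F @ v                # lies in ker(f)
im_part = F @ v                     # lies in im(f), on the diagonal

assert np.allclose(F @ ker_part, 0)
assert np.allclose(ker_part + im_part, v)
print(ker_part, im_part)            # [0. 2.] [3. 3.]
```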