Complements of vector spaces – Serlo

Introduction

We consider a vector space V with some subspace U of V. Can we then find a subspace W of V that complements U to V? That is, if we add W to U, we would like to get all of V. But at the same time, what we add shall not already have been in U.

We have already seen earlier how to add two vector spaces, and in this context we would like U + W = V to hold. Further, W shall not contain anything from U. We have already learned about this concept in the article on the inner direct sum: We want U and W to form an inner direct sum. So U ∩ W = {0} should apply.

To summarize, we are looking for a subspace W of V for which V = U ⊕ W holds. If V is written as a direct sum of subspaces, this is also called a decomposition of V. This is because we decompose V into "smaller" parts using the direct sum.
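
For instance, the plane ℝ² decomposes into its two coordinate axes:

ℝ² = {(x, 0) : x ∈ ℝ} ⊕ {(0, y) : y ∈ ℝ},

since every vector satisfies (x, y) = (x, 0) + (0, y) and the two axes intersect only in (0, 0).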

Definition

Definition (Complement)

Let K be a field and V a K-vector space. Let U be a subspace of V. Then a complement W of U in V is defined as a subspace of V such that V = U ⊕ W holds. This means V = U + W holds, and this sum is an inner direct sum.
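
Written out, W is a complement of U in V exactly when the two conditions from the introduction hold:

U + W = V and U ∩ W = {0}.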

Existence and Uniqueness

Existence

Suppose we are given V and a subspace U ⊆ V. How do we find a subspace W of V so that V = U ⊕ W holds? For example, let V = ℝ² and let the subspace U be the diagonal line through the origin with slope 1. According to the theorem on the basis of a direct sum, the following applies: If V = U ⊕ W holds, then a basis B_U of U together with a basis B_W of W will form a basis of V. So we first choose a basis B_U of U: For example, we can choose

B_U = {(1, 1)}.

According to the basis completion theorem, we can complete B_U to a basis B of V by adding a vector that does not lie on the line U:

B = {(1, 1), (1, 0)}.

If we define B_W as the set of newly added basis vectors and W := span(B_W), then V = U ⊕ W should hold. In our example, we obtain the x-axis for W:

W = span{(1, 0)} = {(x, 0) : x ∈ ℝ}.

We can see that the sum is direct because the intersection of the two lines is the set {(0, 0)}, while together, they span the entire vector space.
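
Spelled out for this example: every vector (x, y) ∈ ℝ² can be written as

(x, y) = y·(1, 1) + (x − y)·(1, 0),

where y·(1, 1) ∈ U and (x − y)·(1, 0) ∈ W, so U + W = ℝ². Moreover, a vector lying in both U and W is of the form (a, a) = (b, 0), which forces a = b = 0, so U ∩ W = {(0, 0)}.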

We may even prove that this kind of construction always provides a complement of a given subspace of a vector space, via the basis completion theorem:

Theorem (Complements always exist)

Let V be a vector space over a field K. Let further U ⊆ V be a subspace. Then there is a subspace W ⊆ V such that V = U ⊕ W, i.e. W is a complement of U in V.

How to get to the proof? (Complements always exist)

We know from the theorem about the basis of a direct sum that a basis of U together with a basis of W must result in a basis of V. Since V and U are given, we first choose a basis of U and complete it to a basis of V. The span of the newly added basis vectors is then a candidate for the required subspace W. We only have to check that the sum of U and W is direct and results in V.

Proof (Complements always exist)

In this proof, we will use bases. These will be defined later in this series, but they are unavoidable here. There is no circular reasoning because we have not used complements in the articles on bases.

Let U ⊆ V be a subspace. We choose a basis B_U of U. According to the basis completion theorem, we can complete B_U to a basis B of V. Let then W := span(B ∖ B_U). This is by definition a subspace of V.

U + W = V holds, since U + W already contains the basis B of V: indeed, B_U ⊆ U and B ∖ B_U ⊆ W.

It remains to show that U ∩ W = {0}. Let v ∈ U ∩ W. Then v has a representation as a linear combination of vectors in B_U on the one hand, and of vectors in B ∖ B_U on the other. However, since B forms a basis of V and is therefore linearly independent, only v = 0 remains as an option.
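
Spelled out (with coefficient names chosen here only for this argument): if

v = λ_1·u_1 + ⋯ + λ_r·u_r = μ_1·w_1 + ⋯ + μ_s·w_s

with u_1, …, u_r ∈ B_U and w_1, …, w_s ∈ B ∖ B_U, then λ_1·u_1 + ⋯ + λ_r·u_r − μ_1·w_1 − ⋯ − μ_s·w_s = 0. Since all of these vectors belong to the linearly independent set B, every coefficient is zero, and hence v = 0.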

Warning

Complements always exist in our setting. However, in your further studies, it may happen that the term "complement" is defined somewhat differently, e.g. in functional analysis. Then there are examples of subspaces that have no complement.

Hint

Strictly speaking, we have only shown the existence of complements for finite-dimensional V, because we have only proved the basis completion theorem in the finite-dimensional case. However, there is a more general version of the basis completion theorem that works for all vector spaces. This can be used to prove the above theorem in exactly the same way and obtain the existence of complements in infinite-dimensional vector spaces as well.

Complements are not unique

Is the complement W from the last section unique? To define the complement, we used the basis completion theorem. Now we know that bases are in general not unique. Therefore, we could also complete a basis of U to another basis of V, which would lead to another subspace W' as the complement. We will now try this out using an example:

Let's look at the example from the last section again: We consider V = ℝ² and the first angle bisector U = {(x, x) : x ∈ ℝ}. We already know that

B_U = {(1, 1)}

is a basis of U and that we can complete B_U to a basis of ℝ² by adding the vector (1, 0). We have thus seen that W = span{(1, 0)} is a complement of U in ℝ². Another vector that is not in U is (0, 1). This means that we can also add (0, 1) to the basis B_U, obtaining

{(1, 1), (0, 1)},

and therefore, W' = span{(0, 1)} is also a complement of U in ℝ². We have thus found two complements: W = span{(1, 0)} and W' = span{(0, 1)}. These vector spaces are the coordinate axes of ℝ², and therefore W ≠ W' holds. This means that U has no unique complement in ℝ², and complements are in general not unique.
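
Indeed, W' = span{(0, 1)} is a complement as well: every (x, y) ∈ ℝ² can be written as

(x, y) = x·(1, 1) + (y − x)·(0, 1),

so U + W' = ℝ², and a vector in U ∩ W' is of the form (a, a) = (0, b), which forces a = b = 0.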

Examples and exercises

Example (Trivial complements)

Let V be a vector space. We always have V = {0} ⊕ V = V ⊕ {0}. Therefore, V is a complement of {0} in V, and {0} is a complement of V in V.

The construction from the proof of the theorem on the existence of complements also works in this case: If U = V, then we do not need to add any vectors to the basis B_U of U. Then W = span(∅) = {0} is a complement, because the span of the empty set is the null space. It works in the same way in the case U = {0}: Then B_U = ∅, and we may complete it to a basis of V, so that W = V.
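
Written out, the decomposition of a vector v ∈ V in these trivial cases is simply

v = 0 + v (for U = {0}, W = V)   or   v = v + 0 (for U = V, W = {0}),

and in both cases the intersection of the two summands is obviously {0}.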

Example (Complement of a plane in space)

We consider the plane E in ℝ³, which is spanned by two given vectors, i.e.

 

Our aim is to find a complement of E. We can proceed in a similar way as in the proof of the theorem on the existence of complements. First, we choose a basis of E, then we complete it to a basis of the entire ℝ³. The two vectors that span E are already linearly independent. Therefore, they already form a basis of E. To construct a complement of E, we only need one more vector, because ℝ³ is a 3-dimensional vector space. We therefore require a vector that is linearly independent of the two vectors spanning E. For instance, we may choose the vector (1, 1, 1). It is easy to check that the three vectors are indeed linearly independent.

Question: Are the three vectors really linearly independent?

Let   with

 

We have to show that   has to hold. If we look at the vectors line by line, we get a system of equations with three equations:

 

From the first and third equation, we deduce  . Substituting this into the second equation, we obtain  . So   and therefore also  .

The three vectors therefore form a basis of ℝ³. The new vector (1, 1, 1) spans a possible complement W of E:

W = span{(1, 1, 1)} = {(λ, λ, λ) : λ ∈ ℝ}.
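
That W is indeed a complement of E can also be checked via dimensions: since the three vectors above are linearly independent, we have E ∩ W = {0}, and the dimension formula for subspaces gives

dim(E + W) = dim E + dim W − dim(E ∩ W) = 2 + 1 − 0 = 3,

so E + W = ℝ³ and hence ℝ³ = E ⊕ W.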

Example (Decomposition of polynomials)

We consider the vector space ℝ[X] of polynomials over ℝ. Then, U = {p ∈ ℝ[X] : p(0) = 0} is a subspace of ℝ[X]. We want to find a complement of U in ℝ[X].

We can also write the condition p(0) = 0 differently: We can write p = a_n X^n + ⋯ + a_1 X + a_0. Then p(0) = a_0, and such a polynomial p lies in U if a_0 = 0. In order to construct a complement of U, we must therefore find enough polynomials with a_0 ≠ 0. One such polynomial is the constant polynomial 1.

Have we already found enough polynomials with a_0 ≠ 0 to have a complement? To answer this question, we may check whether U + span{1} = ℝ[X]. Let p ∈ ℝ[X] be an arbitrary polynomial. Then the constant polynomial p(0) is contained in the span of 1. Further, p − p(0) is a polynomial with (p − p(0))(0) = p(0) − p(0) = 0. This means p = q + c with q := p − p(0) ∈ U and c := p(0) ∈ span{1}. Therefore, U + span{1} = ℝ[X].

Furthermore, this sum is direct because we know that U ∩ span{1} = {0}: a constant polynomial that vanishes at 0 is the zero polynomial. Thus, we have found a complement of U in ℝ[X]. The subspace W = span{1} is exactly the subspace of constant polynomials. That means, we have just proved that every polynomial p can be decomposed into a polynomial q with q(0) = 0 and a constant polynomial. The constant part is sometimes also called the y-intercept.
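
As a concrete illustration (the polynomial is chosen here just as an example): for p = X² + 3X + 5 we have p(0) = 5, and the decomposition reads

X² + 3X + 5 = (X² + 3X) + 5,

where X² + 3X lies in U and the constant polynomial 5 lies in span{1}.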

Of course, we could also have generated a complement of U using any other polynomial p with p(0) ≠ 0.
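
For example (again just one possible choice), the polynomial X + 1 satisfies (X + 1)(0) = 1 ≠ 0, and span{X + 1} is another complement of U: every polynomial p decomposes as

p = (p − p(0)·(X + 1)) + p(0)·(X + 1),

where the first summand vanishes at 0 and therefore lies in U, and U ∩ span{X + 1} = {0} because λ·(X + 1) vanishes at 0 only for λ = 0.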

Exercise (Uniqueness of complements)

Let V be a K-vector space. Show that a subspace U ⊆ V has a unique complement in V if and only if either U = {0} or U = V.

Solution (Uniqueness of complements)

Proof step: If the complement of U in V is unique, then U = {0} or U = V. (We show the contrapositive.)

Let U ⊆ V be a subspace with U ≠ {0} and U ≠ V. Note that this implies in particular dim V ≥ 2. (The only subspaces of a one-dimensional vector space are {0} and the space itself). Let W be a complement of U in V. We show that W is not unique by constructing another complement W' of U.

Neither U ⊆ W nor W ⊆ U applies: In the first case, we would have V = U + W = W, but W cannot be all of V, as otherwise U = U ∩ W = {0}, contradicting U ≠ {0}. In the second case, V = U + W = U would also be true, contradicting U ≠ V. It therefore follows from the theorem on the union of subspaces that U ∪ W ≠ V. Hence there exist vectors in V that lie neither in U nor in W. Choose such a vector v. Because v is not in U, v cannot be written as a linear combination of vectors in U. Because v is not in W, v also cannot be written as a linear combination of vectors in W.

Now write v = u + w with u ∈ U and w ∈ W; this is possible because V = U ⊕ W. Because v is not in W, we have u ≠ 0, and because v is not in U, we have w ≠ 0. Choose a basis {w, w_2, …, w_k} of W containing w, replace the basis vector w with v, and define W' as the span of the new basis B' = {v, w_2, …, w_k}. Because v ∈ W' but v ∉ W, we have W' ≠ W. In addition, W' is also a complement of U: To show that the sum of U and W' is direct, let x ∈ U ∩ W' be arbitrary, and let B_U = {u_1, …, u_m} be a basis of U. Then the following applies

x = λ_1·v + λ_2·w_2 + ⋯ + λ_k·w_k  and  x = μ_1·u_1 + ⋯ + μ_m·u_m

for certain λ_1, …, λ_k, μ_1, …, μ_m ∈ K. Equating these two expressions, rearranging, and substituting v = u + w results in

λ_1·w + λ_2·w_2 + ⋯ + λ_k·w_k = μ_1·u_1 + ⋯ + μ_m·u_m − λ_1·u.

The left-hand side lies in W and the right-hand side lies in U, so both sides lie in U ∩ W = {0}. And because the vectors w, w_2, …, w_k are linearly independent, λ_i = 0 follows for all i. Therefore, x = 0 and the sum is direct. The sum results in the whole V, because by construction we have dim W' = dim W (the vectors in B' are linearly independent by the same argument): Using the dimension formula for subspaces, we get

dim(U + W') = dim U + dim W' − dim(U ∩ W') = dim U + dim W' = dim U + dim W = dim V,

where the last equality holds because V = U ⊕ W. Since U + W' is a subspace of V and the dimensions are equal, we must therefore have U + W' = V.

Proof step: If U = {0} or U = V, then U has a unique complement in V.

Suppose U = {0}. We know that V is a complement of U in V. Let W be another subspace with V = U ⊕ W. Since the sum of the two subspaces in particular results in V, we get W = {0} + W = U + W = V.

Now assume U = V. Then {0} is a complement of U in V. Let W be another subspace with V = U ⊕ W. Because the sum is direct, we have in particular U ∩ W = {0}. However, since W ⊆ V = U, we conclude W = U ∩ W = {0}.