Section 4.10 Bases and coordinates
In this section we generalize our understanding of the coordinates of a vector. We start with a little observation: consider the vector \(\vec v=(1,2,3)\) in \(\R^3\text{,}\) and the standard basis \(\{\vec e_1, \vec e_2, \vec e_3\}=\{(1,0,0),(0,1,0), (0,0,1)\}\text{.}\) Then
\(\vec v=(1,2,3)=1\vec e_1+2\vec e_2+3\vec e_3\text{,}\)
so the entries of \(\vec v\) are precisely its coefficients with respect to the standard basis.
Now suppose we take a different basis for \(\R^3\text{:}\) \(B=\{(0,1,1),(1,0,1),(1,1,0)\}\text{.}\) Then, by Theorem 4.9.16, there is a unique choice of \(r_1, r_2, r_3\) so that \(\vec v= (1,2,3)=r_1(0,1,1)+r_2(1,0,1)+r_3(1,1,0)\text{.}\) The values are determined by the three equations in three unknowns:
\(r_2+r_3=1\text{,}\quad r_1+r_3=2\text{,}\quad r_1+r_2=3\text{.}\)
The unique solution is \((r_1,r_2,r_3)=(2,1,0)\text{.}\) We then call this triple the coordinates of \(\vec v\) with respect to the basis \(B\).
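The little system above can also be checked mechanically. The following Python sketch (the helper name `solve3` is our own, not part of the text) solves it with exact arithmetic:

```python
from fractions import Fraction

def solve3(A, b):
    """Solve a 3x3 system A r = b by Gaussian elimination with exact fractions."""
    # build the augmented matrix [A | b]
    M = [[Fraction(A[i][j]) for j in range(3)] + [Fraction(b[i])] for i in range(3)]
    for col in range(3):
        # find a row with a nonzero pivot and swap it into place
        piv = next(r for r in range(col, 3) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # eliminate this column from every other row
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# columns of the basis B form A: r2 + r3 = 1, r1 + r3 = 2, r1 + r2 = 3
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
r = solve3(A, [1, 2, 3])
print(r)  # [Fraction(2, 1), Fraction(1, 1), Fraction(0, 1)]
```

The result agrees with the coordinates \((2,1,0)\) found above.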
Definition 4.10.1. Coordinates with respect to a basis.
Let \(\vec v\) be a vector in \(\R^n\text{,}\) and let \(B=\{\vec x_1,\vec x_2,\ldots,\vec x_n\}\) be a basis. From Theorem 4.9.16, there is exactly one choice of \(r_1,r_2,\ldots,r_n\) so that \(\vec v=r_1\vec x_1+r_2\vec x_2+\cdots+r_n\vec x_n\text{.}\) The coordinates of \(\vec v\) with respect to the basis \(B\) are then \((r_1,r_2,\ldots,r_n)\text{.}\) When necessary to emphasize the basis, the notation \((r_1,r_2,\ldots,r_n)_B\) is used.
Observation 4.10.2. Order makes a difference.
If the vectors of a basis are reordered, then the coordinates of any vector with respect to that basis get reordered too. Sometimes the term ordered basis is used to emphasize the importance of order, but usually it is just tacitly understood and causes no problems.
Example 4.10.3. Coordinates with respect to a basis in \(\R^2\).
Consider the basis \(B=\{\vec u_1,\vec u_2\}\) in \(\R^2\) where \(\vec u_1=(2,1)\) and \(\vec u_2=(-1,1)\text{.}\) In addition, consider \(\vec w=(1,5)\text{.}\) We want the coordinates of \(\vec w\) with respect to the basis \(B\text{.}\) It follows easily from the equation \(\vec w=r_1\vec u_1+r_2\vec u_2\) that \(r_1=2\) and \(r_2=3\text{.}\) Hence \(\vec w=(2,3)_B\text{.}\) The figure below shows the geometric interpretation of this equation.
The line joining \(\vec u_1\) with the origin contains all scalar multiples of \(\vec u_1\text{.}\) Similarly, the line joining \(\vec u_2\) with the origin contains all scalar multiples of \(\vec u_2\text{.}\) The parallelogram rule is used to add these two scalar multiples together.
The nice part is that this reasoning is reversible. Given any basis \(B=\{\vec u_1,\vec u_2\}\text{,}\) the lines through \(\vec u_1\) and \(\vec u_2\) are considered axes and, for any other vector \(\vec w\text{,}\) lines parallel to these axes through \(\vec w\) can be used to make a parallelogram that determines \(r_1\) and \(r_2\text{.}\)
With the standard basis \(\{\vec e_1,\vec e_2\}\text{,}\) the coordinates of a vector \(\vec w\) are determined by dropping perpendiculars to the \(x\)-axis and to the \(y\)-axis. The parallelogram is actually a rectangle since the basis vectors are orthogonal.
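The arithmetic in Example 4.10.3 can be confirmed directly; here is a short Python check (the variable names are our own) of the equation \(\vec w=2\vec u_1+3\vec u_2\text{:}\)

```python
# basis vectors of B and the coordinates found in Example 4.10.3
u1, u2 = (2, 1), (-1, 1)
r1, r2 = 2, 3

# form the linear combination r1*u1 + r2*u2 componentwise
w = tuple(r1 * a + r2 * b for a, b in zip(u1, u2))
print(w)  # (1, 5)
```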
It is reasonable to ask why considering different bases is worthwhile. One answer is that it can make some computations much easier. The evaluation of projections from a point to a line in \(\R^2\) is given in Subsection 4.7.2. The following example gives a different, quicker evaluation using bases.
Example 4.10.5. The projection of a point to a line in \(\R^2\) revisited.
Consider the computation of the projection of a point \((x,y)\) onto the line \(y=mx\text{.}\) The strategy is to first use the line as one axis for a basis. To that end we take a nonzero point on the line as the first basis element: \(\vec u_1=(1,m)\text{.}\) Next, we want a second basis element orthogonal to \(\vec u_1\) so that the parallelogram rule will be applied as a rectangle. An easy choice is \(\vec u_2=(-m,1)\) (why?), so the basis is \(B=\{(1,m),(-m,1)\}\text{.}\) The coordinates \((r_1,r_2)\) of \((x,y)\) with respect to \(B\) are found by solving
\(r_1(1,m)+r_2(-m,1)=(x,y)\text{,}\) that is, \(r_1-mr_2=x\) and \(mr_1+r_2=y\text{.}\)
This is routine and, in particular, gives us \(r_1=\frac1{m^2+1}(x+my)\text{.}\) Note that \(r_1\vec u_1=\frac1{m^2+1}(x+my)(1,m)=\frac1{m^2+1}(x+my,mx+m^2y)\) is our desired point. Note also that we could evaluate \(r_2\text{,}\) but we don't need to because of our choice of basis (we only have to work half as hard!).
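The formula just derived can be packaged as a small function. The Python sketch below (the name `project_to_line` is ours) uses exact fractions:

```python
from fractions import Fraction

def project_to_line(x, y, m):
    """Project the point (x, y) onto the line y = m*x, using the
    basis-coordinate formula r1 = (x + m*y)/(m^2 + 1) and projection r1*(1, m)."""
    r1 = Fraction(x + m * y, m * m + 1)
    return (r1 * 1, r1 * m)

# the projection of (3, 1) onto the line y = x is (2, 2)
print(project_to_line(3, 1, 1))
```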
Subsection 4.10.1 Bases, coordinates and matrices.
Matrix multiplication has a nice role to play in the computation of coordinates. It rests on a straightforward relationship.
Observation 4.10.7. Linear combinations and matrices.
Let \(\vec x_1, \vec x_2,\ldots,\vec x_n\) be vectors in \(\R^n\text{,}\) and let \(A= \begin{bmatrix} \vec x_1 \amp\vec x_2 \amp\cdots \amp\vec x_n \end{bmatrix}\) be the matrix with \(\vec x_1, \vec x_2,\ldots,\vec x_n\) as columns. Then
\(\vec w=r_1\vec x_1+r_2\vec x_2+\cdots+r_n\vec x_n\)
if and only if
\(\vec w=A\begin{bmatrix} r_1\\r_2\\\vdots\\r_n \end{bmatrix}\text{,}\)
where \(\vec w\) is written as a column vector.
This is easily verified by evaluating the \(k\)th entry of \(\vec w\) in each case, for \(k=1,2,\ldots,n\text{.}\)
Now suppose that \(B=\{\vec x_1,\ldots,\vec x_n\}\) is a basis for \(\R^n\text{,}\) and \(A\) is the matrix with \(\vec x_1,\ldots,\vec x_n\) as columns. By Theorem 4.9.12 the matrix \(A\) is nonsingular, and this implies that \(A^{-1}\) exists.
Proposition 4.10.8.
Let \(B=\{\vec x_1, \vec x_2,\ldots,\vec x_n\}\) be a basis of \(\R^n\text{,}\) and let \(A= \begin{bmatrix} \vec x_1 \amp\vec x_2 \amp\cdots \amp\vec x_n \end{bmatrix}\) be the matrix with \(\vec x_1, \vec x_2,\ldots,\vec x_n\) as columns. Then the coordinates \((r_1,\ldots,r_n)\) of \(\vec w\) with respect to the basis \(B\) are given by
\(\begin{bmatrix} r_1\\r_2\\\vdots\\r_n \end{bmatrix}=A^{-1}\vec w\text{.}\)
Proof.
Multiply both sides of the result in Observation 4.10.7 by \(A^{-1}\text{.}\)
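In \(\R^2\) the proposition is easy to carry out explicitly, since the inverse of a \(2\times2\) matrix has a closed form. A Python sketch (the helper name `coords_2d` is our own):

```python
from fractions import Fraction

def coords_2d(x1, x2, w):
    """Coordinates of w with respect to the basis {x1, x2} of R^2,
    computed as A^{-1} w where A has x1 and x2 as columns."""
    a, c = x1  # first column of A
    b, d = x2  # second column of A
    det = a * d - b * c
    assert det != 0, "the vectors do not form a basis"
    # A^{-1} = (1/det) [[d, -b], [-c, a]]
    r1 = Fraction(d * w[0] - b * w[1], det)
    r2 = Fraction(-c * w[0] + a * w[1], det)
    return (r1, r2)

# recovers the coordinates (2, 3) from Example 4.10.3
print(coords_2d((2, 1), (-1, 1), (1, 5)))
```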
Example 4.10.9. The projection of a point to a line in \(\R^2\) revisited again.
We can use this result to revisit Example 4.10.5. The matrix \(A\) of basis column vectors would then satisfy
\(A=\begin{bmatrix} 1 \amp -m\\ m \amp 1 \end{bmatrix} \quad\text{and}\quad A\begin{bmatrix} r_1\\r_2 \end{bmatrix}=\begin{bmatrix} x\\y \end{bmatrix}\text{.}\)
Since \(\det(A)=m^2+1\text{,}\) by Theorem 3.5.3,
\(A^{-1}=\frac1{m^2+1}\begin{bmatrix} 1 \amp m\\ -m \amp 1 \end{bmatrix}\)
and
\(\begin{bmatrix} r_1\\r_2 \end{bmatrix}=A^{-1}\begin{bmatrix} x\\y \end{bmatrix}=\frac1{m^2+1}\begin{bmatrix} x+my\\ -mx+y \end{bmatrix}\text{.}\)
Example 4.10.10. An ellipse in the plane.
The standard equation for an ellipse in the plane is
\(\frac{x^2}{a^2}+\frac{y^2}{b^2}=1\text{.}\)
Clearly the points \((\pm a,0)\) and \((0,\pm b)\) satisfy the equation and are on the ellipse. A typical instance is symmetric about the \(x\)-axis and \(y\)-axis:
Next we consider the points in the plane satisfying \(x^2-xy+y^2=1\text{.}\) Here is the graph:
It really looks like an ellipse, but the equation is not in our standard form, because the curve is not appropriately aligned with the \(x\)-axis and \(y\)-axis. What to do? We can change the axes by using a new basis: \(B=\{\vec u_1, \vec u_2\}\) where \(\vec u_1=(1,1)\) and \(\vec u_2=(-1,1)\text{.}\)
Now suppose \(\vec w=(u,v)\) is on the curve. Let \((x,y)\) be the coordinates of \(\vec w\) with respect to the basis \(B\text{.}\) Using Proposition 4.10.8,
\(\begin{bmatrix} x\\y \end{bmatrix}=\begin{bmatrix} 1 \amp -1\\ 1 \amp 1 \end{bmatrix}^{-1}\begin{bmatrix} u\\v \end{bmatrix}=\frac12\begin{bmatrix} u+v\\ v-u \end{bmatrix}\text{,}\)
so that \(u=x-y\) and \(v=x+y\text{,}\) and, since \((u,v)\) is on the curve,
\(1=u^2-uv+v^2=(x-y)^2-(x-y)(x+y)+(x+y)^2=x^2+3y^2\text{.}\)
Hence \(\frac{x^2}{a^2} + \frac{y^2}{b^2}=1\) where \(a=1\) and \(b=\frac1{\sqrt3}\text{,}\) and the curve is indeed an ellipse.
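The change of coordinates in this example can be verified numerically. In the Python sketch below (the helper name `to_B_coords` is our own), a few integer points on the curve are mapped to their \(B\)-coordinates and checked against the new equation:

```python
from fractions import Fraction

def to_B_coords(u, v):
    """B-coordinates for B = {(1,1), (-1,1)}: apply A^{-1} = (1/2)[[1, 1], [-1, 1]]."""
    return (Fraction(u + v, 2), Fraction(v - u, 2))

# some integer points on the curve u^2 - u*v + v^2 = 1
for (u, v) in [(1, 0), (0, 1), (1, 1), (-1, 0)]:
    assert u * u - u * v + v * v == 1
    x, y = to_B_coords(u, v)
    # in the new coordinates the equation becomes x^2 + 3*y^2 = 1
    assert x * x + 3 * y * y == 1
print("all points satisfy x^2 + 3y^2 = 1")
```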
Subsection 4.10.2 Change of basis
Suppose we have two bases \(B_1=\{\vec x_1,\ldots,\vec x_n\}\) and \(B_2=\{\vec y_1,\ldots,\vec y_n\}\) and also \(\vec w \text{,}\) all in \(\R^n\text{.}\) If we know the coordinates of \(\vec w\) with respect to \(B_1\text{,}\) can we find the coordinates of \(\vec w\) with respect to \(B_2\) easily? If we use the right matrices, the answer is yes.
Proposition 4.10.14. Change of basis and matrix multiplication.
Suppose that \(B_1\) and \(B_2\) are bases of \(\R^n\text{.}\) Then there is a matrix \(C\) with the following property: If \(\vec w\) is any vector in \(\R^n\text{,}\) and \((r_1,r_2,\ldots,r_n)\) and \((s_1,s_2,\ldots,s_n)\) are the coordinates of \(\vec w\) with respect to the bases \(B_1\) and \(B_2\text{,}\) then
\(C\begin{bmatrix} r_1\\r_2\\\vdots\\r_n \end{bmatrix}=\begin{bmatrix} s_1\\s_2\\\vdots\\s_n \end{bmatrix}\text{.}\)
Proof.
From Observation 4.10.7, there exist matrices \(A_1\) and \(A_2\) satisfying
\(A_1\begin{bmatrix} r_1\\\vdots\\r_n \end{bmatrix}=\vec w=A_2\begin{bmatrix} s_1\\\vdots\\s_n \end{bmatrix}\text{.}\)
Setting \(C=A_2^{-1}A_1\text{,}\) we have
\(C\begin{bmatrix} r_1\\\vdots\\r_n \end{bmatrix}=A_2^{-1}A_1\begin{bmatrix} r_1\\\vdots\\r_n \end{bmatrix}=A_2^{-1}\vec w=\begin{bmatrix} s_1\\\vdots\\s_n \end{bmatrix}\text{.}\)
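The proof is constructive, and small cases are easy to compute. The Python sketch below (the helper names are our own) builds \(C=A_2^{-1}A_1\) for the bases \(B_1=\{(2,1),(-1,1)\}\) and \(B_2=\{(1,0),(1,1)\}\) of \(\R^2\) and converts the coordinates of a vector:

```python
from fractions import Fraction

def inv2(A):
    """Inverse of a 2x2 matrix, with exact fractions."""
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

# A1, A2 have the vectors of B1 = {(2,1), (-1,1)} and B2 = {(1,0), (1,1)} as columns
A1 = [[2, -1], [1, 1]]
A2 = [[1, 1], [0, 1]]
C = matmul(inv2(A2), A1)  # C = A2^{-1} A1

# w = (1,5) has B1-coordinates (2,3); C converts them to B2-coordinates
s = tuple(int(t) for t in matvec(C, (2, 3)))
print(s)  # (-4, 5): indeed -4*(1,0) + 5*(1,1) = (1,5)
```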
Theorem 4.10.15. Change of basis theorem.
Suppose that \(B_1=\{\vec x_1, \vec x_2, \ldots, \vec x_n\}\) and \(B_2=\{\vec y_1, \vec y_2, \ldots, \vec y_n\}\) are bases of \(\R^n\text{,}\) that \(\vec w\) is a vector in \(\R^n\text{,}\) and that \((r_1,r_2,\ldots,r_n)\) and \((s_1,s_2,\ldots,s_n)\) are the coordinates of \(\vec w\) with respect to the bases \(B_1\) and \(B_2\text{.}\) In addition, let \((c_{1,j}, c_{2,j},\ldots, c_{n,j})\) be the coordinates of \(\vec x_j\) with respect to \(B_2\text{,}\) that is,
\(\vec x_j=c_{1,j}\vec y_1+c_{2,j}\vec y_2+\cdots+c_{n,j}\vec y_n\text{.}\)
Then, for \(C= \begin{bmatrix} c_{i,j} \end{bmatrix} \text{,}\)
\(C\begin{bmatrix} r_1\\\vdots\\r_n \end{bmatrix}=\begin{bmatrix} s_1\\\vdots\\s_n \end{bmatrix}\text{.}\)
Proof.
Using
\(\vec w=r_1\vec x_1+r_2\vec x_2+\cdots+r_n\vec x_n\)
and
\(\vec x_j=c_{1,j}\vec y_1+c_{2,j}\vec y_2+\cdots+c_{n,j}\vec y_n\text{,}\)
we see that
\(\vec w=\sum_{j=1}^n r_j\vec x_j=\sum_{j=1}^n r_j\sum_{i=1}^n c_{i,j}\vec y_i=\sum_{i=1}^n\Bigl(\sum_{j=1}^n c_{i,j}r_j\Bigr)\vec y_i\text{.}\)
Using Proposition 4.9.4, we have
\(s_i=\sum_{j=1}^n c_{i,j}r_j \quad\text{for } i=1,2,\ldots,n\text{,}\)
which is identical to
\(C\begin{bmatrix} r_1\\\vdots\\r_n \end{bmatrix}=\begin{bmatrix} s_1\\\vdots\\s_n \end{bmatrix}\text{.}\)
Observation 4.10.16.
Notice that Theorem 4.10.15 gives an algorithm for constructing the desired matrix \(C\text{.}\) For each \(k=1,2,\ldots,n\text{,}\) let \(\vec z_k\) be the coordinates of \(\vec x_k\) with respect to \(B_2\text{.}\) Using column vectors, let \(C= \begin{bmatrix} \vec z_1\amp\vec z_2\amp\cdots\amp\vec z_n \end{bmatrix} \text{.}\) Then
\(C\begin{bmatrix} r_1\\\vdots\\r_n \end{bmatrix}=\begin{bmatrix} s_1\\\vdots\\s_n \end{bmatrix}\text{.}\)
Also, notice that Proposition 4.10.8 is a special case of Theorem 4.10.15 where \(B_1=\{\vec x_1,\ldots,\vec x_n\}\) and \(B_2\) is the standard basis.
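The algorithm in Observation 4.10.16 is short to implement. In this Python sketch (the helper name `coords_2d` is our own), \(C\) is assembled column by column for the bases \(B_1=\{(2,1),(-1,1)\}\) and \(B_2=\{(1,0),(1,1)\}\text{:}\)

```python
from fractions import Fraction

def coords_2d(basis, w):
    """Coordinates of w with respect to a basis {x1, x2} of R^2, via the 2x2 inverse."""
    (a, c), (b, d) = basis  # basis vectors are the columns of A = [[a, b], [c, d]]
    det = a * d - b * c
    return (Fraction(d * w[0] - b * w[1], det), Fraction(a * w[1] - c * w[0], det))

B1 = [(2, 1), (-1, 1)]
B2 = [(1, 0), (1, 1)]
# column k of C is z_k, the B2-coordinates of the k-th vector of B1
cols = [coords_2d(B2, x) for x in B1]
C = [[int(cols[j][i]) for j in range(2)] for i in range(2)]
print(C)  # [[1, -2], [1, 1]]
```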