
Linear Transformations and Matrices

Considering structure preserving maps between vector spaces leads to the following definition.

Definition A linear transformation is any map \(T:V\mapto W\) between vector spaces \(V\) and \(W\) which preserves vector addition and scalar multiplication, \(T(au+bv)=aTu+bTv\), where \(a,b\in K\) and \(u,v\in V\). We’ll call a linear transformation from a vector space to itself, \(T:V\mapto V\), a linear operator on \(V\).

The kernel of such a linear transformation is \(\ker T=\{v\in V\mid Tv=0\}\) and the image is \(\img T=\{w\in W\mid w=Tv\ \text{for some}\ v\in V\}\). The linearity of \(T\) means that these are vector subspaces of \(V\) and \(W\) respectively. The dimension of \(\ker T\) is called the nullity of \(T\) while the dimension of \(\img T\) is the rank of \(T\), \(\rank(T)\). They are related to the dimension of the vector space \(V\) as follows.

Theorem (Rank-nullity theorem) If \(T:V\mapto W\) is a linear transformation and \(V\) is finite dimensional then,\begin{equation}
\dim \ker T +\dim \img T =\dim V .\label{equ:dimension equation}
\end{equation}

Proof Take \(\{k_i\}\), \(1\leq i\leq r\), to be a basis of \(\ker T\). We know that we can extend this to a basis of \(V\), \(\{k_1,\dots,k_r,h_1,\dots,h_s\}\). Consider then the set \(\{h'_i\}\), \(1\leq i\leq s\), the elements of which are defined according to \(Th_i=h'_i\). Any element of \(\img T\) is of the form \(T(c^1k_1+\dots+c^rk_r+d^1h_1+\dots+d^sh_s)= d^1h'_1+\dots+d^sh'_s\) so \(\Span(h'_1,\dots,h'_s)=\img T\). Furthermore, suppose we could find \(c^i\), not all zero, such that \(c^1h'_1+\dots+c^sh'_s=0\). Then we would have \(T(c^1h_1+\dots+c^sh_s)=0\), that is, \(c^1h_1+\dots+c^sh_s\in\ker T\), but this would contradict the linear independence of the basis of \(V\). Thus \(\{h'_i\}\) for \(1\leq i\leq s\) is a basis for \(\img T\) and the result follows.\(\blacksquare\)
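For example, for the projection \(T:\mathbb{R}^3\mapto\mathbb{R}^2\) given by \(T(x,y,z)=(x,y)\) we have
\begin{equation*}
\ker T=\{(0,0,z)\mid z\in\mathbb{R}\},\qquad \img T=\mathbb{R}^2,
\end{equation*}
so that \(\dim\ker T+\dim\img T=1+2=3=\dim\mathbb{R}^3\), as the theorem requires.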

It is of course the case that \(T\) is one-to-one (injective) if and only if \(\ker T=\{0\}\) and is onto (surjective) if and only if \(\img T =W\). This theorem thus tells us that if \(V\) and \(W\) have the same dimension then \(T\) is one-to-one if and only if it is onto.

\(T\) is said to be invertible if there exists a linear transformation \(S:W\mapto V\) such that \(TS=\id_W\), the identity operator on \(W\), and \(ST=\id_V\), the identity operator on \(V\). In this case \(S\) is unique, is called the inverse of \(T\) and is denoted \(T^{-1}\). \(T\) is invertible if and only if it is both one-to-one and onto, in which case we call it an isomorphism. Notice that one-to-one and onto are equivalent respectively to \(T\) having a left and a right inverse. Indeed, if \(T\) has a left inverse, \(S:W\mapto V\) such that \(ST=\id_V\), then for any \(v,v'\in V\), \(Tv=Tv'\Rightarrow STv=STv'\Rightarrow v=v'\). Conversely, if \(T\) is one-to-one then we can define a map \(S:W\mapto V\) which, on \(\img T\), is such that for any \(w\in\img T\) with \(w=Tv\), \(Sw=v\); \(T\) being one-to-one, this is well-defined (single-valued), and extending it to the whole of \(W\), for instance by declaring it zero on a subspace complementary to \(\img T\), gives a left inverse of \(T\). If \(T\) has a right inverse \(S\) such that \(TS=\id_W\) then for any \(w\in W\) we can write \(w=\id_W w=TSw\) so \(T\) is certainly onto. Conversely, if \(T\) is onto then a right inverse exists, its construction amounting to a choice, for each \(w\in W\), of a preimage, which in general is where the axiom of choice enters.
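As a simple illustration, the inclusion \(\iota:\mathbb{R}^2\mapto\mathbb{R}^3\), \(\iota(x,y)=(x,y,0)\), is one-to-one but not onto. The projection \(\pi(x,y,z)=(x,y)\) is a left inverse, \(\pi\iota=\id_{\mathbb{R}^2}\), but \(\iota\) has no right inverse since, for instance, nothing is mapped to \((0,0,1)\).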

It is not difficult to see that two finite dimensional vector spaces \(V\) and \(W\) are isomorphic if and only if \(\dim(V)=\dim(W)\) and a linear transformation \(T:V\mapto W\) between two such spaces is an isomorphism if and only if \(\rank(T)=n\) where \(n\) is the common dimension. In other words, we have the following characterisation.

Finite dimensional vector spaces are completely classified in terms of their dimension.

Indeed, any \(n\)-dimensional vector space over \(K\) is isomorphic to \(K^n\), the space of all \(n\)-tuples of elements of the field \(K\). Explicitly, given a basis \(\{e_i\}\) of a vector space \(V\), this isomorphism identifies a vector \(v\) with the column vector \(\mathbf{v}\) of its components with respect to that basis,\begin{equation*}
v=v^ie_i\longleftrightarrow
\begin{pmatrix}
v^1\\
\vdots\\
v^n
\end{pmatrix}.
\end{equation*}
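For example, with the basis \(e_1=(1,1)\), \(e_2=(1,-1)\) of \(\mathbb{R}^2\), the vector \(v=(3,1)\) is \(v=2e_1+e_2\) and so is identified with the column vector
\begin{equation*}
\mathbf{v}=\begin{pmatrix}2\\1\end{pmatrix};
\end{equation*}
the components, and hence the identification with \(K^n\), depend on the choice of basis.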

Clearly, a linear transformation \(T:V\mapto W\) is uniquely specified by its action on basis elements. If bases \(\{e_i\}\) and \(\{f_i\}\) are chosen for the \(n\)- and \(m\)-dimensional vector spaces \(V\) and \(W\) respectively, then we can write any vector \(v\in V\) as \(v=v^ie_i\). For any such \(v\in V\) there is an element \(w\in W\) such that \(Tv=w\) and of course we can write it as \(w=w^jf_j\). But there must also exist numbers \(T_i^j\) such that \(Te_i=T_i^jf_j\), so we have \(Tv=v^iT_i^jf_j=w^jf_j=w\), that is, \(w^j=T_i^jv^i\), which can be summarised in terms of matrices as
\begin{equation}
\begin{pmatrix}
w^1\\
\vdots\\
w^m
\end{pmatrix}=\begin{pmatrix}
T_1^1&\dots&T_n^1\\
\vdots&\ddots&\vdots\\
T_1^m&\dots&T_n^m
\end{pmatrix}\begin{pmatrix}
v^1\\
\vdots\\
v^n
\end{pmatrix}.
\end{equation}
That is, \(\mathbf{w}=\mathbf{T}\mathbf{v}\) is the matrix version of \(w=Tv\). The matrix \(\mathbf{T}\) is called the matrix representation of the linear transformation \(T\), and addition, scalar multiplication and composition of linear transformations correspond respectively to matrix addition, multiplication of a matrix by a scalar and matrix multiplication.
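As a concrete illustration, consider the differentiation operator \(D\) on the space of real polynomials of degree at most two, taking \(\{1,x,x^2\}\) as the basis of both the domain and the codomain. Since \(D1=0\), \(Dx=1\) and \(Dx^2=2x\), its matrix representation is
\begin{equation*}
\mathbf{D}=\begin{pmatrix}0&1&0\\0&0&2\\0&0&0\end{pmatrix},
\end{equation*}
and indeed
\begin{equation*}
\begin{pmatrix}0&1&0\\0&0&2\\0&0&0\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}=\begin{pmatrix}b\\2c\\0\end{pmatrix}
\end{equation*}
reproduces \(D(a+bx+cx^2)=b+2cx\).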

Conversely, given a choice of bases \(\{e_i\}\) and \(\{f_i\}\) for vector spaces \(V\) and \(W\) of dimensions \(n\) and \(m\) respectively, any \(m\times n\) matrix \(\mathbf{A}\) gives rise to a linear transformation \(L_\mathbf{A}:V\mapto W\) defined by \(L_\mathbf{A}v=L_\mathbf{A}(v^ie_i)=A_i^jv^if_j\) for all \(v\in V\). Of course, having chosen bases for \(V\) and \(W\) we also have isomorphisms \(V\cong K^n\) and \(W\cong K^m\) so the following diagram commutes:
\begin{equation}
\begin{CD}
K^n @>\mathbf{A}>> K^m\\
@VV\cong V @VV\cong V\\
V @>L_{\mathbf{A}}>> W
\end{CD}
\end{equation}

Denoting by \(\mathcal{L}(V,W)\) the set of linear transformations between vector spaces \(V\) and \(W\), it is clear that \(\mathcal{L}(V,W)\) is a vector space and we may summarise the preceding discussion in the following theorem.

Theorem A choice of bases for vector spaces \(V\) and \(W\), of dimensions \(n\) and \(m\) respectively, defines a vector space isomorphism \(\mathcal{L}(V,W)\cong\text{Mat}_{m,n}(K)\).

A consequence of this is that
\begin{equation}
\dim\mathcal{L}(V,W)=\dim\text{Mat}_{m,n}(K)=nm=\dim V\dim W.
\end{equation}
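For instance, \(\mathcal{L}(\mathbb{R}^2,\mathbb{R}^3)\cong\text{Mat}_{3,2}(\mathbb{R})\) is 6-dimensional, a basis being given by the six matrices \(E_{ij}\) with a \(1\) in the \(i\)th row and \(j\)th column and zeros elsewhere, or equivalently, once bases are fixed, by the six linear transformations sending one basis vector of \(\mathbb{R}^2\) to one basis vector of \(\mathbb{R}^3\) and the other basis vector of \(\mathbb{R}^2\) to zero.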

A linear operator \(T:V\mapto V\) is called an automorphism if it is an isomorphism. The set of all linear operators on a vector space \(V\) is denoted \(\mathcal{L}(V)\) and is of course a vector space in its own right. The automorphisms of a vector space \(V\), denoted \(\text{GL}(V)\), form a group called the general linear group of \(V\). If \(T\in\text{GL}(V)\) and \(\{e_i\}\) is some basis of \(V\) then clearly \(\{Te_i\}\) is also a basis, identical to the original if and only if \(T=\id_V\); conversely, if \(\{e'_i\}\) is some other basis of \(V\) then the linear operator \(T\) defined by \(Te_i=e'_i\) is an automorphism.

The invertibility of a linear transformation \(T\in\mathcal{L}(V,W)\) is equivalent to the invertibility, once bases are chosen, of the matrix representation of the transformation. Indeed the invertibility of any matrix \(\mathbf{A}\) is equivalent to the invertibility of the corresponding linear transformation \(L_\mathbf{A}\) (which in turn means an invertible matrix must be square and of rank \(n=\dim V\)). We denote by \(\text{GL}_n(K)\) the group of automorphisms of \(K^n\), that is, the group of invertible \(n\times n\) matrices over \(K\). It is not difficult to see that the isomorphism \(V\cong K^n\) induces an isomorphism \(\text{GL}(V)\cong\text{GL}_n(K)\).
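For example, over \(\mathbb{R}\) the matrix
\begin{equation*}
\begin{pmatrix}1&2\\2&4\end{pmatrix}
\end{equation*}
has rank \(1\): it is not invertible and the corresponding linear transformation is neither one-to-one (it sends \((2,-1)^T\) to zero) nor onto. By contrast,
\begin{equation*}
\begin{pmatrix}1&1\\0&1\end{pmatrix}\in\text{GL}_2(\mathbb{R}),\qquad\text{with inverse}\qquad\begin{pmatrix}1&-1\\0&1\end{pmatrix}.
\end{equation*}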

 

Basic Definitions and Examples

At school we learn that ‘space’ is 3-dimensional. We can specify its points in terms of coordinates \((x,y,z)\) and we think of ‘vectors’ as arrows from one point to another. For example, in the diagram below,
[Figure: the points \(P\), \(Q\) and \(S\), with the vectors \(\mathbf{OP}\), \(\mathbf{OQ}\), \(\mathbf{OS}\) and \(\mathbf{PQ}\)]
the points \(P\) and \(Q\) might be specified as \(P=(p_1,p_2,p_3)\) and \(Q=(q_1,q_2,q_3)\) respectively. The vectors \(\mathbf{OP}\) and \(\mathbf{OQ}\) are the arrows from the origin to the respective points and it’s typical to write their components as column vectors,
\begin{equation*}
\mathbf{OP}=\begin{pmatrix}p_1\\p_2\\p_3\end{pmatrix}\quad\text{and}\quad\mathbf{OQ}=\begin{pmatrix}q_1\\q_2\\q_3\end{pmatrix}.
\end{equation*}
We don’t distinguish between \(\mathbf{OP}\) and any other arrow of the same length and direction. We can think of the vectors emanating from the origin as the representatives of equivalence classes of arrows of the same length and orientation, positioned anywhere in space. In particular, the arrow from \(Q\) to \(S\) which we get by transporting \(\mathbf{OP}\), keeping its length and orientation the same so that its ‘tail’ meets the ‘head’ of \(\mathbf{OQ}\), belongs to the same equivalence class as \(\mathbf{OP}\). In fact what we obtain in this way is the geometric construction of \(\mathbf{OS}\) as the sum of \(\mathbf{OP}\) and \(\mathbf{OQ}\). Similarly, \(\mathbf{PQ}\) is equivalently taken to be the arrow from \(P\) to \(Q\) or, as in the diagram, the vector from the origin to the point reached by joining the tail of \(-\mathbf{OP}\), the vector of the same length as \(\mathbf{OP}\) but opposite direction, to the head of \(\mathbf{OQ}\); that is, \(\mathbf{PQ}=\mathbf{OQ}-\mathbf{OP}\). Generally, given any vector \(\mathbf{v}\) and any real number \(a\), \(a\mathbf{v}\) is another vector, \(\abs{a}\) times as long as \(\mathbf{v}\), pointing in the same direction when \(a\) is positive and in the opposite direction when \(a\) is negative. The algebraic structure we have here is perhaps the most familiar example of the abstract notion of a vector space.
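Concretely, taking for instance \(P=(1,2,3)\) and \(Q=(4,0,1)\) (coordinates chosen purely for illustration, not those of the diagram), we have
\begin{equation*}
\mathbf{OS}=\mathbf{OP}+\mathbf{OQ}=\begin{pmatrix}5\\2\\4\end{pmatrix},\qquad\mathbf{PQ}=\mathbf{OQ}-\mathbf{OP}=\begin{pmatrix}3\\-2\\-2\end{pmatrix},\qquad 2\,\mathbf{OP}=\begin{pmatrix}2\\4\\6\end{pmatrix}.
\end{equation*}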

Vectors in space arise of course in physics as the mathematical representation of physical quantities such as force or velocity. But this geometric setting also clarifies the discussion of the solution of systems of simultaneous linear equations in three unknowns.

Recall that to specify a plane in space we need a point \(P=(p_1,p_2,p_3)\) and a normal vector
\begin{equation*}
\mathbf{n}=\begin{pmatrix}n_1\\n_2\\n_3\end{pmatrix}.
\end{equation*}
The plane is then the set of points \(X=(x,y,z)\) such that the scalar product of the vector
\begin{equation*}
\mathbf{PX}=\begin{pmatrix}x-p_1\\y-p_2\\z-p_3\end{pmatrix},
\end{equation*}
between our chosen point \(P\) and \(X\), with the normal vector, \(\mathbf{n}\), is zero, that is, \(\mathbf{PX}\cdot\mathbf{n}=0\). This is equivalent to the equation
\begin{equation*}
n_1x+n_2y+n_3z=c
\end{equation*}
where \(c=\mathbf{OP}\cdot\mathbf{n}\), a single linear equation in 3 unknowns for which there are infinitely many solutions, namely, all the points of the plane. This solution ‘subspace’ is clearly a 2-dimensional space within the ambient 3-dimensional space. Now consider a pair of such equations. Assuming one is not simply a constant multiple of the other, there are two possibilities. In the case that the equations correspond to a pair of parallel planes there are no solutions. For example the pair
\begin{align*}
x-y+3z&=-1\\
-2x+2y-6z&=3
\end{align*}
corresponds geometrically to,
[Figure: two parallel planes]
The other possibility is that the equations correspond to a pair of intersecting planes in which case there are an infinite number of solutions corresponding to all points on the line of intersection. For example the pair
\begin{align*}
x-y+3z&=-1\\
2x-y-z&=1
\end{align*}
corresponds geometrically to
[Figure: two intersecting planes]
The line of intersection here, found by solving the pair of equations, may be expressed as
\begin{equation*}
\begin{pmatrix}x\\y\\z\end{pmatrix}=\lambda\begin{pmatrix}4\\7\\1\end{pmatrix}+\begin{pmatrix}2\\3\\0\end{pmatrix}.
\end{equation*}
Its direction vector could have been found as the cross product of the respective normal vectors — equivalent to solving the homogeneous system,
\begin{align*}
x-y+3z&=0\\
2x-y-z&=0,
\end{align*}
with the triple \((2,3,0)\) a particular solution of the inhomogeneous system. In dimensions higher than 3, that is, for systems of linear equations involving more than 3 variables, we can no longer think in terms of planes intersecting in space but the abstract vector space setting continues to provide illumination.
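Indeed, as a quick check, the cross product of the two normal vectors is
\begin{equation*}
\begin{pmatrix}1\\-1\\3\end{pmatrix}\times\begin{pmatrix}2\\-1\\-1\end{pmatrix}=\begin{pmatrix}(-1)(-1)-(3)(-1)\\(3)(2)-(1)(-1)\\(1)(-1)-(-1)(2)\end{pmatrix}=\begin{pmatrix}4\\7\\1\end{pmatrix},
\end{equation*}
the direction vector of the line above, while \((2,3,0)\) does satisfy both equations: \(2-3+0=-1\) and \(4-3-0=1\).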

Definition A vector space \(V\) over a field \(K\) (see Note 1 below; from the point of view of physical applications \(\mathbb{R}\) or \(\mathbb{C}\) will be most relevant), the elements of which will be referred to as scalars or numbers, is a set in which two operations, addition and multiplication by an element of \(K\), are defined. The elements of \(V\), called vectors, satisfy:

  • \(u+v=v+u\)
  • \((u+v)+w=u+(v+w)\)
  • There exists a zero vector \(0\) such that \(v+0=v\)
  • For any \(u\), there exists \(-u\), such that \(u+(-u)=0\)

Thus \((V,+)\) is an abelian group, and is further equipped with a scalar multiplication satisfying:

  • \(c(u+v)=cu+cv\)
  • \((c+d)u=cu+du\)
  • \((cd)u=c(du)\)
  • \(1u=u\)

where \(u,v,w\in V\), \(c,d\in K\) and 1 is the unit element of \(K\).

Example The canonical example is the space of \(n\)-tuples, \(x=(x^1,\dots,x^n)\), \(x^i\in K\), denoted \(K^n\). Its vector space structure is given by \(x+y=(x^1+y^1,\dots,x^n+y^n)\) and \(ax=(ax^1,\dots,ax^n)\).

Example The polynomials of degree at most \(n\) over \(\mathbb{R}\), denoted \(P_n\), form a real vector space. In this case, typical vectors would be \(p=a_nx^n+a_{n-1}x^{n-1}+\dots+a_1x+a_0\) and \(q=b_nx^n+b_{n-1}x^{n-1}+\dots+b_1x+b_0\) with vector space structure given by \(p+q=(a_n+b_n)x^n+\dots+(a_0+b_0)\) and \(cp=ca_nx^n+\dots+ca_0\). More generally we have \(F[x]\), the space of all polynomials in \(x\) with coefficients from the field \(F\).
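For instance, in \(P_2\), with \(p=1+2x\) and \(q=3-x+x^2\),
\begin{equation*}
p+q=4+x+x^2\qquad\text{and}\qquad 2p=2+4x.
\end{equation*}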

Example Continuous real-valued functions of a single variable, \(C(\RR)\), form a vector space with the natural vector addition and scalar multiplication.

Example The \(m\times n\) matrices over \(K\), \(\text{Mat}_{m,n}(K)\), form a vector space with the usual matrix addition and scalar multiplication. We denote by \(\text{Mat}_n(K)\) the vector space of \(n\times n\) matrices.

Definition A subspace, \(U\), of a vector space \(V\), is a non-empty subset of \(V\) which is closed under vector addition and scalar multiplication.

Example A plane through the origin is a subspace of \(\RR^3\). Note though that any plane which does not contain the origin cannot be a subspace since it does not contain the zero vector.

Example The solution set of any system of homogeneous linear equations in \(n\) variables over the field \(K\) is a subspace of \(K^n\). Incidentally, it will be useful to note here that any system of homogeneous linear equations has at least one solution, namely the zero vector, and also that an underdetermined system of homogeneous linear equations has a nonzero solution (indeed, over an infinite field such as \(\mathbb{R}\) or \(\mathbb{C}\), infinitely many). A simple induction argument on the number of variables establishes the latter. Suppose \(x_n\) is the \(n\)th variable. First we deal with the special case that in each equation the coefficient of \(x_n\) is zero. In this case \(x_n\) can take any value, and we may set all other variables to zero, giving the desired solutions. If, however, one or more equations have non-zero coefficients of \(x_n\) then choose one of them and use it to obtain an expression for \(x_n\) in terms of the other variables. We then use this expression twice. First, we eliminate \(x_n\) from all other equations to arrive at an underdetermined homogeneous system in \(n-1\) variables. By the induction hypothesis, this has a nonzero solution. Then, to each such solution, we apply the expression for \(x_n\) to obtain a solution of the original system.
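For example, the underdetermined homogeneous system
\begin{align*}
x-y+3z&=0\\
2x-y-z&=0
\end{align*}
considered earlier can be solved by subtracting the first equation from the second to get \(x-4z=0\), so \(x=4z\), and then \(y=x+3z=7z\). The solutions are therefore \((x,y,z)=z(4,7,1)\) for arbitrary \(z\), a nonzero solution for every \(z\neq 0\).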

A set of \(n\) vectors \(\{e_i\}\) in a vector space \(V\) is linearly dependent if there exist numbers \(c^i\), not all zero, such that \(c^1e_1+c^2e_2+\dots+c^ne_n=0\). They are linearly independent if they are not linearly dependent. The span of a set of vectors \(S\) in a vector space \(V\), \(\Span(S)\), is the set of all linear combinations of elements in \(S\).

Definition A set of vectors \(S\) is a basis of \(V\) if it spans \(V\), \(\Span(S)=V\), and is also linearly independent.
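For example, in \(\mathbb{R}^2\) the vectors \((1,0)\), \((0,1)\) and \((1,1)\) span the space but are linearly dependent, since \((1,0)+(0,1)-(1,1)=0\), whereas \(\{(1,0),(0,1)\}\) and \(\{(1,1),(1,-1)\}\) are both bases.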

Throughout the Linear Algebra section of the Library, vector spaces will be assumed to be finite dimensional, that is, spaces \(V\) in which there exists a finite set \(S\) such that \(\Span(S)=V\). In this case it is not difficult to see that \(S\) must have a subset which is a basis of \(V\). In particular, any finite dimensional vector space has a basis.

The following fact will be used repeatedly in what follows.

Theorem Any linearly independent set of vectors, \(e_1,\dots,e_r\), in \(V\) can be extended to a basis of \(V\).

Proof For \(v\in V\), \(v\notin\Span(e_1,\dots,e_r)\) if and only if \(e_1,\dots,e_r,v\) are linearly independent (the ‘if’ follows since \(v\in\Span(e_1,\dots,e_r)\) implies that \(e_1,\dots,e_r,v\) are linearly dependent, the ‘only if’ since if \(v\notin\Span(e_1,\dots,e_r)\) and we had numbers \(c^i,b\), not all zero, such that \(c^1e_1+\dots+c^re_r+bv=0\), then \(b=0\) would contradict the linear independence of the \(e_i\) while \(b\neq 0\) would contradict \(v\notin\Span(e_1,\dots,e_r)\)). Now, since \(V\) is finite dimensional, there is a spanning set \(S=\{f_1,\dots,f_d\}\) and if each \(f_i\in\Span(e_1,\dots,e_r)\) then \(e_1,\dots,e_r\) is already a basis. Otherwise, considering each \(f_i\) in turn and adjoining it whenever it lies outside the span of the vectors collected so far, which by the observation above preserves linear independence, we arrive after at most \(d\) steps at a linearly independent set whose span contains every \(f_i\), and hence all of \(V\). That is, we have constructed a basis of \(V\).\(\blacksquare\)

Theorem If a vector space \(V\) contains a finite basis which consists of \(n\) elements then any basis of \(V\) must consist of exactly \(n\) elements.

Proof We first establish that if \(V=\Span(e_1,\dots,e_m)\), with the \(e_i\) linearly independent, then any linearly independent set of vectors \(\{f_1,\dots,f_n\}\) satisfies \(m\geq n\). One way to see this is by observing that by assumption we can express each \(f_i\) as a linear combination of the \(e_j\), \(f_i=\sum_{j=1}^mA_i^je_j\) for some \(A_i^j\in K\). Now the linear independence of the \(f_i\) means that if we have numbers \(c^i\) such that \(\sum_{i=1}^nc^if_i=0\) then \(c^i=0\) for all \(i\). But \(\sum_{i=1}^nc^if_i=\sum_{i=1}^n\sum_{j=1}^mc^iA_i^je_j\), with the coefficient of \(e_j\) being the \(j\)th element of the \(m\times 1\) column vector \(\mathbf{A}\mathbf{c}\) (\(\mathbf{A}\) is here the \(m\times n\) matrix with \(A_i^j\) in the \(j\)th row and \(i\)th column and \(\mathbf{c}\) the \(n\times 1\) column vector with elements \(c^i\)). If \(m{<}n\) then, from the discussion of underdetermined homogeneous systems in the Example above, there exists a \(\mathbf{c}\neq 0\) such that \(\mathbf{A}\mathbf{c}=0\). This contradicts the linear independence of the \(f_i\) and so we must indeed have \(m\geq n\). Applying this result to a pair of bases, each regarded in turn as the spanning set and as the linearly independent set, gives both \(m\geq n\) and \(n\geq m\), establishing the uniqueness of the dimension.\(\blacksquare\)

This allows us to define the dimension, \(n\), of a vector space \(V\) as the number of vectors in any basis of \(V\).

From now on we will, unless stated otherwise, employ the summation convention. That is, if in any term an index appears both as a subscript and a superscript then it is assumed to be summed over from 1 to \(n\) where \(n\) is the dimension of the space. Thus if \(\{e_i\}\) is a basis for \(V\) then any \(v\in V\) can be expressed uniquely as \(v=v^ie_i\). The numbers \(v^i\) are then called the components of \(v\) with respect to the basis \(\{e_i\}\).
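So, for example, in a 3-dimensional space \(v=v^ie_i\) stands for \(v=v^1e_1+v^2e_2+v^3e_3\), while an expression such as \(T_i^jv^i\) stands for \(\sum_{i=1}^3T_i^jv^i\), one number for each value of the free index \(j\).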

Example The vector space \(K^n\) of \(n\)-tuples has a basis, which could reasonably be called ‘standard’, given by the vectors \(e_i=(0,\dots,1,\dots,0)\) with the \(1\) in the \(i\)th place. So in this special basis the components of a vector \(x=(x^1,\dots,x^n)\) are precisely the \(x^i\). It is common to take the elements of \(K^n\) to be \(n\)-dimensional column vectors and to denote vectors using bold face, as in,
\begin{equation}
\mathbf{x}=\begin{pmatrix}
x^1\\
\vdots\\
x^n
\end{pmatrix},
\end{equation}
with the standard basis vectors, \(\{\mathbf{e}_i\}\).

 

Notes:

  1. Recall that a ring is an abelian (additive) group with a multiplication operation which is associative and distributes over addition. A field is a ring such that the multiplication also satisfies all the group properties (after throwing out the additive identity); i.e. it has multiplicative inverses, a multiplicative identity, and is commutative.

Why Linear Algebra — An Overview

Apparently 1, Raoul Bott said that “Eighty percent of mathematics is linear algebra.” Whether he actually said it or not, it’s probably approximately true. Making complicated systems mathematically tractable through some kind of linear approximation is ubiquitous across the mathematical sciences. Even if a system is modelled non-linearly, it is likely that linear approximation will be involved in understanding its qualitative features. Often, this boils down to something analogous to approximating a function at a point by its tangent. A more subtle linearisation occurs in the study of Lie groups through their representation theory. In this case we learn something about a non-linear object, the group, by making it act linearly through its representation. In any case, pick up any advanced undergraduate or graduate level text in mathematics or theoretical physics and linear algebra will almost certainly be a prerequisite. It is essential and foundational, so let’s get cracking!

In Vector Spaces we introduce finite dimensional vector spaces and the linear transformations between them. Linear transformations are specified by their action on bases and through this action, bases provide the link between maps and matrices. Changing the bases of the spaces related by a linear map changes the matrix representation of that map but we should hardly distinguish between such different representations of the same map. This leads to the notion of equivalence of matrices and linear transformations and to the observation that the only property of linear maps between vector spaces which doesn’t depend on the respective choice of bases is their rank. Staying at this high level of generality we also review the notions of sums and intersections of vector spaces, the passage between real and complex spaces, and their duals.

Remarkably, if we focus on linear maps of a vector space to itself, which we’ll call linear operators, then asking almost the same question, namely what linear operators can look like up to a choice of basis for the space being operated on, leads to a rich collection of new ideas. This is the subject of the section Linear Operators. Here we encounter, for example, the determinant and trace of a linear operator, both examples of invariants, properties immune to change of basis. In particular, whether or not the determinant is non-zero characterises the invertibility of linear operators and provides a convenient means to exploit the key idea that linear operators should be investigated by considering their behaviour on small subspaces called eigenspaces. The culmination of this analysis is the Jordan normal form, the simplest form to which a linear operator can be brought through change of basis.

In ‘nature’ vector spaces arise as geometric objects and thus come equipped with some extra structure, an inner product. In the case of Euclidean geometry this is just the familiar scalar or dot product of vectors; we’ll call such geometries orthogonal. Other variants are Hermitian and symplectic geometries. Such spaces are the subject of Inner Product Spaces. Vector space isomorphisms respecting the geometric structure are called isometries. We classify inner product spaces up to isometry and, in the case of orthogonal and Hermitian geometries, make the connection with normed vector spaces, consider the construction of useful orthogonal bases and establish important properties of linear operators on such spaces.

This is then the appropriate point to make contact with the Dirac notation so well-suited to the linear algebra of quantum mechanics and establish in this context the important fact that commuting normal operators can be simultaneously diagonalised.

Finally, we introduce and study tensor product spaces, the tensor algebra, symmetric and skew-symmetric tensors and the symmetric and exterior algebras.