Why Linear Algebra — An Overview

Apparently, Raoul Bott said that “Eighty percent of mathematics is linear algebra.” Whether he actually said it or not, it’s probably approximately true. Making complicated systems mathematically tractable through some kind of linear approximation is ubiquitous across the mathematical sciences. Even if a system is modelled non-linearly, it is likely that linear approximation will be involved in understanding its qualitative features. Often, this boils down to something analogous to approximating a function at a point by its tangent. A more subtle linearisation occurs in the study of Lie groups through their representation theory. In this case we learn something about a non-linear object, the group, by making it act linearly through its representations. In any case, pick up any advanced undergraduate or graduate-level text in mathematics or theoretical physics and linear algebra will almost certainly be a prerequisite. It is essential and foundational, so let’s get cracking!
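To recall the simplest instance of linearisation: for a differentiable function $f$, the behaviour near a point $a$ is captured by the tangent-line approximation

```latex
f(x) \approx f(a) + f'(a)\,(x - a)
```

The right-hand side is an affine map in $x$, and studying it in place of $f$ is the prototype for the linear approximations that recur throughout the subject.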

In Vector Spaces we introduce finite-dimensional vector spaces and the linear transformations between them. Linear transformations are specified by their action on bases, and through this action bases provide the link between maps and matrices. Changing the bases of the spaces related by a linear map changes the matrix representation of that map, but we should hardly distinguish between such different representations of the same map. This leads to the notion of equivalence of matrices and linear transformations, and to the observation that the only property of a linear map between vector spaces that doesn’t depend on the choice of bases is its rank. Staying at this high level of generality, we also review the notions of sums and intersections of vector spaces, the passage between real and complex spaces, and their duals.
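Concretely, matrices $A$ and $B$ represent the same linear map with respect to different choices of bases exactly when $B = Q^{-1} A P$ for some invertible matrices $P$ and $Q$, and by choosing the bases well every $m \times n$ matrix of rank $r$ is equivalent to a block form determined by $r$ alone:

```latex
B = Q^{-1} A P,
\qquad
A \;\sim\;
\begin{pmatrix}
  I_r & 0 \\
  0   & 0
\end{pmatrix}
```

This normal form is one way of seeing that rank is the only invariant of equivalence.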

Remarkably, if we focus on linear maps of a vector space to itself, which we’ll call linear operators, then asking almost the same question, namely what a linear operator can look like up to a choice of basis for the space being operated on, sees a rich collection of new ideas emerge. This is the subject of the section Linear Operators. Here we encounter, for example, the determinant and trace of a linear operator, both examples of invariants: properties immune to change of basis. In particular, whether or not the determinant is non-zero characterises the invertibility of a linear operator, and provides a convenient means to exploit the key idea that linear operators should be investigated by considering their behaviour on small subspaces called eigenspaces. The culmination of this analysis is the Jordan normal form, the simplest form to which a linear operator can be brought through change of basis.
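For operators the relevant change of basis is similarity, $B = P^{-1} A P$, and the multiplicative and cyclic properties of determinant and trace show at once why both are invariants; a single Jordan block, shown here in size three, illustrates the building blocks of the normal form:

```latex
\det(P^{-1} A P) = \det A,
\qquad
\operatorname{tr}(P^{-1} A P) = \operatorname{tr} A,
\qquad
J_3(\lambda) =
\begin{pmatrix}
  \lambda & 1 & 0 \\
  0 & \lambda & 1 \\
  0 & 0 & \lambda
\end{pmatrix}
```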

In ‘nature’, vector spaces arise as geometric objects and thus come equipped with some extra structure, an inner product. In the case of Euclidean geometry this is just the familiar scalar or dot product of vectors; we’ll call such geometries orthogonal. Other variants are Hermitian and symplectic geometries. Such spaces are the subject of Inner Product Spaces. Vector space isomorphisms respecting the geometric structure are called isometries. We classify inner product spaces up to isometry and, in the case of orthogonal and Hermitian geometries, make the connection with normed vector spaces, consider the construction of useful orthogonal bases, and establish important properties of linear operators on such spaces.
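For instance, the standard Hermitian inner product on $\mathbb{C}^n$, here with the convention that it is conjugate-linear in the first argument (conventions vary between texts), induces a norm:

```latex
\langle u, v \rangle = \sum_{i=1}^{n} \overline{u_i}\, v_i,
\qquad
\lVert v \rVert = \sqrt{\langle v, v \rangle}
```

Restricting to real entries recovers the familiar dot product and Euclidean length, which is the orthogonal case.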

This is then the appropriate point to make contact with the Dirac notation so well suited to the linear algebra of quantum mechanics, and to establish in this context the important fact that commuting normal operators can be simultaneously diagonalised.
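In Dirac notation the simultaneous diagonalisation statement reads: if $A$ and $B$ are normal operators on a finite-dimensional Hermitian space and $[A, B] = AB - BA = 0$, then there is an orthonormal basis of kets $\{\lvert e_i \rangle\}$ consisting of common eigenvectors,

```latex
A \lvert e_i \rangle = a_i \lvert e_i \rangle,
\qquad
B \lvert e_i \rangle = b_i \lvert e_i \rangle
```

so both operators are diagonal with respect to the same basis.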

Finally, we introduce and study tensor product spaces, the tensor algebra, symmetric and skew-symmetric tensors and the symmetric and exterior algebras.
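Two identities give the flavour of what is to come: the tensor product is bilinear in its factors and multiplies dimensions, while the wedge product of the exterior algebra changes sign when its factors are swapped.

```latex
(v_1 + v_2) \otimes w = v_1 \otimes w + v_2 \otimes w,
\qquad
\dim(V \otimes W) = \dim V \cdot \dim W,
\qquad
v \wedge w = -\, w \wedge v
```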