Component Representation of Tensors

Let \(\{e_i\}\) be a basis for \(V\) with \(\{e^i\}\) the dual basis of \(V^*\). Then any tensor, \(T\), of type \((r,s)\) can be expressed as the linear combination,
\begin{equation}
T=\sum_{\substack{i_1,\dots,i_r\\j_1,\dots,j_s}}T^{i_1\dots i_r}_{j_1\dots j_s}e_{i_1}\otimes\dots\otimes e_{i_r}\otimes e^{j_1}\otimes\dots\otimes e^{j_s},
\end{equation}
or, employing the summation convention,
\begin{equation}
T=T^{i_1\dots i_r}_{j_1\dots j_s}e_{i_1}\otimes\dots\otimes e_{i_r}\otimes e^{j_1}\otimes\dots\otimes e^{j_s},
\end{equation}
with the \(T^{i_1\dots i_r}_{j_1\dots j_s}\) the components of \(T\) with respect to the chosen basis of \(V\). In the physics literature it is common for the collection of components, \(T^{i_1\dots i_r}_{j_1\dots j_s}\), to be referred to simply as “the” tensor. If we were to choose another basis for \(V\), say \(\{e'_i\}\), related to the first according to \(e'_i=A_i^je_j\), then the new dual basis, \(\{e'^i\}\), is related to the old one by \(e'^i=(A^{-1})^i_je^j\) (since \(e'^i(e'_j)=(A^{-1})^i_ke^k(A_j^le_l)=(A^{-1})^i_kA_j^l\delta_l^k=\delta_j^i\)). With respect to this new pair of dual bases, the tensor \(T\) is given by
\begin{equation}
T=T^{i_1\dots i_r}_{j_1\dots j_s}(A^{-1})^{k_1}_{i_1}\cdots(A^{-1})^{k_r}_{i_r}A_{l_1}^{j_1}\cdots A_{l_s}^{j_s}\,e'_{k_1}\otimes\dots\otimes e'_{k_r}\otimes e'^{l_1}\otimes\dots\otimes e'^{l_s},
\end{equation}
so that the components of \(T\) with respect to the new bases, \({T'}^{k_1\dots k_r}_{l_1\dots l_s}\) say, are given by
\begin{equation}
{T'}^{k_1\dots k_r}_{l_1\dots l_s}=T^{i_1\dots i_r}_{j_1\dots j_s}(A^{-1})^{k_1}_{i_1}\cdots(A^{-1})^{k_r}_{i_r}A_{l_1}^{j_1}\cdots A_{l_s}^{j_s}.
\end{equation}
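
To make this transformation rule concrete, here is a minimal numerical sketch (in Python with NumPy, all names our own) that transforms the components of a \((1,1)\) tensor and checks that the scalar \(\alpha_iT^i_jv^j\) is basis-independent. We store arrays with the upper index as the first axis, so that \(A_i^j\) is held as \verb|A[j, i]| and \((A^{-1})^i_j\) as \verb|Ainv[i, j]|.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Convention: first array axis = upper index, second = lower index,
# so A[j, i] stores A_i^j and Ainv[i, j] stores (A^{-1})^i_j.
A = rng.normal(size=(n, n))      # change of basis e'_i = A_i^j e_j
Ainv = np.linalg.inv(A)

T = rng.normal(size=(n, n))      # components T^i_j of a (1,1) tensor
v = rng.normal(size=n)           # contravariant components v^i
alpha = rng.normal(size=n)       # covariant components alpha_i

# v'^k = (A^{-1})^k_i v^i,  alpha'_k = A_k^i alpha_i,
# T'^k_l = (A^{-1})^k_i A_l^j T^i_j
v_new = Ainv @ v
alpha_new = A.T @ alpha
T_new = np.einsum('ki,jl,ij->kl', Ainv, A, T)

# The scalar alpha_i T^i_j v^j is the same in both bases.
assert np.isclose(alpha @ T @ v, alpha_new @ T_new @ v_new)
\end{verbatim}
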
When treating tensors “as” their components, the question naturally arises of how to distinguish between the components of a single tensor with respect to different bases. The usual approach, sometimes called kernel-index notation, maintains a “kernel” letter indicating the tensor, with primes on the indices indicating that the components are with respect to another basis. For example, in the case of a vector \(v\), \(v^i\) and \(v^{i'}\) denote the components of the same vector with respect to two different bases. \(v^{i'}\) and \(v^{i}\) are thus related according to \(v^{i'}=(A^{-1})^{i'}_iv^i\).

A vector is sometimes defined as an object whose components transform in this way, that is, contravariantly (with the inverse of the matrix relating the basis vectors) [1]. Likewise, a covariant vector, \(v_i\), in other words, the components with respect to the dual basis \(\{e^i\}\) of an element of \(V^*\), transforms according to \(v_{i'}=A_{i'}^{i}v_i\). More generally, tensors of type \((r,s)\) are then defined to be objects whose components, \(T^{i_1\dots i_r}_{j_1\dots j_s}\), carrying \(r\) upper and \(s\) lower indices, transform as you'd expect based on the `upstairs' or `downstairs' position of each index, as
\begin{equation}
T^{i'_1\dots i'_r}_{j'_1\dots j'_s}=T^{i_1\dots i_r}_{j_1\dots j_s}(A^{-1})^{i'_1}_{i_1}\cdots(A^{-1})^{i'_r}_{i_r}A_{j'_1}^{j_1}\cdots A_{j'_s}^{j_s}.
\end{equation}
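
The same rule can be implemented once and for all for arbitrary \((r,s)\). The following sketch (conventions as in the previous snippet: the first \(r\) array axes are the upper indices, and \verb|A[j, i]| holds \(A_i^j\)) contracts each upper index with \(A^{-1}\) and each lower index with \(A\).

\begin{verbatim}
import numpy as np

def transform(T, A, r, s):
    """Transform (r, s) tensor components under e'_i = A_i^j e_j.

    Axes 0..r-1 of T are upper indices, axes r..r+s-1 lower indices;
    A[j, i] holds A_i^j (first axis = upper index).
    """
    Ainv = np.linalg.inv(A)
    for axis in range(r):
        # upper index: contract with (A^{-1})^k_i, then restore axis order
        T = np.moveaxis(np.tensordot(Ainv, T, axes=(1, axis)), 0, axis)
    for axis in range(r, r + s):
        # lower index: contract with A_l^j, i.e. sum_j A[j, l] T[..., j, ...]
        T = np.moveaxis(np.tensordot(A, T, axes=(0, axis)), 0, axis)
    return T
\end{verbatim}

For \(r=s=1\) this reproduces the \((1,1)\) computation above, \verb|Ainv @ T @ A|.
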

Recall the notion of contraction. If we have a tensor of type \((r,s)\), with components \(T^{i_1\dots i_r}_{j_1\dots j_s}\), then contraction corresponds to forming a new \((r-1,s-1)\) tensor, say \(S^{i_1\dots i_{r-1}}_{j_1\dots j_{s-1}}\), as
\begin{equation}
S^{i_1\dots i_{r-1}}_{j_1\dots j_{s-1}}=T^{i_1\dots i_{a-1}\,k\,i_a\dots i_{r-1}}_{j_1\dots j_{b-1}\,k\,j_b\dots j_{s-1}}.
\end{equation}
In this case we have contracted over the \(a\)-th contravariant and \(b\)-th covariant index slots, the repeated index \(k\) being summed over.
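
Numerically, contraction is just a trace over one upper and one lower axis. A small illustration (again Python/NumPy, names our own), contracting the second upper with the first lower index of a \((2,2)\) tensor:

\begin{verbatim}
import numpy as np

T = np.arange(81.0).reshape(3, 3, 3, 3)  # T^{ij}_{kl}, upper axes first

# S^i_l = T^{ik}_{kl}: sum over the repeated index k
S = np.einsum('ikkl->il', T)

# Equivalently, a trace over the two contracted axes:
assert np.allclose(S, np.trace(T, axis1=1, axis2=2))
\end{verbatim}
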

If the underlying vector space is equipped with a symmetric, non-degenerate inner product then this inner product can be regarded as a \((0,2)\) tensor. This is called the metric tensor, conventionally denoted \(g\). With respect to a given basis it has components, \(g_{ij}\), which are of course the elements of what we previously called the Gram matrix. The inner product provides us with a natural isomorphism \(V\to V^*\) such that \(v\mapsto \alpha_v\) with \(\alpha_v(w)=(v,w)\) for any \(w\in V\); that is, it uniquely associates a covariant vector with each contravariant vector and vice versa. In terms of a basis \(e_i\) of \(V\) with dual basis \(e^i\) of \(V^*\), we have \(e_i\mapsto\alpha_{e_i}\), which we can write as \(\alpha_{e_i}=\alpha_{ij}e^j\) with the \(\alpha_{ij}\) determined by \(\alpha_{ij}=\alpha_{ik}e^k(e_j)=\alpha_{e_i}(e_j)=(e_i,e_j)=g_{ij}\). So an arbitrary vector \(v^ie_i\) is mapped to \(v^ig_{ij}e^j\), or in other words, by applying the metric tensor to the contravariant vector \(v^i\) we obtain the covariant vector \(v_i\) given by
\begin{equation}
v_i=g_{ij}v^j.
\end{equation}
In the other direction, we have the inverse map, \(V^*\to V\), which we'll write as \(e^i\mapsto g^{ij}e_j\), with the \(g^{ij}\) determined by requiring \(v^ig_{ij}e^j\mapsto v^ig_{ij}g^{jk}e_k=v^ie_i\). That is,
\begin{equation}
g_{ij}g^{jk}=\delta_i^k,
\end{equation}
so \(g^{ij}\) is the matrix inverse of \(g_{ij}\), and given a covariant vector \(v_i\) we obtain a contravariant vector \(v^i\) as
\begin{equation}
v^i=g^{ij}v_j.
\end{equation}
What we have here, then, is a way of raising and lowering indices, which of course generalises to arbitrary tensors: any index of any tensor can be raised or lowered by contracting with \(g^{ij}\) or \(g_{ij}\), for example \(T_i{}^j=g_{ik}T^{kj}\).
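
As a brief sketch of raising and lowering in components (with an arbitrary symmetric positive-definite matrix standing in for \(g_{ij}\); all names our own):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 3

# An illustrative symmetric, non-degenerate metric and its inverse.
M = rng.normal(size=(n, n))
g = M @ M.T + np.eye(n)      # symmetric positive definite, so invertible
g_inv = np.linalg.inv(g)     # g_{ij} g^{jk} = delta_i^k

v_up = rng.normal(size=n)    # contravariant components v^i
v_down = g @ v_up            # v_i = g_{ij} v^j
assert np.allclose(g_inv @ v_down, v_up)  # v^i = g^{ij} v_j round-trips

# Index by index on higher-rank tensors, e.g. lowering the first
# index of a (2,0) tensor: T_i^j = g_{ik} T^{kj}.
T = rng.normal(size=(n, n))
T_low = np.einsum('ik,kj->ij', g, T)
\end{verbatim}
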

Let us note here that in physical applications vectors and tensors very often arise as the values of vector or tensor fields at a particular point in some space. Thus we might have, for example, a vector \(V(x)\) or tensor \(T(x)\) at some point \(x\). The components of \(V(x)\) or \(T(x)\) obviously depend on a choice of basis vectors. This in turn corresponds to a choice of coordinate system and, as explained in the appendix on vector calculus, for non-Cartesian coordinate systems the corresponding basis vectors depend on the point in space, \(x\), so that the change of basis matrices relating components of \(V(x)\) or \(T(x)\) in different coordinate systems will also depend on \(x\).
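
As a concrete illustration (a minimal sketch using plane polar coordinates), the matrix relating the polar coordinate basis vectors to the Cartesian ones is the Jacobian \(\partial x^i/\partial x^{i'}\), which manifestly depends on the point:

\begin{verbatim}
import numpy as np

def polar_to_cartesian_jacobian(r, theta):
    # Columns are the polar basis vectors (e_r, e_theta) expressed
    # in the Cartesian basis (e_x, e_y): x = r cos(theta), y = r sin(theta).
    return np.array([
        [np.cos(theta), -r * np.sin(theta)],   # dx/dr, dx/dtheta
        [np.sin(theta),  r * np.cos(theta)],   # dy/dr, dy/dtheta
    ])

A1 = polar_to_cartesian_jacobian(1.0, 0.0)
A2 = polar_to_cartesian_jacobian(2.0, np.pi / 4)
assert not np.allclose(A1, A2)  # different point, different matrix
\end{verbatim}
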

Notes:

  1. In physics contexts there is often a restriction placed on the kinds of basis change considered. For example, it is typical to see vectors defined as objects whose components transform contravariantly with respect to spatial rotations.