Self-Adjoint, Unitary and Orthogonal Operators

We continue to focus on real orthogonal and complex Hermitian spaces with positive definite inner products, now considering maps between them. Suppose \(V\) and \(W\) are such spaces, of dimensions \(n\) and \(m\) respectively, and \(T\in\mathcal{L}(V,W)\). Then the inner product allows us to uniquely associate a linear map \(T^\dagger\in\mathcal{L}(W,V)\) with \(T\), by defining it to be such that \((v,T^\dagger w)=(Tv,w)\). This is the adjoint of \(T\) (in the case of Hermitian spaces, sometimes the Hermitian adjoint). To see that it is indeed unique, notice that if there were distinct \(v',v''\in V\) such that \((Tv,w)=(v,v')\) and \((Tv,w)=(v,v'')\) then \((v,v'-v'')=0\) for all \(v\in V\) so \(v'=v''\). Its existence follows since if \(\{e_i\}\) is an orthonormal basis of \(V\), then any \(v\in V\) can be expressed as \(v=\sum v^ie_i\) with \(v^i=(e_i,v)\). Then \((Tv,w)=\sum(v,e_i)(Te_i,w)=(v,\sum(Te_i,w)e_i)\), so \(T^\dagger w=\sum(Te_i,w)e_i\). Indeed, if the matrix representation of \(T\) with respect to orthonormal bases is \(\mathbf{T}\), then the matrix representation of \(T^\dagger\) is \(\mathbf{T}^\dagger\).
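The defining relation \((v,T^\dagger w)=(Tv,w)\) and the matrix statement can be checked numerically. The following sketch (an arbitrary, hypothetical \(2\times2\) complex matrix, pure Python) takes the Hermitian inner product conjugate-linear in the first slot, as in the proofs below, and verifies that the conjugate transpose does the job of the adjoint.

```python
# Numerical sanity check (illustrative example, not part of the formal text):
# the conjugate transpose of T satisfies (Tv, w) = (v, T† w).

def inner(u, v):
    # (u, v) = sum_i u_i^* v_i -- conjugate-linear in the first argument
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dagger(M):
    # conjugate transpose
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

T = [[1 + 2j, 3j], [0.5, -1 + 1j]]   # an arbitrary complex matrix
v = [1 + 1j, 2 - 1j]
w = [-1j, 4 + 0j]

lhs = inner(apply(T, v), w)          # (Tv, w)
rhs = inner(v, apply(dagger(T), w))  # (v, T† w)
assert abs(lhs - rhs) < 1e-12
```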

Of particular interest are operators on inner product spaces which coincide with their adjoint.

Definition A linear operator \(T\in\mathcal{L}(V)\) is self-adjoint or Hermitian if \(T=T^\dagger\).

Remark In terms of an orthonormal basis for \(V\), this means that the matrix representation of \(T\) is such that \(\mathbf{T}=\mathbf{T}^\dagger\). That is, if \(K=\RR\) it is symmetric whilst if \(K=\CC\) it is Hermitian.

Remark We’ll typically use the word Hermitian in the specific context of a complex Hermitian space and self-adjoint when the underlying vector space could be either real orthogonal or complex Hermitian.

Remark Recall Example. For any \(T\in\mathcal{L}(V)\), \(\ker T^\dagger=(\img T)^\perp\). Indeed if \(u\in\ker T^\dagger\) then for any element \(w\in\img T\), \((u,w)=(u,Tv)\), for some \(v\in V\), and \((u,Tv)=(T^\dagger u,v)=0\), so \(u\in(\img T)^\perp\). Conversely, if \(u\in(\img T)^\perp\), then for any \(v\in V\), \((T^\dagger u,v)=(u,Tv)=0\), so \(u\in\ker T^\dagger\). In particular, if \(T\) is self-adjoint, then \(\ker T=(\img T)^\perp\), and we have the orthogonal direct sum decomposition, \(V=\ker T\oplus\img T\).
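The decomposition \(V=\ker T\oplus\img T\) for a self-adjoint \(T\) can be seen concretely. A minimal sketch, with the hypothetical singular symmetric matrix \([[1,1],[1,1]]\): its kernel is spanned by \((1,-1)\), its image by \((1,1)\), and these are orthogonal.

```python
# Illustrative check that ker T and img T are orthogonal complements
# for a self-adjoint (here: real symmetric) operator.

T = [[1.0, 1.0], [1.0, 1.0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

k = [1.0, -1.0]            # kernel vector: T k = 0
assert apply(T, k) == [0.0, 0.0]

im = apply(T, [1.0, 0.0])  # an image vector: T e_1 = (1, 1)
assert sum(a * b for a, b in zip(k, im)) == 0.0   # ker T is orthogonal to img T
```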

The definition of an Hermitian operator looks somewhat like an operator/matrix version of the condition on complex numbers by which we restrict to the reals. Though seemingly a trivial observation, there turns out to be a rather remarkable analogy between linear operators and complex numbers and in this analogy, real numbers do indeed correspond to self-adjoint operators. In due course we’ll see analogs of modulus-1 and positive numbers as well as of the polar decomposition of complex numbers. First though we obtain some results which make the real number/self-adjoint operator correspondence particularly compelling.

It is not difficult to see that for any linear operator \(T\) on a positive definite inner product space, \(T=0\) if and only if \((Tv,w)=0\) for all \(v,w\in V\). In fact we can do somewhat better than this.

Theorem A linear operator \(T\) on a complex Hermitian space \(V\) is zero, \(T=0\), if and only if \((Tv,v)=0\) for all \(v\in V\).

Proof Observe that generally, we have that,
\begin{equation}
(T(av),bw)+(T(bw),av)=(T(av+bw),av+bw)-\abs{a}^2(Tv,v)-\abs{b}^2(Tw,w)
\end{equation}
for all \(v,w\in V\) and \(a,b\in\CC\). In particular, if \((Tv,v)=0\) for all \(v\in V\), then with \(a=b=1\),
\begin{equation}
(Tv,w)+(Tw,v)=0,
\end{equation}
and choosing \(a=1\) and \(b=i\) then dividing by \(i\),
\begin{equation}
(Tv,w)-(Tw,v)=0,
\end{equation}
so that \((Tv,w)=0\) for all \(v,w\in V\) and we conclude that \(T=0\).\(\blacksquare\)

Note that we made essential use of the fact that \(V\) is a complex vector space here. The result is not generally valid for real vector spaces.
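The standard counterexample over \(\RR\) is a quarter-turn rotation: it is visibly non-zero, yet \((Tv,v)=0\) for every \(v\) since \(Tv\) is always perpendicular to \(v\). A quick numerical sketch:

```python
# Why the complex hypothesis matters: over R, rotation by pi/2 is a
# non-zero operator with (Tv, v) = 0 for every v (illustrative example).
import random

T = [[0.0, -1.0], [1.0, 0.0]]  # rotation by pi/2 in R^2

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

for _ in range(100):
    v = [random.uniform(-5, 5), random.uniform(-5, 5)]
    assert abs(dot(apply(T, v), v)) < 1e-12   # (Tv, v) = 0 always
# yet T is clearly not the zero operator
```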

Theorem A linear operator \(T\in\mathcal{L}(V)\) on a complex Hermitian space \(V\) is Hermitian if and only if \((Tv,v)\) is real for all \(v\in V\).

Proof If \(T\) is Hermitian, then \((Tv,v)=(v,Tv)=(Tv,v)^*\) so \((Tv,v)\) is real. Conversely, if \((Tv,v)\) is real then \((Tv,v)=(Tv,v)^*=(v,T^\dagger v)^*=(T^\dagger v,v)\) so that \(((T-T^\dagger)v,v)=0\). But we’ve already seen that in this case we must have \(T-T^\dagger=0\) so \(T=T^\dagger\).\(\blacksquare\)

The following result provides an even stronger expression of the ‘realness’ of self-adjoint operators.

Theorem \(T\in\mathcal{L}(V)\) is a self-adjoint operator on a real orthogonal or complex Hermitian inner product space with positive definite inner product if and only if

  1. All eigenvalues of \(T\) are real.
  2. Eigenvectors with distinct eigenvalues are orthogonal.
  3. There exists an orthonormal basis of eigenvectors of \(T\). In particular, \(T\) is diagonalisable.

Proof The if is straightforward so we concentrate on the only if.

  1. Assuming \(K=\CC\), if \(Tv=\lambda v\) for some \(\lambda\in\CC\) and a non-zero vector \(v\in V\), then, \(\lambda^*(v,v)=(\lambda v,v)=(Tv,v)=(v,Tv)=\lambda(v,v)\), so \(\lambda\) must be real. Now suppose \(K=\RR\), but let us pass to the complexification, \(V_\CC\), defined in Realification and Complexification. To avoid confusion with the inner product, let us abuse notation and write an element of \(V_\CC\) as \(v+iv'\) rather than \((v,v')\), where \(v,v'\in V\). Then, given the symmetric inner product on \(V\), \((\cdot,\cdot)\), we can define an Hermitian inner product on \(V_\CC\) according to \((u+iu',v+iv')=(u,v)+(u',v')-i(u',v)+i(u,v')\), where \(u,u',v,v'\in V\). Clearly, \(T_\CC\), which acts on an element, \(v+iv'\), of \(V_\CC\) as \(T_\CC(v+iv')=Tv+iTv'\), is self-adjoint with respect to this inner product and since, as we already know, the matrices of \(T\) and \(T_\CC\) with respect to a basis of \(V\) (which is also a basis of \(V_\CC\)) are identical, it follows that the eigenvalues of \(T\) must be real.
  2. Suppose \(Tv_1=\lambda_1 v_1\) and \(Tv_2=\lambda_2 v_2\) with \(\lambda_1\neq\lambda_2\). Then \(\lambda_1^*(v_1,v_2)=(Tv_1,v_2)=(v_1,Tv_2)=(v_1,v_2)\lambda_2\), so that if, for contradiction, \((v_1,v_2)\neq 0\) then \(\lambda_1^*=\lambda_1=\lambda_2\) contradicting the initial assumption.
  3. In the case \(\dim V=1\) the result is trivial so we proceed by induction on the dimension of \(V\), assuming the result holds in dimension \(n-1\) where \(n=\dim V>1\). We know there is a real eigenvalue \(\lambda\) and eigenvector \(v_1\in V\) such that \(Tv_1=\lambda v_1\) and since by assumption the inner product of \(V\) is non-degenerate we have the decomposition \(V=\Span(v_1)\oplus\Span(v_1)^\perp\). Now since for any \(w\in\Span(v_1)^\perp\), we have \((Tw,v_1)=(w,Tv_1)=(w,v_1)\lambda=0\), \(\Span(v_1)^\perp\) is \(T\)-invariant. Thus, by the induction hypothesis, we can assume the result for \(\Span(v_1)^\perp\) and take \(\hat{v}_2,\dots,\hat{v}_n\) as its orthonormal basis. Then defining \(\hat{v}_1=v_1/\norm{v_1}\), \(\hat{v}_1,\dots,\hat{v}_n\) is an orthonormal basis of eigenvectors of \(T\).
\(\blacksquare\)
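The three conditions can be illustrated on a small hypothetical example: the symmetric matrix \(A\) with rows \((2,1)\) and \((1,2)\) has real eigenvalues \(3\) and \(1\), with orthonormal eigenvectors \((1,1)/\sqrt{2}\) and \((1,-1)/\sqrt{2}\). A sketch verifying this numerically:

```python
# Conditions 1-3 for a concrete symmetric matrix (illustrative example):
# A = [[2, 1], [1, 2]] has real eigenvalues 3 and 1 with an orthonormal
# basis of eigenvectors.
import math

A = [[2.0, 1.0], [1.0, 2.0]]
s = 1 / math.sqrt(2)
eig = [(3.0, [s, s]), (1.0, [s, -s])]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

for lam, v in eig:
    Av = apply(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))  # Av = lam v
assert abs(dot(eig[0][1], eig[1][1])) < 1e-12      # distinct eigenvalues: orthogonal
assert abs(dot(eig[0][1], eig[0][1]) - 1) < 1e-12  # normalised
```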

Since the eigenspaces corresponding to the \(r\) distinct eigenvalues of a self-adjoint operator \(T\) decompose \(V\) into an orthogonal direct sum, \(V=\oplus_iV_i\), there correspond orthogonal projectors, \(P_i\), such that, \(\id_V=\sum_iP_i\). Thus for any \(v\in V\) we have \(Tv=T(\sum_iP_iv)=\sum_i\lambda_iP_iv\), that is,
\begin{equation}
T=\sum_{i=1}^r\lambda_iP_i,
\end{equation}
the spectral decomposition of \(T\), of which we’ll see a great deal more later. This decomposition is unique. Suppose we have \(r\) orthogonal projectors, \(Q_i\), complete in the sense that \(\id_V=\sum_iQ_i\), together with \(r\) distinct real numbers, \(\mu_i\), such that \(T=\sum_i\mu_iQ_i\). If \(v\in\img Q_i\), that is, \(v=Q_iv\), we must have \(Tv=\mu_iv\). That is, \(\mu_i\) is an eigenvalue of \(T\) and any \(v\in\img Q_i\) belongs to the eigenspace of \(\mu_i\). Conversely, if \(Tv=\lambda v\) for some \(v\in V\), then since \(v=\sum_iQ_iv\), writing \(v_i=Q_iv\) we have \(\sum_i(\lambda-\mu_i)v_i=0\). Those \(v_i\) which are non-zero are orthogonal and since \(v\neq0\) at least one must be non-zero, so there must be some \(i\) such that \(\lambda=\mu_i\). Let us suppose we have relabelled the \(\mu_i\) such that \(\lambda_i=\mu_i\). Clearly, for any polynomial \(p\), \(p(T)=\sum_ip(\lambda_i)P_i=\sum_ip(\lambda_i)Q_i\). In particular, if we define a polynomial \(p_j(x)=\prod_{i\neq j}(x-\lambda_i)/(\lambda_j-\lambda_i)\) then \(p_j(\lambda_j)=1\) but \(p_j(\lambda_i)=0\) for all \(i\neq j\). These polynomials allow us then to establish \(P_i=Q_i\) for all \(i=1,\dots,r\).
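The interpolation polynomials \(p_j\) also give a practical way to compute the projectors. A sketch with the hypothetical matrix \(A\) with rows \((2,1)\) and \((1,2)\) and distinct eigenvalues \(3\) and \(1\): evaluating \(p_1(x)=(x-1)/(3-1)\) and \(p_2(x)=(x-3)/(1-3)\) at \(A\) recovers the spectral projectors.

```python
# Recovering the spectral projectors of A = [[2,1],[1,2]] via the
# interpolation polynomials p_j(A) (illustrative example).

A = [[2.0, 1.0], [1.0, 2.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

def add(M, N, a=1.0):
    return [[M[i][j] + a * N[i][j] for j in range(2)] for i in range(2)]

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def scale(a, M):
    return [[a * M[i][j] for j in range(2)] for i in range(2)]

P1 = scale(1 / 2, add(A, I, -1.0))   # p_1(A) = (A - 1*I)/(3 - 1)
P2 = scale(-1 / 2, add(A, I, -3.0))  # p_2(A) = (A - 3*I)/(1 - 3)

def close(M, N):
    return all(abs(M[i][j] - N[i][j]) < 1e-12 for i in range(2) for j in range(2))

assert close(add(P1, P2), I)                              # completeness: P1 + P2 = id
assert close(mul(P1, P1), P1) and close(mul(P2, P2), P2)  # idempotent
assert close(add(scale(3.0, P1), scale(1.0, P2)), A)      # A = 3 P1 + 1 P2
```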

Generally, a projector \(P\in\mathcal{L}(V)\) on a real orthogonal or complex Hermitian inner product space with positive definite inner product is an orthogonal projector if and only if \(P\) is self-adjoint. Indeed, the eigenvalues of a projection operator are either 0 or 1 and the corresponding eigenspaces are \(\ker P\) and \(\img P\) respectively. Thus, if \(P\) is self-adjoint, these eigenspaces are orthogonal. Conversely, if \(P\) is an orthogonal projection, we have, \(V=\ker P\oplus\img P\), with \(\ker P\) and \(\img P\) orthogonal. So choosing an orthonormal basis for \(V\) as the sum of orthonormal bases for \(\ker P\) and \(\img P\) we have a basis which is precisely an orthonormal basis of eigenvectors of \(P\), so \(P\) is self-adjoint.

Earlier, we classified inner product spaces up to isometry. Focusing on real orthogonal and Hermitian spaces with non-degenerate inner products, let us now consider automorphisms, \(f:V\to V\), of these spaces which are also isometries, that is, such that, \((v,w)=(f(v),f(w))\), \(\forall v,w\in V\). Given our definition of the adjoint, this means that, \(f^\dagger f=\id_V\). If \(f\) is an isometry of a real orthogonal geometry it is called an orthogonal operator whilst an isometry of an Hermitian geometry is called a unitary operator.

Isometries of course form a group and in the case of a real orthogonal space whose inner product has signature, \((p,n-p,0)\), that group is called the orthogonal group of the inner product, \(O(V,p,n-p)\). Choosing an orthonormal basis for \(V\), \(\{e_i\}\), such that \((e_i,e_j)=\epsilon_i\delta_{ij}\) (no summation) with \(\epsilon_i=1\), \(1\leq i\leq p\) and \(\epsilon_i=-1\), \(p+1\leq i\leq n\), and defining the matrix, \(\mathbf{I}_{p,q}\), to be
\begin{equation*}
\mathbf{I}_{p,q}=
\begin{pmatrix}
\mathbf{I}_p & \mathbf{0}\\
\mathbf{0} & -\mathbf{I}_q
\end{pmatrix},
\end{equation*}
then it’s not difficult to see that we have a group isomorphism, \(O(V,p,n-p)\cong O(p,n-p)\), where \(O(p,n-p)\) is the matrix group,
\begin{equation*}
O(p,n-p)=\{\mathbf{O}\in\text{GL}_n(\RR)\mid \mathbf{O}^\mathsf{T}\mathbf{I}_{p,n-p}\mathbf{O}=\mathbf{I}_{p,n-p}\}.
\end{equation*}
In particular, when the inner product is positive definite, then the group of isometries is denoted simply, \(O(V)\), and we have the isomorphism, \(O(V)\cong O(n)\), where,
\begin{equation*}
O(n)=\{\mathbf{O}\in\text{GL}_n(\RR)\mid \mathbf{O}^\mathsf{T}\mathbf{O}=\mathbf{I}_n\}.
\end{equation*}

Similarly, in the case of Hermitian geometries the group of isometries is called the unitary group of the inner product. If the signature of the inner product is \((p,n-p,0)\) then it is denoted, \(U(V,p,n-p)\), and we have an isomorphism, \(U(V,p,n-p)\cong U(p,n-p)\), where \(U(p,n-p)\) is the matrix group defined by,
\begin{equation*}
U(p,n-p)=\{\mathbf{U}\in\text{GL}_n(\CC)\mid \mathbf{U}^\dagger\mathbf{I}_{p,n-p}\mathbf{U}=\mathbf{I}_{p,n-p}\}.
\end{equation*}
In particular, when the inner product is positive definite, a choice of an orthonormal basis provides an isomorphism \(U(V)\cong U(n)\) where,
\begin{equation*}
U(n)=\{\mathbf{U}\in\text{GL}_n(\CC)\mid \mathbf{U}^\dagger\mathbf{U}=\mathbf{I}_n\}.
\end{equation*}
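As a concrete illustration of membership in \(U(n)\), consider the hypothetical matrix \(U=\tfrac{1}{\sqrt{2}}\begin{pmatrix}1&i\\i&1\end{pmatrix}\). The sketch below checks \(\mathbf{U}^\dagger\mathbf{U}=\mathbf{I}_2\) and that \(U\) is an isometry of the standard Hermitian inner product.

```python
# A sample element of U(2) (illustrative): check U†U = I and that U
# preserves the Hermitian inner product.
import math

s = 1 / math.sqrt(2)
U = [[s, s * 1j], [s * 1j, s]]

def dagger(M):
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def inner(u, v):  # conjugate-linear in the first argument
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

UdU = mul(dagger(U), U)
assert all(abs(UdU[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))

v, w = [1 + 2j, -1j], [3 + 0j, 1 - 1j]
assert abs(inner(apply(U, v), apply(U, w)) - inner(v, w)) < 1e-12  # isometry
```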

In the spirit of the analogy, already discussed, between complex numbers and linear operators, unitary operators look like they should correspond to complex numbers of unit modulus. Indeed, as the following result, similar to Theorem, demonstrates, the spectra of such operators justify the analogy.

Theorem \(U\) is a unitary operator on an Hermitian inner product space over \(\CC\) with positive definite inner product if and only if,

  1. All eigenvalues \(\lambda\) of \(U\) are such that \(|\lambda|=1\).
  2. Eigenvectors with distinct eigenvalues are orthogonal.
  3. There exists an orthonormal basis of eigenvectors of \(U\). In particular \(U\) is diagonalisable.

Proof The if is straightforward so we concentrate on the only if.

  1. If \(Uv=\lambda v\) for some \(\lambda\in\CC\) and a non-zero vector \(v\in V\) then \((Uv,Uv)=\lambda^*\lambda(v,v)=(v,v)\) so \(|\lambda|=1\).
  2. Suppose \(Uv_1=\lambda_1 v_1\) and \(Uv_2=\lambda_2 v_2\) with \(\lambda_1\neq\lambda_2\). Then \(\lambda_1^*(v_1,v_2)=(Uv_1,v_2)=(v_1,U^{-1}v_2)=(v_1,v_2)\lambda_2^{-1}=(v_1,v_2)\lambda_2^*\). That is \((v_1,v_2)=0\).
  3. In the case \(\dim V=1\) the result is trivial so we proceed by induction on the dimension of \(V\), assuming the result holds in dimension \(n-1\) where \(n=\dim V>1\). We know there is some eigenvalue \(\lambda\) and eigenvector \(v_1\in V\) such that \(Uv_1=\lambda v_1\) and since by assumption the inner product of \(V\) is non-degenerate we have the decomposition \(V=\Span(v_1)\oplus\Span(v_1)^\perp\). Now since for every \(w\in\Span(v_1)^\perp\), we have \((Uw,v_1)=(w,U^{-1}v_1)=(w,v_1)\lambda^*=0\), \(\Span(v_1)^\perp\) is \(U\)-invariant. Thus, by the induction hypothesis, we can assume the result for \(\Span(v_1)^\perp\) and take \(\hat{v}_2,\dots,\hat{v}_n\) as its orthonormal basis. Then defining \(\hat{v}_1=v_1/\norm{v_1}\), \(\hat{v}_1,\dots,\hat{v}_n\) is an orthonormal basis of eigenvectors of \(U\).
\(\blacksquare\)

The corresponding result for orthogonal operators is a little different.

Theorem \(O\) is an orthogonal operator on a real orthogonal inner product space over \(\RR\) with positive definite inner product if and only if there exists an orthonormal basis in terms of which the matrix representation of \(O\) has the form,
\begin{equation}\label{orthog operator}
\begin{pmatrix}
\mathbf{R}(\theta_1)&\mathbf{0}& & & & & & & \\
\mathbf{0}&\ddots & & & & & & & \\
& &\mathbf{R}(\theta_r) && & & & & \\
& & &1 &\mathbf{0} & & & &\\
& & & \mathbf{0}&\ddots & & & & \\
& & & & &1& & & \\
& & & & & &-1& \mathbf{0}& \\
& & & & & &\mathbf{0}&\ddots & \\
& & & & & & & &-1
\end{pmatrix}.
\end{equation}

Proof As ever, the if is straightforward so we focus on the only if. Dimension 1 is trivial. Consider dimension \(2\). A choice of orthonormal basis tells us that any orthogonal operator, \(O\), has a matrix representation \(\mathbf{O}\), such that \(\mathbf{O}^\mathsf{T}\mathbf{O}=\mathbf{I}_2\). Taking the determinant, we see that \(\det\mathbf{O}=\pm1\). Writing \(\mathbf{O}\) in the form,
\begin{equation*}
\begin{pmatrix}
a&b\\
c&d
\end{pmatrix},
\end{equation*}
with \(a^2+c^2=1=b^2+d^2\), \(ab+cd=0\) and \(ad-bc=\pm1\). So in the case of determinant 1 we have \(b=-c\) and \(a=d\), and any such matrix can be written as
\begin{equation}
\begin{pmatrix}
\cos\theta&-\sin\theta\\
\sin\theta&\cos\theta
\end{pmatrix},
\end{equation}
for \(0\leq\theta<2\pi\), that is, a rotation through an angle \(\theta\). Notice that for \(\theta\neq0,\pi\) this has no eigenvalues in \(\RR\). In the determinant \(-1\) case we have \(a=-d\) and \(b=c\) and any such matrix can be written
\begin{equation}
\begin{pmatrix}
\cos\theta&\sin\theta\\
\sin\theta&-\cos\theta
\end{pmatrix},
\end{equation}
for \(0\leq\theta<2\pi\), that is, a reflection in the line with unit vector \(\cos\frac{\theta}{2}\mathbf{e}_1+\sin\frac{\theta}{2}\mathbf{e}_2\). In contrast to the rotation matrix, this matrix has eigenvalues \(\pm1\) and so can be diagonalised. We conclude that the result holds for dimensions 1 and 2 and proceed, as in the unitary case, by induction on the dimension of \(V\), assuming the result holds in all dimensions less than \(n=\dim V\) with \(n>2\).
If \(O\) has a real eigenvalue then by the same argument as in the unitary case the result follows. Otherwise, consider \(O_\CC\), the complexification of \(O\), whose eigenvalues come in complex conjugate pairs. We recall from the discussion of the real Jordan normal form in The Real Jordan Normal Form that each pair of complex eigenvalues corresponds to a \(2\)-dimensional \(O\)-invariant subspace of \(V\). Choose such a subspace, \(V_0\). Then in an orthonormal basis we know that the matrix representation of the restriction of \(O\) to \(V_0\) must have the form \(\mathbf{R}(\theta)\) where,
\begin{equation}
\mathbf{R}(\theta)=\begin{pmatrix}
\cos\theta&-\sin\theta\\
\sin\theta&\cos\theta
\end{pmatrix}.
\end{equation}
Similar reasoning to the unitary case makes it clear that \(V_0^\perp\) is also \(O\)-invariant and so the result follows.\(\blacksquare\)
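The dichotomy between the two determinant classes can be checked via characteristic polynomials: the reflection block has characteristic polynomial \(\lambda^2-1\), giving eigenvalues \(\pm1\), while a rotation through \(\theta\neq0,\pi\) has characteristic polynomial \(\lambda^2-2\cos\theta\,\lambda+1\) with negative discriminant, hence no real eigenvalues. A small numerical sketch (arbitrary sample angle):

```python
# Rotation vs reflection in dimension 2 (illustrative check via trace and
# determinant, which determine the characteristic polynomial lam^2 - tr*lam + det).
import math

theta = 1.2  # any angle strictly between 0 and pi

# reflection: trace 0, determinant -1  =>  char poly lam^2 - 1, eigenvalues +1, -1
refl_trace = math.cos(theta) + (-math.cos(theta))
refl_det = -math.cos(theta) ** 2 - math.sin(theta) ** 2
assert abs(refl_trace) < 1e-12 and abs(refl_det + 1) < 1e-12

# rotation: char poly lam^2 - 2 cos(theta) lam + 1; discriminant < 0
disc = (2 * math.cos(theta)) ** 2 - 4
assert disc < 0   # no real roots, hence no real eigenvalues
```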

So in summary, an operator \(O\) on a real orthogonal space \(V\) with a positive definite inner product is an isometry, that is, an orthogonal operator, if and only if its matrix representation with respect to some orthonormal basis of \(V\) has the form,~\eqref{orthog operator}. An operator \(U\) on a (complex) Hermitian space \(V\) with a positive definite inner product is an isometry, that is, a unitary operator, if and only if its matrix representation with respect to some orthonormal basis of \(V\) is diagonal with its diagonal elements all belonging to the unit circle in \(\CC\). We’ve also seen that operators \(T\) on real orthogonal or Hermitian spaces with positive definite inner products are self-adjoint if and only if their matrix representations with respect to some orthonormal basis are diagonal with all diagonal entries real.

It’s also worth noting that in an inner product space, \(V\), the orbit of a self-adjoint linear operator \(A\in\mathcal{L}(V)\), under the usual action of vector space automorphisms \(P\in\text{GL}(V)\), \(A\mapsto P^{-1}AP\), does not consist only of other self-adjoint operators. However, if we consider instead the action of isometries, that is, of elements \(U\in U(V)\) when \(V\) is over \(\CC\) or of elements \(O\in O(V)\) when working over \(\RR\), then the orbits consist exclusively of self-adjoint operators.

Proposition If \(\mathbf{A}\in\text{Mat}_n(\CC)\) is Hermitian, that is, \(\mathbf{A}^\dagger=\mathbf{A}\), then there exists a \(\mathbf{P}\in U(n)\) such that \(\mathbf{P}^{-1}\mathbf{A}\mathbf{P}\) is real and diagonal. Similarly, if \(\mathbf{A}\in\text{Mat}_n(\RR)\) is symmetric, that is, \(\mathbf{A}^\mathsf{T}=\mathbf{A}\), then there exists a \(\mathbf{P}\in O(n)\) such that \(\mathbf{P}^{-1}\mathbf{A}\mathbf{P}\) is real and diagonal.

Proof We use Theorem and treat both \(K=\CC\) and \(K=\RR\) simultaneously. So assume \(\mathbf{A}\in\text{Mat}_n(K)\), then \(L_\mathbf{A}\in\mathcal{L}(K^n)\) is self-adjoint with respect to the standard inner product on \(K^n\), and from Theorem there is an orthonormal basis of eigenvectors \(\mathbf{v}_1,\dots,\mathbf{v}_n\) for \(L_\mathbf{A}\). That is, there are some \(\lambda_1,\dots,\lambda_n\in\RR\) such that \(L_\mathbf{A}\mathbf{v}_i=\lambda_i\mathbf{v}_i\), or in terms of matrices,
\begin{equation}
\mathbf{A}(\mathbf{v}_1\dots\mathbf{v}_n)=(\lambda_1\mathbf{v}_1\dots\lambda_n\mathbf{v}_n).
\end{equation}
So defining \(\mathbf{P}=(\mathbf{v}_1\dots\mathbf{v}_n)\), the matrix whose columns are the \(\mathbf{v}_i\), orthonormality of the columns says precisely that \(\mathbf{P}^\dagger\mathbf{P}=\mathbf{I}_n\), that is, \(\mathbf{P}\in U(n)\) (respectively \(\mathbf{P}\in O(n)\) when \(K=\RR\)), while the displayed equation says \(\mathbf{A}\mathbf{P}=\mathbf{P}\,\text{diag}(\lambda_1,\dots,\lambda_n)\). Thus \(\mathbf{P}^{-1}\mathbf{A}\mathbf{P}=\text{diag}(\lambda_1,\dots,\lambda_n)\) is real and diagonal.\(\blacksquare\)
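The proposition in action, on the hypothetical real symmetric matrix \(A\) with rows \((2,1)\) and \((1,2)\): taking \(\mathbf{P}\) to have the orthonormal eigenvectors as columns, \(\mathbf{P}^\mathsf{T}\mathbf{A}\mathbf{P}\) comes out diagonal with the eigenvalues \(3\) and \(1\) on the diagonal.

```python
# Orthogonal diagonalisation of a real symmetric matrix (illustrative example).
import math

s = 1 / math.sqrt(2)
A = [[2.0, 1.0], [1.0, 2.0]]
P = [[s, s], [s, -s]]   # columns: orthonormal eigenvectors for eigenvalues 3 and 1

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

PtP = mul(transpose(P), P)
assert all(abs(PtP[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))

D = mul(transpose(P), mul(A, P))   # P^T A P
assert abs(D[0][0] - 3) < 1e-12 and abs(D[1][1] - 1) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12   # off-diagonal vanishes
```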


Theorem and Theorem are clearly very similar. Indeed, in the context of an Hermitian inner product space they can be ‘unified’ through the notion of a normal linear operator \(T\), that is, one which commutes with its own adjoint, \(TT^\dagger=T^\dagger T\). Self-adjoint operators and unitary operators are clearly both examples of normal operators. Now, for a normal operator \(T\), we have \((Tv,Tv)=(v,T^\dagger Tv)=(v,TT^\dagger v)=(T^\dagger v,T^\dagger v)\). Also, if \(T\) is normal then so is \(T-\lambda\id_V\), whose adjoint is \(T^\dagger-\lambda^*\id_V\), for any \(\lambda\in\CC\). So for any normal operator \(T\), if \(\lambda\) is an eigenvalue with eigenvector \(v\), \(Tv=\lambda v\), then \(\norm{(T^\dagger-\lambda^*\id_V)v}=\norm{(T-\lambda\id_V)v}=0\), so \(\lambda^*\) is an eigenvalue of \(T^\dagger\) with eigenvector \(v\), \(T^\dagger v=\lambda^* v\).
Then, if \(Tv_1=\lambda_1v_1\) and \(Tv_2=\lambda_2v_2\) with \(\lambda_1\neq\lambda_2\), \(\lambda_1^*(v_1,v_2)=(Tv_1,v_2)=(v_1,T^\dagger v_2)=(v_1,v_2)\lambda_2^*\), so \((v_1,v_2)=0\). Finally, if \(v_1\) is an eigenvector of a normal operator \(T\) with eigenvalue \(\lambda_1\) then \(\Span(v_1)^\perp\) is \(T\)-invariant since for every \(w\in\Span(v_1)^\perp\) we have \((Tw,v_1)=(w,T^\dagger v_1)=(w,v_1)\lambda_1^*=0\). So we have,

Theorem \(T\) is a normal operator on an Hermitian inner product space with positive definite inner product if and only if,

  1. Eigenvectors with distinct eigenvalues are orthogonal.
  2. There exists an orthonormal basis of eigenvectors of \(T\). In particular \(T\) is diagonalisable.

Thus, in the case of Hermitian inner product spaces, we have a generalisation of the spectral decomposition result for self-adjoint operators. Any normal operator \(T\) has a spectral decomposition,
\begin{equation}
T=\sum_i\lambda_iP_i,
\end{equation}
where as before, the orthogonal projectors, \(P_i\), correspond to the eigenspaces, \(V_i\), of the (distinct) eigenvalues \(\lambda_i\) of \(T\) in the orthogonal decomposition \(V=\oplus_iV_i\).
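A normal operator need be neither Hermitian nor unitary, yet still admits this decomposition. A sketch with the hypothetical matrix \(T=\begin{pmatrix}1&-1\\1&1\end{pmatrix}\) (which is \(\sqrt{2}\) times a rotation, hence normal): its eigenvalues are \(1\pm i\), its eigenvectors \((1,\mp i)/\sqrt{2}\) are orthogonal, and \(T=(1+i)P_++(1-i)P_-\) with \(P_\pm\) the rank-one orthogonal projectors onto the eigenlines.

```python
# Spectral decomposition of a normal (neither Hermitian nor unitary)
# operator T = [[1, -1], [1, 1]] with eigenvalues 1 ± i (illustrative example).

T = [[1, -1], [1, 1]]
lam_p, lam_m = 1 + 1j, 1 - 1j
s = 0.5 ** 0.5
v_p = [s, -s * 1j]   # eigenvector for 1 + i
v_m = [s, s * 1j]    # eigenvector for 1 - i

def proj(v):
    # rank-one orthogonal projector P_ij = v_i v_j^*
    return [[v[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

Pp, Pm = proj(v_p), proj(v_m)
S = [[lam_p * Pp[i][j] + lam_m * Pm[i][j] for j in range(2)] for i in range(2)]
assert all(abs(S[i][j] - T[i][j]) < 1e-12 for i in range(2) for j in range(2))

# the eigenvectors for distinct eigenvalues are orthogonal, as the theorem asserts
assert abs(sum(a.conjugate() * b for a, b in zip(v_p, v_m))) < 1e-12
```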