Self-adjoint operator
On a finite dimensional inner product space, a self-adjoint operator is one that is its own adjoint, or, equivalently, one whose matrix is Hermitian, where a Hermitian matrix is one which is equal to its own conjugate transpose. By the finite-dimensional spectral theorem, the underlying space has an orthonormal basis of eigenvectors in which such an operator is represented by a diagonal matrix with real entries. In this article, we consider generalizations of this concept to operators on Hilbert spaces of arbitrary dimension.
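As a minimal finite-dimensional illustration (a standard example, not part of the original text), the matrix below equals its own conjugate transpose and is diagonalized by an orthonormal pair of eigenvectors with real eigenvalues:

    A = \begin{pmatrix} 2 & i \\ -i & 2 \end{pmatrix} = \overline{A}^{\mathsf T},
    \qquad
    A\, v_{\pm} = (2 \pm 1)\, v_{\pm},
    \qquad
    v_{\pm} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ \mp i \end{pmatrix}.

In the basis v₊, v₋ the operator is represented by the diagonal matrix diag(3, 1).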
Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the fact that in the Dirac–von Neumann formulation of quantum mechanics, physical observables such as position, momentum, angular momentum and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian

    Hψ = −(ħ²/2m) ∇²ψ + Vψ,

which as an observable corresponds to the total energy of a particle of mass m in a potential field V.
The structure of self-adjoint operators on infinite dimensional Hilbert spaces is complicated somewhat by the fact that the operators may be only partially defined, that is, defined only on a proper subspace of the Hilbert space. Such is the case for differential operators.
Symmetric operators
A partially defined linear operator A on a Hilbert space H is called symmetric iff

    ⟨Ax, y⟩ = ⟨x, Ay⟩

for all elements x and y in the domain of A. This usage is fairly standard in the functional analysis literature.
By the Hellinger–Toeplitz theorem, an everywhere defined symmetric operator is bounded.
Bounded symmetric operators are also called Hermitian.
The previous definition agrees with the one for matrices given in the introduction to this article, if we take as H the Hilbert space Cⁿ with the standard dot product and interpret a square matrix as a linear operator on this Hilbert space. It is, however, much more general, as there are important infinite-dimensional Hilbert spaces.
The spectrum of any bounded symmetric operator is real; in particular all its eigenvalues are real, although a symmetric operator may not have any eigenvalues.
A general version of the spectral theorem that also applies to bounded symmetric operators is stated below. While the eigenvectors corresponding to different eigenvalues are orthogonal, in general it is not true that the Hilbert space H admits an orthonormal basis consisting only of eigenvectors of the operator. In fact, bounded symmetric operators need not have any eigenvalues or eigenvectors at all.
Example. Consider the complex Hilbert space L²[0,1] and the differential operator A = −d²/dx², defined on the subspace consisting of all smooth functions f : [0,1] → C with f(0) = f(1) = 0. Then integration by parts shows that A is symmetric. Its eigenfunctions are the sinusoids sin(nπx) for n = 1, 2, ..., with the real eigenvalues n²π²; the well-known orthogonality of the sine functions follows as a consequence of A being symmetric.
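A minimal sketch of the integration-by-parts argument (assuming, as an illustrative convention, the inner product ⟨u, v⟩ = ∫₀¹ u v̄ dx, and using the boundary conditions f(0) = f(1) = g(0) = g(1) = 0):

    \langle Af, g\rangle
      = \int_0^1 (-f'')\,\overline{g}\,dx
      = \int_0^1 f'\,\overline{g'}\,dx
      = \int_0^1 f\,\bigl(-\overline{g''}\bigr)\,dx
      = \langle f, Ag\rangle ,

where the boundary terms −f'(x) \overline{g(x)} and f(x) \overline{g'(x)} produced by the two integrations by parts vanish at x = 0 and x = 1 because f and g vanish there.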
We consider generalizations of this operator below.
Self-adjoint operators
Given a densely defined linear operator A on H, its adjoint A* is defined as follows:
- The domain of A* consists of those vectors x in H such that the map y ↦ ⟨x, Ay⟩ (which is a densely defined linear functional on the domain of A) is continuous. By continuity and the density of the domain of A, it extends to a unique continuous linear functional on all of H.
- By the Riesz representation theorem for linear functionals, if x is in the domain of A*, there is a unique vector z in H such that
    ⟨x, Ay⟩ = ⟨z, y⟩ for all y in the domain of A.
- This vector z is defined to be A* x. It can be shown that the dependence of z on x is linear.
There is a useful geometrical way of looking at the adjoint of an operator A on H as follows: we consider the graph G(A) of A defined by

    G(A) = { (ξ, Aξ) : ξ ∈ dom(A) } ⊆ H ⊕ H.

Theorem. Let J be the mapping

    J : H ⊕ H → H ⊕ H

given by

    J(ξ, η) = (−η, ξ).

Then

    G(A*) = (J G(A))^⊥,

where ⊥ denotes the orthogonal complement in H ⊕ H.
The following is an alternate characterization of a symmetric operator: a densely defined operator A is symmetric iff

    A ⊆ A*,

that is, iff A* extends A (equivalently, G(A) ⊆ G(A*)).
A densely defined operator A is self-adjoint iff A = A*, that is, iff A is symmetric and dom(A) = dom(A*).
Example. Consider the complex Hilbert space L²(R) and the operator A which multiplies a given function by x:

    [Aψ](x) = x ψ(x).
The domain of A is the space of all L² functions ψ for which the right-hand side is also square-integrable. A is a symmetric operator without any eigenvalues or eigenfunctions. In fact it turns out that the operator is self-adjoint, as follows from the theory outlined below.
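To see that A has no eigenvalues (a short standard argument, included here for clarity): if Aψ = λψ for some real λ and ψ ∈ L²(R), then

    (x - \lambda)\,\psi(x) = 0 \ \text{ for almost every } x
    \;\Longrightarrow\;
    \psi(x) = 0 \ \text{ for almost every } x \neq \lambda
    \;\Longrightarrow\;
    \psi = 0 \ \text{ in } L^2(\mathbb{R}),

so the only solution is the zero vector, which is not an eigenvector.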
As we will see later, self-adjoint operators have very important spectral properties; they are in fact unitarily equivalent to multiplication operators on general measure spaces.
Spectral theorem
Partially defined operators A, B on Hilbert spaces H, K are unitarily equivalent iff there is a unitary operator U : H → K such that
- U maps dom A bijectively onto dom B,
- U A U⁻¹ ξ = B ξ for all ξ in dom B.
A multiplication operator is defined as follows: Let (X, Σ, μ) be a σ-finite countably additive measure space and f a real-valued measurable function on X. An operator T of the form

    [Tψ](x) = f(x) ψ(x),

whose domain is the space of ψ for which the right-hand side above is in L², is called a multiplication operator.
Theorem. Any multiplication operator is a (densely defined) self-adjoint operator. Any self-adjoint operator is unitarily equivalent to a multiplication operator.
Remark. Note that the σ-finiteness property is needed in order for T to be densely defined.
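For instance (an illustration consistent with the earlier example, not taken verbatim from the article), the closure of the operator A = −d²/dx² with the Dirichlet boundary conditions above is unitarily equivalent to a multiplication operator on the measure space X = {1, 2, 3, ...} with counting measure and f(n) = n²π². A sketch of the unitary, built from the orthonormal sine basis, is

    U : L^2[0,1] \to \ell^2(\mathbb{N}),
    \qquad
    (U\psi)(n) = \sqrt{2}\int_0^1 \psi(x)\,\sin(n\pi x)\,dx,
    \qquad
    \bigl(U\,\overline{A}\,U^{-1} c\bigr)(n) = n^2\pi^2\, c(n),

where ℓ²(N) = L²(X, counting measure) and Ā denotes the closure of A.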
This version of the spectral theorem for self-adjoint operators can be proved by reduction to the spectral theorem for unitary operators. This reduction uses the Cayley transform for self-adjoint operators which is defined in the next section.
Borel functional calculus
Given the representation of T as a multiplication operator, it is easy to explain how the Borel functional calculus should operate: if h is a bounded real-valued Borel function on R, then h(T) is the operator of multiplication by the composition h ∘ f. In order for this to be well-defined, we need to show that it is the unique operation on bounded real-valued Borel functions satisfying a number of conditions.
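As a small illustration (with the bounded Borel function chosen here purely for the example), take h(λ) = 1/(1 + λ²); if T is represented as multiplication by f, then

    \bigl[h(T)\psi\bigr](x) = \frac{1}{1 + f(x)^2}\,\psi(x),

which defines a bounded self-adjoint operator on the whole Hilbert space, even when T itself is unbounded.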
Resolution of the identity
It has been customary to introduce the following notation

    E_T(λ) = 1_{(−∞, λ]}(T),

where 1_{(−∞, λ]} denotes the indicator function of the interval (−∞, λ]. The family of projection operators E_T(λ) is called the resolution of the identity for T. Moreover, the following Stieltjes integral representation for T can be proved:

    T = ∫_{−∞}^{+∞} λ dE_T(λ).
In more modern treatments, this representation is usually avoided, since most technical problems can be dealt with by the functional calculus.
Extensions of symmetric operators
If an operator A on the Hilbert space H is symmetric, when does it have self-adjoint extensions? The answer is provided by the Cayley transform of a symmetric operator and the deficiency indices.
Theorem. Suppose A is a symmetric operator. Then there is a unique partially defined linear operator

    W(A) : ran(A + i) → ran(A − i)

such that

    W(A)(Ax + ix) = Ax − ix  for all x in dom(A).
W(A) is isometric on its domain. Moreover, the range of 1 - W(A) is dense in H.
Conversely, given any partially defined operator U which is isometric on its domain (which need not be closed) and such that 1 − U has dense range, there is a (unique) operator

    S(U) : ran(1 − U) → ran(1 + U)

such that

    S(U)(x − Ux) = i(x + Ux)  for all x in dom(U).
The operator S(U) is densely defined and symmetric.
The mappings W and S are inverses of each other.
The mapping W is called the Cayley transform. It associates a partially defined isometry to any symmetric densely defined operator. Note that the mappings W and S are monotone: this means that if B is a symmetric operator that extends the densely defined symmetric operator A, then W(B) extends W(A), and similarly for S.
Theorem. A necessary and sufficient condition for A to be self-adjoint is that its Cayley transform W(A) be unitary.
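The scalar analogue may help (a standard heuristic, not part of the original article): the Cayley transform mirrors the Möbius map

    t \;\longmapsto\; \frac{t - i}{t + i}, \qquad t \in \mathbb{R},

which carries the real line (the possible spectral values of a self-adjoint operator) bijectively onto the unit circle with the point 1 removed (spectral values of a unitary operator); unitarity of W(A) corresponds to self-adjointness of A in exactly this way.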
This immediately gives us a necessary and sufficient condition for A to have a self-adjoint extension, as follows:
Theorem. A necessary and sufficient condition for A to have a self-adjoint extension is that W(A) have a unitary extension.
A partially defined isometric operator V on a Hilbert space H has a unique isometric extension to the norm closure of dom(V). A partially defined isometric operator with closed domain is called a partial isometry.
Given a partial isometry V, the deficiency indices of V are defined as follows:

    n₊(V) = dim dom(V)^⊥,   n₋(V) = dim ran(V)^⊥.
Theorem. A partial isometry V has a unitary extension iff the deficiency indices are equal. Moreover, V has a unique unitary extension iff both deficiency indices are zero.
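For example (a classical illustration, sketched here): the operator A = −i d/dx on the domain of smooth compactly supported functions in L²(0, ∞) is symmetric, and the deficiency spaces of its Cayley transform are found by solving A*u = ±iu:

    -\,i\,u' = i\,u \;\Longrightarrow\; u(x) = c\,e^{-x} \in L^2(0,\infty),
    \qquad
    -\,i\,u' = -\,i\,u \;\Longrightarrow\; u(x) = c\,e^{x} \notin L^2(0,\infty),

so the deficiency indices are 1 and 0. Since they are unequal, this symmetric operator has no self-adjoint extension at all.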
An operator which has a unique self-adjoint extension is said to be essentially self-adjoint. Such operators have a well-defined Borel functional calculus. Symmetric operators which are not essentially self-adjoint may still have a canonical self-adjoint extension. Such is the case for non-negative symmetric operators (or, more generally, operators which are bounded below). These operators always have a canonically defined Friedrichs extension, and for these operators we can define a canonical functional calculus. Many operators that occur in analysis are bounded below (such as the negative of the Laplacian operator), so the issue of essential self-adjointness for these operators is less critical.
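For instance (a standard example, sketched under the stated choice of domain): the non-negative symmetric operator A = −d²/dx² on the compactly supported smooth functions in L²(0, 1) is not essentially self-adjoint; both of its deficiency indices equal 2, because the two equations

    -\,u'' = i\,u, \qquad -\,u'' = -\,i\,u

each have a two-dimensional space of solutions, all of which lie in L²(0, 1). Its Friedrichs extension is the Dirichlet Laplacian, whose eigenfunctions are the sinusoids sin(nπx) of the earlier example.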
Von Neumann's formulas
Suppose A is a densely defined symmetric operator; then any symmetric extension of A is a restriction of A*. Indeed, if B is symmetric and extends A, then

    A ⊆ B ⊆ B* ⊆ A*.
Theorem. Suppose A is a densely defined symmetric operator. Let

    N₊ = ran(A + i)^⊥,   N₋ = ran(A − i)^⊥.

Then

    N₊ = ker(A* − i),   N₋ = ker(A* + i),

and

    dom(A*) = dom(Ā) ⊕ N₊ ⊕ N₋,

where Ā denotes the closure of A and the decomposition is orthogonal relative to the graph inner product of dom(A*):

    ⟨ξ, η⟩_graph = ⟨ξ, η⟩ + ⟨A*ξ, A*η⟩.
These are referred to as von Neumann's formulas in the Akhiezer and Glazman reference.
Examples
We give the example of differential operators with constant coefficients. Let

    P(x) = Σ_α c_α x^α

be a polynomial on Rⁿ with real coefficients, where α ranges over a (finite) set of multi-indices. Thus

    α = (α₁, α₂, ..., αₙ)

and

    x^α = x₁^{α₁} x₂^{α₂} ··· xₙ^{αₙ}.

We also use the notation

    D^α = (1/i)^{|α|} ∂^{α₁}/∂x₁^{α₁} ··· ∂^{αₙ}/∂xₙ^{αₙ},   where |α| = α₁ + ··· + αₙ.
Then the operator P(D), defined on the space of infinitely differentiable functions of compact support on Rⁿ by

    P(D)φ = Σ_α c_α D^α φ,

is essentially self-adjoint on L²(Rⁿ).
Theorem. Let P be a polynomial function on Rⁿ with real coefficients and F the Fourier transform, considered as a unitary map L²(Rⁿ) → L²(Rⁿ). Then F* P(D) F is essentially self-adjoint and its unique self-adjoint extension is the operator of multiplication by the function P.
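As a concrete instance (an illustration chosen to match the section on the Laplacian below), take P(x) = x₁² + ··· + xₙ²; then

    P(D) = \sum_{j=1}^{n} \Bigl(\frac{1}{i}\,\frac{\partial}{\partial x_j}\Bigr)^{2}
         = -\sum_{j=1}^{n} \frac{\partial^{2}}{\partial x_j^{2}} = -\Delta,

so the theorem says that −Δ, defined on compactly supported smooth functions, is carried by the Fourier transform to an essentially self-adjoint operator whose closure is multiplication by |ξ|² = ξ₁² + ··· + ξₙ².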
Spectral multiplicity theory
The multiplication representation of a self-adjoint operator, though extremely useful, is not a canonical representation. This suggests that it is not easy to extract from this representation a criterion to determine when self-adjoint operators A and B are unitarily equivalent. The finest grained representation which we now discuss involves spectral multiplicity.
We first define uniform multiplicity:
Definition. A self-adjoint operator A has uniform multiplicity n, where n is such that 1 ≤ n ≤ ω, iff A is unitarily equivalent to the operator M_f of multiplication by the function f(λ) = λ on

    L²_μ(R; Hₙ) = { ψ : R → Hₙ : ψ measurable and ∫ ‖ψ(λ)‖² dμ(λ) < ∞ },

where Hₙ is a Hilbert space of dimension n and μ is a countably additive measure on R. The domain of M_f consists of those vector-valued functions ψ for which

    ∫ |λ|² ‖ψ(λ)‖² dμ(λ) < ∞.
Non-negative countably additive measures μ, ν are mutually singular iff they are supported on disjoint Borel sets.
Theorem. Let A be a self-adjoint operator on a separable Hilbert space H. Then there is an ω sequence of countably additive finite measures on R (some of which may be identically 0)

    μ₁, μ₂, ..., μ_ω

such that the measures are pairwise singular and A is unitarily equivalent to the operator of multiplication by the function f(λ) = λ on

    ⊕_{1 ≤ n ≤ ω} L²_{μₙ}(R; Hₙ).
This representation is unique in the following sense: For any two such representations of the same A, the corresponding measures are equivalent in the sense that they have the same sets of measure 0.
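A toy finite-dimensional illustration (not from the original article) may make the bookkeeping clearer. For A = diag(1, 1, 2) acting on C³, the eigenvalue 1 carries multiplicity 2 and the eigenvalue 2 carries multiplicity 1, so one may take

    \mu_1 = \delta_{2}, \qquad \mu_2 = \delta_{1}, \qquad \mu_n \equiv 0 \ (n \geq 3),
    \qquad
    A \;\cong\; M_{\lambda}\ \text{on}\ L^2_{\mu_1}(\mathbb{R};\mathbb{C}) \oplus L^2_{\mu_2}(\mathbb{R};\mathbb{C}^2),

where δ_c is the unit point mass at c. The two measures are supported on the disjoint sets {2} and {1}, hence mutually singular.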
Example: structure of the Laplacian
The Laplacian on Rⁿ is the operator

    Δ = Σ_{i=1}^{n} ∂²/∂x_i².
As remarked above, the Laplacian is diagonalized by the Fourier transform. Actually it is more natural to consider the negative of the Laplacian −Δ, since as an operator it is non-negative (see elliptic operator).
Theorem. If n = 1, then −Δ has uniform multiplicity mult = 2; otherwise −Δ has uniform multiplicity mult = ω. Moreover, the measure μ_mult may be taken to be Lebesgue measure on [0, ∞).
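A sketch for the case n = 1 (the normalization below is an illustrative choice): after the Fourier transform, −Δ becomes multiplication by ξ² on L²(R), and the change of variables λ = ξ² produces the multiplicity-two representation

    U : L^2(\mathbb{R}) \to L^2\bigl([0,\infty), d\lambda; \mathbb{C}^2\bigr),
    \qquad
    (U\varphi)(\lambda) = \frac{1}{\sqrt{2}\,\lambda^{1/4}}
      \begin{pmatrix} \varphi(\sqrt{\lambda}) \\ \varphi(-\sqrt{\lambda}) \end{pmatrix},

which is unitary and intertwines multiplication by ξ² with multiplication by λ. The two components record the two preimages ±√λ of each λ > 0, which is why the multiplicity is 2; for n ≥ 2 the preimage of each λ > 0 is a sphere, and L² of that sphere is infinite-dimensional, giving multiplicity ω.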
References
- N.I. Akhiezer and I. M. Glazman, Theory of Linear Operators in Hilbert Space (two volumes), Pitman, 1981.
- K. Yosida, Functional Analysis, Academic Press, 1965.
- M. Reed and B. Simon, Methods of Modern Mathematical Physics, vol. 2: Fourier Analysis, Self-Adjointness, Academic Press, 1975.