
Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors reveal the “hidden structure” of a linear transformation. They describe the special directions where the transformation simply scales, without rotating or distorting.

Let $A$ be a square matrix. A non-zero vector $\mathbf{v}$ is an eigenvector of $A$ if:

$$A\mathbf{v} = \lambda \mathbf{v}$$

where $\lambda$ (lambda) is a scalar called the eigenvalue.

In plain terms: when you apply $A$ to an eigenvector, the result is just a scaled version of that same vector. The direction stays the same (or flips if $\lambda$ is negative). Only the length changes.
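This is easy to check numerically. A quick sanity check with NumPy, using an illustrative matrix (not one from this page) that scales the $(1, 1)$ direction by 3:

```python
import numpy as np

# Illustrative matrix (assumed for this example): it scales the (1, 1)
# direction by 3, so (1, 1) is an eigenvector with eigenvalue 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

print(A @ v)  # [3. 3.] -- exactly 3 * v: same direction, tripled length
```

Applying $A$ did not rotate $\mathbf{v}$ at all; it only rescaled it.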

To find the eigenvalues, solve the characteristic equation:

$$\det(A - \lambda I) = 0$$

This gives you the possible $\lambda$ values. Then, for each $\lambda$, solve $(A - \lambda I)\mathbf{v} = \mathbf{0}$ to find the eigenvectors.
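This two-step procedure can be sketched with NumPy. The matrix below is an arbitrary illustration; for a $2 \times 2$ matrix the characteristic polynomial works out to $\lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A)$:

```python
import numpy as np

# Arbitrary 2x2 example (assumed for illustration)
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Step 1: eigenvalues are the roots of the characteristic polynomial
#   lambda^2 - trace(A) * lambda + det(A) = 0
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
eigenvalues = np.roots(coeffs)                 # here: 4 and 2

# Step 2: for each lambda, an eigenvector spans the null space of
# (A - lambda I). The SVD's last right-singular vector gives that direction.
for lam in eigenvalues:
    _, _, Vt = np.linalg.svd(A - lam * np.eye(2))
    v = Vt[-1]                                 # null-space direction
    assert np.allclose(A @ v, lam * v)         # A v = lambda v holds
```

In practice you would call `np.linalg.eig(A)` directly; the version above just mirrors the hand calculation.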

  • $\lambda = 1$: the vector is unchanged
  • $\lambda = 2$: the vector is stretched to double length
  • $\lambda = -1$: the vector is flipped (reversed direction)
  • $\lambda = 0$: the vector is squashed to zero (the transformation collapses that direction)
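Two of these cases are easy to see with concrete matrices (both chosen here purely for illustration):

```python
import numpy as np

# lambda = -1: reflection across the y-axis flips x-direction vectors
R = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])
v = np.array([1.0, 0.0])
print(R @ v)   # [-1.  0.] -- flipped: eigenvalue -1

# lambda = 0: projection onto the x-axis squashes the y-direction to zero
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
w = np.array([0.0, 1.0])
print(P @ w)   # [0.  0.] -- collapsed: eigenvalue 0
```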
*[Figure: eigenvectors of $A$ (scaling $x$ by 2, $y$ by 0.5). The $x$-eigenvector has $\lambda = 2$, the $y$-eigenvector has $\lambda = 0.5$; the vector $(1, 1)$ maps to $(2, 0.5)$, changing direction, so it is not an eigenvector.]*

In the diagram: the matrix $A$ scales the x-direction by 2 (blue eigenvector doubles in length) and the y-direction by 0.5 (green eigenvector shrinks to half). The dashed arrows show the original vectors, the solid arrows show the result after applying $A$. Every other vector would get rotated and distorted, but eigenvectors just scale cleanly.

Example 1: Diagonal Matrix

$$A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$$

For $\mathbf{v} = \langle 1, 0 \rangle$: $A\mathbf{v} = \langle 2, 0 \rangle = 2\langle 1, 0 \rangle$, so $\lambda = 2$.

For $\mathbf{w} = \langle 0, 1 \rangle$: $A\mathbf{w} = \langle 0, 3 \rangle = 3\langle 0, 1 \rangle$, so $\lambda = 3$.

Diagonal matrices make eigenvalues obvious: they’re just the diagonal entries.
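NumPy's `np.linalg.eig` confirms this directly:

```python
import numpy as np

A = np.diag([2.0, 3.0])              # the diagonal matrix from the example
eigenvalues, eigenvectors = np.linalg.eig(A)

print(eigenvalues)                   # [2. 3.] -- the diagonal entries
print(eigenvectors)                  # columns are the standard basis vectors
```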

Example 2: Finding Eigenvalues of a 2x2

$$A = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix}$$

Characteristic equation: $\det(A - \lambda I) = (4 - \lambda)(3 - \lambda) - (1)(2) = 0$

$$\lambda^2 - 7\lambda + 10 = 0$$

$$(\lambda - 5)(\lambda - 2) = 0$$

Eigenvalues: $\lambda_1 = 5$, $\lambda_2 = 2$.
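The same result falls out of `np.linalg.eigvals`:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)
print(np.sort(eigenvalues))   # 2 and 5, up to floating-point rounding
```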

Example 3: Geometric Meaning

If $\lambda > 1$: the eigenvector direction gets stretched.

If $0 < \lambda < 1$: it gets compressed.

If $\lambda < 0$: it gets flipped and scaled.

*[Figure: three panels — $\lambda = 2$ (stretch), $\lambda = 0.5$ (compress), $\lambda = -1$ (flip) — each showing a vector before and after the transformation.]*

The three panels show what different eigenvalues do to a vector: $\lambda = 2$ doubles its length, $\lambda = 0.5$ halves it, and $\lambda = -1$ flips it to point in the opposite direction.

Eigenvalues and eigenvectors are used everywhere:

  • Machine learning: PCA (Principal Component Analysis) finds the eigenvectors of the covariance matrix to identify the most important directions in data
  • Google’s PageRank: the dominant eigenvector of the web’s link matrix determines page importance
  • Game development: stability analysis in physics simulations, animation compression, vibration modes
  • Quantum mechanics: observable quantities are eigenvalues of operators
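The PageRank idea can be sketched as power iteration on a tiny, hypothetical column-stochastic link matrix: repeatedly applying the matrix converges to its dominant eigenvector, which is the ranking.

```python
import numpy as np

# Hypothetical 3-page web: entry (i, j) is the probability of following
# a link from page j to page i; each column sums to 1 (column-stochastic).
L = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# Power iteration: start anywhere, repeatedly apply L and renormalize.
rank = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    rank = L @ rank
    rank /= rank.sum()

print(rank)   # converges to the dominant eigenvector: [1/3, 1/3, 1/3] here
```

For this symmetric link structure every page ends up equally important; real PageRank adds a damping factor, but the eigenvector idea is the same.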

Example: In character animation, eigen-decomposition compresses large motion capture datasets while preserving the most important movements. The eigenvectors with the largest eigenvalues capture the dominant motion patterns.
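A minimal PCA sketch along these lines, using synthetic data (the shapes and scaling factors are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data, stretched along the x-axis (illustration only)
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])

# PCA: eigen-decompose the covariance matrix; eigh handles symmetric matrices
cov = np.cov(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# eigh returns eigenvalues in ascending order, so the last column is the
# principal component: the direction of greatest variance.
top_direction = eigenvectors[:, -1]
print(top_direction)   # close to +/- [1, 0]: the stretched axis
```

Keeping only the components with the largest eigenvalues is exactly the compression idea described above: most of the variance survives in a few directions.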
