## Inverse of a matrix: proof of the adjoint method

For any square matrix $A$, we can define the adjoint matrix $\operatorname{adj} A$ as the transpose of the matrix of cofactors:

$$
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},
\qquad
\operatorname{adj} A = \begin{pmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{pmatrix}.
$$

Note that this adjoint is not at all the same as the Hermitian adjoint operator from the section Special Matrices! In fact, while the Hermitian adjoint is easy to calculate, the adjoint defined here requires us to calculate $n^2$ cofactors, which can be a lot of work.
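To make the construction concrete, here is a minimal NumPy sketch; the function name `adjoint` and the brute-force minor-by-minor computation are our own choices for illustration, not a fixed convention. It builds the cofactor matrix entry by entry and returns its transpose:

```python
import numpy as np

def adjoint(a: np.ndarray) -> np.ndarray:
    """Return the adjoint (adjugate) of a square matrix:
    the transpose of its cofactor matrix."""
    n = a.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor (i, j): delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T  # the transpose turns the cofactor matrix into adj A
```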

Now consider the top left (“11”) component of the matrix product $A\,\operatorname{adj} A$:

$$
(A\,\operatorname{adj} A)_{11} = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13} = \det A.
$$

This element is the determinant of $A$: it is exactly the cofactor expansion of $\det A$ along the first row. Similarly, we can calculate the “12” element of the matrix product $A\,\operatorname{adj} A$:

$$
(A\,\operatorname{adj} A)_{12} = a_{11}A_{21} + a_{12}A_{22} + a_{13}A_{23} = 0.
$$

This expression is zero, because it is equal to the determinant of $A$ with the second row replaced by the first one (write this out for a $3 \times 3$ matrix if you do not see it). Since that matrix has two identical rows, its determinant is zero. Continuing this type of argument, you can show that
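As a quick numerical sanity check, reusing the `adjoint` sketch above with an arbitrarily chosen matrix: expanding the first row against its own cofactors reproduces $\det A$, while expanding it against the second row's cofactors gives zero.

```python
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

C = adjoint(A).T                # cofactor matrix: C[i, j] is the cofactor A_ij
print(np.isclose(A[0] @ C[0], np.linalg.det(A)))  # True: a_1k A_1k = det A
print(np.isclose(A[0] @ C[1], 0.0))               # True: a_1k A_2k = 0
```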

$$
A\,\operatorname{adj} A = \begin{pmatrix} \det A & 0 & 0 \\ 0 & \det A & 0 \\ 0 & 0 & \det A \end{pmatrix} = (\det A)\, I.
$$
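The same check works for the full product (again with the illustrative matrix above): every entry of $A\,\operatorname{adj} A$ matches $(\det A)\,I$ up to floating-point error.

```python
print(np.allclose(A @ adjoint(A), np.linalg.det(A) * np.eye(3)))  # True
```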

Since $\det A$ is just a number, we have that

$$
A \left( \frac{\operatorname{adj} A}{\det A} \right) = I
\qquad \Longrightarrow \qquad
A^{-1} = \frac{\operatorname{adj} A}{\det A}.
$$

Calculating the cofactors and the determinant of $A$ therefore allows you to find the inverse of $A$ directly. The argument above works for all $n \times n$ matrices.
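In code the formula is a single line; here is a sketch comparing it against NumPy's built-in inverse for the same example matrix:

```python
A_inv = adjoint(A) / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```

In practice the adjoint method is mainly of theoretical interest: computing $n^2$ cofactors is far more expensive than Gaussian elimination, which is why library routines such as `np.linalg.inv` rely on an LU factorization instead.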

Note that the inverse does not exist if the determinant is zero. You can understand this from the meaning of the determinant we explored above: a matrix with zero determinant maps vectors onto a space of lower dimension. Since the inverse of a matrix is the transformation that “undoes” the effect of the original matrix, it should map each transformed vector back to the original one. However, if more than one vector gets mapped to the same transformed vector, the reverse operation cannot tell which of the possible input vectors the original matrix acted on.
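A small illustration of this failure mode, using a singular matrix we picked for the example: two different vectors are mapped to the same image, so no inverse map can recover the input.

```python
S = np.array([[1., 2.],
              [2., 4.]])     # det S = 0: the rows are linearly dependent
v1 = np.array([1., 0.])
v2 = np.array([3., -1.])     # v2 = v1 + (2, -1), and S maps (2, -1) to zero
print(S @ v1, S @ v2)        # both give [1. 2.]: S cannot be inverted
```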