Breaking down the essentials of matrix decompositions: how they are used in machine learning, and why determinants, traces, eigenvalues, and eigenvectors help us draw better decision boundaries when designing our machine learning models.
Matrix factorization
Also known as matrix decomposition, this is the process of describing a matrix with just a few characteristic numbers that capture its overall properties.
Determinants
- What is it?
- A determinant maps a square matrix to a real number.
- For a square matrix A, the determinant is written as det(A) or |A|.
- For example, for a 2×2 matrix A with entries a11, a12, a21, a22, we would express this as follows: det(A) = |a11 a12; a21 a22|.
- What does it tell us?
- For any square matrix A, it holds that A is invertible if and only if det(A) ≠ 0.
- It tells us about the changes in volume of the space the matrix inhabits.
- Because the determinant measures area/volume, if the column vectors are linearly dependent, the volume they span collapses and the determinant equals zero. Equivalently, the determinant is non-zero if and only if the vectors are linearly independent.
- If you swap two columns of your matrix, you are essentially reversing the orientation of the space, which flips the sign of the determinant. This will have an impact on our machine learning models and the decision boundary.
- Example: If you have the matrix A = [1 0; 0 1], you would compute the determinant as follows: det(A) = 1·1 − 0·0 = 1. However, if you swapped the columns of the matrix to get [0 1; 1 0], then the determinant would be 0·0 − 1·1 = −1. The magnitude is unchanged, but the sign change clearly shows that the orientation is flipped and reversed.
- The sign of the determinant indicates the orientation of the spanning vectors with respect to the standard basis.
- The determinant acts as a function that measures the signed volume of the parallelepiped formed by the column vectors of a matrix.
- Determinant as Scaling Factor:
- If the determinant of a matrix is positive, it indicates that the transformation scales volumes by a positive factor. In other words, the transformation expands or contracts space without flipping it.
- If the determinant is negative, it means that the transformation includes a reflection, and the volume of the parallelepiped is scaled by a negative factor. This implies a flip or an orientation reversal.
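The sign and volume behaviour described above can be checked numerically. A quick NumPy sketch (the matrices here are illustrative examples, not taken from the text):

```python
import numpy as np

# The identity matrix spans the unit square: positive determinant,
# orientation preserved.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(np.linalg.det(A))          # 1.0

# Swapping the columns reverses orientation: same area, sign flips.
A_swapped = A[:, [1, 0]]
print(np.linalg.det(A_swapped))  # -1.0

# Linearly dependent columns collapse the area to zero.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # second column = 2 * first column
print(np.linalg.det(B))          # ~0.0: B is singular, hence not invertible
```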
- How to compute determinants
- 2×2 matrix: det(A) = a11·a22 − a12·a21
- 3×3 matrix (Sarrus' rule): det(A) = a11·a22·a33 + a21·a32·a13 + a31·a12·a23 − a31·a22·a13 − a11·a32·a23 − a21·a12·a33
- Laplace's expansion along column j: det(A) = Σ_{k=1}^{n} (−1)^{k+j} · a_kj · det(A_kj), where A_kj is the (n−1)×(n−1) submatrix obtained by deleting row k and column j of A.
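A minimal sketch of Laplace's expansion in NumPy (the helper name `laplace_det` and the 3×3 test matrix are illustrative, not from the text). It expands along the first row and recurses on the minors, which is O(n!) and only meant to mirror the formula, not to replace `np.linalg.det`:

```python
import numpy as np

def laplace_det(A):
    """Determinant via Laplace expansion along the first row (illustrative, O(n!))."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse on the submatrix.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * laplace_det(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(laplace_det(A))      # 8.0
print(np.linalg.det(A))    # same value, up to floating point
```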
Trace
- What is a trace?
- The trace of a square matrix is the sum of its diagonal elements. An important property is that the trace is invariant under cyclic permutations, i.e. tr(AKL) = tr(KLA) = tr(LAK).
- It is defined as tr(A) = Σ_{i=1}^{n} a_ii, where essentially the trace is the sum of the diagonal elements of A.
- The trace of a linear mapping is independent of the basis (while matrix representations of linear mappings usually do depend on it).
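Both the definition and the cyclic-permutation property can be verified numerically; `A`, `K`, and `L` below are arbitrary random matrices chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
K = rng.standard_normal((3, 3))
L = rng.standard_normal((3, 3))

# The trace is the sum of the diagonal elements.
assert np.isclose(np.trace(A), A[0, 0] + A[1, 1] + A[2, 2])

# Invariance under cyclic permutations: tr(AKL) = tr(KLA) = tr(LAK).
t1 = np.trace(A @ K @ L)
t2 = np.trace(K @ L @ A)
t3 = np.trace(L @ A @ K)
print(t1, t2, t3)  # all three agree up to floating point
```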
Characteristic Polynomial
- The characteristic polynomial of a square matrix is a polynomial equation obtained by subtracting a scalar multiple of the identity matrix from the matrix itself, and then finding the determinant of the resulting expression. The roots of this polynomial are the eigenvalues of the matrix, which have significant applications in linear algebra and various mathematical fields.
- It is represented as follows: p_A(λ) := det(A − λI)
- In this case, the characteristic polynomial would be p_A(λ) = c0 + c1·λ + ⋯ + c_{n−1}·λ^{n−1} + (−1)^n·λ^n. Specifically, c0 = det(A) and c_{n−1} = (−1)^{n−1}·tr(A).
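A small NumPy sketch with an illustrative 2×2 matrix: `np.poly` returns the coefficients of det(λI − A), which for an even-sized matrix equals det(A − λI), and the roots of that polynomial are exactly the eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# For this 2x2 example: p(lam) = lam^2 - tr(A)*lam + det(A) = lam^2 - 7*lam + 10,
# so the coefficients (highest degree first) are 1, -7, 10.
coeffs = np.poly(A)
print(coeffs)

# The roots of the characteristic polynomial are the eigenvalues of A
# (here 2 and 5, in some order).
print(np.roots(coeffs))
print(np.linalg.eigvals(A))
```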
Eigenvalues & Eigenvectors
- An eigenvalue of a linear mapping will tell us how a special set of vectors, the eigenvectors, is transformed by the linear mapping.
- An eigenvector of a square matrix A is a non-zero vector x satisfying Ax = λx; the scalar λ is the eigenvalue corresponding to x.
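A minimal check of the defining equation Ax = λx with NumPy (the 2×2 matrix is an illustrative example): the mapping only stretches each eigenvector by its eigenvalue, without changing its direction.

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# Columns of `eigenvectors` are the eigenvectors of A.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    lam = eigenvalues[i]
    x = eigenvectors[:, i]
    # A @ x equals lam * x: x is only scaled, not rotated.
    print(np.allclose(A @ x, lam * x))  # True
```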