Ali K Esfahani
Rank and Fundamental Subspaces
Consider the equation Ax=b, in which A ∈ ℝᵐˣⁿ, x ∈ ℝⁿ, and b ∈ ℝᵐ. We say x belongs to the input space and b belongs to the output space. From this perspective, the matrix A represents a linear transformation (or linear map) from ℝⁿ to ℝᵐ.
However, not all of ℝᵐ is necessarily covered by this mapping. Moreover, some directions in the input space may be mapped to zero and not survive the transformation.
As we previously discussed, we have two fundamental subspaces associated with any matrix:
C(A) or the column space of A: the set of all vectors b in ℝᵐ that can be written as Ax=b for some x ∈ ℝⁿ.
N(A) or the null space of A: the set of all vectors x ∈ ℝⁿ such that Ax=0.
So, to be more precise, matrix A maps the input space into its column space (which includes the zero vector). Those vectors in the input space that are mapped to zero (Ax=0) form the null space of A.
The null space of a matrix may be trivial (containing only the zero vector), or not. Which vectors are mapped to zero, and which survive, depends entirely on the structure of the matrix.
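As a concrete numerical check, the sketch below uses NumPy to extract a null-space basis from the SVD. The matrix here is an illustrative example of my own (its third column is the sum of the first two), not one from the text:

```python
import numpy as np

# Illustrative 3x3 matrix: the third column equals the sum of the
# first two, so the null space is non-trivial.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])

# Right singular vectors belonging to (numerically) zero singular
# values span the null space of A.
U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
null_basis = Vt[rank:].T          # shape (n, n - rank)

# Vectors in the null space are annihilated by A:
print(np.allclose(A @ null_basis, 0))   # True
```

Any multiple of a null-space basis vector is "invisible" to A, which is exactly why such directions do not survive the mapping.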
At this point, a natural question arises: What about the rest of the input space ℝⁿ that is not included in the null space?
In addition to the column space (in ℝᵐ) and null space (in ℝⁿ), we have two more fundamental subspaces that together complete the picture of the linear transformation from ℝⁿ to ℝᵐ:
C(Aᵀ) or the row space: the orthogonal complement of the null space; together they span the entire input space ℝⁿ.
N(Aᵀ) or the left null space: the orthogonal complement of the column space; together they span the entire output space ℝᵐ.
Before going deeper into these concepts, it is important to note that algebraically, the rows of A are the columns of Aᵀ. Since we already understand column spaces well, it is conventional to define the row space of A as C(Aᵀ). Just as column space represents all possible combinations of columns of matrix A, row space represents all possible combinations of rows of matrix A.
Similarly, the null space of Aᵀ is referred to as the left null space of A. To better understand the left null space, note that N(Aᵀ) is the set of all vectors y that the matrix Aᵀ maps to zero:
N(Aᵀ) = { y ∈ ℝᵐ : Aᵀy = 0 }, or equivalently, yᵀA = 0ᵀ
This expression reveals how vectors in the left null space interact with the matrix A: they are precisely the row vectors that, when multiplied from the left, annihilate the matrix. At the same time, the left null space is the orthogonal complement of the column space of A.
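A minimal numerical sketch of this left-annihilation property (the matrix and the vector y are illustrative choices of mine, assuming NumPy):

```python
import numpy as np

# Illustrative 3x2 matrix whose second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])

# y = (2, -1, 0) encodes that row dependency, so it annihilates A
# when multiplied from the left:
y = np.array([2.0, -1.0, 0.0])
print(y @ A)                     # [0. 0.]

# Equivalently, y lies in the null space of A^T:
print(np.allclose(A.T @ y, 0))   # True
```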
A similar relationship holds between the row space and the null space. To show the orthogonality, consider y ∈ C(Aᵀ) and x ∈ N(A):
y = Aᵀz for some z ∈ ℝᵐ,  and  Ax = 0
In order to prove two vectors are perpendicular, we should show that their dot product is zero:
yᵀx = (Aᵀz)ᵀx = zᵀ(Ax) = zᵀ·0 = 0
This means that any vector in the row space is perpendicular to any vector in the null space. In such a case, we say the two spaces are orthogonal. In the same way, the column space and left null space are also orthogonal.
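The same dot-product argument can be checked numerically (a sketch with an illustrative matrix, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Any row-space vector is a combination of the rows: y = A^T z.
z = np.array([3.0, -2.0])
y = A.T @ z

# x = (1, 1, -1) is in the null space: each row dotted with it is zero.
x = np.array([1.0, 1.0, -1.0])
print(np.allclose(A @ x, 0))   # True

# Their dot product vanishes, confirming C(Aᵀ) ⟂ N(A):
print(y @ x)                   # 0.0
```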
Rank as a Measure of the Fundamental Subspaces
The four fundamental subspaces describe how a matrix acts on input and output spaces. The concept of rank tells us how much of those spaces are involved.
The rank of a matrix A is defined as the number of linearly independent columns of A. Equivalently, it is the number of linearly independent rows. This implies that:
rank(A) = dim C(A) = dim C(Aᵀ) = rank(Aᵀ)
Beyond the formal definition, rank admits several equivalent and useful interpretations:
Rank = number of independent directions preserved by A
Rank = number of input directions that survive without collapsing to zero
Rank = number of pivot positions
Each of these interpretations emphasizes a different aspect of the same underlying fact: rank measures the effective dimensionality of the linear transformation.
A practical way to identify the independent columns of a matrix (a basis of the column space) is to apply row operations and reduce the matrix to reduced row echelon form (RREF). Counting the pivot columns yields the rank of the matrix.
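The RREF-and-count-pivots recipe can be sketched with SymPy's exact-arithmetic `rref` method (the matrix is an illustrative example of mine):

```python
from sympy import Matrix

# Illustrative matrix: the second column is twice the first,
# and the third row is a combination of the first two.
A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])

# rref() returns the reduced row echelon form together with the
# indices of the pivot columns; the number of pivots is the rank.
R, pivots = A.rref()
print(pivots)        # (0, 2)
print(len(pivots))   # 2  -> rank(A)
```

The pivot column indices also tell us which original columns form a basis of the column space.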
We won’t delve into the RREF procedure itself (which is quite simple) or into finding bases for the four subspaces. As engineers, we are mainly interested in solving Ax=b. The behavior of the system Ax=b depends jointly on:
the shape of the matrix (m×n)
and its rank.
The table below shows when a solution exists, and whether it is unique.
| Matrix Shape | Rank Case | Column Space | Null Space | Behavior of Ax=b |
|---|---|---|---|---|
| Tall (m>n) | r = n (full column rank) | n-dim subspace of ℝᵐ | {0} | Solution exists only if b ∈ C(A); solution is unique |
| Tall (m>n) | r < n | Lower-dimensional subspace of ℝᵐ | Non-trivial | Solution exists only if b ∈ C(A); infinitely many solutions |
| Square (m=n) | r = n (full rank) | ℝᵐ | {0} | Unique solution for every b |
| Square (m=n) | r < n | Lower-dimensional subspace of ℝᵐ | Non-trivial | Either no solution or infinitely many, depending on b |
| Wide (m<n) | r = m (full row rank) | ℝᵐ | Non-trivial | At least one solution for every b; infinitely many solutions |
| Wide (m<n) | r < m | Lower-dimensional subspace of ℝᵐ | Non-trivial | Solution exists only if b ∈ C(A); infinitely many solutions |
As evident in the table, when the null space is non-trivial (r < n), it is impossible to have a unique solution, because if x1 is a solution to Ax=b and xn is any nonzero vector in N(A), then:
A(x1 + xn) = Ax1 + Axn = b + 0 = b,  so x1 + xn is also a solution.
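This non-uniqueness is easy to see numerically (a sketch with an illustrative wide system, assuming NumPy):

```python
import numpy as np

# Illustrative wide system (m < n) with a non-trivial null space.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

x1 = np.array([2.0, 3.0, 0.0])        # one particular solution: A x1 = b
x_null = np.array([1.0, 1.0, -1.0])   # a null-space vector: A x_null = 0

# Adding any multiple of x_null leaves Ax unchanged:
for t in (0.0, 1.0, -5.0):
    print(np.allclose(A @ (x1 + t * x_null), b))   # True each time
```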
Another crucial point from the table is that if b is not in the column space, then there is no solution. However, in engineering problems we find ourselves in this situation almost all the time, for example when we have repeated observations (identical rows of A) but inconsistent measurements (different entries of b). In that case, the main idea is to find the x such that Ax is as close as possible to b.
This naturally leads to the idea of orthogonal projection of b onto the column space. The orthogonality between the column space and the left null space ensures that any vector v ∈ ℝᵐ can be written uniquely as
v = vCol + vLN,  where vCol ∈ C(A) and vLN ∈ N(Aᵀ)
When b does not lie in the column space of A, its column-space component vCol represents the orthogonal projection of b onto C(A). The remaining component lies in the left null space and represents the unavoidable residual error. This decomposition provides the foundation for approximating solutions to Ax=b when no exact solution exists.
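This decomposition is exactly what least squares computes. The sketch below uses NumPy's `lstsq` on an illustrative system with repeated observation rows but inconsistent measurements, so b cannot lie in C(A):

```python
import numpy as np

# Illustrative tall system: the first two rows of A are identical,
# but the first two entries of b disagree, so b is not in C(A).
A = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 3.0, 2.0])

# Least squares finds x minimizing ||Ax - b||; Ax is then the
# orthogonal projection of b onto the column space of A.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
b_col = A @ x            # column-space component (projection of b)
residual = b - b_col     # lies in the left null space N(Aᵀ)

print(b_col)                             # [2. 2. 2.]
print(np.allclose(A.T @ residual, 0))    # True: residual ⟂ C(A)
```

Note how the conflicting measurements 1 and 3 are averaged to 2: the projection reconciles inconsistent data in the least-squares sense.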