Epistemic status: Mostly the result of a late night procrastination session. Proceed with caution. Might be too verbose or not verbose enough.

I have been thinking about linear algebra quite a lot lately. There is a common proof strategy that usually goes something like this:

1. Start with a subspace of a vector space.
2. Pick a basis for that subspace.
3. Use the exchange lemma to extend that basis to a basis of some larger subspace.
4. Check whether the new basis, or perhaps a basis of a complementary subspace, has the desired properties.

However, after reading a particularly cool proof (3.106) in Linear Algebra Done Right, I noticed a fascinating pattern: many linear algebra theorems that are commonly proven using the strategy described above can instead be proven by picking a few appropriate linear maps between the relevant vector spaces and then applying a handful of “elementary” theorems (which are themselves usually proven using the above method).

This usually takes a bit of effort to figure out, but the results are quite satisfying. Let me illustrate:

Proposition (Dimension of a sum): Let $V$ and $U$ be finitely generated subspaces of a common vector space. Then we have

$\dim(V \times U) = \dim (V + U) + \dim (V \cap U).$

Proof. Consider a pair of linear maps:
$$\begin{aligned} f : V \times U &\to V + U \\ (v, u) &\mapsto v + u, \\ g : V \cap U &\to V \times U \\ v &\mapsto (v, -v). \end{aligned}$$

The map $f$ is surjective, since every element of $V + U$ is by definition a sum of a vector from $V$ and a vector from $U$; it is also clear that $g$ is injective. It remains to prove that $\operatorname{Ran} g = \operatorname{Ker} f$.

Take $g(v) \in \operatorname{Ran}{g}$. Then we compute $f(g(v)) = f(v, -v) = v - v = 0,$ therefore $\operatorname{Ran} g \subseteq \operatorname{Ker}{f}$.

Now we consider $(v, u) \in \operatorname{Ker} f$. This means that $f(v, u) = v + u = 0$, thus $v = -u$. Since $U$ is a subspace, it contains the additive inverse of each of its elements, so $v = -u \in U$, and therefore $v \in V \cap U$. This means we can apply $g$ to get $g(v) = (v, -v) = (v, u)$, and therefore $\operatorname{Ker} f \subseteq \operatorname{Ran} g$.

We finish by using the Rank–Nullity theorem twice, first on $f$ and then on $g$:
$$\begin{aligned} \dim (V \times U) &= \dim \operatorname{Ran} f + \dim \operatorname{Ker} f \\ &= \dim(V + U) + \dim \operatorname{Ran} g \\ &= \dim(V + U) + \dim(V \cap U). \end{aligned}$$

The idea is ultimately simple: we know that every element in $V + U$ can be expressed as a sum of a vector from $V$ and a vector from $U$. By investigating the kernel of $f$, we are essentially asking how many ways this can be done.
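As a sanity check, the whole argument can be played out numerically. The sketch below (a minimal illustration, not part of the proof) represents $V$ and $U$ as column spans of matrices $A$ and $B$ in $\mathbb{R}^6$; the kernel of $f$ corresponds to pairs $(x, y)$ with $Ax = By$, and the vectors $Ax$ arising from such pairs span exactly $V \cap U$, which lets us compute $\dim(V \cap U)$ directly. The dimensions and the base field here are arbitrary choices.

```python
import numpy as np

def rank(M, tol=1e-10):
    """Numerical rank of M."""
    return np.linalg.matrix_rank(M, tol=tol)

def null_basis(M, tol=1e-10):
    """Orthonormal basis for the kernel of M, extracted from the SVD."""
    _, s, vh = np.linalg.svd(M)
    r = int((s > tol).sum())
    return vh[r:].T  # shape: (number of columns of M, dim Ker M)

rng = np.random.default_rng(0)

# Columns of A span V, columns of B span U, both subspaces of R^6.
A = rng.integers(-3, 4, size=(6, 3)).astype(float)
B = rng.integers(-3, 4, size=(6, 4)).astype(float)

dim_product = rank(A) + rank(B)        # dim(V x U) = dim V + dim U
dim_sum = rank(np.hstack([A, B]))      # dim(V + U): span of all columns together

# Ker f consists of pairs (x, y) with A x = B y, i.e. the kernel of [A | -B].
# The vectors A x coming from such pairs span V ∩ U.
N = null_basis(np.hstack([A, -B]))
dim_cap = rank(A @ N[:A.shape[1], :]) if N.size else 0

assert dim_product == dim_sum + dim_cap
```

Note that `dim_cap` is computed from an explicit spanning set of $V \cap U$, not from the formula being verified, so the final assertion is a genuine check of the proposition on this example.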

I am not really sure whether constructing proofs like this one is “practical”. However, I find this approach more aesthetic and enlightening than picking a bunch of bases and then getting lost in a sea of coordinates.

Anyway, this blog post is pretty different from what I used to write in the past. I have finished my Electrical Engineering bachelor’s degree and started working towards a bachelor’s degree in Mathematics, so I hope to post more in this vein in the future. I do hope to write some technical posts as well; I have not given up engineering completely and still work as a freelance embedded systems engineer.