## Dual spaces, transposes, and adjoints

What’s the dual space of a vector space?

Let $V$ be an $n$-dimensional vector space, with basis

$$ B = \{ v_1, \ldots, v_n \}. $$

Now consider an arbitrary linear function $f$ that takes a vector $v \in V$ and maps it to a scalar in, say, $\mathbb{R}$. Express $v$ as a linear combination of the basis vectors:

$$ \begin{align*}

f(v)

&= f(c_1v_1 + \cdots + c_nv_n) \\

&= c_1 f(v_1) + \cdots + c_n f(v_n).

\end{align*} $$

This means that if we specify $f(v_1)$ through $f(v_n)$, we’ve completely characterized $f$. In other words, if you tell me what $f(v_1)$ through $f(v_n)$ equal, I know what $f(v)$ equals for *any* vector $v$.

So we can specify any linear function that maps from $V$ to $\mathbb{R}$ (called a “linear functional”) as a list of $n$ scalars. This makes it clear that a linear functional is an $n$-dimensional vector itself, and so linear functionals form an $n$-dimensional vector space as well:

$$V^* = \{ f_{[a_1, \ldots, a_n]} \mid a_1, \ldots, a_n \in \mathbb{R} \}.$$ This is the dual space of $V$, where I’ve used $f_{[a_1, \ldots, a_n]}$ to denote the linear functional such that $f(v_1) = a_1$, etc.
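As a quick numerical sanity check, here’s a minimal sketch in Python with NumPy (the basis and the values $a_1, a_2, a_3$ below are arbitrary choices, not anything from the text): specifying $f$ on a basis pins it down everywhere, and the resulting function is automatically linear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of B are the basis vectors v_1, v_2, v_3 of R^3 (an arbitrary choice).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
a = np.array([2.0, -1.0, 5.0])  # the chosen values f(v_1), f(v_2), f(v_3)

def f(v):
    """The unique linear functional with f(v_i) = a_i.

    Writing v = c_1 v_1 + ... + c_n v_n, linearity forces
    f(v) = c_1 a_1 + ... + c_n a_n.
    """
    c = np.linalg.solve(B, v)  # coordinates of v in the basis B
    return a @ c

# f takes the prescribed values on the basis vectors...
assert np.isclose(f(B[:, 0]), 2.0)
# ...and is automatically linear on everything else.
x, y = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(f(2 * x + 3 * y), 2 * f(x) + 3 * f(y))
```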

Since $V$ and its dual space $V^*$ have the same dimension, each element in $V$ has a corresponding element in $V^*$. Consider that

$$

f_{[a_1, \ldots, a_n]}(v) = a_1 c_1 + \cdots + a_n c_n.

$$Look at the right-hand side: if the vectors we’re dealing with are Euclidean vectors and $B$ is the standard basis, then the $c_i$ are just the components of $\vecb{v}$, so we could write $\vecb{a} = (a_1, \ldots, a_n)$ and express any functional as a dot product:

$$

f_{\vecb{a}}(\vecb{v}) = \vecb{a} \cdot \vecb{v}.

$$ We can extend this idea to any inner product space. If $V$ has an inner product, then we can very naturally associate a vector $w \in V$ with its corresponding functional in $V^*$, defined as $$f(v) = \langle v, w \rangle.$$
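This correspondence is easy to make concrete. In the sketch below (Python with NumPy; the particular functional is a made-up example), a linear functional on $\mathbb{R}^3$ is handed to us as a black box, and we recover its representing vector $\vecb{a}$ simply by evaluating it on the standard basis, since $a_i = f(e_i)$:

```python
import numpy as np

def functional_to_vector(f, n):
    """Recover the vector a with f(v) = a . v by evaluating f
    on the standard basis vectors: a_i = f(e_i)."""
    return np.array([f(np.eye(n)[:, i]) for i in range(n)])

# A linear functional on R^3, given only as a black box.
f = lambda v: 4.0 * v[0] - 2.0 * v[1] + v[2]

a = functional_to_vector(f, 3)
v = np.array([1.0, 2.0, 3.0])
assert np.isclose(f(v), a @ v)  # f really is "dot product with a"
```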

Now let’s throw linear transformations into the mix. Let $T$ be a linear transformation from $U$ to $W$, and let $f \in W^*$ be a linear functional on $W$.

Then if $u \in U$, observe that $f(T(u))$ gives us a scalar. Since $f$ composed with $T$ maps an element in $U$ to a scalar, the composition $f \circ T$ is a linear functional itself, an element of $U^*$!

In this way, given a functional in $W^*$ and a transformation $T$, we can generate functionals in $U^*$. So we can define another linear transformation from $W^*$ to $U^*$ that gives us this functional: $$ T^\intercal(f) = f \circ T.$$

$T^\intercal$ is called the transpose of $T$, and it’s not a coincidence that the matrix of $T^\intercal$ is the transpose of the matrix of $T$ in certain bases.
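We can check this numerically. In the sketch below (NumPy, with random matrices standing in for $T$ and a random vector representing $f$), a functional $f(w) = \vecb{a} \cdot \vecb{w}$ on $W = \mathbb{R}^3$ is pulled back through $T : \mathbb{R}^2 \to \mathbb{R}^3$ with matrix $A$, and the functional $f \circ T$ on $\mathbb{R}^2$ turns out to be represented by $A^\intercal \vecb{a}$:

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((3, 2))   # matrix of T : R^2 -> R^3
a = rng.standard_normal(3)        # f(w) = a . w, a functional on R^3

# T^t(f) = f o T is a functional on R^2; its representing vector
# is A^T a, i.e. the transposed matrix applied to a.
a_pullback = A.T @ a

u = rng.standard_normal(2)
# (f o T)(u) computed directly vs. via the transposed matrix:
assert np.isclose(a @ (A @ u), a_pullback @ u)
```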

Finally, let’s talk about inner product spaces again. If we have inner products for $U$ and $W$, we can, like above, associate vectors $u \in U$ and $w \in W$ with functionals in $U^*$ and $W^*$ respectively:

$$\begin{align*}

u \quad&\longleftrightarrow\quad f_u(v) = \langle v, u \rangle \\

w \quad&\longleftrightarrow\quad f_w(v) = \langle v, w \rangle.

\end{align*}$$

Recall that $T^\intercal$ takes a functional in $W^*$ and returns a functional in $U^*$. It’d be nice if we could deal with vectors in $W$ and $U$ instead of functionals in $W^*$ and $U^*$. Can we replace the transpose with another transformation that takes a vector in $W$, converts it into its corresponding functional in $W^*$, applies $T^\intercal$ to get a functional in $U^*$, and finally converts that back into its corresponding vector in $U$?

It turns out we can: it’s called the adjoint of $T$, denoted $T^*$.

To see how $T^*$ must be defined, let’s take the definition of the transpose and substitute $f \in W^*$ with its inner product equivalent: write $f = f_w$, where $w$ is the vector in $W$ associated with $f$, so that $f_w(x) = \langle x, w\rangle$ for $x \in W$. Then for any $v \in U$,

$$\begin{align*}

(T^\intercal(f_w))(v)

&= (f_w \circ T)(v) \\

&= f_w(T(v)) \\

&= \langle T(v), w\rangle.

\end{align*}$$

Then asking what vector in $U$ corresponds to this functional amounts to asking what vector satisfies

$$ \langle T(v), w\rangle = \langle v, ?\rangle \quad \text{for all } v \in U. $$

Thus, $T^*$ is defined to be the linear transformation that makes the following equality true for all $v \in U$ and $w \in W$:

$$ \langle T(v), w\rangle = \langle v, T^*(w)\rangle. $$
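For real Euclidean spaces with the standard dot product, this defining equality is satisfied by the transposed matrix, so the adjoint and the transpose coincide. A quick check with random data (a sketch; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.standard_normal((3, 2))  # matrix of T : R^2 -> R^3
v = rng.standard_normal(2)
w = rng.standard_normal(3)

# With the standard dot product, the adjoint of A is A.T:
# <T(v), w> = <v, T*(w)>.
assert np.isclose((A @ v) @ w, v @ (A.T @ w))
```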

Once we do that, we can go on to define important operators like normal, unitary, and Hermitian operators!
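For instance, over a complex inner product space the adjoint’s matrix is the conjugate transpose, and a Hermitian operator is one equal to its own adjoint. A hedged sketch (assuming the inner product $\langle v, w\rangle = \sum_i v_i \overline{w_i}$, linear in the first argument; the matrices are random examples):

```python
import numpy as np

rng = np.random.default_rng(3)

def inner(v, w):
    """Complex inner product <v, w>, linear in the first argument."""
    return np.vdot(w, v)  # np.vdot conjugates its first argument

M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = M + M.conj().T  # Hermitian: H equals its own conjugate transpose

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# For any M, the adjoint is the conjugate transpose:
assert np.isclose(inner(M @ v, w), inner(v, M.conj().T @ w))
# A Hermitian operator is its own adjoint: <Hv, w> = <v, Hw>.
assert np.isclose(inner(H @ v, w), inner(v, H @ w))
```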

## Comments

Your definition of the dual of a vector space isn’t correct.

You can either take the algebraic dual of a vector space: every linear functional from the vector space to the scalar field. This differs from what you said because you can easily choose a vector space with infinite (even uncountable) dimension, where a functional isn’t just a list of $n$ scalars.

Or the topological dual: if your vector space has a notion of convergence, you can take only the continuous linear functionals, which is also a good notion of dual.

Do you mean that diagram commutes for all linear transformations, or just ones that preserve the inner product?

For a finite-dimensional normed space, the continuous dual and the algebraic dual coincide. This is however false for any infinite-dimensional normed space, as shown by the example of discontinuous linear maps. Nevertheless, in the theory of topological vector spaces the terms “continuous dual space” and “topological dual space” are often replaced by “dual space”, since there is no serious need to consider discontinuous maps in this field.

It appears to me that your math can’t decide whether the vector called “v” is a $U$ or a $W$. For instance, you have an inner product “$\langle v, w\rangle$”, suggesting it must be a $W$, and also invoke “$T(v)$”, where $T : U \to W$. Am I missing something?