Basic Vector Algebra

Vector Spaces

Geometry consists of two classes of “objects”: points, which form the space, and vectors, which define a sense of direction at every point. First, we will deal with linear spaces, which are the spaces of vectors attached to a single fixed point. Then, we will deal with affine spaces, in which points, as well as vectors, can vary.

A vector space is the set of all vectors associated with a single point, together with the operations of addition and scalar multiplication.

  • The addition of two vectors is generally performed by the parallelogram rule, although other definitions are also possible.
  • Scalar multiplication changes a vector’s length, but not direction.

These operations must satisfy the axioms for addition and scalar multiplication. As these axioms are well known, we will not list them here.

Norm of a vector

Each vector has a length, or norm, associated with it. The norm of a vector \(\vec{v}\) is denoted by \(|\vec{v}|\). Vectors of unit length are called “normal”.

Angle between two vectors

The angle between two vectors \(\vec{v_1}\) and \(\vec{v_2}\) is denoted by \(\measuredangle \vec{v_1} \vec{v_2}\). The angle is directed: \(\measuredangle \vec{v_1} \vec{v_2} = -\measuredangle \vec{v_2} \vec{v_1}\).

Multiplication of two vectors

Addition of two vectors and multiplication of a vector by a scalar are defined by the axioms of a vector space. A knottier problem is the multiplication of two vectors. Two different vector products exist:

Dot Product

The dot product of two vectors gives a scalar. It is defined by
\begin{eqnarray}
\vec{v}.\vec{w}=|\vec{v}||\vec{w}|\cos(\theta)
\end{eqnarray}
where \(\theta=\measuredangle \vec{v} \vec{w}\) is the angle between the two vectors.
Note that dot product of a vector with itself is
\begin{eqnarray}
\vec{v}.\vec{v}=|\vec{v}||\vec{v}|\cos(0)=|\vec{v}|^2
\end{eqnarray}
hence the norm can be expressed as
\begin{eqnarray}
|\vec{v}|=\sqrt{\vec{v}.\vec{v}}
\end{eqnarray}
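As a quick worked example (the numbers are chosen purely for illustration): if \(|\vec{v}|=2\), \(|\vec{w}|=3\) and \(\theta=60^\circ\), then

```latex
\begin{eqnarray}
\vec{v}.\vec{w} = |\vec{v}||\vec{w}|\cos(\theta) = 2 \cdot 3 \cdot \cos(60^\circ) = 6 \cdot \tfrac{1}{2} = 3
\end{eqnarray}
```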

Cross Product

The cross product of two vectors gives another vector
\begin{eqnarray}
\vec{v} \times \vec{w} = \vec{z}
\end{eqnarray}
where the norm of the resulting vector $\vec{z}$ is given by
\begin{eqnarray}
|\vec{z}|=|\vec{v}||\vec{w}||\sin(\theta)|
\end{eqnarray}
and its direction is given by the “right-hand rule”. Note that the right-hand rule implies that
\begin{eqnarray}
\vec{v}\times \vec{w} = -\vec{w}\times \vec{v}
\end{eqnarray}
hence the cross product is not commutative; it is anticommutative.

The cross product is defined only for three-dimensional vectors.

Vector Bases in three dimensional spaces

In three-dimensional space, we can pick three arbitrary vectors and express every other vector as a linear combination of these three. The only restriction on these three vectors is that they must be linearly independent.
Let us pick three linearly independent vectors \(\vec{e_1}, \vec{e_2}, \vec{e_3} \). Then any other vector \(\vec{w}\) of the space can be written as
\begin{eqnarray}
\vec{w}=\alpha_1 \vec{e_1} + \alpha_2 \vec{e_2} + \alpha_3 \vec{e_3}
\end{eqnarray}
The set \(E=\{\vec{e_1}, \vec{e_2}, \vec{e_3}\} \) is called a basis and the scalars \(\alpha_1,\alpha_2,\alpha_3\) are called “the components of the vector \(\vec{w}\) in the basis \(E\)”. An equivalent notation for this expansion is
\begin{eqnarray}
\vec{w} =\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3
\end{bmatrix}_E
\end{eqnarray}

Note that we can pick a completely different set of basis vectors as \(U=\{\vec{u_1}, \vec{u_2}, \vec{u_3}\} \). In this new basis, our old vector \(\vec{w}\) will be expressed as
\begin{eqnarray}
\vec{w}=\beta_1 \vec{u_1} + \beta_2 \vec{u_2} + \beta_3 \vec{u_3}
\end{eqnarray}
so it will have a completely different representation:
\begin{eqnarray}
\vec{w} =\begin{bmatrix}
\beta_1\\
\beta_2\\
\beta_3
\end{bmatrix}_U
\end{eqnarray}
Both representations describe the same vector:
\begin{eqnarray}
\vec{w} =\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3
\end{bmatrix}_E  = \begin{bmatrix}
\beta_1\\
\beta_2\\
\beta_3
\end{bmatrix}_U
\end{eqnarray}
Naturally, the basis vectors are expressed in their own basis as
\begin{eqnarray}
\vec{e_1} =\begin{bmatrix}
1\\
0\\
0
\end{bmatrix}_E,   \vec{e_2} = \begin{bmatrix}
0\\
1\\
0
\end{bmatrix}_E,   \vec{e_3} = \begin{bmatrix}
0\\
0\\
1
\end{bmatrix}_E
\end{eqnarray}
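The fact that one vector has different components in different bases can be checked numerically. Below is a minimal sketch; the bases and the numbers are made up for illustration, and vectors are represented as plain tuples of components in some common reference frame.

```python
# Basis E: the standard basis; basis U: each vector of E scaled by 2.
E = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
U = [(2.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 2.0)]

def combine(coeffs, basis):
    """Form the linear combination sum_i coeffs[i] * basis[i]."""
    return tuple(sum(c * b[k] for c, b in zip(coeffs, basis))
                 for k in range(3))

# The same vector w, with components (2, 4, 6) in E and (1, 2, 3) in U.
w_in_E = combine((2.0, 4.0, 6.0), E)
w_in_U = combine((1.0, 2.0, 3.0), U)
print(w_in_E == w_in_U)  # True: different components, same vector
```

Note that the components change while the vector itself does not, exactly as in the equality of the two column representations above.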

Orthonormal Case

In the orthonormal case, the basis vectors are orthonormal, i.e., they are of unit length (normal)
\begin{eqnarray}
|\vec{e_i}|=1
\end{eqnarray}
and orthogonal to each other
\begin{eqnarray}
\vec{e_i}.\vec{e_j}=\delta_{ij}
\end{eqnarray}
The dot product can now be expressed in components. If we define the vectors \(\vec{v}\), \(\vec{w}\) in the basis \(E\) as
\begin{eqnarray}
\vec{v}=a_1 \vec{e_1} +a_2 \vec{e_2} +a_3 \vec{e_3} \\
\vec{w}=b_1 \vec{e_1} +b_2 \vec{e_2} +b_3 \vec{e_3}
\end{eqnarray}
Then the dot product of them is
\begin{eqnarray}
\vec{v}. \vec{w} &=& a_1b_1 \vec{e_1}. \vec{e_1}+ a_1b_2 \vec{e_1}. \vec{e_2}+ a_1b_3 \vec{e_1}. \vec{e_3}+ \\
&& a_2b_1 \vec{e_2}. \vec{e_1}+ a_2b_2 \vec{e_2}. \vec{e_2}+ a_2b_3 \vec{e_2}. \vec{e_3}+ \\
&& a_3b_1 \vec{e_3}. \vec{e_1}+ a_3b_2 \vec{e_3}. \vec{e_2}+ a_3b_3 \vec{e_3}. \vec{e_3}
\end{eqnarray}
Using the orthonormality relations above, this simplifies to
\begin{eqnarray}
\vec{v}. \vec{w}=a_1 b_1 + a_2 b_2 + a_3 b_3
\end{eqnarray}
Or, in matrix notation,
\begin{eqnarray}
\vec{v}. \vec{w}= \vec{v}^T \vec{w}= \begin{bmatrix}
a_1 & a_2 & a_3
\end{bmatrix} \begin{bmatrix}
b_1\\
b_2\\
b_3
\end{bmatrix} =a_1 b_1 + a_2 b_2 + a_3 b_3
\end{eqnarray}
Please note carefully that this formula for the dot product is valid only in orthonormal bases, so do not use it in non-orthonormal bases. The fully general formula will be given in the next subsection.

The norm in orthonormal basis can be expressed as
\begin{eqnarray}
|\vec{v}|=\sqrt{\vec{v}.\vec{v}}=\sqrt{a_1^2 + a_2^2 + a_3^2}
\end{eqnarray}
The angle \(\theta\) between \(\vec{v}\) and \(\vec{w}\) can be found from the dot product definition:
\begin{eqnarray}
\cos(\theta) &=& \frac{\vec{v}.\vec{w}}{|\vec{v}||\vec{w}|}\\
&=& \frac{a_1 b_1 + a_2 b_2 + a_3 b_3}{\sqrt{a_1^2 + a_2^2 + a_3^2} \sqrt{b_1^2 + b_2^2 + b_3^2 } }
\end{eqnarray}
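The component formulas above for the dot product, norm, and angle can be sketched in a few lines of code. This assumes an orthonormal basis; the function names and sample vectors are our own choices for illustration.

```python
import math

def dot(v, w):
    """Dot product in an orthonormal basis: a1*b1 + a2*b2 + a3*b3."""
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    """Norm via |v| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def angle(v, w):
    """Angle between v and w, from cos(theta) = v.w / (|v| |w|)."""
    return math.acos(dot(v, w) / (norm(v) * norm(w)))

v = (1.0, 0.0, 0.0)
w = (1.0, 1.0, 0.0)
print(dot(v, w))                  # 1.0
print(norm(w))                    # sqrt(2), about 1.414
print(math.degrees(angle(v, w)))  # about 45 degrees
```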
As for cross product, we have
\begin{eqnarray}
\vec{v} \times \vec{w} &=& a_1b_1 \vec{e_1} \times \vec{e_1}+ a_1b_2 \vec{e_1}\times \vec{e_2}+ a_1b_3 \vec{e_1} \times \vec{e_3}+ \\
&& a_2b_1 \vec{e_2}\times \vec{e_1}+ a_2b_2 \vec{e_2} \times \vec{e_2}+ a_2b_3 \vec{e_2} \times \vec{e_3}+ \\
&& a_3b_1 \vec{e_3} \times \vec{e_1}+ a_3b_2 \vec{e_3}\times \vec{e_2}+ a_3b_3 \vec{e_3} \times \vec{e_3}
\end{eqnarray}
The terms of the form \(\vec{e_i} \times \vec{e_i}\) vanish, leaving
\begin{eqnarray}
\vec{v} \times \vec{w} &=& a_1b_2 \vec{e_1}\times \vec{e_2}+ a_1b_3 \vec{e_1} \times \vec{e_3}+ \\
&& a_2b_1 \vec{e_2}\times \vec{e_1}+ a_2b_3 \vec{e_2} \times \vec{e_3}+ \\
&& a_3b_1 \vec{e_3} \times \vec{e_1}+ a_3b_2 \vec{e_3}\times \vec{e_2}
\end{eqnarray}
and anticommutativity (\(\vec{e_j} \times \vec{e_i} = -\vec{e_i} \times \vec{e_j}\)) lets us collect the remaining terms:
\begin{eqnarray}
\vec{v} \times \vec{w} &=& (a_1b_2-a_2b_1) \vec{e_1}\times \vec{e_2}+ (a_1b_3-a_3b_1) \vec{e_1} \times \vec{e_3}
+ (a_2b_3-a_3b_2) \vec{e_2} \times \vec{e_3} \nonumber
\end{eqnarray}
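If the basis is additionally right-handed, so that \(\vec{e_1}\times\vec{e_2}=\vec{e_3}\), \(\vec{e_2}\times\vec{e_3}=\vec{e_1}\) and \(\vec{e_3}\times\vec{e_1}=\vec{e_2}\), this reduces to the familiar component formula, sketched below (the function name and sample vectors are our own):

```python
def cross(v, w):
    """Cross product components in a right-handed orthonormal basis."""
    a1, a2, a3 = v
    b1, b2, b3 = w
    return (a2 * b3 - a3 * b2,
            a3 * b1 - a1 * b3,
            a1 * b2 - a2 * b1)

e1, e2, e3 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
print(cross(e1, e2) == e3)  # True: e1 x e2 = e3

# Anticommutativity: v x w = -(w x v)
v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(cross(v, w) == tuple(-c for c in cross(w, v)))  # True
```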

Non-Orthonormal Case

In the non-orthonormal case, the formulas become much more complicated.
Let us assume that we want to use three non-orthonormal vectors \(\vec{u_1}, \vec{u_2}, \vec{u_3}\) as a basis. This means that we have
\begin{eqnarray}
\vec{u_i}.\vec{u_j}=g_{ij}
\end{eqnarray}
instead of the Kronecker delta relation of the orthonormal case.
If we define the vectors \(\vec{v}\), \(\vec{w}\) in the basis \(U\) as
\begin{eqnarray}
\vec{v}=a_1 \vec{u_1} +a_2 \vec{u_2} +a_3 \vec{u_3} \\
\vec{w}=b_1 \vec{u_1} +b_2 \vec{u_2} +b_3 \vec{u_3}
\end{eqnarray}
Then the dot product of them is
\begin{eqnarray}
\vec{v}. \vec{w} &=& a_1b_1 \vec{u_1}. \vec{u_1}+ a_1b_2 \vec{u_1}. \vec{u_2}+ a_1b_3 \vec{u_1}. \vec{u_3}+ \\
&& a_2b_1 \vec{u_2}. \vec{u_1}+ a_2b_2 \vec{u_2}. \vec{u_2}+ a_2b_3 \vec{u_2}. \vec{u_3}+ \\
&& a_3b_1 \vec{u_3}. \vec{u_1}+ a_3b_2 \vec{u_3}. \vec{u_2}+ a_3b_3 \vec{u_3}. \vec{u_3}\\
&=& g_{11} a_1b_1 + g_{12} a_1b_2 + g_{13} a_1b_3 + \\
&& g_{21} a_2b_1 + g_{22} a_2b_2 + g_{23} a_2b_3 + \\
&& g_{31} a_3b_1 + g_{32} a_3b_2 + g_{33} a_3b_3 \\
&=& \sum_{i=1}^3 \sum_{j=1}^3 g_{ij} a_i b_j
\end{eqnarray}
and the norm equation becomes
\begin{eqnarray}
|\vec{v}|=\sqrt{\vec{v}.\vec{v}}=\sqrt{\sum_{i=1}^3 \sum_{j=1}^3 g_{ij} a_i a_j}
\end{eqnarray}
and the angle between the two vectors \(\vec{v}\), \(\vec{w}\) becomes
\begin{eqnarray}
\cos(\theta) &=& \frac{\vec{v}.\vec{w}}{|\vec{v}||\vec{w}|}\\
&=& \frac{\sum_{i=1}^3 \sum_{j=1}^3 g_{ij} a_i b_j}{\sqrt{\sum_{i=1}^3 \sum_{j=1}^3 g_{ij} a_i a_j} \sqrt{\sum_{i=1}^3 \sum_{j=1}^3 g_{ij} b_i b_j } }
\end{eqnarray}
As can be seen, the non-orthonormal formulas are considerably more complicated than their orthonormal counterparts.
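A sketch of this metric-weighted dot product follows. The matrix \(g\) below is a made-up example: \(\vec{u_1}\) and \(\vec{u_2}\) are unit vectors at \(60^\circ\) to each other, and \(\vec{u_3}\) is a unit vector orthogonal to both.

```python
import math

# Hypothetical Gram matrix g[i][j] = u_i . u_j for a skewed basis.
g = [[1.0, 0.5, 0.0],
     [0.5, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

def dot_g(a, b):
    """Dot product in a non-orthonormal basis: sum_ij g_ij a_i b_j."""
    return sum(g[i][j] * a[i] * b[j]
               for i in range(3) for j in range(3))

def norm_g(a):
    """Norm via |v| = sqrt(v . v), using the metric-weighted dot product."""
    return math.sqrt(dot_g(a, a))

# |u_1 + u_2|^2 = g11 + g12 + g21 + g22 = 3, so the norm is sqrt(3),
# not the sqrt(2) that the orthonormal formula would wrongly give.
print(norm_g((1.0, 1.0, 0.0)))  # about 1.732
```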

Expressing a vector in a basis

All problems of physics are described in some sort of setting. We pick our origin and define our basis vectors using the features of this setting. It is impossible to describe a vector, let alone a basis, in a completely featureless space.

For example, if we want to describe locations around Istanbul, we may choose our origin at the Ayasofya museum. Then the first basis vector can be chosen from Ayasofya to Sultanahmet, and the second basis vector from Ayasofya to Yenicami.
\subsection{Basis Independence}

\section{Basis change between two orthonormal bases}
Consider two left-handed orthonormal bases \(\vec{u_1}, \vec{u_2}, \vec{u_3}\) and \(\vec{v_1}, \vec{v_2}, \vec{v_3}\). We can write
\begin{eqnarray}
\vec{u_1} = (\vec{u_1}.\vec{v_1})\, \vec{v_1} + (\vec{u_1}.\vec{v_2})\, \vec{v_2} + (\vec{u_1}.\vec{v_3})\, \vec{v_3}
\end{eqnarray}