
3. Vector spaces

The lecture on vector spaces consists of three parts:

  3.1. Definition and basis dependence

  3.2. Properties of a vector space

  3.3. Matrix representation of vectors

and at the end of this lecture note, there is a set of corresponding exercises.


The contents of this lecture are summarised in the following videos:

  1. Vector spaces: Introduction

  2. Operations in vector spaces

  3. Properties of vector spaces

Total video length: ~16 minutes

3.1. Definition and basis dependence

A vector is a mathematical object characterised by both a magnitude and a direction, that is, an orientation in a given space.

We can express a vector in terms of its individual components. Let's assume we have an $n$-dimensional space, meaning that the vector $\vec{v}$ can be oriented in different ways along each of its $n$ dimensions. The expression of $\vec{v}$ in terms of its components is

$$\vec{v} = (v_1, v_2, \ldots, v_n) .$$

We will denote by $\mathcal{V}$ the vector space composed of all possible vectors of the above form.

The components of a vector, $v_i$, can be real numbers or complex numbers, depending on whether we have a real or a complex vector space.
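As a concrete illustration (a minimal sketch in NumPy; the array values are arbitrary examples, not taken from the lecture), a vector is stored simply as the list of its components, which may be real or complex:

```python
import numpy as np

# A real three-dimensional vector, stored as its components in an (implicit) basis
v_real = np.array([1.0, -2.0, 0.5])

# A complex vector: in a complex vector space the components may be complex numbers
v_complex = np.array([1.0 + 2.0j, 0.0, -1.0j])

print(v_real[0])        # first component v_1
print(v_complex.dtype)  # complex128
```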

Vector basis

Note that the above expression of $\vec{v}$ in terms of its components assumes that we are using a specific basis. It is important to recall that the same vector can be expressed in terms of different bases. A vector basis is a set of vectors that can be used to generate all the elements of a vector space.

For example, a possible basis of $\mathcal{V}$ could be denoted by $\{\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n\}$, and we can write a generic vector $\vec{v}$ as

$$\vec{v} = v_1\,\vec{e}_1 + v_2\,\vec{e}_2 + \ldots + v_n\,\vec{e}_n .$$

However, one could choose a different basis, denoted by $\{\vec{e}^{\,\prime}_1, \vec{e}^{\,\prime}_2, \ldots, \vec{e}^{\,\prime}_n\}$, where the same vector $\vec{v}$ would be expressed in terms of a different set of components:

$$\vec{v} = v'_1\,\vec{e}^{\,\prime}_1 + v'_2\,\vec{e}^{\,\prime}_2 + \ldots + v'_n\,\vec{e}^{\,\prime}_n .$$

Thus, while the vector remains the same, the values of its components depend on the specific choice of basis.

The most common basis is the Cartesian basis, where for example for $n = 3$:

$$\hat{e}_1 = (1, 0, 0), \qquad \hat{e}_2 = (0, 1, 0), \qquad \hat{e}_3 = (0, 0, 1) .$$

The elements of a vector basis must be linearly independent from one another, meaning that none of them can be expressed as a linear combination of the other basis vectors.

We can consider one example in the two-dimensional real vector space $\mathbb{R}^2$, namely the coordinate plane, shown below.

[Figure: the same vector drawn in the coordinate plane and decomposed in two different bases (left panel: Cartesian basis; right panel: an alternative basis).]

In this figure, you can see how the same vector can be expressed in two different bases. In the first one (left panel), the Cartesian basis is used, giving one set of components. In the second basis (right panel), the values of the components are different, while the magnitude and direction of the vector remain unchanged.
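As a minimal illustration of this basis dependence (sketched here with NumPy; the numerical values are arbitrary and not those of the figure), the same vector can be expressed in the Cartesian basis and in a basis rotated by 45 degrees:

```python
import numpy as np

# Components of a vector in the Cartesian basis e1, e2 (arbitrary example values)
v = np.array([2.0, 1.0])

# An alternative orthonormal basis: the Cartesian basis rotated by 45 degrees
theta = np.pi / 4
e1p = np.array([np.cos(theta), np.sin(theta)])
e2p = np.array([-np.sin(theta), np.cos(theta)])

# For an orthonormal basis, the new components are the projections onto e1', e2'
v_prime = np.array([v @ e1p, v @ e2p])

print(v_prime)                                      # different components...
print(np.linalg.norm(v), np.linalg.norm(v_prime))   # ...but the same magnitude
```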

For many problems, both in mathematics and in physics, the appropriate choice of the vector space basis may significantly simplify the solution process.

3.2. Properties of a vector space

You might already be familiar with performing various operations between vectors, so in this course let us review the essential operations that are relevant for starting to work with quantum mechanics:

Addition

I can add two vectors $\vec{a}$ and $\vec{b}$ to produce a third vector,

$$\vec{c} = \vec{a} + \vec{b} .$$

As with scalar addition, vectors also satisfy the commutative property,

$$\vec{a} + \vec{b} = \vec{b} + \vec{a} .$$

Vector addition can be carried out in terms of their components,

$$\vec{c} = \vec{a} + \vec{b} = (a_1 + b_1,\, a_2 + b_2,\, \ldots,\, a_n + b_n) .$$
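A quick numerical check of these statements (a sketch with arbitrary example vectors, not values from the lecture):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

c = a + b                              # component-wise addition: [5. 7. 9.]
print(c)
print(np.array_equal(a + b, b + a))    # commutative property: True
```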

Scalar multiplication

I can multiply a vector by a scalar number $\lambda$ (either real or complex) to produce another vector,

$$\vec{c} = \lambda\,\vec{a} = (\lambda a_1,\, \lambda a_2,\, \ldots,\, \lambda a_n) .$$

Addition and scalar multiplication of vectors are both associative and distributive, so the following relations hold:

$$(\lambda + \mu)\,\vec{a} = \lambda\,\vec{a} + \mu\,\vec{a}, \qquad \lambda\,(\vec{a} + \vec{b}) = \lambda\,\vec{a} + \lambda\,\vec{b}, \qquad \lambda\,(\mu\,\vec{a}) = (\lambda\,\mu)\,\vec{a} .$$
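These distributive and associative relations can likewise be verified numerically (again a sketch with arbitrary example values):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
lam, mu = 2.0, -0.5

print(lam * a)                                        # scalar multiplication gives another vector
print(np.allclose((lam + mu) * a, lam * a + mu * a))  # distributive over scalar addition: True
print(np.allclose(lam * (a + b), lam * a + lam * b))  # distributive over vector addition: True
print(np.allclose(lam * (mu * a), (lam * mu) * a))    # associativity of scalar multiplication: True
```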

Vector products

In addition to multiplying a vector by a scalar, as mentioned above, one can also multiply two vectors with each other. There are two types of vector products: the scalar product, where the end result is a scalar (so just a number), and the vector (cross) product, where the end result is another vector.

Scalar product of vectors

The scalar product of two vectors $\vec{a}$ and $\vec{b}$ is given by

$$\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + \ldots + a_n b_n .$$

Note that since the scalar product is just a number, its value will not depend on the specific basis in which we express the vectors: the scalar product is said to be basis-independent. The scalar product is also found via

$$\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}| \cos\theta ,$$

with $\theta$ the angle between the vectors.
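In NumPy the component formula is what np.dot computes, and the angle form can be recovered from it (a sketch with arbitrary example vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

dot_ab = np.dot(a, b)                  # a1*b1 + a2*b2 + a3*b3 = 11.0
print(dot_ab)

# Angle between the vectors, from the relation a.b = |a| |b| cos(theta)
theta = np.arccos(dot_ab / (np.linalg.norm(a) * np.linalg.norm(b)))
print(np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta))   # reproduces 11.0
```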

Cross product

The vector product (or cross product) between two vectors $\vec{a}$ and $\vec{b}$ is given by

$$\vec{a} \times \vec{b} = |\vec{a}|\,|\vec{b}| \sin\theta\; \hat{n} ,$$

where $|\vec{a}|$ (and likewise for $|\vec{b}|$) is the norm of the vector $\vec{a}$, $\theta$ is the angle between the two vectors, and $\hat{n}$ is a unit vector which is perpendicular to the plane that contains $\vec{a}$ and $\vec{b}$. Note that this cross product can only be defined in three-dimensional vector spaces. The resulting vector $\vec{c} = \vec{a} \times \vec{b}$ will have as components

$$c_1 = a_2 b_3 - a_3 b_2, \qquad c_2 = a_3 b_1 - a_1 b_3, \qquad c_3 = a_1 b_2 - a_2 b_1 .$$
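The same component expressions are implemented by np.cross (example values chosen so the result is easy to read):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

c = np.cross(a, b)                   # (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
print(c)                             # [0. 0. 1.]
print(np.dot(c, a), np.dot(c, b))    # 0.0 0.0 -> c is perpendicular to both a and b
```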

Unit vector and orthonormality

Unit vector

A special vector is the unit vector, which has a norm of 1 by definition. A unit vector is often denoted with a hat, rather than an arrow ($\hat{a}$ instead of $\vec{a}$). To find the unit vector in the direction of an arbitrary vector $\vec{a}$, we divide by the norm:

$$\hat{a} = \frac{\vec{a}}{|\vec{a}|} .$$

Orthonormality

Two vectors are said to be orthonormal if they are perpendicular (orthogonal) and both are unit vectors.
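A small numerical illustration of normalisation and orthonormality (arbitrary example vector):

```python
import numpy as np

a = np.array([3.0, 4.0])
a_hat = a / np.linalg.norm(a)             # unit vector along a: [0.6, 0.8]

b_hat = np.array([-a_hat[1], a_hat[0]])   # a_hat rotated by 90 degrees

print(np.linalg.norm(a_hat), np.linalg.norm(b_hat))  # 1.0 1.0 -> both are unit vectors
print(np.dot(a_hat, b_hat))                          # 0.0     -> perpendicular, hence orthonormal
```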

Now we are ready to define in a more formal way what vector spaces are, an essential concept for the description of quantum mechanics.

The main properties

The main properties of vector spaces are the following:

A vector space is complete upon vector addition. This property means that if two arbitrary vectors $\vec{a}$ and $\vec{b}$ are elements of a given vector space $\mathcal{V}$, then their addition should also be an element of the same vector space:

$$\vec{a} + \vec{b} = \vec{c} \in \mathcal{V} .$$

A vector space is complete upon scalar multiplication. This property means that when I multiply one arbitrary vector $\vec{a}$, element of the vector space $\mathcal{V}$, by a general scalar $\lambda$, the result is another vector which also belongs to the same vector space:

$$\lambda\,\vec{a} = \vec{c} \in \mathcal{V} .$$

The property that a vector space is complete upon scalar multiplication and vector addition is also known as the closure condition.

There exists a null element $\vec{0}$ such that $\vec{a} + \vec{0} = \vec{a}$.

Inverse element: for each vector $\vec{a}$ there exists another element of the same vector space, $-\vec{a}$, such that their addition results in the null element, $\vec{a} + (-\vec{a}) = \vec{0}$. This element is called the inverse element.

A vector space often comes equipped with various multiplication operations between vectors, such as the scalar product mentioned above (also known as the inner product), but also many other operations such as the vector product or the tensor product. There are also many other properties, but for what we are interested in right now, these are sufficient.
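As a lightweight sanity check (a sketch, not a proof; the vectors are arbitrary examples), one can verify these properties for real component vectors in NumPy:

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.0, 3.0, 2.0])
lam = -1.5
zero = np.zeros(3)                       # the null element

print((a + b).shape == a.shape)          # closure under vector addition: True
print((lam * a).shape == a.shape)        # closure under scalar multiplication: True
print(np.array_equal(a + zero, a))       # a + 0 = a: True
print(np.array_equal(a + (-a), zero))    # a + (-a) = 0 (inverse element): True
```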

3.3. Matrix representation of vectors

It is advantageous to represent vectors with a notation suitable for matrix manipulation and operations. As we will show in the next lectures, the operations involving states in quantum systems can be expressed in the language of linear algebra.

First of all, let us remind ourselves how we express vectors in the standard Euclidean space. In two dimensions, the position $\vec{r}$ of a point, when making explicit the Cartesian basis vectors, reads

$$\vec{r} = x\,\hat{e}_1 + y\,\hat{e}_2 .$$

As mentioned above, the unit vectors $\hat{e}_1$ and $\hat{e}_2$ form an orthonormal basis of this vector space, and we call $x$ and $y$ the components of $\vec{r}$ with respect to the directions spanned by the basis vectors.

Recall also that the choice of basis vectors is not unique: we can use any other pair of orthonormal unit vectors $\hat{e}^{\,\prime}_1$ and $\hat{e}^{\,\prime}_2$, and express the vector in terms of these new basis vectors as

$$\vec{r} = x'\,\hat{e}^{\,\prime}_1 + y'\,\hat{e}^{\,\prime}_2 ,$$

with $x' = \vec{r} \cdot \hat{e}^{\,\prime}_1$ and $y' = \vec{r} \cdot \hat{e}^{\,\prime}_2$. So, while the vector itself does not depend on the basis, the values of its components are basis dependent.

We can also express the vector $\vec{r}$ in the following form,

$$\vec{r} = \begin{pmatrix} x \\ y \end{pmatrix} ,$$

which is known as a column vector. Note that this notation assumes a specific choice of basis vectors, which is left implicit, and displays only the information on its components along this specific basis.

For instance, if we had chosen the other set of basis vectors $\hat{e}^{\,\prime}_1$ and $\hat{e}^{\,\prime}_2$, the components would be $x'$ and $y'$, and the corresponding column vector representing the same vector in such case would be given by

$$\vec{r} = \begin{pmatrix} x' \\ y' \end{pmatrix} .$$

We also know that Euclidean space is equipped with a scalar vector product. The scalar product of two vectors $\vec{a}$ and $\vec{b}$ in 2D Euclidean space is given by

$$\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}| \cos\theta ,$$

where $|\vec{a}|$ and $|\vec{b}|$ indicate the magnitude (length) of the vectors and $\theta$ indicates their relative angle. Note that the scalar product of two vectors is just a number, and thus it must be independent of the choice of basis.

The same scalar product can also be expressed in terms of the components of $\vec{a}$ and $\vec{b}$. When using the $\{\hat{e}_1, \hat{e}_2\}$ basis, the scalar product will be given by

$$\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 .$$

Note that the same result would be obtained if the $\{\hat{e}^{\,\prime}_1, \hat{e}^{\,\prime}_2\}$ basis had been chosen instead:

$$\vec{a} \cdot \vec{b} = a'_1 b'_1 + a'_2 b'_2 .$$
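This basis independence can be checked numerically by rotating the components of both vectors into a new orthonormal basis and recomputing the scalar product (arbitrary example values):

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, -1.0])

theta = 0.3                                      # arbitrary rotation angle of the basis
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # maps old components to new components

a_prime, b_prime = R @ a, R @ b                  # components in the rotated basis

print(np.dot(a, b), np.dot(a_prime, b_prime))    # same number in both bases
```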

The scalar product of two vectors can also be expressed, taking into account the properties of matrix multiplication, in the following form,

$$\vec{a} \cdot \vec{b} = \begin{pmatrix} a_1 & a_2 \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = a_1 b_1 + a_2 b_2 ,$$

where here we say that the vector $\vec{a}$ is represented by a row vector.

Therefore, we see that the scalar product of vectors in Euclidean space can be expressed as the matrix multiplication of row and column vectors. The same formalism, as we will see in the next class, can be applied to the case of Hilbert spaces in quantum mechanics.
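The row-times-column view corresponds directly to reshaping component arrays into matrices, as in this small sketch (arbitrary example values):

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])

a_row = a.reshape(1, 2)     # row-vector representation of a
b_col = b.reshape(2, 1)     # column-vector representation of b

print(a_row @ b_col)        # [[11.]] -> 1x1 matrix containing the scalar product
print(np.dot(a, b))         # 11.0   -> the same number
```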


3.4. Problems

1) [😀] Find a unit vector parallel to the sum of and , where we have defined and .


2) [😀] The vectors and may be written in parametric form as functions of the parameter as follows: and . Evaluate the following derivatives with respect to the parameter :

(a) .

(b) .


3) [😓] Three non-zero vectors , and are such that is perpendicular to and is perpendicular to . Show that is perpendicular to . If the magnitudes of the vectors , and are in the ratio 1:2:4, find the angle between and .


4) [😀] Find the vector product and the triple product , where these three vectors are defined as and and