Further Attributes of a Euclidean Space

In this Chapter, we will describe additional geometric concepts available in a Euclidean space that will play an important role in our approach to Tensor Calculus. Even if some of these concepts are already familiar to you, you may find our particular approach to be different from what is found in most contemporary texts. Namely, we will continue to introduce concepts in a purely geometric context without any reference to coordinates. Thus, Euclidean spaces will continue to serve as absolute references for our analytical investigations.

3.1 The orientation of a basis

In a Euclidean space, a basis -- i.e. an ordered set of linearly independent vectors whose number matches the dimension of the space -- can be described as either positively or negatively oriented. The concept of orientation exists in each natural Euclidean dimension, i.e. one, two, and three, and is critical for several important objects that we will introduce in the future, including the Levi-Civita symbol $\varepsilon_{ijk}$ in Chapter 17 and the curl operator in Chapter 18.
Interestingly, the concept of orientation can be almost completely sidestepped, as was done in the predecessor to this book, Introduction to Tensor Analysis and the Calculus of Moving Surfaces, as well as numerous other texts on Tensor Calculus. However, this concept admits such an elegant geometric approach and thus fits so perfectly into our present narrative, that there is much to be gained by discussing it in significant detail.

3.1.1 In three dimensions

Let us begin our discussion in three dimensions where the concept of orientation is at its richest. Consider an ordered set of three linearly independent vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ and let $P$ be the plane spanned by $\mathbf{U}$ and $\mathbf{V}$.
[Figure (3.1)]
The plane $P$ splits the three-dimensional space into two half-spaces. Since the vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ are linearly independent, the vector $\mathbf{W}$ cannot lie in the plane $P$ and is therefore found in one or the other half-space.
[Figure (3.2)]
Within $P$, the vector $\mathbf{U}$ can be rotated towards $\mathbf{V}$ in one of two directions. In one direction, the rotation is less than $180^{\circ}$ and in the other -- greater than $180^{\circ}$. If $\mathbf{W}$ is found in the half-space from which the shorter rotation appears counterclockwise, then the set $\mathbf{U},\mathbf{V},\mathbf{W}$ is said to be positively oriented. Otherwise, it is negatively oriented. In the figure above, the set of vectors on the left is positively oriented, while the one on the right is negatively oriented.
The same criterion can also be expressed in the form of the right-hand rule: if you curl the fingers of your right hand in the direction of the shorter rotation from $\mathbf{U}$ to $\mathbf{V}$ and the thumb of that hand points towards the half-space containing $\mathbf{W}$, then the set is positively oriented. Note that the right-hand rule relies on the ability to choose the correct hand for the task. Mistakenly choosing the left hand would, naturally, lead to the wrong conclusion. Thus, the right-hand rule relies on the preexisting convention of which hand is right. If humanity lost its collective sense of right, there might be no reliable way of restoring it to its present meaning.
The same can be said of the notion of counterclockwise rotation in our initial formulation of the criterion. It therefore appears that any definition of orientation must rely on a preexisting convention. That said, it is, ultimately, unimportant to be able to define an absolute orientation. It is far more important to be able to tell whether two bases have the same orientation. For this less ambitious task, it is not necessary to know which hand is right or which direction is counterclockwise, as long as the same hand and the same direction are applied to each set of vectors.

3.1.2 In two dimensions

Consider two linearly independent vectors $\mathbf{U}$ and $\mathbf{V}$ in the plane. For reasons that will become apparent shortly, the task of defining orientation in the plane requires that we think of the plane as a standalone object rather than one embedded in a three-dimensional space. Then an ordered set of vectors $\mathbf{U},\mathbf{V}$ is positively oriented if the shorter rotation from $\mathbf{U}$ to $\mathbf{V}$ takes place in the counterclockwise direction. Otherwise, it is negatively oriented. The following figure shows a positively oriented set on the left and a negatively oriented set on the right.
[Figure (3.3)]
It must be clear from the above definition why it is essential to think of the plane as a standalone entity rather than a subset of a three-dimensional space. Indeed, a plane imagined in the context of the surrounding three-dimensional space can be viewed either "from above" or "from below". Then a set of vectors that is positively oriented when viewed "from above" will appear negatively oriented when viewed "from below", making the definition incomplete. This is one of the reasons why the concept of orientation applies only to complete sets of linearly independent vectors, i.e. sets whose count matches the dimension of the space.
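Although the connection between orientation and the determinant will not be made until later in this Chapter, the counterclockwise criterion can already be captured in a few lines of code. The following minimal sketch (Python with numpy; the function name orientation_2d is our own and not from the text) classifies an ordered pair of plane vectors by the sign of a $2\times2$ determinant, under the usual convention that the component axes themselves form a positively oriented pair.

```python
# A minimal sketch (not from the text): classifying the orientation of a
# pair of plane vectors by the sign of the 2x2 determinant, assuming the
# component axes form a positively oriented pair.
import numpy as np

def orientation_2d(U, V):
    """Return +1 if the ordered set U, V is positively oriented,
    -1 if negatively oriented (vectors assumed linearly independent)."""
    det = U[0] * V[1] - U[1] * V[0]
    return 1 if det > 0 else -1

U = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
print(orientation_2d(U, V))  # +1: the shorter rotation from U to V is counterclockwise
print(orientation_2d(V, U))  # -1: switching the order reverses the orientation
```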

3.1.3 In one dimension

A one-dimensional Euclidean space is a straight line. In order to define the orientation of a set consisting of a single nonzero vector, we must arbitrarily label one of the two directions on the straight line as positive. Then a set consisting of a vector that points in the positive direction is said to be positively oriented. Otherwise, the set is negatively oriented.
3.2 Orientation and the determinant

The notion of orientation is inextricably linked to the determinant. In, say, three dimensions, suppose that two ordered sets of linearly independent vectors $\mathbf{U},\mathbf{V},\mathbf{W}$ and $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ are related by the matrix
$$A=\left[\begin{array}{ccc} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{array}\right],\tag{3.4}$$
i.e.
$$\left[\begin{array}{c} \mathbf{U}^{\prime}\\ \mathbf{V}^{\prime}\\ \mathbf{W}^{\prime} \end{array}\right]=\left[\begin{array}{ccc} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{array}\right]\left[\begin{array}{c} \mathbf{U}\\ \mathbf{V}\\ \mathbf{W} \end{array}\right].\tag{3.5}$$
In words, the rows of $A$ consist of the components of the primed vectors in terms of the unprimed vectors. Given this relationship, the orientations of the two sets are the same when
$$\det A>0\tag{3.6}$$
and opposite when
$$\det A<0.\tag{3.7}$$
The great advantage of this criterion is its algebraic nature, which opens many doors, including the extension of the concept of orientation to higher dimensions as well as to general linear spaces.
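As an illustration, here is a minimal numerical sketch of the criterion (Python with numpy; the random example and all names are our own). The rows of each matrix hold the components of a set of vectors, and the relating matrix $A$ is recovered from the two sets.

```python
# A minimal numerical sketch of the determinant criterion: two bases have
# the same orientation exactly when det A > 0, where the rows of A express
# the primed vectors in terms of the unprimed ones.
import numpy as np

rng = np.random.default_rng(0)

B = rng.standard_normal((3, 3))   # rows: the unprimed vectors U, V, W
A = rng.standard_normal((3, 3))
B_primed = A @ B                  # rows: the primed vectors U', V', W'

# Recover A from the two sets and compare orientations by the sign of det A.
A_recovered = B_primed @ np.linalg.inv(B)
print(np.linalg.det(A_recovered) > 0)   # True iff the orientations match

# Switching two vectors in the unprimed set flips the sign of the
# determinant, and with it the relative orientation.
B_swapped = B[[1, 0, 2]]
print(np.linalg.det(B_primed @ np.linalg.inv(B_swapped)) > 0)  # opposite of the above
```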
We will now outline an intuitive geometric argument that ought to convince you of the validity of the determinant criterion. First, let us confirm that it yields the correct result when the sets $\mathbf{U},\mathbf{V},\mathbf{W}$ and $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ coincide, i.e.
$$\begin{aligned}\mathbf{U}&=\mathbf{U}^{\prime} &&\left(3.8\right)\\\mathbf{V}&=\mathbf{V}^{\prime} &&\left(3.9\right)\\\mathbf{W}&=\mathbf{W}^{\prime}, &&\left(3.10\right)\end{aligned}$$
and therefore have the same orientation. In this case, $A$ is the identity matrix and thus
$$\det A=1\tag{3.11}$$
which is positive, consistent with the two sets of vectors having the same orientation.
Next, let us consider an arbitrary set of vectors $\mathbf{U},\mathbf{V},\mathbf{W}$ that has the same orientation as $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$. We will present a continuous orientation-preserving evolution of the vectors $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$ into the vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$, such that the vectors $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$ maintain their linear independence throughout. Since in the course of this continuous evolution, $\det A$ cannot assume the value $0$ and has the eventual value $1$, we will conclude that the initial value of $\det A$ is also positive, as we set out to show. Filling in some of the details of this procedure is left as an exercise.
Denote the plane containing the vectors $\mathbf{U}$ and $\mathbf{V}$ by $P$. First, rigidly rotate the set $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ so that $\mathbf{U}^{\prime}$ points in the same direction as $\mathbf{U}$. Next, rigidly rotate $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ again, this time about the straight line shared by $\mathbf{U}$ and $\mathbf{U}^{\prime}$, until $\mathbf{V}^{\prime}$ is in the plane $P$ and in the same half-plane as $\mathbf{V}$ with respect to $\mathbf{U}$, so that the shorter rotation from $\mathbf{U}$ to $\mathbf{V}$ in the plane $P$ is in the same direction as from $\mathbf{U}^{\prime}$ to $\mathbf{V}^{\prime}$. Since the orientation of $\mathbf{U},\mathbf{V},\mathbf{W}$ is the same as that of $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$, the vector $\mathbf{W}^{\prime}$ will be found in the same half-space relative to $P$ as $\mathbf{W}$, and we can now make three final innocuous adjustments. Rotate $\mathbf{V}^{\prime}$ within $P$ so that it points in the same direction as $\mathbf{V}$, rotate $\mathbf{W}^{\prime}$ so that it points in the same direction as $\mathbf{W}$, and, finally, scale $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$ so that they coincide with $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$.
Since the evolution of $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$ was continuous, so was the evolution of $\det A$, and since $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$ maintained their linear independence, $\det A$ remained nonzero and thus maintained its sign. Finally, since the eventual value of $\det A$ is $1$, we conclude that its initial value was also positive, as we set out to prove.
For logical completeness, we must also show that if the orientations of $\mathbf{U},\mathbf{V},\mathbf{W}$ and $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ are opposite, then $\det A$ is negative. This case can be analyzed by reducing it to that of matching orientations. Namely, if the orientations of $\mathbf{U},\mathbf{V},\mathbf{W}$ and $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ are opposite, then the orientations of $\mathbf{V},\mathbf{U},\mathbf{W}$ -- where the first two elements were switched -- and $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ are the same. (Justifying this statement is left as an exercise.) Therefore, the determinant of the matrix $A^{\ast}$ that relates $\mathbf{V},\mathbf{U},\mathbf{W}$ and $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ is positive. Meanwhile, since $A$ and $A^{\ast}$ are related by a single column switch, their determinants differ in sign, i.e.
$$\det A^{\ast}=-\det A.\tag{3.12}$$
Therefore,
$$\det A<0,\tag{3.13}$$
and the demonstration is complete.
3.3 The determinant and the signed volume

The fact established in the previous Section -- that the sign of the determinant of the matrix $A$ relating two complete sets of linearly independent vectors indicates whether the two sets have the same or opposite orientations -- is a special case of a much more general statement. Namely, the determinant of $A$ represents the ratio of the signed volumes of the parallelepipeds formed by the two sets. The signed volume of the parallelepiped formed by vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ is defined to be its conventional volume if the set $\mathbf{U},\mathbf{V},\mathbf{W}$ is positively oriented and minus its conventional volume if it is negatively oriented.
Let us now outline the classical argument that proves this fact. Denote the two signed volumes by $\Omega^{\prime}$ and $\Omega$ and consider again the identity
$$\left[\begin{array}{c} \mathbf{U}^{\prime}\\ \mathbf{V}^{\prime}\\ \mathbf{W}^{\prime} \end{array}\right]=\left[\begin{array}{ccc} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{array}\right]\left[\begin{array}{c} \mathbf{U}\\ \mathbf{V}\\ \mathbf{W} \end{array}\right].\tag{3.5}$$
Our task is to show that
$$\Omega^{\prime}=\det\left(A\right)\Omega.\tag{3.14}$$
Fix the vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$, and therefore also the value $\Omega$, and vary only the matrix $A$ and, with it, the vectors $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$. Recall that the determinant is uniquely defined by the three properties:
1. The determinant is linear in each row of the matrix.
2. The determinant changes sign when two rows are switched.
3. The determinant of the identity matrix is $1$.
Meanwhile, observe that the signed volume $\Omega^{\prime}$ of the parallelepiped formed by the vectors $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$ is similarly defined by three properties:
1. $\Omega^{\prime}$ is linear with respect to each of the vectors $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$.
2. $\Omega^{\prime}$ changes sign when two vectors in the set $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ are switched.
3. $\Omega^{\prime}$ equals $\Omega$ when the set $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ coincides with $\mathbf{U},\mathbf{V},\mathbf{W}$, i.e. when $A$ is the identity matrix.
Notice the perfect correspondence between the two sets of governing properties. Thus, if we imagine that the vectors $\mathbf{U}^{\prime},\mathbf{V}^{\prime},\mathbf{W}^{\prime}$ initially coincide with $\mathbf{U},\mathbf{V},\mathbf{W}$ and are subsequently constructed by replacing vectors with appropriate linear combinations, the values of $\Omega^{\prime}$ and $\det A$ will change in identical ways. And since the identity
$$\Omega^{\prime}=\det\left(A\right)\Omega\tag{3.14}$$
is valid (by the third property) initially, it will be valid throughout the construction.
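The identity lends itself to a quick numerical check. The following minimal sketch (Python with numpy; the random example is ours) takes for granted the fact, discussed later in this Chapter, that the signed volume equals the determinant of the matrix whose rows hold the components of the three vectors in a positively oriented orthonormal basis.

```python
# A minimal numerical check of the identity Omega' = det(A) * Omega, with
# the signed volume of a parallelepiped computed as the determinant of the
# matrix whose rows are the three vectors.
import numpy as np

rng = np.random.default_rng(1)

def signed_volume(U, V, W):
    return np.linalg.det(np.array([U, V, W]))

B = rng.standard_normal((3, 3))   # rows: U, V, W
A = rng.standard_normal((3, 3))
B_primed = A @ B                  # rows: U', V', W'

omega = signed_volume(*B)
omega_primed = signed_volume(*B_primed)
print(np.isclose(omega_primed, np.linalg.det(A) * omega))   # True
```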
3.4 The volume of a parallelepiped

The disadvantage of the formula
$$\Omega^{\prime}=\det\left(A\right)\Omega\tag{3.14}$$
discussed in the previous Section is that it relates the volume of the parallelepiped to that of some other parallelepiped. Meanwhile, we would like to determine the volume of the parallelepiped formed by vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ without reference to another set of vectors.
In this Section, we will show that the square of the volume $\Omega$ equals the determinant of the familiar matrix
$$M=\left[\begin{array}{ccc} \mathbf{U}\cdot\mathbf{U} & \mathbf{U}\cdot\mathbf{V} & \mathbf{U}\cdot\mathbf{W}\\ \mathbf{V}\cdot\mathbf{U} & \mathbf{V}\cdot\mathbf{V} & \mathbf{V}\cdot\mathbf{W}\\ \mathbf{W}\cdot\mathbf{U} & \mathbf{W}\cdot\mathbf{V} & \mathbf{W}\cdot\mathbf{W} \end{array}\right],\tag{3.15}$$
that we have already encountered on a number of important occasions. Thus, $\left\vert\Omega\right\vert$, i.e. the conventional (unsigned) volume of the parallelepiped formed by vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$, is the square root of $\det M$, i.e.
$$\left\vert\Omega\right\vert=\sqrt{\det M}.\tag{3.16}$$
Since, as we demonstrated in Exercise 2.14, the matrix $M$ is positive definite for linearly independent $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$, its determinant is positive and thus the extraction of the square root is valid. The above formula is the key to the object known as the volume element and denoted by $\sqrt{Z}$ that will be introduced in Chapter 9.
In order to demonstrate this identity, let us introduce a basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$ which we will later assume to be orthonormal. The matrix $A$ that relates the vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ to the elements of the basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$ consists of the components of the vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ organized into rows, i.e.
$$A=\left[\begin{array}{ccc} U_{1} & U_{2} & U_{3}\\ V_{1} & V_{2} & V_{3}\\ W_{1} & W_{2} & W_{3} \end{array}\right].\tag{3.17}$$
(Note that, in the future, we will switch to superscripts to enumerate the components of a vector.)
According to the findings of the previous Section, $\det A$ is the ratio of the signed volume of the parallelepiped formed by $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ to that of the parallelepiped formed by $\mathbf{b}_{1}$, $\mathbf{b}_{2}$, and $\mathbf{b}_{3}$. By the familiar properties of the determinant, we have
$$\det\left(AA^{T}\right)=\det A\det A^{T}=\det{}^{2}A.\tag{3.18}$$
Thus, $\det\left(AA^{T}\right)$ equals the ratio of the squares of the volumes of the two parallelepipeds.
Now, consider the special case when the basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$ is orthonormal. Then the parallelepiped formed by $\mathbf{b}_{1}$, $\mathbf{b}_{2}$, and $\mathbf{b}_{3}$ is, in fact, a unit cube of volume $1$ and therefore $\det\left(AA^{T}\right)$ is simply the square of the volume of the parallelepiped formed by $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$. Carrying out the matrix product, we find
$$AA^{T}=\left[\begin{array}{ccc} U_{1}^{2}+U_{2}^{2}+U_{3}^{2} & U_{1}V_{1}+U_{2}V_{2}+U_{3}V_{3} & U_{1}W_{1}+U_{2}W_{2}+U_{3}W_{3}\\ U_{1}V_{1}+U_{2}V_{2}+U_{3}V_{3} & V_{1}^{2}+V_{2}^{2}+V_{3}^{2} & V_{1}W_{1}+V_{2}W_{2}+V_{3}W_{3}\\ U_{1}W_{1}+U_{2}W_{2}+U_{3}W_{3} & V_{1}W_{1}+V_{2}W_{2}+V_{3}W_{3} & W_{1}^{2}+W_{2}^{2}+W_{3}^{2} \end{array}\right].\tag{3.19}$$
Recalling the fact that the basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$ is orthonormal, we observe that the entries of $AA^{T}$ are the pairwise dot products of the vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ and therefore $AA^{T}$ equals $M$, i.e.
$$AA^{T}=\left[\begin{array}{ccc} \mathbf{U}\cdot\mathbf{U} & \mathbf{U}\cdot\mathbf{V} & \mathbf{U}\cdot\mathbf{W}\\ \mathbf{V}\cdot\mathbf{U} & \mathbf{V}\cdot\mathbf{V} & \mathbf{V}\cdot\mathbf{W}\\ \mathbf{W}\cdot\mathbf{U} & \mathbf{W}\cdot\mathbf{V} & \mathbf{W}\cdot\mathbf{W} \end{array}\right]=M.\tag{3.20}$$
Thus, its determinant is indeed the square of the volume of the parallelepiped formed by $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$, as we set out to prove.
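Before turning to the two-dimensional verification below, here is a minimal numerical sketch of the result (Python with numpy; the random example is ours): for vectors whose components are taken with respect to an orthonormal basis, $\sqrt{\det M}$ agrees with $\left\vert\det A\right\vert$.

```python
# A minimal numerical check that the Gram determinant gives the squared
# volume: sqrt(det M) equals |det A|, the unsigned volume of the
# parallelepiped, when the rows of A hold components in an orthonormal basis.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))   # rows: components of U, V, W

M = A @ A.T                        # Gram matrix of pairwise dot products
volume_from_gram = np.sqrt(np.linalg.det(M))
volume_from_det = abs(np.linalg.det(A))
print(np.isclose(volume_from_gram, volume_from_det))   # True
```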
Let us verify this statement in the case of a two-dimensional parallelogram formed by $\mathbf{U}$ and $\mathbf{V}$. Note that
$$\left\vert\begin{array}{cc} \mathbf{U}\cdot\mathbf{U} & \mathbf{U}\cdot\mathbf{V}\\ \mathbf{V}\cdot\mathbf{U} & \mathbf{V}\cdot\mathbf{V} \end{array}\right\vert=\left(\mathbf{U}\cdot\mathbf{U}\right)\left(\mathbf{V}\cdot\mathbf{V}\right)-\left(\mathbf{U}\cdot\mathbf{V}\right)^{2}.\tag{3.21}$$
Since
$$\begin{aligned}\mathbf{U}\cdot\mathbf{U}&=\operatorname{len}^{2}\mathbf{U}\text{,} &&\left(3.22\right)\\\mathbf{V}\cdot\mathbf{V}&=\operatorname{len}^{2}\mathbf{V}\text{, and} &&\left(3.23\right)\\\mathbf{U}\cdot\mathbf{V}&=\operatorname{len}\mathbf{U}\operatorname{len}\mathbf{V}\cos\gamma,&&\left(3.24\right)\end{aligned}$$
where $\gamma$ is the angle between $\mathbf{U}$ and $\mathbf{V}$, we have
$$\left\vert\begin{array}{cc} \mathbf{U}\cdot\mathbf{U} & \mathbf{U}\cdot\mathbf{V}\\ \mathbf{V}\cdot\mathbf{U} & \mathbf{V}\cdot\mathbf{V} \end{array}\right\vert=\operatorname{len}^{2}\mathbf{U}\operatorname{len}^{2}\mathbf{V}-\operatorname{len}^{2}\mathbf{U}\operatorname{len}^{2}\mathbf{V}\cos^{2}\gamma.\tag{3.25}$$
Finally, since $1-\cos^{2}\gamma=\sin^{2}\gamma$, we have
$$\det M=\left(\operatorname{len}\mathbf{U}\operatorname{len}\mathbf{V}\sin\gamma\right)^{2},\tag{3.26}$$
which is precisely the square of the area of the parallelogram formed by $\mathbf{U}$ and $\mathbf{V}$.
3.5 The cross product

Undoubtedly, the reader is familiar with the cross product, also known as the vector product. The cross product is an operation of exceptional utility in the three-dimensional Euclidean space where it finds numerous applications, particularly in mechanics, fluid dynamics, and electromagnetism. While it is common to introduce the cross product algebraically in terms of the components of the vectors, we will, consistent with our general approach, adopt a geometric definition as our starting point. The analytical definition of the cross product will be discussed, in full tensor terms, in Chapter 17.
In a three-dimensional space, suppose that two vectors $\mathbf{U}$ and $\mathbf{V}$ form an angle $\gamma$. Then their cross product $\mathbf{W}$, i.e.
$$\mathbf{W}=\mathbf{U}\times\mathbf{V},\tag{3.27}$$
is uniquely determined by the following three conditions. First, $\mathbf{W}$ is orthogonal to the plane spanned by $\mathbf{U}$ and $\mathbf{V}$. In other words, $\mathbf{W}$ is orthogonal to both $\mathbf{U}$ and $\mathbf{V}$. Second, the length of $\mathbf{W}$ is the product of the lengths of $\mathbf{U}$ and $\mathbf{V}$ and the sine of the angle $\gamma$ between them, i.e.
$$\operatorname{len}\mathbf{W}=\operatorname{len}\mathbf{U}\operatorname{len}\mathbf{V}\sin\gamma.\tag{3.28}$$
In geometric terms, this quantity equals the conventional area of the parallelogram formed by $\mathbf{U}$ and $\mathbf{V}$. Finally, of the two vectors that satisfy the first two conditions, $\mathbf{W}$ is the one for which the set $\mathbf{U},\mathbf{V},\mathbf{W}$ is positively oriented. In other words, $\mathbf{W}$ is selected by the right-hand rule: when the fingers of the right hand curl from $\mathbf{U}$ to $\mathbf{V}$ in the shorter direction, $\mathbf{W}$ points in the direction indicated by the thumb.
[Figure (3.29)]
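The three defining conditions are easy to test numerically. The following minimal sketch (Python with numpy; the concrete vectors are our own choice) uses the componentwise cross product supplied by np.cross and checks orthogonality, the area condition, and the orientation condition in turn.

```python
# A minimal sketch checking that the componentwise cross product satisfies
# the three geometric conditions above: orthogonality, length equal to the
# area of the parallelogram, and positive orientation of the set U, V, W.
import numpy as np

U = np.array([2.0, 0.0, 0.0])
V = np.array([1.0, 3.0, 0.0])
W = np.cross(U, V)

print(np.isclose(W @ U, 0.0), np.isclose(W @ V, 0.0))   # orthogonal to U and V

# len W = len U * len V * sin(gamma), the area of the parallelogram
cos_gamma = (U @ V) / (np.linalg.norm(U) * np.linalg.norm(V))
sin_gamma = np.sqrt(1 - cos_gamma**2)
area = np.linalg.norm(U) * np.linalg.norm(V) * sin_gamma
print(np.isclose(np.linalg.norm(W), area))               # True

# U, V, W is positively oriented: det of the matrix with rows U, V, W > 0
print(np.linalg.det(np.array([U, V, W])) > 0)            # True
```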
It immediately follows from this definition that the cross product is anticommutative, i.e.
$$\mathbf{U}\times\mathbf{V}=-\mathbf{V}\times\mathbf{U}.\tag{3.30}$$
The cross product of a vector with itself is defined to be zero
$$\mathbf{U}\times\mathbf{U}=\mathbf{0},\tag{3.31}$$
although this identity may also be seen as a consequence of the anticommutative property, according to which $\mathbf{U}\times\mathbf{U}$ equals $-\mathbf{U}\times\mathbf{U}$ and must, therefore, be $\mathbf{0}$. The cross product is not associative, i.e. generally speaking,
$$\mathbf{U}\times\left(\mathbf{V}\times\mathbf{W}\right)\neq\left(\mathbf{U}\times\mathbf{V}\right)\times\mathbf{W},\tag{3.32}$$
since the vector on the left is orthogonal to $\mathbf{U}$ while the vector on the right is not necessarily so. Thus, the cross product lacks the commutative and associative properties commonly satisfied by product-like operations.
Meanwhile, the cross product satisfies the associative law with respect to multiplication by a constant
$$\alpha\left(\mathbf{U}\times\mathbf{V}\right)=\left(\alpha\mathbf{U}\right)\times\mathbf{V},\tag{3.33}$$
as well as the distributive law
$$\mathbf{U}\times\left(\mathbf{V}_{1}+\mathbf{V}_{2}\right)=\mathbf{U}\times\mathbf{V}_{1}+\mathbf{U}\times\mathbf{V}_{2}.\tag{3.34}$$
While the associative law is easy to show, the distributive law poses somewhat of a challenge if we insist on proving it geometrically. An exercise at the end of this Chapter provides an elegant way of demonstrating this property by taking advantage of the distributive property of the dot product. The exercise uses the combination
$$\mathbf{U}\cdot\left(\mathbf{V}\times\mathbf{W}\right)\tag{3.35}$$
which equals the signed volume of the parallelepiped spanned by the three vectors. We observe that
$$\mathbf{U}\cdot\left(\mathbf{V}\times\mathbf{W}\right)=\mathbf{V}\cdot\left(\mathbf{W}\times\mathbf{U}\right)=\mathbf{W}\cdot\left(\mathbf{U}\times\mathbf{V}\right),\tag{3.36}$$
since each combination represents one and the same signed volume.
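The cyclic identity, too, is readily verified numerically. Here is a minimal sketch (Python with numpy; the random vectors are ours), which also checks that each triple product equals the determinant representation of the signed volume.

```python
# A minimal numerical check of identity (3.36): the three cyclic triple
# products agree, since each equals the same signed volume.
import numpy as np

rng = np.random.default_rng(3)
U, V, W = rng.standard_normal((3, 3))

t1 = U @ np.cross(V, W)
t2 = V @ np.cross(W, U)
t3 = W @ np.cross(U, V)
print(np.allclose([t1, t2], t3))                           # True
print(np.isclose(t1, np.linalg.det(np.array([U, V, W]))))  # equals the signed volume
```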
Finally, we note that the presented definition of the cross product has a clear problem with units of measurement. If length is measured in, say, meters, then the product $\operatorname{len}\mathbf{U}\operatorname{len}\mathbf{V}\sin\gamma$ has the units of square meters and therefore cannot serve as the length of another vector. Nevertheless, this issue will not affect the rest of our narrative and we will therefore leave it unaddressed.
3.6 Lengths, areas, and volumes

We have already established that a Euclidean space is endowed with the concept of length for straight segments. Of course, the concept of length can be easily extended to the areas of rectangles and the volumes of rectangular parallelepipeds.
[Figure (3.37)]
The area $A$ of a rectangle with sides of lengths $a$ and $b$ is given by
$$A=ab,\tag{3.38}$$
while the volume $V$ of a rectangular parallelepiped with sides of lengths $a$, $b$, and $c$ is given by
$$V=abc.\tag{3.39}$$
It is a straightforward task to extend these concepts to polygons and polyhedra, i.e. piecewise-straight shapes. For curved shapes, the story is more complicated, since we must first explain what we mean by the length of a curve, the area of a surface, and the volume of a curved solid.
We should note that the length of a curve can be well described on an intuitive geometric -- or, perhaps, physical -- level. A curve can be imagined as an inextensible malleable string, i.e. a string that changes its shape but does not stretch or shrink. Then the length of the curve can be understood as the Euclidean length of the string when it is pulled straight. This insight does not help us formulate a formal definition of length since the term inextensible relies on the notion of length in the first place. Nor does it provide us with a practical way of calculating length. It does, however, connect the concept of length to a physical object that we are all familiar with.
Unfortunately, no such intuitive insight is available for the area of a curved surface since -- as we will discover in the future -- most curved surfaces cannot be flattened without stretching or shrinking. However, as we have already stated in the case of a curve, even if there were such intuition, it would do little in the way of leading us to a reasonably rigorous definition. Thus, instead of pursuing rigor, we will give a description that, while not rigorous, is both intuitive and constructive, where by constructive we mean that it can be used, at least theoretically, to calculate the length, the area, or the volume of any curve, surface, or solid shape.
Let us illustrate our approach in the context of the area of a curved surface. One of the keys to our shared intuition with regard to area is its additive property: if a surface is divided into a number of parts, then the total area equals the sum of the areas of the parts.
[Figure (3.40)]
For example, in the above figure, the total area $S$ of the overall patch is given by
$$S=S_{1}+S_{2}+S_{3}+S_{4}.\tag{3.41}$$
However, the additive property by itself is not sufficient for the concept of area. After all, if a surface is curved, then so are all of its parts. Thus, subdivision of a surface does not eliminate the effects of curvature which make the concept of area challenging in the first place.
The breakthrough comes from the infinitesimal approach, already known to the ancients, which lies at the very heart of Calculus. The idea is to increase the number of the constituent pieces to "infinity" where each piece can be thought of as effectively flat.
[Figure (3.42)]
What makes the infinitesimal approach work is the fact that, as the pieces grow in number and their size diminishes, the effects of curvature diminish at a faster rate. Consequently, when small curved patches are replaced by flat polygons (we omit the precise details of how to accomplish that), the combined area of the polygons approaches what we intuitively understand to be the area of the curved surface. This way of reasoning leads to the following attempt at the definition of area: the area of a curved surface is the limit of the areas of piecewise flat approximations to the surface as the number of flat pieces increases to infinity and the size of each piece goes to zero.
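In its simplest setting, the infinitesimal approach can be watched at work numerically. The following minimal sketch (Python with numpy; the example is ours, not the text's) shows the perimeters of inscribed regular polygons converging to the circumference $2\pi$ of the unit circle as the number of sides grows.

```python
# A minimal sketch of the infinitesimal approach in one dimension lower:
# perimeters of inscribed regular polygons converge to the circumference
# 2*pi of the unit circle.
import numpy as np

for n in [6, 24, 96, 384]:
    theta = 2 * np.pi / n
    perimeter = n * 2 * np.sin(theta / 2)   # n chords of the unit circle
    print(n, perimeter)
# 6.0, 6.2653..., 6.2821..., 6.2831... -> 2*pi = 6.28318...
```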
This definition has been known to be deeply problematic ever since the 1888 paper by Hermann Schwarz titled On an erroneous definition of area of a curved surface surprised the mathematical community by showing that the surface of a cylinder can be approximated by increasingly small triangles whose combined area grows without bound instead of converging to the area of the cylinder. Thus, at the very least, additional stipulations are needed on the precise manner in which the piecewise flat "mesh" approaches the curved surface in order for the area of the former to converge to the area of the latter. However, these important technical details are beyond our scope. Nevertheless, this and many other difficulties notwithstanding, the idea of infinite subdivision has more than earned its place in the mathematical toolbox. While it is important to be aware of the serious difficulties with which some mathematical concepts present us, it is even more important to develop a habit of moving forward, all the while contemplating the uncertainties inevitably left behind.
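Schwarz's example can also be explored numerically. The following minimal sketch (Python with numpy) assumes the standard closed-form expression for the combined triangle area of the Schwarz lantern -- a cylinder of radius $r$ and height $h$ triangulated with $n$ angular sectors and $m$ staggered bands -- which follows from elementary trigonometry; the formula is our own addition, not taken from the text.

```python
# A minimal sketch of the Schwarz lantern (assumption: the standard
# closed-form for its total triangle area). With m = n bands the areas
# converge to the lateral area 2*pi*r*h; with m = n**3 they diverge.
import numpy as np

def lantern_area(n, m, r=1.0, h=1.0):
    base = 2 * r * np.sin(np.pi / n)                       # chord of one sector
    height = np.hypot(h / m, r * (1 - np.cos(np.pi / n)))  # slant of one triangle
    return 2 * n * m * (base / 2) * height                 # 2*n*m triangles

for n in [8, 32, 128]:
    print(n, lantern_area(n, n), lantern_area(n, n**3))
# The first column approaches 2*pi (~6.283); the second grows without bound.
```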
The infinitesimal approach is particularly fitting in the context of our narrative since it is entirely geometric. It offers an effective way of defining the concepts of length, area, and volume by a geometric approach that does not require the introduction of coordinates.
Arc length, which is a synonym for the length of a curve, offers a convenient way of parameterizing the curve. Select an arbitrary point $P$ on the curve to serve as a reference point known as the origin and associate with every point its signed arc length to the point $P$. To define signed arc length, arbitrarily choose one of the directions along the curve as positive and the other as negative. Then, to the points along the positive direction, assign their actual arc length, while to the points along the negative direction, assign minus their actual arc length. The choice of the direction in which the parameterization increases is known as its orientation and is entirely analogous to the concept of orientation in a one-dimensional Euclidean space.
[Figure (3.43)]
We should note that the use of any parameterization amounts to imposing a coordinate system upon the curve, with an arc length parameterization being a particularly special coordinate system. Thus, relying on this parameterization is somewhat controversial in the context of Tensor Calculus, which is built on the idea of avoiding special coordinate systems. On the other hand, arc length is a very natural coordinate system because of its perfect regularity along the curve, and in Chapter 5, it will demonstrate its unique value for some theoretical investigations. However, it is poorly suited for other theoretical investigations, as well as for virtually all practical calculations. We will therefore have to later revisit the analysis of curves with the help of a fully developed tensor framework.
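In practice, an arc length parameterization can be approximated by accumulating chord lengths along a fine polyline. The following minimal sketch (Python with numpy; the helix example is our own) does exactly that for one turn of a unit-radius helix and compares the total against the exact length.

```python
# A minimal sketch: approximating the arc length parameterization of a
# curve by accumulating chord lengths along a fine polyline.
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 10001)
curve = np.column_stack([np.cos(t), np.sin(t), t / (2 * np.pi)])

# Signed arc length measured from the origin point curve[0], in the
# direction of increasing t (the chosen positive orientation).
chords = np.linalg.norm(np.diff(curve, axis=0), axis=1)
s = np.concatenate([[0.0], np.cumsum(chords)])

# For this helix, ds/dt = sqrt(1 + (1/(2*pi))**2), so the exact length is:
print(s[-1], 2 * np.pi * np.sqrt(1 + (1 / (2 * np.pi))**2))
```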
3.7 Integrals

The idea of infinite subdivision can also be used as the foundation of the theory of integration. (In fact, the integral sign $\int$ was introduced by Gottfried Leibniz as a stylized letter S for sum.) For example, for a scalar $F$ defined on a domain $\Omega$, the volume integral
$$\int_{\Omega}Fd\Omega\tag{3.44}$$
can be defined as the limit of the familiar Riemann sum $\sum F\Delta\Omega$ as the subdivision of the domain is refined. The exact same limiting process can be applied to a surface integral
$$\int_{S}FdS\tag{3.45}$$
over a surface patch $S$, as well as a line integral
$$\int_{L}FdL\tag{3.46}$$
over a curve segment $L$. In the above expressions, the traditional symbols $dL$, $dS$, and $d\Omega$ represent the proverbial infinitesimal amounts of length, area, and volume.
Thus, the idea of integration is no more conceptually challenging than that of length, area, or volume, and we will similarly accept it without a formal definition. Note, furthermore, that integration works just as effectively for vector fields as it does for scalar fields. Indeed, for a vector field $\mathbf{F}$, the vector-valued integral
$$\int_{\Omega}\mathbf{F}d\Omega\tag{3.47}$$
makes sense since geometric vectors are subject to addition and multiplication by numbers, and therefore the Riemann sum $\sum\mathbf{F}\Delta\Omega$ is perfectly meaningful.
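The Riemann-sum description translates directly into a computation. Here is a minimal sketch (Python with numpy; the example is ours) of the sum $\sum F\Delta L$ for the line integral of $F(x,y)=x$ over a quarter of the unit circle, whose exact value is $\int_{0}^{\pi/2}\cos\theta\,d\theta=1$.

```python
# A minimal sketch: the Riemann sum sum(F * dL) for the line integral of
# F(x, y) = x over a quarter of the unit circle; the exact value is 1.
import numpy as np

n = 100000
theta = np.linspace(0.0, np.pi / 2, n + 1)
points = np.column_stack([np.cos(theta), np.sin(theta)])

segments = np.diff(points, axis=0)
dL = np.linalg.norm(segments, axis=1)          # lengths of the small pieces
midpoints = (points[:-1] + points[1:]) / 2
F = midpoints[:, 0]                            # F evaluated on each piece

print(np.sum(F * dL))                          # approaches 1
```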
The interpretation of the integral of a quantity $F$ defined on some domain -- be it a curve, a surface, or a solid -- is straightforward and simple: it is the total amount of $F$ over the domain. For instance, if $\rho$ is the density distribution of a body occupying a domain $\Omega$, then the integral
$$M=\int_{\Omega}\rho d\Omega\tag{3.48}$$
represents the total mass. Similarly, if $\sigma$ is the electric charge distribution over a surface $S$, then the integral
$$\Sigma=\int_{S}\sigma dS\tag{3.49}$$
represents the total charge. For a vector-valued example, if $\mathbf{g}$ is the (variable) acceleration of gravity over a body with density distribution $\rho$, then the integral
$$\mathbf{G}=\int_{\Omega}\rho\mathbf{g}d\Omega\tag{3.50}$$
represents the total force of gravity. If the integrand is chosen to be $1$, then the curve, surface, and volume integrals yield the length $L$ of the curve, the area $S$ of the surface, and the volume $\Omega$ of the solid, i.e.
$$\begin{aligned}L&=\int_{L}dL &&\left(3.51\right)\\S&=\int_{S}dS &&\left(3.52\right)\\\Omega&=\int_{\Omega}d\Omega. &&\left(3.53\right)\end{aligned}$$
These nearly tautological formulas will be used with surprising frequency in our future analyses.
In the context of our approach, integrals can be described as invariant since they are defined strictly in terms of geometric quantities and without the use of coordinate systems. In fact, a surprising amount of theoretical analysis with integrals can be performed without the use of specific coordinate systems. Nevertheless, most practical problems do require specific coordinates and thus, one of the tasks that Tensor Calculus takes upon itself is to provide a recipe for converting an invariant integral into an arithmetic integral in a coordinate space that can be evaluated by the techniques of ordinary Calculus or with the help of computational techniques.
Exercises

Exercise 3.1 In both two and three dimensions, show that switching the order of any two vectors in a basis changes its orientation.
Exercise 3.2 In a three-dimensional space, show that the sets of vectors $\mathbf{U},\mathbf{V},\mathbf{W}$, $\mathbf{V},\mathbf{W},\mathbf{U}$, and $\mathbf{W},\mathbf{U},\mathbf{V}$ have the same orientation, opposite of that shared by the sets $\mathbf{V},\mathbf{U},\mathbf{W}$, $\mathbf{U},\mathbf{W},\mathbf{V}$, and $\mathbf{W},\mathbf{V},\mathbf{U}$.
Exercise 3.3 In a three-dimensional space, show that the orientation of the set of vectors $-\mathbf{U},-\mathbf{V},-\mathbf{W}$ is opposite that of $\mathbf{U},\mathbf{V},\mathbf{W}$.
Exercise 3.4 In a three-dimensional space, show that reflecting a basis $\mathbf{U},\mathbf{V},\mathbf{W}$ with respect to a plane changes its orientation. Similarly, in a two-dimensional space, show that reflecting the basis $\mathbf{U},\mathbf{V}$ with respect to a straight line changes its orientation.
Exercise 3.5 Confirm that the linear independence of the vectors $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$ is maintained at each step of the evolution described in Section 3.2.
Exercise 3.6 Use an argument analogous to that of Section 3.2 to demonstrate the determinant criterion for matching orientations in the two-dimensional case.
Exercise 3.7 Use the determinant criterion to show that adding a multiple of one vector to another does not change the orientation of a basis, while switching the order of two vectors does change the orientation, as does multiplying one of the vectors by a negative number.
Exercise 3.8 Demonstrate the associative property
$$\alpha\left(\mathbf{U}\times\mathbf{V}\right)=\left(\alpha\mathbf{U}\right)\times\mathbf{V}\tag{3.33}$$
of the cross product by a geometric argument.
Exercise 3.9 Suppose that $\mathbf{W}$ is the cross product of $\mathbf{U}$ and $\mathbf{V}$:
$$\mathbf{W}=\mathbf{U}\times\mathbf{V}.\tag{3.54}$$
If $\mathbf{U}^{\prime}$, $\mathbf{V}^{\prime}$, and $\mathbf{W}^{\prime}$ represent the mirror images of $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$, is $\mathbf{W}^{\prime}$ the cross product of $\mathbf{U}^{\prime}$ and $\mathbf{V}^{\prime}$? A mirror image is the result of reflecting a vector with respect to a plane. It is implied that all three vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ are reflected with respect to the same plane.
Exercise 3.10 Fill in the details of the argument used in Section 3.3 that demonstrates that
$$\Omega^{\prime}=\det\left(A\right)\Omega.\tag{3.14}$$
Exercise 3.11 Show by a geometric argument that the combination
$$\mathbf{U}\cdot\left(\mathbf{V}\times\mathbf{W}\right)\tag{3.35}$$
equals the signed volume of the parallelepiped formed by the vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$.
Exercise 3.12 Alternatively, show that the combination
$$\mathbf{U}\cdot\left(\mathbf{V}\times\mathbf{W}\right)\tag{3.35}$$
equals the signed volume of the parallelepiped formed by the vectors $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$ by demonstrating that the above product satisfies the three governing properties of signed volume discussed in Section 3.3.
Exercise 3.13 From the two preceding exercises, conclude that
$$\mathbf{U}\cdot\left(\mathbf{V}\times\mathbf{W}\right)=\mathbf{V}\cdot\left(\mathbf{W}\times\mathbf{U}\right)=\mathbf{W}\cdot\left(\mathbf{U}\times\mathbf{V}\right).\tag{3.36}$$
Exercise 3.14 Use the above identity to prove the distributive property
$$\mathbf{U}\times\left(\mathbf{V}_{1}+\mathbf{V}_{2}\right)=\mathbf{U}\times\mathbf{V}_{1}+\mathbf{U}\times\mathbf{V}_{2}\tag{3.34}$$
of the cross product. Hint: Dot the presumed identity with an arbitrary vector $\mathbf{X}$.