We will begin our narrative in a strictly geometric setting. We will accept geometric
vectors, defined as directed segments in the Euclidean space, as the initial objects of our
study. The Euclidean space corresponds to the physical space of our everyday experience. The
resulting geometric framework will prove invaluable as it will not only engage our geometric
intuition but will also, and perhaps more importantly, help us "ground" our investigations by
enabling us to relate all analytical expressions to an absolute, albeit imagined, reality.
Experience shows that without the ability to do so, even the simplest calculations related to
change of variables become mired in impenetrable uncertainties and ambiguities.
There are additional advantages to starting in a geometric setting. As we will quickly discover,
working with geometric quantities leads to greater organization of thought. Geometric vectors have
fewer algebraic capabilities than numbers. This is a positive: with only a few operations
available, our attention remains focused on each one of them. For similar reasons,
analytical expressions that mix geometric and numerical quantities possess a greater degree of
structural organization since geometric objects can enter those expressions only in a very limited
number of ways.
Tensor Calculus is the art of using coordinate systems, and that is precisely why coordinate
systems will appear only in Chapter 6. Coordinate
systems cannot be fully appreciated until it is clear what can be accomplished without them.
Furthermore, it is essential to maintain a clear separation between the coordinate space and the
geometric space that the former serves to describe. Good fences make good neighbors. Thus, the
first few chapters will be spent exploring exclusively the geometric space and the geometric
objects within it. Most readers are likely already familiar with the factual content of these
chapters. Nevertheless, the early part of our narrative is essential as it will organize the
well-known facts into a coherent logical structure that will be carried through the entire book.
2.1 A description of a Euclidean space
We define a Euclidean space as a space in which the axioms and conclusions of classical
Euclidean Geometry are valid. Thus, a Euclidean space is meant to describe the physical space of
our everyday experience.
Admittedly, this definition is not beyond reproach. First of all, mathematicians have for centuries
expressed doubt with respect to the internal consistency of Euclid's framework and continue to do
so to this day. Second, it has never been a foregone conclusion that Euclidean Geometry accurately
describes the three-dimensional space of our experience. This profound question took on even
greater significance when Albert Einstein's General Theory of Relativity was put forth in 1915.
Nevertheless, we will not let these eternal uncertainties impede our progress. One must find a
reasonable starting point and use it as a fulcrum for moving forward. Ours is the Euclidean space
as most of us intuitively understand it. From it, we will proceed to develop an analytical
framework that will later compel us to return to our starting point and reexamine its assumptions.
In addition to the three-dimensional Euclidean space, we may also consider two- and one-dimensional
Euclidean spaces. A two-dimensional Euclidean space is a plane. A one-dimensional Euclidean
space is a straight line. The plane will provide us with many of our concrete examples since planar
objects are often easier to visualize than their three-dimensional counterparts. The straight line,
on the other hand, is not a sufficiently rich object for many illustrations and will not figure
frequently in our discussions. Nevertheless, it is an important concept that we must keep in mind
when making general statements about Euclidean spaces.
Our geometric approach does not allow for four- and higher-dimensional Euclidean spaces. Unlike
two- and one-dimensional spaces, four- and higher-dimensional spaces cannot be visualized
and, therefore, our intuitive understanding of a Euclidean space cannot be extended to dimensions
greater than three. While this point is more-or-less uncontroversial, it is often met with
surprise. The reason behind the surprise is the ingrained association, or even equivalence, between
Euclidean spaces and $\mathbb{R}^{n}$. And as $\mathbb{R}^{3}$ extends effortlessly to
$\mathbb{R}^{4}$ and beyond, Euclidean spaces are often brought along for the ride, as if the two
were one and the same. However, they are not the same thing, they are very different things, and it
is very important to break the ill-conceived equivalence between Euclidean spaces and
$\mathbb{R}^{n}$, so that a well-conceived association can be properly constructed.
It is worth reiterating the utmost importance of accepting the Euclidean space and objects within
it as pure geometric objects. The tradition of imagining a coordinate system imposed upon the space
has become so deeply entrenched, that for some of us it is difficult to imagine the geometric space
without also imagining a coordinate system. Relearning to do so may take some time as well as a
certain degree of introspection. But it is essential. The goal of Tensor Calculus is the effective
use of coordinate systems, which is not possible without realizing that coordinate systems are
meant to serve geometric ideas and not to replace them.
2.2 Geometric vectors
The primary object in a Euclidean space is a directed segment known as a geometric vector:
(2.1)
We will tend to drop the word geometric with the understanding that the term vector refers to a
directed segment. We will denote vectors by bold capital letters, such as $\mathbf{U}$,
$\mathbf{V}$, and $\mathbf{W}$.
Despite its apparent simplicity, the geometric vector is an object of great richness as it combines
geometric and algebraic characteristics. Its geometric nature is self-evident. Its algebraic nature
is found in the fact that vectors are subject to the operations of addition, multiplication by
numbers, and the dot product.
We should note that the very ability to accommodate vectors is an important characteristic of a
Euclidean space that is consistent with the fact that we intuitively understand the space of our
experience to be fundamentally straight. Yet, we have made no attempt to define either space
or straight. For the sake of forward progress, we will leave these terms undefined and
simply agree that we all intuitively understand these concepts in the same way and, furthermore,
that we recognize that the physical space of our everyday experience can accommodate straight
lines. We may continue to worry that some of our assumptions are vague or, worse, intrinsically
inconsistent. Should that be the case, we hope that our future investigations will help clarify
some of the vagueness, shed light on the existing inconsistencies, and present us with a way of removing
some of the more glaring inconsistencies in favor of more subtle and therefore more profound
contradictions to be addressed at a later date.
2.2.1 Euclidean length
One of the intrinsic concepts available in a Euclidean space is that of length. To highlight
its primary, axiomatic nature, it is referred to as Euclidean length. We will use the symbol
$\left\vert\mathbf{U}\right\vert$ to denote the length of the vector $\mathbf{U}$.
In order to define a unit of length, an arbitrary segment is singled out as a reference.
Then the length of any other segment is expressed as a multiple of the length of the reference segment.
The length of a geometric vector is also known as its magnitude or absolute value.
The term length is used more commonly for vectors that represent geometric quantities such
as displacements. Meanwhile, the terms magnitude and absolute value are used for
vectors corresponding to physical quantities such as velocity, acceleration, and force.
2.2.2 Addition of vectors
Vectors can be added together and multiplied by numbers. Addition is carried out according to the
well-known tip-to-tail rule: given two vectors $\mathbf{U}$ and $\mathbf{V}$, their sum
$\mathbf{U}+\mathbf{V}$ is constructed by shifting the tail of $\mathbf{V}$ to the tip of
$\mathbf{U}$ and connecting the tail of $\mathbf{U}$ to the tip of $\mathbf{V}$. This construction
is illustrated in the following figure.
(2.4)
This definition makes intuitive sense since it agrees with our understanding of successive displacements.
An equivalent way to add two vectors is by the equally well-known parallelogram rule.
According to the parallelogram rule, the tails of $\mathbf{U}$ and $\mathbf{V}$ are placed at the
same point and their sum $\mathbf{U}+\mathbf{V}$ is the vector connecting their common tail to the
fourth vertex of the completed parallelogram, as illustrated in the following figure.
(2.5)
The advantage of the parallelogram rule is that it assigns
equal roles to the two vectors and does not require shifting one of the vectors. Its disadvantage,
however, is that it does not work when the two vectors are collinear or when one or both of the
vectors are zero. We must also remark that both definitions rely on the much-debated Fifth
Postulate, which reminds us that mathematical origins are always accompanied by doubt.
2.2.3 Multiplication by a number and linear combinations
Next, let us describe multiplication of geometric vectors by numbers. If $\alpha$ is a positive
number, then the product $\alpha\mathbf{U}$ is the vector that points in the same direction as
$\mathbf{U}$ and has length that equals the product of $\alpha$ and the length of $\mathbf{U}$. If
$\alpha$ is negative, the direction of the resulting vector is reversed, i.e. $\alpha\mathbf{U}$
points in the direction opposite of $\mathbf{U}$, and its length is the product of
$\left\vert\alpha\right\vert$ and the length of $\mathbf{U}$.
(2.6)
Since multiplication by a number has the effect of scaling, i.e. changing the length of the vector,
the word scalar is commonly used to mean number.
An expression that combines multiplication by numbers with addition, such as
$$\alpha\mathbf{U}+\beta\mathbf{V}+\gamma\mathbf{W},$$
is known as a linear combination.
2.2.4 Subtraction of vectors
Vectors can also be subtracted. One way to define the difference $\mathbf{U}-\mathbf{V}$ is in
terms of addition, i.e. $\mathbf{U}-\mathbf{V}$ is a vector $\mathbf{W}$ such that
$$\mathbf{V}+\mathbf{W}=\mathbf{U}.$$
Alternatively, $\mathbf{U}-\mathbf{V}$ can be defined as the linear combination
$$\mathbf{U}+\left(-1\right)\mathbf{V}.$$
It is a straightforward task to show that the two definitions are equivalent.
The result of a subtraction is easier to construct geometrically than that of addition. If the
tails of the vectors $\mathbf{U}$ and $\mathbf{V}$ are at the same point, then
$\mathbf{U}-\mathbf{V}$ is the vector from the tip of $\mathbf{V}$ to the tip of $\mathbf{U}$. This
construction is illustrated in the following figure.
(2.12)
Note that our entire discussion has been, and will continue to be, conducted without introducing a
coordinate system. Furthermore, vector operations have been described without a reference to their
components with respect to some basis. Hopefully, this observation helps convince the reader that
geometric vectors in a Euclidean space represent a logically consistent and algebraically complete
mathematical entity.
2.2.5 The local vector space
As Tensor Calculus is a close descendant of Linear Algebra in the sense of merging the worlds of
Algebra and Geometry, it is not surprising that our narrative will rely heavily on various
fundamental concepts from Linear Algebra such as linear independence, basis, linear decomposition,
linear transformation, eigenvalues, and others. In particular, we will frequently describe
geometric vectors as forming a linear space. In this context, it is important to think of
all vectors that may be potentially added or subtracted as emanating from a single point.
(2.13)
When vectors appear in a different arrangement -- for instance, when implementing the tip-to-tail
construction -- it should be considered a transient configuration created for some temporary
convenience.
Of course, we will later consider vector fields, i.e. collections of vectors defined at
different points in space. In the context of a vector field, vectors naturally emanate from
different points. However, for the purposes of performing algebraic analysis on those vectors, we
should once again treat them as emanating from a single point.
Note that this point of view will become particularly relevant in the near future when we introduce
curvilinear coordinate systems. In the context of a curvilinear coordinate system, the concept of a
linear space becomes highly localized. Namely, each point will have its own unique linear space of
geometric vectors that is completely separate from those of its neighbors.
2.3 The dot product
Without a doubt, the dot product is one of the most beautiful concepts in Mathematics.
Before we present its well-known definition, we should note one of its most important and, at the
same time, surprising properties: with the help of the dot product, it is possible to express
virtually all geometric quantities by algebraic expressions. It is this particular universality of
the dot product that ultimately enables coordinate space analysis to become a logically complete
self-contained framework.
Given two vectors $\mathbf{U}$ and $\mathbf{V}$, the dot product $\mathbf{U}\cdot\mathbf{V}$ is the
product of their lengths and the cosine of the angle $\theta$ between them, i.e.
$$\mathbf{U}\cdot\mathbf{V}=\left\vert\mathbf{U}\right\vert\left\vert\mathbf{V}\right\vert\cos\theta.$$
Once again, note the geometric
nature of this definition: it is stated in terms of directly measurable geometric quantities and
does not make any references to the components of the vectors.
It is clear that the dot product is symmetric, i.e.
$$\mathbf{U}\cdot\mathbf{V}=\mathbf{V}\cdot\mathbf{U}.$$
Furthermore, it is linear with respect to vector addition, i.e.
$$\left(\mathbf{U}+\mathbf{V}\right)\cdot\mathbf{W}=\mathbf{U}\cdot\mathbf{W}+\mathbf{V}\cdot\mathbf{W},$$
and multiplication by numbers, i.e.
$$\left(\alpha\mathbf{U}\right)\cdot\mathbf{V}=\alpha\left(\mathbf{U}\cdot\mathbf{V}\right).$$
These two forms of linearity are usually combined into the single distributive property
$$\left(\alpha\mathbf{U}+\beta\mathbf{V}\right)\cdot\mathbf{W}=\alpha\,\mathbf{U}\cdot\mathbf{W}+\beta\,\mathbf{V}\cdot\mathbf{W}.$$
While commutativity and linearity with respect to multiplication by numbers are rather obvious,
linearity with respect to vector addition needs a careful demonstration. This is left as an
exercise for the reader.
For a fixed vector $\mathbf{U}$, think of the dot product $\mathbf{U}\cdot\mathbf{V}$ as a function
$F$ of $\mathbf{V}$, i.e.
$$F\left(\mathbf{V}\right)=\mathbf{U}\cdot\mathbf{V}.$$
Then, thanks to the distributive property of the dot product, $F$ is a linear function, i.e.
$$F\left(\alpha\mathbf{V}+\beta\mathbf{W}\right)=\alpha F\left(\mathbf{V}\right)+\beta F\left(\mathbf{W}\right).$$
Importantly, the converse is also true: every linear function $F\left(\mathbf{V}\right)$ can be
represented as a dot product with a unique vector $\mathbf{U}$. We will make use of this crucial
insight when discussing directional derivatives in Chapter 4.
2.3.1 The expressive power of the dot product
Let us now discuss a few examples that illustrate how the dot product can be used to express
various geometric quantities.
First, consider the dot product of a vector with itself. The angle between a vector and itself is
understood to be zero and, therefore,
$$\mathbf{U}\cdot\mathbf{U}=\left\vert\mathbf{U}\right\vert^{2}.$$
Consequently, the length of a vector equals the square root of its dot product with itself, i.e.
$$\left\vert\mathbf{U}\right\vert=\sqrt{\mathbf{U}\cdot\mathbf{U}}.$$
Of course, one may be skeptical with regard to the utility of this identity. Clearly, this is not a
practical way to determine the length of a vector. After all, in order to evaluate a dot product in
the first place, one must know the lengths of the vectors involved. However, from the conceptual
point of view, this equation is indispensable. First, as we will see in Section 2.6, the dot
product can be effectively evaluated in the
component space by cataloging the values of the dot product for the elements of a basis. In other
words, having evaluated the dot product for a handful of vectors by the geometric definition, the
dot products of any two vectors can be evaluated by working with their components. Second, as we
will discuss in Section 2.7, in more general vector spaces,
the dot product is a primary concept that precedes those of length and angle.
Consequently, in more general vector spaces, the identity
$$\left\vert\mathbf{U}\right\vert=\sqrt{\mathbf{U}\cdot\mathbf{U}}$$
acts as the definition of length.
It is a similar story for the dot product as a test of orthogonality. If two vectors $\mathbf{U}$
and $\mathbf{V}$ are orthogonal, then their dot product is zero since
$\cos\left(\pi/2\right)=0$. The converse is also true for nonzero vectors: if the dot product of
two nonzero vectors is zero, then the vectors are orthogonal.
More generally, the dot product can be used to measure the angle $\theta$ between two nonzero
vectors according to the formula that follows directly from the definition of the dot product, i.e.
$$\cos\theta=\frac{\mathbf{U}\cdot\mathbf{V}}{\left\vert\mathbf{U}\right\vert\left\vert\mathbf{V}\right\vert}.$$
Importantly, we can write the expression on the right strictly in terms of the dot product, i.e.
$$\cos\theta=\frac{\mathbf{U}\cdot\mathbf{V}}{\sqrt{\mathbf{U}\cdot\mathbf{U}}\sqrt{\mathbf{V}\cdot\mathbf{V}}},$$
which helps illustrate the general idea that all geometric quantities can be expressed in terms of
the dot product.
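As a quick numerical illustration (with values chosen purely for the sake of the example), suppose
that $\left\vert\mathbf{U}\right\vert=2$, $\left\vert\mathbf{V}\right\vert=3$, and
$\mathbf{U}\cdot\mathbf{V}=3$. Then
$$\cos\theta=\frac{\mathbf{U}\cdot\mathbf{V}}{\left\vert\mathbf{U}\right\vert\left\vert\mathbf{V}\right\vert}=\frac{3}{2\cdot3}=\frac{1}{2},$$
and the angle between the two vectors is $\theta=\pi/3$.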
As a further illustration of the great utility of the dot product, let us show how it can be used
to construct an algebraic proof of the Pythagorean theorem for a right triangle with legs of
lengths $a$ and $b$ and a hypotenuse of length $c$:
$$a^{2}+b^{2}=c^{2}.\tag{2.27}$$
The following figure illustrates two ways of representing a right triangle by associating its legs
with orthogonal vectors $\mathbf{A}$ and $\mathbf{B}$ and the hypotenuse with a third vector
$\mathbf{C}$.
(2.28)
In the first configuration $\mathbf{C}=\mathbf{A}+\mathbf{B}$, while in the second
$\mathbf{C}=\mathbf{A}-\mathbf{B}$. Both configurations work equally well for our purposes.
Continuing with the first configuration, dot each side of the identity with itself:
$$\mathbf{C}\cdot\mathbf{C}=\left(\mathbf{A}+\mathbf{B}\right)\cdot\left(\mathbf{A}+\mathbf{B}\right).$$
By the combination of the distributive and commutative properties,
$$\mathbf{C}\cdot\mathbf{C}=\mathbf{A}\cdot\mathbf{A}+2\,\mathbf{A}\cdot\mathbf{B}+\mathbf{B}\cdot\mathbf{B}.$$
Since $\mathbf{A}\cdot\mathbf{A}=a^{2}$, $\mathbf{B}\cdot\mathbf{B}=b^{2}$,
$\mathbf{C}\cdot\mathbf{C}=c^{2}$, and $\mathbf{A}\cdot\mathbf{B}=0$, the above identity becomes
$$c^{2}=a^{2}+b^{2},$$
which is precisely the statement of the Pythagorean theorem. This proof is so simple that it often
leaves one with the feeling that some circular logic must have been used somewhere along the way.
The Pythagorean theorem is essentially a planar identity. However, there also exists a
three-dimensional version for the diagonal of a rectangular parallelepiped illustrated in the
following figure.
(2.34)
If the length of the diagonal is $d$ and the lengths of the sides are $a$, $b$, and $c$, then the
theorem states that
$$d^{2}=a^{2}+b^{2}+c^{2}.$$
This theorem can be proven in similar fashion by representing the sides of the parallelepiped with
orthogonal vectors $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ and the diagonal by a vector
$\mathbf{D}$, as shown in the following figure:
(2.36)
Then $\mathbf{D}$ is the sum of $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$, i.e.
$$\mathbf{D}=\mathbf{A}+\mathbf{B}+\mathbf{C},$$
and it is left as an exercise to show that $d^{2}=a^{2}+b^{2}+c^{2}$.
2.4 Linear decomposition by the dot product
We now turn our attention to the first of three other crucial applications of the dot product that
include linear decomposition by the dot product, orthogonal projection, and the evaluation of the
dot product in the component space. Note that these discussions will not make a single reference to
the explicit definition of the dot product
$$\mathbf{U}\cdot\mathbf{V}=\left\vert\mathbf{U}\right\vert\left\vert\mathbf{V}\right\vert\cos\theta.$$
Instead, they will rely exclusively on its commutative and distributive properties
$$\mathbf{U}\cdot\mathbf{V}=\mathbf{V}\cdot\mathbf{U}\quad\text{and}\quad\left(\alpha\mathbf{U}+\beta\mathbf{V}\right)\cdot\mathbf{W}=\alpha\,\mathbf{U}\cdot\mathbf{W}+\beta\,\mathbf{V}\cdot\mathbf{W}.$$
This important insight shows that the dot product has a great deal of algebraic autonomy.
In a classical application, the dot product is used to construct an algebraic algorithm for linear
decomposition. Since the goal of Tensor Calculus is to provide an analytical framework, this
approach will be used almost exclusively when facing a linear decomposition task.
Given a basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$ and a vector $\mathbf{V}$, linear
decomposition is the task of finding the coefficients $\alpha$, $\beta$, and $\gamma$ of the linear
combination
$$\mathbf{V}=\alpha\mathbf{b}_{1}+\beta\mathbf{b}_{2}+\gamma\mathbf{b}_{3}$$
that represents $\mathbf{V}$ with respect to the basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$.
The resulting coefficients $\alpha$, $\beta$, and $\gamma$ are known as the components of the
vector $\mathbf{V}$ with respect to the basis. We will now show that the components $\alpha$,
$\beta$, and $\gamma$ can be determined by evaluating dot products without making any additional
references to the relative arrangement of the vectors $\mathbf{V}$, $\mathbf{b}_{1}$,
$\mathbf{b}_{2}$, and $\mathbf{b}_{3}$.
First, assume that the basis is orthogonal. In other words, the vectors $\mathbf{b}_{1}$,
$\mathbf{b}_{2}$, and $\mathbf{b}_{3}$ are orthogonal to each other which, in terms of the dot
product, reads
$$\mathbf{b}_{1}\cdot\mathbf{b}_{2}=\mathbf{b}_{1}\cdot\mathbf{b}_{3}=\mathbf{b}_{2}\cdot\mathbf{b}_{3}=0.$$
In order to determine the coefficient $\alpha$, dot both sides of the identity
$$\mathbf{V}=\alpha\mathbf{b}_{1}+\beta\mathbf{b}_{2}+\gamma\mathbf{b}_{3}$$
with $\mathbf{b}_{1}$ to obtain
$$\mathbf{V}\cdot\mathbf{b}_{1}=\left(\alpha\mathbf{b}_{1}+\beta\mathbf{b}_{2}+\gamma\mathbf{b}_{3}\right)\cdot\mathbf{b}_{1}.$$
Apply the distributive property on the right, i.e.
$$\mathbf{V}\cdot\mathbf{b}_{1}=\alpha\,\mathbf{b}_{1}\cdot\mathbf{b}_{1}+\beta\,\mathbf{b}_{2}\cdot\mathbf{b}_{1}+\gamma\,\mathbf{b}_{3}\cdot\mathbf{b}_{1},$$
and notice that all terms on the right but the first one vanish due to the orthogonality of the
basis. Thus,
$$\mathbf{V}\cdot\mathbf{b}_{1}=\alpha\,\mathbf{b}_{1}\cdot\mathbf{b}_{1}$$
and, solving for $\alpha$, we find
$$\alpha=\frac{\mathbf{V}\cdot\mathbf{b}_{1}}{\mathbf{b}_{1}\cdot\mathbf{b}_{1}}.$$
Obviously, the same argument works for $\beta$ and $\gamma$ so, collectively, we have
$$\alpha=\frac{\mathbf{V}\cdot\mathbf{b}_{1}}{\mathbf{b}_{1}\cdot\mathbf{b}_{1}},\qquad\beta=\frac{\mathbf{V}\cdot\mathbf{b}_{2}}{\mathbf{b}_{2}\cdot\mathbf{b}_{2}},\qquad\gamma=\frac{\mathbf{V}\cdot\mathbf{b}_{3}}{\mathbf{b}_{3}\cdot\mathbf{b}_{3}}.$$
Thus, for an orthogonal basis, the coefficients can be determined one at a time, each by evaluating
two dot products, such as $\mathbf{V}\cdot\mathbf{b}_{1}$ and $\mathbf{b}_{1}\cdot\mathbf{b}_{1}$.
This rule simplifies even further for an orthonormal (a.k.a. Cartesian) basis, where the vectors
$\mathbf{b}_{1}$, $\mathbf{b}_{2}$, and $\mathbf{b}_{3}$ are not only orthogonal to each other but
each one is also of unit length, i.e.
$$\mathbf{b}_{1}\cdot\mathbf{b}_{1}=\mathbf{b}_{2}\cdot\mathbf{b}_{2}=\mathbf{b}_{3}\cdot\mathbf{b}_{3}=1.$$
Thus, for an orthonormal basis, the coefficient $\alpha$ is given by
$$\alpha=\mathbf{V}\cdot\mathbf{b}_{1}$$
and can therefore be determined by evaluating a single dot product.
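As a quick illustration with made-up numbers, suppose that, for an orthonormal basis
$\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$, we find $\mathbf{V}\cdot\mathbf{b}_{1}=3$,
$\mathbf{V}\cdot\mathbf{b}_{2}=-1$, and $\mathbf{V}\cdot\mathbf{b}_{3}=2$. Then, without any
further information about the relative arrangement of the vectors, we may conclude that
$$\mathbf{V}=3\mathbf{b}_{1}-\mathbf{b}_{2}+2\mathbf{b}_{3}.$$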
Does this method continue to work when the basis is not orthogonal? The answer is yes, although the
expansion coefficients can no longer be determined one at a time or in as few as two dot products.
Instead, the coefficients need to be determined simultaneously as a set by solving a linear system
of equations. The system is formed by treating the result of dotting the identity
$$\mathbf{V}=\alpha\mathbf{b}_{1}+\beta\mathbf{b}_{2}+\gamma\mathbf{b}_{3}$$
with each of the basis vectors as an equation for $\alpha$, $\beta$, and $\gamma$. We have
$$\begin{aligned}\mathbf{b}_{1}\cdot\mathbf{b}_{1}\,\alpha+\mathbf{b}_{1}\cdot\mathbf{b}_{2}\,\beta+\mathbf{b}_{1}\cdot\mathbf{b}_{3}\,\gamma&=\mathbf{V}\cdot\mathbf{b}_{1}\\\mathbf{b}_{2}\cdot\mathbf{b}_{1}\,\alpha+\mathbf{b}_{2}\cdot\mathbf{b}_{2}\,\beta+\mathbf{b}_{2}\cdot\mathbf{b}_{3}\,\gamma&=\mathbf{V}\cdot\mathbf{b}_{2}\\\mathbf{b}_{3}\cdot\mathbf{b}_{1}\,\alpha+\mathbf{b}_{3}\cdot\mathbf{b}_{2}\,\beta+\mathbf{b}_{3}\cdot\mathbf{b}_{3}\,\gamma&=\mathbf{V}\cdot\mathbf{b}_{3}.\end{aligned}$$
Rewrite this system in the matrix form, i.e.
$$\begin{bmatrix}\mathbf{b}_{1}\cdot\mathbf{b}_{1}&\mathbf{b}_{1}\cdot\mathbf{b}_{2}&\mathbf{b}_{1}\cdot\mathbf{b}_{3}\\\mathbf{b}_{2}\cdot\mathbf{b}_{1}&\mathbf{b}_{2}\cdot\mathbf{b}_{2}&\mathbf{b}_{2}\cdot\mathbf{b}_{3}\\\mathbf{b}_{3}\cdot\mathbf{b}_{1}&\mathbf{b}_{3}\cdot\mathbf{b}_{2}&\mathbf{b}_{3}\cdot\mathbf{b}_{3}\end{bmatrix}\begin{bmatrix}\alpha\\\beta\\\gamma\end{bmatrix}=\begin{bmatrix}\mathbf{V}\cdot\mathbf{b}_{1}\\\mathbf{V}\cdot\mathbf{b}_{2}\\\mathbf{V}\cdot\mathbf{b}_{3}\end{bmatrix}.$$
The resulting matrix $M$ consists of the pairwise dot products of the basis vectors. In Linear
Algebra, it is known as the Gram matrix or the inner product matrix. We will eventually refer to it
as the covariant metric tensor.
The three coefficients $\alpha$, $\beta$, and $\gamma$ can be determined by solving the system
above. While this can be time consuming, it nevertheless remains true that linear decomposition can
be accomplished by evaluating dot products in combination with other elementary arithmetic
operations. In terms of the matrix $M$ -- or, more specifically, in terms of its inverse $M^{-1}$,
which we will later call the contravariant metric tensor -- the solution can be written in the form
$$\begin{bmatrix}\alpha\\\beta\\\gamma\end{bmatrix}=M^{-1}\begin{bmatrix}\mathbf{V}\cdot\mathbf{b}_{1}\\\mathbf{V}\cdot\mathbf{b}_{2}\\\mathbf{V}\cdot\mathbf{b}_{3}\end{bmatrix}.$$
While it is well known that calculating the inverse is an inefficient way of solving linear
systems, the concept of the inverse offers an effective way of capturing the solution of a linear
system by an algebraic expression.
Finally, we note one important corollary of the above identity: a vector is uniquely determined by
the values of its dot products with the elements of a basis.
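For readers who would like to verify the above recipe numerically, here is a minimal sketch in
Python (not part of the main narrative). The particular basis vectors are made up, and they are
written out in a background Cartesian frame only so that the individual dot products can be
computed; the decomposition itself uses nothing but those dot products.

```python
import numpy as np

# A made-up, non-orthogonal basis, expressed in a background Cartesian frame
# purely so that the dot products below can be evaluated numerically.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 2.0, 0.0])
b3 = np.array([0.0, 1.0, 3.0])
V  = np.array([2.0, 3.0, 4.0])

B = np.column_stack([b1, b2, b3])

M   = B.T @ B        # the Gram matrix of pairwise dot products b_i . b_j
rhs = B.T @ V        # the column of dot products V . b_i

# The coefficients alpha, beta, gamma of V = alpha*b1 + beta*b2 + gamma*b3.
alpha, beta, gamma = np.linalg.solve(M, rhs)

# Confirm that the resulting linear combination reproduces V.
print(alpha, beta, gamma, np.allclose(alpha * b1 + beta * b2 + gamma * b3, V))
```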
2.5 Orthogonal projection onto a plane
Let us now turn our attention to the question of projection onto a plane, which will yield an
answer surprisingly similar to that of the question of decomposition. Consider a plane spanned by
the vectors $\mathbf{b}_{1}$ and $\mathbf{b}_{2}$ and a vector $\mathbf{V}$ that lies out of the
plane. Let the vector $\mathbf{P}$ be the orthogonal projection of $\mathbf{V}$ onto the plane.
(2.55)
Since $\mathbf{P}$ is the closest vector to $\mathbf{V}$ within the plane, we will reuse the
symbols $\alpha$ and $\beta$ to denote its components with respect to the basis
$\mathbf{b}_{1},\mathbf{b}_{2}$, i.e.
$$\mathbf{P}=\alpha\mathbf{b}_{1}+\beta\mathbf{b}_{2}.$$
The components of the vector $\mathbf{V}$ itself with respect to some basis in the surrounding
Euclidean space will not figure in our discussion. Our goal is to find the expressions for $\alpha$
and $\beta$ in terms of the dot products of the vectors $\mathbf{V}$, $\mathbf{b}_{1}$, and
$\mathbf{b}_{2}$. To this end, note that, by definition, the difference $\mathbf{V}-\mathbf{P}$, in
other words, the vector
$$\mathbf{V}-\alpha\mathbf{b}_{1}-\beta\mathbf{b}_{2},$$
is orthogonal to the plane and is, therefore, orthogonal to the vectors $\mathbf{b}_{1}$ and
$\mathbf{b}_{2}$. Thus, its dot products with $\mathbf{b}_{1}$ and $\mathbf{b}_{2}$ must vanish, i.e.
$$\left(\mathbf{V}-\alpha\mathbf{b}_{1}-\beta\mathbf{b}_{2}\right)\cdot\mathbf{b}_{1}=0\quad\text{and}\quad\left(\mathbf{V}-\alpha\mathbf{b}_{1}-\beta\mathbf{b}_{2}\right)\cdot\mathbf{b}_{2}=0.$$
Multiplying out the expressions on the left yields
$$\begin{aligned}\mathbf{b}_{1}\cdot\mathbf{b}_{1}\,\alpha+\mathbf{b}_{1}\cdot\mathbf{b}_{2}\,\beta&=\mathbf{V}\cdot\mathbf{b}_{1}\\\mathbf{b}_{2}\cdot\mathbf{b}_{1}\,\alpha+\mathbf{b}_{2}\cdot\mathbf{b}_{2}\,\beta&=\mathbf{V}\cdot\mathbf{b}_{2}.\end{aligned}$$
Organizing these equations into matrix form yields the linear system
$$\begin{bmatrix}\mathbf{b}_{1}\cdot\mathbf{b}_{1}&\mathbf{b}_{1}\cdot\mathbf{b}_{2}\\\mathbf{b}_{2}\cdot\mathbf{b}_{1}&\mathbf{b}_{2}\cdot\mathbf{b}_{2}\end{bmatrix}\begin{bmatrix}\alpha\\\beta\end{bmatrix}=\begin{bmatrix}\mathbf{V}\cdot\mathbf{b}_{1}\\\mathbf{V}\cdot\mathbf{b}_{2}\end{bmatrix}$$
for the coefficients $\alpha$ and $\beta$. Thus, $\alpha$ and $\beta$ are given by
$$\begin{bmatrix}\alpha\\\beta\end{bmatrix}=\begin{bmatrix}\mathbf{b}_{1}\cdot\mathbf{b}_{1}&\mathbf{b}_{1}\cdot\mathbf{b}_{2}\\\mathbf{b}_{2}\cdot\mathbf{b}_{1}&\mathbf{b}_{2}\cdot\mathbf{b}_{2}\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{V}\cdot\mathbf{b}_{1}\\\mathbf{V}\cdot\mathbf{b}_{2}\end{bmatrix}.$$
The similarity between this formula and the decomposition formula is evident. In fact, the
coefficients produced by the expression
$$\begin{bmatrix}\mathbf{b}_{1}\cdot\mathbf{b}_{1}&\mathbf{b}_{1}\cdot\mathbf{b}_{2}\\\mathbf{b}_{2}\cdot\mathbf{b}_{1}&\mathbf{b}_{2}\cdot\mathbf{b}_{2}\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{V}\cdot\mathbf{b}_{1}\\\mathbf{V}\cdot\mathbf{b}_{2}\end{bmatrix}$$
can be interpreted in two different ways. For a vector $\mathbf{V}$ within the plane spanned by
$\mathbf{b}_{1}$ and $\mathbf{b}_{2}$, the two coefficients are the components of $\mathbf{V}$. For
a vector $\mathbf{V}$ outside of the plane, the two coefficients are the components of the
projection of $\mathbf{V}$ onto the plane, i.e. the vector in the plane closest to $\mathbf{V}$.
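The following minimal Python sketch (again with made-up vectors, expressed in a background
Cartesian frame solely so that the dot products can be evaluated) carries out the projection and
confirms that the residual is orthogonal to the plane.

```python
import numpy as np

# Made-up spanning vectors of the plane and a vector V outside of it.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
V  = np.array([2.0, 1.0, 5.0])

# The 2x2 Gram matrix of the spanning vectors and the dot products with V.
M   = np.array([[b1 @ b1, b1 @ b2],
                [b2 @ b1, b2 @ b2]])
rhs = np.array([V @ b1, V @ b2])

alpha, beta = np.linalg.solve(M, rhs)
P = alpha * b1 + beta * b2   # the orthogonal projection of V onto the plane

# The difference V - P is orthogonal to both b1 and b2 (both printed values are ~0).
print(P, (V - P) @ b1, (V - P) @ b2)
```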
2.6 The component space representation of the dot product
The matrix $M$ also has another crucial application which may, perhaps, be described as more
important than linear decomposition. As we are about to show, the matrix $M$ represents the dot
product in the component space. Suppose that the components of the vectors $\mathbf{U}$ and
$\mathbf{V}$ with respect to the basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$ are $u_{1}$,
$u_{2}$, $u_{3}$ and $v_{1}$, $v_{2}$, $v_{3}$, i.e.
$$\mathbf{U}=u_{1}\mathbf{b}_{1}+u_{2}\mathbf{b}_{2}+u_{3}\mathbf{b}_{3}\quad\text{and}\quad\mathbf{V}=v_{1}\mathbf{b}_{1}+v_{2}\mathbf{b}_{2}+v_{3}\mathbf{b}_{3}.$$
Let us introduce the vectors (in the Linear Algebra sense) $u$ and $v$ in $\mathbb{R}^{3}$ whose
entries consist of the components of $\mathbf{U}$ and $\mathbf{V}$, i.e.
$$u=\begin{bmatrix}u_{1}\\u_{2}\\u_{3}\end{bmatrix}\quad\text{and}\quad v=\begin{bmatrix}v_{1}\\v_{2}\\v_{3}\end{bmatrix}.$$
The question is: how can the dot product $\mathbf{U}\cdot\mathbf{V}$ be calculated in terms of the
matrix $M$ and the vectors $u$ and $v$?
This question can be answered in a straightforward fashion by substituting the linear expansions of
$\mathbf{U}$ and $\mathbf{V}$ into the dot product $\mathbf{U}\cdot\mathbf{V}$, i.e.
$$\mathbf{U}\cdot\mathbf{V}=\left(u_{1}\mathbf{b}_{1}+u_{2}\mathbf{b}_{2}+u_{3}\mathbf{b}_{3}\right)\cdot\left(v_{1}\mathbf{b}_{1}+v_{2}\mathbf{b}_{2}+v_{3}\mathbf{b}_{3}\right).$$
A repeated application of the distributive law yields nine terms, i.e.
$$\mathbf{U}\cdot\mathbf{V}=u_{1}v_{1}\,\mathbf{b}_{1}\cdot\mathbf{b}_{1}+u_{1}v_{2}\,\mathbf{b}_{1}\cdot\mathbf{b}_{2}+u_{1}v_{3}\,\mathbf{b}_{1}\cdot\mathbf{b}_{3}+\cdots+u_{3}v_{3}\,\mathbf{b}_{3}\cdot\mathbf{b}_{3}.$$
The resulting expression can be written concisely in two ways. The first approach uses the
summation symbol $\Sigma$ and captures the fact that the above sum consists of all possible terms
of the form $u_{i}v_{j}\,\mathbf{b}_{i}\cdot\mathbf{b}_{j}$. We have
$$\mathbf{U}\cdot\mathbf{V}=\sum_{i=1}^{3}\sum_{j=1}^{3}u_{i}v_{j}\,\mathbf{b}_{i}\cdot\mathbf{b}_{j}.$$
This is the form that we will ultimately favor once we introduce the tensor notation. At this
point, however, a matrix expression will provide greater insight. In matrix terms, the sum of the
nine terms can be captured by the matrix product
$$\mathbf{U}\cdot\mathbf{V}=\begin{bmatrix}u_{1}&u_{2}&u_{3}\end{bmatrix}\begin{bmatrix}\mathbf{b}_{1}\cdot\mathbf{b}_{1}&\mathbf{b}_{1}\cdot\mathbf{b}_{2}&\mathbf{b}_{1}\cdot\mathbf{b}_{3}\\\mathbf{b}_{2}\cdot\mathbf{b}_{1}&\mathbf{b}_{2}\cdot\mathbf{b}_{2}&\mathbf{b}_{2}\cdot\mathbf{b}_{3}\\\mathbf{b}_{3}\cdot\mathbf{b}_{1}&\mathbf{b}_{3}\cdot\mathbf{b}_{2}&\mathbf{b}_{3}\cdot\mathbf{b}_{3}\end{bmatrix}\begin{bmatrix}v_{1}\\v_{2}\\v_{3}\end{bmatrix}.$$
With the help of the symbols $u$, $v$, and $M$, the same identity can be written more compactly as
$$\mathbf{U}\cdot\mathbf{V}=u^{T}Mv.$$
The above identity represents the rule for carrying out the dot product in the component space. It
clearly shows that the matrix $M$ encapsulates all the information required to calculate the dot
product in the component space.
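As a numerical illustration (a sketch with made-up values, written in Python), the rule
$\mathbf{U}\cdot\mathbf{V}=u^{T}Mv$ can be checked against the ordinary dot product of the
reconstructed vectors:

```python
import numpy as np

# A made-up non-orthogonal basis (the columns of B) and component columns u and v.
B = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 2.0, 0.0],
                     [0.0, 1.0, 3.0]])
u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0,  1.0, -1.0])

M = B.T @ B      # the Gram matrix of the basis

U = B @ u        # the vectors whose components are u and v
V = B @ v

# The component-space rule u^T M v agrees with the dot product of U and V.
print(u @ M @ v, U @ V)
```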
Finally, we note one important special case. If the basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$
is orthonormal, i.e. Cartesian, then the matrix $M$ is the identity, i.e.
$$M=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}.$$
Therefore, the dot product between $\mathbf{U}$ and $\mathbf{V}$ is given by the classical expression
$$\mathbf{U}\cdot\mathbf{V}=u_{1}v_{1}+u_{2}v_{2}+u_{3}v_{3},$$
which is also captured by the matrix product
$$\mathbf{U}\cdot\mathbf{V}=\begin{bmatrix}u_{1}&u_{2}&u_{3}\end{bmatrix}\begin{bmatrix}v_{1}\\v_{2}\\v_{3}\end{bmatrix},$$
i.e.
$$\mathbf{U}\cdot\mathbf{V}=u^{T}v.$$
Thus, an orthonormal basis gives the dot product its simplest possible component space
representation. Looking ahead, note that with the help of the tensor notation, we will be able to
achieve the same level of simplicity for any basis. You can find the equivalent expression in
tensor terms in equation (10.28).
2.7 Comparison to the Linear Algebra approach
Vector spaces in Linear Algebra have all of the same elements as geometric vectors in Euclidean
spaces: vectors, bases, linear combinations, dot product, and so forth. This is not surprising
since the original impetus for the development of Linear Algebra was to extend, by means of
Algebra, the ideas of Euclidean geometry to a broader range of objects. The subject of Linear
Algebra is most often laid out from one of two distinct starting points, both of which are
different from ours. The first approach is exemplified by Gilbert Strang's Introduction to
Linear Algebra and the second by Israel Gelfand's Lectures on Linear Algebra. We
will now describe how each of these approaches defines vectors and the dot product. However, once
the dot product is defined in either approach, the subsequent topics are treated similarly.
2.7.1 Vectors as elements of $\mathbb{R}^{n}$
According to the approach in Strang's Introduction to Linear Algebra, a vector is an element of
$\mathbb{R}^{n}$, i.e. a set of $n$ numbers, typically organized into a column and surrounded by
square brackets. Thus, a typical element of $\mathbb{R}^{3}$ is
$$x=\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix}.$$
Vectors are denoted by plain lowercase letters. This notation enables us to also treat vectors as
matrices so that they can participate in matrix products. Vectors, as elements of $\mathbb{R}^{n}$,
can be added together and multiplied by numbers in the natural entry-by-entry fashion, i.e.
$$\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix}+\begin{bmatrix}y_{1}\\y_{2}\\y_{3}\end{bmatrix}=\begin{bmatrix}x_{1}+y_{1}\\x_{2}+y_{2}\\x_{3}+y_{3}\end{bmatrix}$$
and
$$\alpha\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix}=\begin{bmatrix}\alpha x_{1}\\\alpha x_{2}\\\alpha x_{3}\end{bmatrix}.$$
The dot product $x\cdot y$ is defined by the matrix product
$$x\cdot y=x^{T}Ay,$$
where $A$ is a symmetric positive definite matrix. Perhaps it is more appropriate to use the
article a, as in a dot product, since any symmetric positive definite matrix $A$ can represent a
valid dot product. Nevertheless, the article the is used since, once a dot product is chosen, it
becomes the dot product.
The symmetry of $A$ assures that the dot product is commutative, i.e.
$$x\cdot y=y\cdot x,$$
while the distributive property of matrix multiplication assures distributivity, i.e.
$$\left(\alpha x+\beta y\right)\cdot z=\alpha\,x\cdot z+\beta\,y\cdot z.$$
Finally, the positive definiteness of $A$ assures the positive definiteness of the dot product, i.e.
$$x\cdot x>0,$$
provided that $x$ is not zero. This property enables us to define the length of a vector as the
square root of its dot product with itself, i.e.
$$\left\vert x\right\vert=\sqrt{x\cdot x}.$$
It can be shown that the dot product satisfies the Cauchy-Schwarz inequality
$$\left\vert x\cdot y\right\vert\leq\left\vert x\right\vert\left\vert y\right\vert.$$
Note that the Cauchy-Schwarz inequality is a trivial statement with respect to geometric vectors,
but it is anything but trivial for the dot product $x\cdot y=x^{T}Ay$.
The Cauchy-Schwarz inequality enables us to define the angle $\theta$ between $x$ and $y$ by the
equation
$$\cos\theta=\frac{x\cdot y}{\left\vert x\right\vert\left\vert y\right\vert},$$
although this concept is rarely used and is interesting largely from the point of view of drawing a
closer parallel between geometric and algebraic vector spaces. The concept of orthogonality, on the
other hand, is one that is used very commonly. Two vectors $x$ and $y$ are said to be orthogonal if
$$x\cdot y=0.$$
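Here is a minimal Python sketch (with a made-up symmetric positive definite matrix $A$) that
evaluates such a dot product and checks its symmetry and the Cauchy-Schwarz inequality numerically:

```python
import numpy as np

# A made-up symmetric positive definite matrix A defining the dot product.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])
x = np.array([1.0, -1.0, 2.0])
y = np.array([0.5,  2.0, 1.0])

def dot(u, v):
    """The dot product u . v = u^T A v determined by the matrix A."""
    return u @ A @ v

def length(u):
    """The length of u, defined as the square root of u . u."""
    return np.sqrt(dot(u, u))

# Symmetry of the dot product and the Cauchy-Schwarz inequality.
print(dot(x, y), dot(y, x))
print(abs(dot(x, y)) <= length(x) * length(y))
```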
2.7.2 Vectors as an axiomatic concept
By contrast with vectors as elements of $\mathbb{R}^{n}$,
Gelfand's Lectures on Linear Algebra aims to bring the widest possible range of objects
under the umbrella of the subject. According to this approach, vectors are defined as
objects of any kind that can be added together and multiplied by numbers to produce another object
of the same kind. Addition and multiplication by numbers are introduced axiomatically simply by
requiring that those operations satisfy a number of natural properties such as associativity,
commutativity, and distributivity. Thus, the properties that had previously been corollaries are
adopted as definitions in this approach. The totality of vectors of a particular type is referred
to as a vector space or linear space. The two instances of vectors introduced above,
i.e. directed segments and elements of $\mathbb{R}^{n}$,
satisfy all of the required properties and are therefore considered vectors in the axiomatic sense.
In the context of axiomatic vector spaces, the dot product is known as the inner product and,
instead of writing
$$\mathbf{U}\cdot\mathbf{V},$$
we now write
$$\left(\mathbf{U},\mathbf{V}\right).$$
Naturally, an axiomatic approach to vectors requires an axiomatic approach to the inner product
since an explicit expression is not possible given the diverse nature of objects now considered
vectors. As usual, properties that had previously been corollaries are now adopted as definitions.
Thus, an inner product is defined as a real-valued function $\left(\mathbf{U},\mathbf{V}\right)$ of
two vectors $\mathbf{U}$ and $\mathbf{V}$ that is symmetric, i.e.
$$\left(\mathbf{U},\mathbf{V}\right)=\left(\mathbf{V},\mathbf{U}\right),$$
distributive, i.e.
$$\left(\alpha\mathbf{U}+\beta\mathbf{V},\mathbf{W}\right)=\alpha\left(\mathbf{U},\mathbf{W}\right)+\beta\left(\mathbf{V},\mathbf{W}\right),$$
and positive definite, i.e.
$$\left(\mathbf{U},\mathbf{U}\right)>0\quad\text{for all nonzero }\mathbf{U}.$$
The geometric dot product
$$\mathbf{U}\cdot\mathbf{V}=\left\vert\mathbf{U}\right\vert\left\vert\mathbf{V}\right\vert\cos\theta$$
and the dot product for vectors as elements of $\mathbb{R}^{n}$
$$x\cdot y=x^{T}Ay$$
are examples of inner products. For functions, an example of an inner product is
$$\left(f,g\right)=\int_{a}^{b}f\left(x\right)g\left(x\right)\,dx,$$
where $\left[a,b\right]$ is a fixed interval.
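For instance, on the interval $\left[0,1\right]$ (an interval chosen purely for illustration), the
inner product of the functions $f\left(x\right)=x$ and $g\left(x\right)=x^{2}$ is
$$\left(f,g\right)=\int_{0}^{1}x\cdot x^{2}\,dx=\frac{1}{4}.$$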
The Cauchy-Schwarz inequality
$$\left\vert\left(\mathbf{U},\mathbf{V}\right)\right\vert\leq\sqrt{\left(\mathbf{U},\mathbf{U}\right)}\sqrt{\left(\mathbf{V},\mathbf{V}\right)}$$
follows from the axiomatic definition. Its elegant proof is as follows. For any real number $t$,
the inner product of the vector $\mathbf{U}+t\mathbf{V}$ with itself is nonnegative, i.e.
$$\left(\mathbf{U}+t\mathbf{V},\mathbf{U}+t\mathbf{V}\right)\geq0,$$
thanks to the positive definite property of the inner product. A repeated application of the
distributive property transforms the above inequality into
$$\left(\mathbf{V},\mathbf{V}\right)t^{2}+2\left(\mathbf{U},\mathbf{V}\right)t+\left(\mathbf{U},\mathbf{U}\right)\geq0.$$
The expression on the left is a quadratic polynomial
$$at^{2}+bt+c,$$
where
$$a=\left(\mathbf{V},\mathbf{V}\right),\quad b=2\left(\mathbf{U},\mathbf{V}\right),\quad c=\left(\mathbf{U},\mathbf{U}\right).$$
Since the polynomial is nonnegative for all $t$, it has either one root or no roots, and its
discriminant must therefore be either zero or negative, i.e.
$$b^{2}-4ac\leq0.$$
In other words,
$$4\left(\mathbf{U},\mathbf{V}\right)^{2}-4\left(\mathbf{U},\mathbf{U}\right)\left(\mathbf{V},\mathbf{V}\right)\leq0,$$
which is equivalent to the Cauchy-Schwarz inequality.
The length of a vector $\mathbf{U}$ is, of course, defined by the identity
$$\left\vert\mathbf{U}\right\vert=\sqrt{\left(\mathbf{U},\mathbf{U}\right)}$$
and the angle $\theta$ between the vectors $\mathbf{U}$ and $\mathbf{V}$ is defined by
$$\cos\theta=\frac{\left(\mathbf{U},\mathbf{V}\right)}{\left\vert\mathbf{U}\right\vert\left\vert\mathbf{V}\right\vert},$$
although, as we mentioned previously, this quantity is rarely used.
Once the inner product is defined, whether it is the dot product for geometric vectors, the dot
product in $\mathbb{R}^{n}$, or a
general inner product, the rest of the discussion can proceed as before. In particular, we can
decompose by the inner product and we can carry out the inner product in the component space by an
appropriate multiplication. In the context of Linear Algebra, a Euclidean space is defined
to be any vector space combined with an inner product.
2.8 Exercises
Exercise 2.1 Demonstrate that the dot product satisfies the distributive property
$$\left(\mathbf{U}+\mathbf{V}\right)\cdot\mathbf{W}=\mathbf{U}\cdot\mathbf{W}+\mathbf{V}\cdot\mathbf{W}.$$
Note that your demonstration must be geometric: you should not introduce a basis in order to refer to the components of the vectors.
Exercise 2.2 Show that the distributive property of the dot product implies that products of sums can be distributed in familiar fashion, e.g.
$$\left(\mathbf{U}+\mathbf{V}\right)\cdot\left(\mathbf{W}+\mathbf{Z}\right)=\mathbf{U}\cdot\mathbf{W}+\mathbf{U}\cdot\mathbf{Z}+\mathbf{V}\cdot\mathbf{W}+\mathbf{V}\cdot\mathbf{Z}.$$
Exercise 2.3 Show that
$$\mathbf{U}\cdot\mathbf{V}=\frac{1}{2}\left(\left\vert\mathbf{U}+\mathbf{V}\right\vert^{2}-\left\vert\mathbf{U}\right\vert^{2}-\left\vert\mathbf{V}\right\vert^{2}\right).$$
Exercise 2.4 On the basis of the above equation, conclude that there cannot be another definition of the dot product that is symmetric, distributive, and has the property that the dot product of a vector with itself produces the square of its Euclidean length.
Exercise 2.5 Show that every linear function $F\left(\mathbf{V}\right)$, i.e. a function that satisfies the identity
$$F\left(\alpha\mathbf{U}+\beta\mathbf{V}\right)=\alpha F\left(\mathbf{U}\right)+\beta F\left(\mathbf{V}\right)$$
for all vectors $\mathbf{U}$, $\mathbf{V}$ and all numbers $\alpha$, $\beta$, can be represented as a dot product with a unique vector $\mathbf{W}$, i.e.
$$F\left(\mathbf{V}\right)=\mathbf{W}\cdot\mathbf{V}.$$
Hint: Select a basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$, let $\mathbf{W}=\alpha\mathbf{b}_{1}+\beta\mathbf{b}_{2}+\gamma\mathbf{b}_{3}$, and then determine $\alpha$, $\beta$, and $\gamma$.
Exercise 2.6 Use the dot product to show that if $\mathbf{U}$ and $\mathbf{V}$ are of equal length, then $\mathbf{U}+\mathbf{V}$ and $\mathbf{U}-\mathbf{V}$ are orthogonal.
Exercise 2.7 Conversely, use the dot product to show that if $\mathbf{U}+\mathbf{V}$ and $\mathbf{U}-\mathbf{V}$ are orthogonal, then $\mathbf{U}$ and $\mathbf{V}$ are of equal length.
In the following exercises, use vector algebra and the dot product to accomplish the stated task.
Exercise 2.8 Prove the Pythagorean theorem in three dimensions, i.e.
$$d^{2}=a^{2}+b^{2}+c^{2}$$
for the diagonal $d$ of a rectangular parallelepiped with sides of lengths $a$, $b$, and $c$.
Exercise 2.9 Prove the Law of Cosines, i.e. for a triangle with sides $a$, $b$, and $c$ and the angle $\theta$ between the first two sides,
$$c^{2}=a^{2}+b^{2}-2ab\cos\theta.$$
Exercise 2.10 Prove the three-dimensional Law of Cosines for the parallelepiped in the following figure.
(2.109)
Namely, for a parallelepiped with sides of lengths $a$, $b$, and $c$ and angles $\alpha$ (between the sides of lengths $b$ and $c$), $\beta$ (between $a$ and $c$), and $\gamma$ (between $a$ and $b$), the diagonal $d$ is given by
$$d^{2}=a^{2}+b^{2}+c^{2}+2bc\cos\alpha+2ac\cos\beta+2ab\cos\gamma.$$
Exercise 2.11 Prove the parallelogram law, i.e. in a parallelogram with sides of length $a$ and $b$ and diagonals of length $c$ and $d$,
$$c^{2}+d^{2}=2a^{2}+2b^{2}.$$
Exercise 2.12 Suppose that $\mathbf{U}$ and $\mathbf{V}$ are unit-length vectors that form a given angle. Describe, by geometric means, the vector $\mathbf{W}$ such that $\mathbf{W}\cdot\mathbf{U}$ and $\mathbf{W}\cdot\mathbf{V}$ have prescribed values.
Exercise 2.13 Suppose that $\mathbf{U}$ and $\mathbf{V}$ are unit-length vectors that form a given angle. Find the vector $\mathbf{W}$, in terms of $\mathbf{U}$ and $\mathbf{V}$, such that $\mathbf{W}\cdot\mathbf{U}$ and $\mathbf{W}\cdot\mathbf{V}$ have prescribed values.
Exercise 2.14 Demonstrate that the matrix
$$M=\begin{bmatrix}\mathbf{b}_{1}\cdot\mathbf{b}_{1}&\mathbf{b}_{1}\cdot\mathbf{b}_{2}&\mathbf{b}_{1}\cdot\mathbf{b}_{3}\\\mathbf{b}_{2}\cdot\mathbf{b}_{1}&\mathbf{b}_{2}\cdot\mathbf{b}_{2}&\mathbf{b}_{2}\cdot\mathbf{b}_{3}\\\mathbf{b}_{3}\cdot\mathbf{b}_{1}&\mathbf{b}_{3}\cdot\mathbf{b}_{2}&\mathbf{b}_{3}\cdot\mathbf{b}_{3}\end{bmatrix}$$
is symmetric and, provided that $\mathbf{b}_{1}$, $\mathbf{b}_{2}$, and $\mathbf{b}_{3}$ are linearly independent, positive definite.
Problem 2.1 Use vector algebra to show that the medians of a triangle intersect at the same point. Hint: If $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ denote vectors pointing to the vertices of the triangle from an arbitrary common point, show that the three medians intersect at the point, known as the centroid of the triangle, located at the tip of the vector
$$\frac{1}{3}\left(\mathbf{A}+\mathbf{B}+\mathbf{C}\right).$$
Problem 2.2 This problem requires proficiency in a number of Linear Algebra concepts that exceeds the required level for the rest of the narrative. The goal of this problem is to determine the extent to which the basis $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$ can be reconstructed from the matrix $M$ of its pairwise dot products.
Note that the matrix $M$ depends only on the lengths of the vectors $\mathbf{b}_{1}$, $\mathbf{b}_{2}$, and $\mathbf{b}_{3}$ and their relative arrangement. Thus, if the basis is subjected to an arbitrary orthogonal transformation, i.e. rotation and/or reflection (in other words, a transformation that does not change the relative arrangement of vectors or their lengths), the new basis will result in the exact same dot product matrix $M$. Therefore, any attempt at reconstructing the basis from $M$ can be accomplished, at best, to within an orthogonal transformation. Prove that this is also the worst case scenario. Namely, show that if two bases $\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}$ and $\mathbf{b}_{1}',\mathbf{b}_{2}',\mathbf{b}_{3}'$ produce the same dot product matrix $M$, then they must be related by an orthogonal transformation.
Problem 2.3 Devise an algorithm that reconstructs a basis that corresponds to a given dot product matrix $M$.