Skew-symmetric systems, also known as alternating systems, are objects of immense
utility and are some of the most beautiful objects in Tensor Calculus. They find crucial
applications in expressing determinants, the cross product, as well as the differential
operator known as the curl. Furthermore, they provide an alternative approach to
constructing invariant differential operators which serves as the foundation of the subject of
Differential Forms. Last but not least, skew-symmetric systems represent a celebration of the
indicial notation. What Élie Cartan referred to as a débauche of indices
(translations of débauche range from overabundance to orgy), we will
see as a dynamic, choreographed dance. That said, skew-symmetric systems do require utmost fluency
in the tensor notation. If there are any deficiencies in your technique, this topic will provide
you with a valuable opportunity to remedy them.
16.1 General skew-symmetric systems
By definition, a system is skew-symmetric if any of its elements related by a swap of two
indices have opposite values. We will consider skew-symmetric systems of various orders. For
second-order systems, such as $a_{ij}$,
the skew-symmetric property reads
$$a_{ij} = -a_{ji},$$
which agrees with the familiar
like-named property of matrices. For example, in four dimensions, $a_{ij}$
would be represented by a skew-symmetric matrix, such as
$$\begin{bmatrix} 0 & 1 & 2 & 3\\ -1 & 0 & 4 & 5\\ -2 & -4 & 0 & 6\\ -3 & -5 & -6 & 0 \end{bmatrix}.$$
The conspicuous zeros on the
diagonal, which correspond to the elements $a_{ii}$ with a repeated index, are an important
characteristic of skew-symmetric systems.
To illustrate higher-order skew-symmetric systems, consider a fourth-order system $a_{ijkl}$.
For the time being, we are not concerned with the tensor property of $a_{ijkl}$.
Thus, the placement of the indices as subscripts is arbitrary and we could just as easily consider
a fourth-order system $a^{ijkl}$
with four superscripts. We could even consider a mix of subscripts and superscripts, but that would
unnecessarily complicate the notation.
For reasons that will become apparent shortly, we must require that the dimension $n$ of the space is greater than or equal to the order of the
system, i.e. $n \ge 4$ in the case of $a_{ijkl}$.
For the sake of the present discussion, assume that this requirement is satisfied.
The system $a_{ijkl}$ is
skew-symmetric if, once again, any of its elements related by a swap of two indices have opposite
values. This requirement can be reduced down to three identities involving swaps of consecutive
indices, i.e.
$$a_{ijkl} = -a_{jikl}, \qquad a_{ijkl} = -a_{ikjl}, \qquad a_{ijkl} = -a_{ijlk}.$$
Indeed, there is no need to document the swaps for all possible pairs of indices since any swap can
be accomplished by swaps of consecutive indices. For example, the swap of the first and the third indices
can be accomplished by three swaps
of consecutive indices, i.e.
$$a_{ijkl} = -a_{jikl} = a_{jkil} = -a_{kjil}.$$
Critical to the internal consistency of the definition is the fact that any given permutation can
only be represented either by an odd number of swaps or by an even number of swaps.
After all, if there were a way to achieve the same permutation by both an odd and an even number of swaps, then
the value of $a_{kjil}$
would be simultaneously $-a_{ijkl}$
and $+a_{ijkl}$,
depending on whether we arrive at it by an odd or even number of swaps.
Permutations that can be obtained by an odd number of swaps from the identity permutation are
called odd, while those that require an even number of swaps are called even. This
property of permutations is called parity.
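The notion of parity can be made concrete with a short computation. The following is a minimal sketch (the function name `parity` is ours, not the text's) that determines the parity of a permutation by counting the swaps needed to return it to the identity; the well-definedness discussed above guarantees that any valid sequence of swaps yields the same answer.

```python
def parity(perm):
    """Return +1 for an even permutation of 0..n-1, -1 for an odd one."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]  # one swap of two entries
            sign = -sign
    return sign

print(parity([0, 1, 2, 3]))  # 1: the identity permutation is even
print(parity([1, 0, 2, 3]))  # -1: a single swap is odd
print(parity([1, 2, 0, 3]))  # 1: a 3-cycle takes two swaps, hence even
```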
The skew-symmetric condition imposes a severe constraint on the values of $a_{ijkl}$.
First, all elements whose indices are not all distinct must equal $0$. For example, consider the element $a_{ijil}$ for
which the first and the third indices are equal. By the skew-symmetric condition, we must have
$$a_{ijil} = -a_{ijil},$$
since swapping the
first and the
third indices in $a_{ijil}$
yields the same combination of indices. Therefore,
$$a_{ijil} = 0.$$
Thus, in order for an element of $a_{ijkl}$
to
be nonzero, its indices must have distinct values.
Second, any two elements whose indices are related by a permutation are either equal or opposite of
each other. Indeed, consider the elements $a_{ijkl}$ and
$a_{lijk}$. Since
the combinations $ijkl$ and $lijk$ are related by an odd number of swaps, we must have
$$a_{lijk} = -a_{ijkl}.$$
Thus, a fourth-order skew-symmetric
system in an $n$-dimensional space can have at most
$$\binom{n}{4} = \frac{n!}{4!\,(n-4)!}$$
degrees of freedom, i.e.
independently assigned values. More generally, a skew-symmetric system of order $m$ in an $n$-dimensional space can have at most $\binom{n}{m}$
degrees of freedom. In particular,
and crucially for the upcoming discussion, when the order of a skew-symmetric system matches the
dimension of the space, the former can only have a single degree of freedom. For example, if $n = 4$, then a system $a_{ijkl}$
has one degree of freedom represented by the element $a_{1234}$.
All of the remaining elements of $a_{ijkl}$
are either $0$, $a_{1234}$, or $-a_{1234}$.
Finally, when the order of a skew-symmetric system exceeds the dimension of the space, all of its
elements must vanish since the indices cannot all have distinct values.
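The counting argument above can be summarized in a one-line computation. A hedged sketch (the helper name `degrees_of_freedom` is our own): a skew-symmetric system of order $k$ in $n$ dimensions has one independent value per set of $k$ distinct index values, i.e. a binomial coefficient.

```python
from math import comb

def degrees_of_freedom(n, k):
    """Independent values of an order-k skew-symmetric system in n dimensions."""
    return comb(n, k)  # the binomial coefficient n!/(k!(n-k)!)

print(degrees_of_freedom(4, 2))  # 6: a 4x4 skew-symmetric matrix
print(degrees_of_freedom(3, 3))  # 1: order matches dimension
print(degrees_of_freedom(3, 4))  # 0: order exceeds dimension, all elements vanish
```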
16.2 Contraction with a symmetric system
The result of a double contraction between a skew-symmetric and a symmetric system is zero, i.e. a
system in which all elements are $0$. For example, if $a^{ij}$ is
a skew-symmetric system and $b_{ij}$
is a symmetric system, i.e.
$$b_{ij} = b_{ji},$$
then
$$a^{ij} b_{ij} = 0.$$
We will demonstrate the above identity in a way that will make it clear that the statement holds
for more complicated indicial signatures as well. Let
$$c = a^{ij} b_{ij}$$
and consider the combination in
which the indices on $b_{ij}$
are swapped, i.e.
$$a^{ij} b_{ji}.$$
We will show that this new
combination simultaneously equals $c$ and $-c$,
which,
of course, implies that $c = 0$.
Thanks to the symmetry of $b_{ij}$,
we have
$$a^{ij} b_{ji} = a^{ij} b_{ij} = c.$$
On the other hand, due to the
skew-symmetry of $a^{ij}$ --
in particular, the fact that $a^{ij} = -a^{ji}$ --
we have
$$a^{ij} b_{ji} = -a^{ji} b_{ji} = -a^{ij} b_{ij} = -c.$$
Note that in the second step in this
string of identities, we simply exchanged the letters $i$ and $j$. Thus, indeed,
$$c = -c$$
and, therefore,
$$c = 0,$$
as we set out to show.
When both systems are second-order, this statement can be interpreted in the language of matrices.
Namely, the product of a skew-symmetric matrix and a symmetric matrix has zero trace. For example,
consider the product of a skew-symmetric matrix and a symmetric matrix
and observe that the trace of the
resulting matrix is $0$.
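The matrix interpretation is easy to verify numerically. A minimal sketch, assuming nothing beyond NumPy: build a skew-symmetric and a symmetric matrix from the same random matrix and check that their product has zero trace.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M - M.T               # skew-symmetric: A == -A.T
S = M + M.T               # symmetric:      S == S.T

# The double contraction a^{ij} b_{ij} is the trace of the matrix product.
print(abs(np.trace(A @ S)) < 1e-12)  # True
```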
16.3 The permutation systems $e_{ijk}$ and $e^{ijk}$
Let us now turn our attention to skew-symmetric systems whose order matches the dimension of the
space. We will illustrate such systems with a third-order system $e_{ijk}$ in
a three-dimensional space. Since the nonzero elements of $e_{ijk}$
correspond to the permutations of the numbers $1$, $2$, and $3$, out of a total of $27$ elements, there are only $6$ potentially nonzero ones:
$$e_{123},\quad e_{231},\quad e_{312},\quad e_{132},\quad e_{213},\quad e_{321}.$$
Denoting the sole degree of freedom
shared by these elements by $\phi$, we have
$$e_{123} = e_{231} = e_{312} = \phi$$
and
$$e_{132} = e_{213} = e_{321} = -\phi.$$
When $\phi = 1$, the corresponding system is called a permutation
system and is denoted by $e_{ijk}$.
We will also consider the superscripted permutation system $e^{ijk}$
defined in the exact same way. Let us summarize the values of $e_{ijk}$
and $e^{ijk}$
in the language that generalizes to higher and lower dimensions:
$$e_{ijk} = e^{ijk} = \begin{cases} \phantom{-}1, & \text{if } ijk \text{ is an even permutation of } 123,\\ -1, & \text{if } ijk \text{ is an odd permutation of } 123,\\ \phantom{-}0, & \text{otherwise.} \end{cases}$$
Any third-order skew-symmetric
system $a_{ijk}$ in
three dimensions is a scalar multiple of $e_{ijk}$,
i.e.
$$a_{ijk} = a_{123}\, e_{ijk}.$$
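In code, the permutation system can be built directly from parity. A hedged sketch (0-based indices 0, 1, 2 stand in for the text's 1, 2, 3, and the helper name `perm_sign` is ours):

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Parity of a permutation of 0..n-1: +1 if even, -1 if odd."""
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

# e[i, j, k] is nonzero only when (i, j, k) is a permutation of (0, 1, 2).
e = np.zeros((3, 3, 3), dtype=int)
for p in permutations(range(3)):
    e[p] = perm_sign(p)

print(e[0, 1, 2], e[1, 0, 2], e[0, 0, 1])  # 1 -1 0
print(int((e != 0).sum()))                 # 6 potentially nonzero elements out of 27
```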
We must now say a few words about the placement of the indices in the permutation systems
and
since, in Tensor Calculus, indices must "earn" their placements by accurately indicating how the
system transforms under a change of coordinates. While the permutation systems are independent of
coordinates, we can simply assume, in the context of a coordinate analysis, that
and
have the values summarized above at all points of a Euclidean space. Then we may legitimately ask
whether
and
are tensors. In other words, suppose that, in the primed coordinates $Z^{i'}$,
$e_{i'j'k'}$
and $e^{i'j'k'}$
are defined in the exact same way as $e_{ijk}$
and $e^{ijk}$.
Then, are the unprimed and primed systems related by the identities
$$e_{i'j'k'} = e_{ijk}\,\frac{\partial Z^i}{\partial Z^{i'}}\frac{\partial Z^j}{\partial Z^{j'}}\frac{\partial Z^k}{\partial Z^{k'}} \qquad\text{and}\qquad e^{i'j'k'} = e^{ijk}\,\frac{\partial Z^{i'}}{\partial Z^{i}}\frac{\partial Z^{j'}}{\partial Z^{j}}\frac{\partial Z^{k'}}{\partial Z^{k}}\,?$$
The answer is no, but you may
be surprised just how close these identities are to actually holding. In the next Chapter, we will
introduce simple modifications of the permutation systems known as the Levi-Civita symbols
which are, in fact, (nearly) tensors.
16.4 The connection to the determinant
While the better part of this Chapter is devoted to the determinant, we would be remiss at this
stage not to point out the inescapable connection between the permutation systems and the
determinant. For example, for a $3\times3$ matrix with entries $a_{ij}$,
the determinant is given by the formula
$$\det a = \sum_{\pi} \operatorname{sgn}(\pi)\; a_{1\pi(1)}\, a_{2\pi(2)}\, a_{3\pi(3)},$$
where the summation takes place over
all possible permutations $\pi$ of the numbers $1, 2, 3$. For example, the permutation $\pi = (2, 3, 1)$
corresponds to $\pi(1) = 2$, $\pi(2) = 3$, $\pi(3) = 1$, such that the associated term reads
$$a_{12}\, a_{23}\, a_{31}.$$
The sign of the permutation, denoted
by $\operatorname{sgn}(\pi)$, is defined to be $+1$ if $\pi$ is even and $-1$ if $\pi$ is odd. When fully unpacked, the
above formula reads
$$\det a = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} - a_{13} a_{22} a_{31}.$$
From this expansion, it becomes apparent how the summation-based formula for the determinant
essentially handpicks the contributing terms.
However, thanks to our present experience with the tensor notation, we immediately observe that the
same sum can be captured far more compactly by the indicial expression
$$\det a = e^{ijk}\, a_{1i}\, a_{2j}\, a_{3k}.$$
Of course, this compactness comes at
the expense of computational efficiency since the above contraction contains not $6$ but $27$ terms, all of which are zero except for the $6$ that matter. Nevertheless, the trade-off is well worth
it. In fact, it seems almost as if the permutation systems were specifically formulated for the
purpose of expressing the determinant.
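The indicial expression $\det a = e^{ijk} a_{1i} a_{2j} a_{3k}$ can be checked by brute force. A minimal sketch (0-based indices), assuming NumPy's `einsum` for the triple contraction:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

e = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    e[p] = perm_sign(p)

a = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
# det a = e^{ijk} a_{1i} a_{2j} a_{3k}: 27 terms, only 6 of them nonzero.
det_tensor = np.einsum('ijk,i,j,k->', e, a[0], a[1], a[2])
print(np.isclose(det_tensor, np.linalg.det(a)))  # True
```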
Also note that for a set of three first-order systems ,
, and
, the
combination
corresponds to the determinant of
the matrix whose columns (or rows) consist of ,
, and
, i.e.
As we observed in Chapter 3, if we think of ,
, and
as
the components of the vectors , , and , then the sign of
tells us whether the orientation of
, , and is the same as that of the covariant
basis .
We will leave the topic of determinants for now and will return to it later in this Chapter where
we will discuss it in far greater detail.
16.5 The delta systems
The delta systems are objects of utmost elegance and usefulness. The delta systems are highly
structured, expressive, and are governed by a tight set of algebraic rules. Thus, they go a long
way towards the algebraization of our subject. Furthermore, the delta systems are tensors
that have an equal number of superscripts and subscripts. As such, they introduce a great deal of
balance into tensorial calculations.
16.5.1 The complete delta system
The complete delta system $\delta^{ijk}_{rst}$
is defined as the product of the permutation systems $e^{ijk}$
and $e_{rst}$, i.e.
$$\delta^{ijk}_{rst} = e^{ijk}\, e_{rst}.$$
The term complete refers to
the fact that it has as many superscripts and as many subscripts as the dimension of the space. The
choice of the letter $\delta$ is not arbitrary. The Kronecker delta $\delta^i_j$
and the complete delta system $\delta^{ijk}_{rst}$
are part of the same family of related objects.
The delta system $\delta^{ijk}_{rst}$
has the greatest number of indices of any system we have encountered so far. Is this a
débauche of indices or a choreographed dance? You be the judge. What is undeniable is
that the delta system
possesses a great deal of structure that will enable us to work with it on an intuitive algebraic
level.
Let us now explore its elements. Since the permutation systems consist of
$0$s, $1$s, and $-1$s, so does the delta system $\delta^{ijk}_{rst}$.
Its values can be summarized in the following table.
Thus, out of the $3^6 = 729$ elements, only $36$ are not zero.
Naturally, the delta system $\delta^{ijk}_{rst}$
is skew-symmetric in both its superscripts and its subscripts. It follows that simultaneously
subjecting its superscripts and subscripts to the same permutation yields an equivalent symbol.
For example,
$$\delta^{ijk}_{rst} = \delta^{jik}_{srt},$$
where the first two superscripts and
the first two subscripts were switched. Similarly,
$$\delta^{ijk}_{rst} = \delta^{kij}_{trs},$$
where we moved the last superscript
and the last subscript to the first position. More generally, subjecting its superscripts and
subscripts to permutations of the same parity results in equivalent symbols. Finally, it is left as
an exercise to demonstrate the frequently used identity
which will also be used in our
upcoming discussion of the determinant.
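These structural properties are straightforward to confirm by direct enumeration. A hedged sketch: build $\delta^{ijk}_{rst} = e^{ijk} e_{rst}$ as an outer product, count its nonzero elements, and check the behavior under index swaps described above.

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

e = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    e[p] = perm_sign(p)

# delta[i, j, k, r, s, t] = e^{ijk} e_{rst}
delta = np.einsum('ijk,rst->ijkrst', e, e)
print(int((delta != 0).sum()))  # 36 nonzero elements out of 3**6 = 729

# Swapping the superscripts i, j alone flips the sign...
print(np.allclose(delta.transpose(1, 0, 2, 3, 4, 5), -delta))  # True
# ...while swapping i, j and r, s simultaneously leaves the system unchanged.
print(np.allclose(delta.transpose(1, 0, 2, 4, 3, 5), delta))   # True
```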
16.5.2 The relationship with the Kronecker delta
Of great utility is the following identity that expresses the delta system $\delta^{ijk}_{rst}$
in terms of the Kronecker deltas:
$$\delta^{ijk}_{rst} = \delta^i_r\delta^j_s\delta^k_t + \delta^i_s\delta^j_t\delta^k_r + \delta^i_t\delta^j_r\delta^k_s - \delta^i_r\delta^j_t\delta^k_s - \delta^i_s\delta^j_r\delta^k_t - \delta^i_t\delta^j_s\delta^k_r.$$
It is left up to the reader to
describe this identity in words. Undoubtedly, one can clearly see that the right side is related to
the determinant. Indeed, the above relationship can be
captured by the elegant equation
$$\delta^{ijk}_{rst} = \det\begin{bmatrix} \delta^i_r & \delta^i_s & \delta^i_t\\ \delta^j_r & \delta^j_s & \delta^j_t\\ \delta^k_r & \delta^k_s & \delta^k_t \end{bmatrix}.$$
Elegant though it may be, the
preceding expanded form is more transparent and useful. Note that this relationship can also be
adopted as the definition of the delta system $\delta^{ijk}_{rst}$.
Explaining in complete detail why the identity
is valid is left as an exercise. It
is a good idea to consider a number of specific examples and to articulate the precise reason for
each conclusion. To help you in this endeavor, let us consider three such examples that will shed
some light on the "inner mechanics" of the expression on the right.
First, consider a typical nonzero element
which should equal since
is two switches away from . We have
Observe that the only nonzero term
is ,
which indeed equals . For a second example, consider
which should equal because
is not a permutation. We have
where all terms equal $0$ because it is impossible to have a nonzero term when the
superscripts and the subscripts are not identical sets of numbers. Finally, consider the element
which should also equal , because while the superscripts and the subscripts
represent identical sets of numbers, neither is a permutation. We have
This time, the two nonzero terms --
first and last -- cancel each other.
Despite its bulkiness, the identity
finds numerous practical
applications. However, its immediate theoretical impact is the crucial implication that the delta
system
is a tensor, as it is expressed by sums and products of the tensor Kronecker delta. Additionally,
the same identity shows that $\delta^{ijk}_{rst}$
vanishes under the covariant derivative, i.e.
$$\nabla_p\, \delta^{ijk}_{rst} = 0.$$
In other words, the metrinilic
property of the covariant derivative extends to the delta system $\delta^{ijk}_{rst}$.
Interestingly, not only can the delta system be expressed in terms of the Kronecker delta, but the
latter can also be expressed in terms of the former. This relationship is captured by the identity
$$\delta^i_r = \tfrac{1}{2}\,\delta^{ij}_{rj},$$
which is one of many that will be
discussed next.
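Both directions of this relationship can be verified mechanically. A hedged sketch: check that $e^{ijk} e_{rst}$ agrees, element by element, with the $3\times3$ determinant of Kronecker deltas, and that halving the contraction $\delta^{ij}_{rj}$ recovers the Kronecker delta.

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

e = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    e[p] = perm_sign(p)

d = np.eye(3)  # the Kronecker delta
ok = True
for i, j, k, r, s, t in np.ndindex(3, 3, 3, 3, 3, 3):
    K = np.array([[d[i, r], d[i, s], d[i, t]],
                  [d[j, r], d[j, s], d[j, t]],
                  [d[k, r], d[k, s], d[k, t]]])
    ok = ok and np.isclose(e[i, j, k] * e[r, s, t], np.linalg.det(K))
print(ok)  # True

# The partial delta delta^{ij}_{rs} = d^i_r d^j_s - d^i_s d^j_r, and
# d^i_r = (1/2) delta^{ij}_{rj}.
partial = np.einsum('ir,js->ijrs', d, d) - np.einsum('is,jr->ijrs', d, d)
print(np.allclose(np.einsum('ijrj->ir', partial) / 2, d))  # True
```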
16.5.3 The partial delta systems
Between the Kronecker delta $\delta^i_r$
and the complete delta system $\delta^{ijk}_{rst}$,
there exists an object with an intermediate number of indices known as the partial delta
system $\delta^{ij}_{rs}$. In
fact, when we generalize this discussion to $n$ dimensions, we will discover that between the Kronecker
delta $\delta^i_r$
and the complete delta system $\delta^{i_1\cdots i_n}_{r_1\cdots r_n}$,
there exists a whole family of partial delta systems $\delta^{i_1\cdots i_k}_{r_1\cdots r_k}$,
one for every $k$ between $1$ and $n$. The partial deltas are defined in a way that bridges the
Kronecker delta and the complete delta system.
For the remainder of this Section, we will operate in three dimensions which will limit our
discussion to the delta system $\delta^{ij}_{rs}$.
However, the reader is invited to think about how all of the presented concepts generalize to $n$ dimensions. In describing the elements of $\delta^{ij}_{rs}$, we
will choose our words so that the resulting definition also encompasses the Kronecker delta $\delta^i_r$
and the complete delta system $\delta^{ijk}_{rst}$
and generalizes to $n$ dimensions. We have:
A few examples of the values of
and
the reasons for those values are given in the following table.
Thus, out of a total of $81$ elements, only $12$ are not zero. Specifically,
and
All delta systems, from the Kronecker delta to the complete delta system are connected by various
identities. Generally speaking, a contraction of a higher-order delta system yields a lower-order
delta system. Meanwhile, higher-order delta systems can be expressed in terms of the Kronecker
delta by determinant-like combinations. Demonstrations of all identities are left as exercises.
Let us begin by considering the result of contracting higher-order delta systems. Starting with the
complete system $\delta^{ijk}_{rst}$,
we have
$$\delta^{ijk}_{rsk} = \delta^{ij}_{rs}.$$
This is one of the most frequently
used identities in practical applications. In fact, $\delta^{ij}_{rs}$
almost always arises as the result of contracting $\delta^{ijk}_{rst}$.
Furthermore, by the contraction property of tensors, the above identity immediately tells us that
the partial delta system $\delta^{ij}_{rs}$ is a
tensor, leading to the conclusion that all delta systems are tensors. Additionally, thanks
to the above identity, we know that $\delta^{ij}_{rs}$
vanishes under the covariant derivative, i.e.
$$\nabla_p\, \delta^{ij}_{rs} = 0.$$
When $\delta^{ij}_{rs}$ is
itself contracted, the result is a multiple of the Kronecker delta, i.e.
$$\delta^{ij}_{rj} = 2\,\delta^i_r.$$
To understand where the factor of $2$
comes from, note that
$$\delta^{ij}_{rj} = \delta^{i1}_{r1} + \delta^{i2}_{r2} + \delta^{i3}_{r3}.$$
When $i$ and $r$ are equal to, say, $1$, we have
$$\delta^{1j}_{1j} = \delta^{11}_{11} + \delta^{12}_{12} + \delta^{13}_{13} = 0 + 1 + 1 = 2,$$
and it is clear that the same result
would be obtained for any common value of $i$ and $r$.
In summary, the contraction of a higher-order delta system results in a lower-order delta system or
a multiple thereof.
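The contraction ladder $\delta^{ijk}_{rsk} = \delta^{ij}_{rs}$, $\delta^{ij}_{rj} = 2\delta^i_r$ is easy to confirm numerically. A hedged sketch using `einsum`, whose repeated-index notation performs exactly these contractions:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

e = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    e[p] = perm_sign(p)

d = np.eye(3)
delta3 = np.einsum('ijk,rst->ijkrst', e, e)   # complete delta system
delta2 = np.einsum('ijkrsk->ijrs', delta3)    # contract k with t

# The contraction reproduces the partial delta d^i_r d^j_s - d^i_s d^j_r ...
partial = np.einsum('ir,js->ijrs', d, d) - np.einsum('is,jr->ijrs', d, d)
print(np.allclose(delta2, partial))           # True

# ... and one more contraction yields twice the Kronecker delta.
delta1 = np.einsum('ijrj->ir', delta2)
print(np.allclose(delta1, 2 * d))             # True
```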
Now, let us talk about expressing higher-order delta systems in terms of the Kronecker delta.
Recall that the complete delta system $\delta^{ijk}_{rst}$
is given in terms of the Kronecker delta by the formula
$$\delta^{ijk}_{rst} = \delta^i_r\delta^j_s\delta^k_t + \delta^i_s\delta^j_t\delta^k_r + \delta^i_t\delta^j_r\delta^k_s - \delta^i_r\delta^j_t\delta^k_s - \delta^i_s\delta^j_r\delta^k_t - \delta^i_t\delta^j_s\delta^k_r.$$
Contracting on $k$ and $t$, we discover that
$$\delta^{ijk}_{rsk} = \delta^i_r\delta^j_s - \delta^i_s\delta^j_r.$$
This is one of the most commonly
used identities, along with
$$\delta^{ijk}_{rsk} = \delta^{ij}_{rs}.$$
In fact, these identities are
usually used in tandem: a contraction reduces $\delta^{ijk}_{rst}$
to $\delta^{ij}_{rs}$, which
is followed by the representation of $\delta^{ij}_{rs}$ in
terms of the Kronecker delta. Thus, for convenience, let us combine the two identities into one,
i.e.
$$\delta^{ij}_{rs} = \delta^i_r\delta^j_s - \delta^i_s\delta^j_r.$$
Note that the expression on the
right once again exhibits the familiar determinant pattern and can therefore be captured
elegantly, albeit impractically, in the following way:
$$\delta^{ij}_{rs} = \det\begin{bmatrix} \delta^i_r & \delta^i_s\\ \delta^j_r & \delta^j_s \end{bmatrix}.$$
This completes our discussion of permutation and delta systems in three dimensions and we will now
turn our attention to lower and higher-dimensional cases.
16.6 Generalizations to lower and higher dimensions
The permutation systems, the delta systems, and all of the related identities naturally generalize
to any number of dimensions. The two-dimensional case is particularly important since it plays a
crucial role in the study of two-dimensional surfaces.
16.6.1 In two dimensions
In two dimensions, all indices assume values $1$ and $2$. The permutation systems $e_{ij}$
and $e^{ij}$
are given by
$$e_{ij} = e^{ij} = \begin{cases} \phantom{-}1, & \text{if } ij = 12,\\ -1, & \text{if } ij = 21,\\ \phantom{-}0, & \text{otherwise.} \end{cases}$$
However, it may be even easier
simply to list all of the elements. For $e_{ij}$,
we have
$$e_{11} = 0, \quad e_{12} = 1, \quad e_{21} = -1, \quad e_{22} = 0,$$
and for $e^{ij}$
we have
$$e^{11} = 0, \quad e^{12} = 1, \quad e^{21} = -1, \quad e^{22} = 0.$$
In matrix terms, both $e_{ij}$
and $e^{ij}$
correspond to the matrix
$$\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}.$$
The complete delta system $\delta^{ij}_{rs}$ is
the product of the permutation systems, i.e.
$$\delta^{ij}_{rs} = e^{ij}\, e_{rs}.$$
Note that, generally speaking, the
symbol is
ambiguous since it represents different systems in different dimensions. Thus, when working with
these symbols, it is essential to indicate the relevant dimension.
This point is highlighted by the contraction property
$$\delta^{ij}_{rj} = \delta^i_r$$
satisfied by $\delta^{ij}_{rs}$ in
two dimensions. Note that the analogous identity in three dimensions reads $\delta^{ij}_{rj} = 2\,\delta^i_r$.
On the other hand, the expression for $\delta^{ij}_{rs}$ in
terms of the Kronecker delta reads
$$\delta^{ij}_{rs} = \delta^i_r\delta^j_s - \delta^i_s\delta^j_r,$$
which is identical to the
three-dimensional case. It turns out that the form of this identity is universal for all
dimensions.
16.6.2 In higher dimensions
We must begin by noting that the case $n > 3$ does not correspond to a geometric Euclidean space as we
have defined it. Nevertheless, the analytical framework holds up perfectly for any $n$. Furthermore, in the future -- in Chapter 20, to be precise -- we will generalize the concept
of Euclidean spaces to higher dimensions.
In an $n$-dimensional space, indices run the values between $1$ and $n$. The permutation systems $e_{i_1\cdots i_n}$ and $e^{i_1\cdots i_n}$
are defined as follows.
$$e_{i_1\cdots i_n} = e^{i_1\cdots i_n} = \begin{cases} \phantom{-}1, & \text{if } i_1\cdots i_n \text{ is an even permutation of } 1\cdots n,\\ -1, & \text{if } i_1\cdots i_n \text{ is an odd permutation of } 1\cdots n,\\ \phantom{-}0, & \text{otherwise.} \end{cases}$$
Thus, each permutation system has $n^n$
elements of which only $n!$ are not zero.
The complete delta system $\delta^{i_1\cdots i_n}_{r_1\cdots r_n}$ is
the product of the permutation systems, i.e.
$$\delta^{i_1\cdots i_n}_{r_1\cdots r_n} = e^{i_1\cdots i_n}\, e_{r_1\cdots r_n},$$
and has the values
The complete delta system can be
expressed in terms of the Kronecker deltas by the identity
which proves that it is a tensor,
and that it vanishes under the covariant derivative, i.e.
The partial delta systems $\delta^{i_1\cdots i_k}_{r_1\cdots r_k}$
for every $k$ between $1$ and $n$ can be defined by the very same language as in the two-
and three-dimensional cases, i.e.
It is, once again, important to
point out that this language includes the Kronecker delta $\delta^i_r$ as
well as the complete delta system $\delta^{i_1\cdots i_n}_{r_1\cdots r_n}$.
For every $k$, the partial delta system $\delta^{i_1\cdots i_k}_{r_1\cdots r_k}$
can be expressed in terms of the Kronecker delta by
$$\delta^{i_1\cdots i_k}_{r_1\cdots r_k} = \det\begin{bmatrix} \delta^{i_1}_{r_1} & \cdots & \delta^{i_1}_{r_k}\\ \vdots & \ddots & \vdots\\ \delta^{i_k}_{r_1} & \cdots & \delta^{i_k}_{r_k} \end{bmatrix},$$
which shows the great universality
of this identity.
A contraction of any delta system results in a scalar multiple of the lower-order delta system. It
is left as an exercise to show that
and so on. In general, the multiple corresponding to the contraction of the $k$th-order
system $\delta^{i_1\cdots i_k}_{r_1\cdots r_k}$
is $n - k + 1$, i.e.
$$\delta^{i_1\cdots i_{k-1} j}_{r_1\cdots r_{k-1} j} = (n - k + 1)\,\delta^{i_1\cdots i_{k-1}}_{r_1\cdots r_{k-1}}.$$
For a specific example, let us consider the case and denote the complete delta system by the symbol .
Then we have the following relationships
Thus, in particular,
and, in general,
16.7 Alternatization
In Matrix Algebra, it is common to represent a square matrix $A$ as a sum of a symmetric matrix $S$ and a skew-symmetric matrix $K$, i.e.
$$A = S + K.$$
For example, for
we have
Such a decomposition is unique and, as it is easy to show, we must have
$$S = \tfrac{1}{2}\left(A + A^T\right)$$
and
$$K = \tfrac{1}{2}\left(A - A^T\right).$$
The matrix $S$ is known as the symmetric part
of $A$ and $K$ is known as the skew-symmetric
part of $A$.
In indicial notation, the same identities read
$$a_{ij} = s_{ij} + k_{ij},$$
where
$$s_{ij} = \tfrac{1}{2}\left(a_{ij} + a_{ji}\right)$$
and
$$k_{ij} = \tfrac{1}{2}\left(a_{ij} - a_{ji}\right).$$
In the expression for $k_{ij}$,
we are beginning to see the now-familiar skew-symmetric indicial pattern.
The concepts of symmetric and skew-symmetric parts generalize to higher-order systems
(although it is no longer true that a system is a sum of its symmetric and skew-symmetric parts).
For example, for a third-order system $a_{ijk}$,
its skew-symmetric part $a_{[ijk]}$ is
defined as
$$a_{[ijk]} = \tfrac{1}{3!}\left(a_{ijk} + a_{jki} + a_{kij} - a_{jik} - a_{ikj} - a_{kji}\right),$$
and it is clear how to generalize
this definition to higher-order systems. The operation that converts
into its skew-symmetric part is known as alternatization and it is valid in any dimension
that is equal or greater than the order of the system. Alternatization is relevant to our narrative
because it can be used as an alternative approach to constructing invariant differential operators.
As we established, the partial derivative $\partial T_i/\partial Z^j$
for a tensor $T_i$ is not
a tensor in its own right. We have remedied this problem by replacing the partial derivative with
the combination
$$\nabla_j T_i = \frac{\partial T_i}{\partial Z^j} - \Gamma^k_{ji}\, T_k,$$
which is a tensor. Observe,
however, that the combination
$$\Gamma^k_{ji}\, T_k,$$
which represents the non-tensor
portion of the partial derivative, is symmetric in $i$ and $j$. Therefore, in the alternatization of
$\partial T_i/\partial Z^j$, i.e.
$$\frac{1}{2}\left(\frac{\partial T_i}{\partial Z^j} - \frac{\partial T_j}{\partial Z^i}\right),$$
the terms containing the Christoffel
symbol cancel each other. As a result,
$$\frac{1}{2}\left(\frac{\partial T_i}{\partial Z^j} - \frac{\partial T_j}{\partial Z^i}\right) = \frac{1}{2}\left(\nabla_j T_i - \nabla_i T_j\right).$$
Thus, the alternatization of the partial derivative
is also a tensor. In other words,
the tensor property can be achieved by alternatization -- once again, by virtue of cancelling the
non-tensor contributions. This remains true for tensors of any order as well as for partial
derivatives of any order. For example, the alternatization of the variant
is a tensor.
Thus, it would be of great value to express alternatization algebraically. This can be accomplished
by contraction with the appropriate partial delta system. For example, consider a third-order
system $a_{ijk}$ in
an $n$-dimensional space where $n \ge 3$. Recall that $a_{[ijk]}$,
the skew-symmetric part, i.e. alternatized version, of $a_{ijk}$,
is given by
$$a_{[ijk]} = \tfrac{1}{3!}\left(a_{ijk} + a_{jki} + a_{kij} - a_{jik} - a_{ikj} - a_{kji}\right).$$
With the help of the partial delta
system $\delta^{rst}_{ijk}$,
$a_{[ijk]}$ can be expressed by a simple contraction, i.e.
$$a_{[ijk]} = \tfrac{1}{3!}\,\delta^{rst}_{ijk}\, a_{rst}.$$
This follows from the representation of the delta system in terms of the Kronecker deltas,
but working out the details is left
as an exercise.
Thanks to the algebraic precision of the equation
$$a_{[ijk]} = \tfrac{1}{3!}\,\delta^{rst}_{ijk}\, a_{rst},$$
we will adopt it as the definition
of alternatization, i.e. contraction on all superscripts (or subscripts) with the partial delta
system of the appropriate order. For example, the alternatization of $a_{ij}$
is
$$a_{[ij]} = \tfrac{1}{2!}\,\delta^{rs}_{ij}\, a_{rs}.$$
As we alluded to at the top of this
Chapter, alternatization as a mechanism for achieving invariance is developed in the subject of
Differential Forms, where it serves as the basis for the operation of exterior derivative.
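The contraction-based definition of alternatization can be tested against the explicit signed average over permutations. A hedged sketch in three dimensions, where the delta system of order three is complete:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

e = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    e[p] = perm_sign(p)

delta = np.einsum('rst,ijk->rstijk', e, e)  # delta^{rst}_{ijk}
rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3, 3))          # an arbitrary third-order system

# Alternatization by contraction: T_[ijk] = (1/3!) delta^{rst}_{ijk} T_{rst}.
alt = np.einsum('rstijk,rst->ijk', delta, T) / 6

# The same skew-symmetric part from the explicit six-term signed average.
explicit = sum(perm_sign(p) * np.transpose(T, p)
               for p in permutations(range(3))) / 6

print(np.allclose(alt, explicit))                 # True
print(np.allclose(alt, -np.swapaxes(alt, 0, 1)))  # skew-symmetric, as expected
```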
16.8 The determinant
We have already demonstrated that the determinant of a system $a_{ij}$ can be expressed in an elegant and concise
fashion with the help of the permutation system $e^{ijk}$,
i.e.
$$\det a = e^{ijk}\, a_{1i}\, a_{2j}\, a_{3k}.$$
However, in view of the literal
superscripts, this expression does not live up to the tensorial standard that we have established.
Therefore, further development is required in order to achieve full tensorization of the
determinant. This will be the initial goal of this Section.
16.8.1 Tensor expressions for the determinant
The concept of the determinant applies to second-order systems with scalar elements. For the
purposes of the present discussion let us consider a mixed second-order system $a^i_j$,
even though, for the time being, we are not particularly concerned with the tensor properties of
$a^i_j$.
Adjusting the above identity to the mixed indicial signature, note that $\det a$ can be expressed as follows:
$$\det a = e^{ijk}\, a^1_i\, a^2_j\, a^3_k.$$
Let us switch the indices $1$ and $2$ in the expression above and consider the combination
$$e^{ijk}\, a^2_i\, a^1_j\, a^3_k.$$
Unsurprisingly, as a result of the
switch, the expression changes sign thanks to the skew-symmetric nature of $e^{ijk}$.
Showing this requires three formal steps. First, switch the order of the factors $a^2_i$
and $a^1_j$ in
order to return the literal superscripts to their original order. This, of course, does not change
the value of the expression, so
$$e^{ijk}\, a^2_i\, a^1_j\, a^3_k = e^{ijk}\, a^1_j\, a^2_i\, a^3_k.$$
Second, switch the names of the
dummy indices $i$ and $j$, which again leaves the value of the
expression unchanged, i.e.
$$e^{ijk}\, a^1_j\, a^2_i\, a^3_k = e^{jik}\, a^1_i\, a^2_j\, a^3_k.$$
Finally, note that
$$e^{jik} = -e^{ijk}$$
by the skew-symmetric property and therefore
$$e^{jik}\, a^1_i\, a^2_j\, a^3_k = -e^{ijk}\, a^1_i\, a^2_j\, a^3_k = -\det a.$$
Thus, in summary,
$$e^{ijk}\, a^2_i\, a^1_j\, a^3_k = -\det a,$$
as we set out to show.
Of course, we would reach a similar conclusion if we switched any two of the literal superscripts.
In other words, we have just demonstrated that the third-order system
$$e^{ijk}\, a^r_i\, a^s_j\, a^t_k$$
is fully skew-symmetric in the
superscripts $r$, $s$, $t$. It must, therefore, be a scalar multiple of $e^{rst}$, i.e.
$$e^{ijk}\, a^r_i\, a^s_j\, a^t_k = c\, e^{rst},$$
as we established earlier.
Unsurprisingly, the scalar $c$ is precisely $\det a$, as can be confirmed by setting $r$, $s$, and $t$ to $1$, $2$, and $3$. Thus, we have obtained our first fully tensorial
identity involving the determinant:
$$e^{ijk}\, a^r_i\, a^s_j\, a^t_k = \det a\; e^{rst}.$$
For future reference, note that we
could similarly show the identity
$$e_{ijk}\, a^i_r\, a^j_s\, a^k_t = \det a\; e_{rst},$$
which uses subscripted permutation
systems.
In order to obtain an explicit expression for $\det a$, contract both sides of
$$e^{ijk}\, a^r_i\, a^s_j\, a^t_k = \det a\; e^{rst}$$
with $e_{rst}$, i.e.
$$e^{ijk}\, e_{rst}\, a^r_i\, a^s_j\, a^t_k = \det a\; e^{rst} e_{rst}.$$
Recall that
$$e^{rst} e_{rst} = 3! = 6$$
and
$$e^{ijk}\, e_{rst} = \delta^{ijk}_{rst}.$$
Thus, we have arrived at the
landmark formula
$$\det a = \frac{1}{3!}\,\delta^{ijk}_{rst}\, a^r_i\, a^s_j\, a^t_k$$
for the determinant of $a^i_j$
that fully conforms to all rules of the tensor notation. In particular, it instantly tells us that
the determinant of a mixed second-order tensor
is an invariant.
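The landmark formula $\det a = \tfrac{1}{3!} e^{ijk} e_{rst}\, a^r_i a^s_j a^t_k$ can be verified directly. A hedged sketch in which `A[r, i]` plays the role of $a^r_i$ (an indexing assumption on our part):

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

e = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    e[p] = perm_sign(p)

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
# det A = (1/3!) e^{ijk} e_{rst} A^r_i A^s_j A^t_k
det_tensor = np.einsum('ijk,rst,ri,sj,tk->', e, e, A, A, A) / 6
print(np.isclose(det_tensor, np.linalg.det(A)))  # True
```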
Having accomplished the goal of this Section, let us give the analogous formulas for systems
enumerated by two subscripts, i.e.
$$e^{ijk}\, a_{ir}\, a_{js}\, a_{kt} = \det a\; e_{rst} \qquad\text{and}\qquad \det a = \frac{1}{3!}\, e^{ijk} e^{rst}\, a_{ir}\, a_{js}\, a_{kt},$$
and those enumerated by two
superscripts, i.e.
$$e_{ijk}\, a^{ir}\, a^{js}\, a^{kt} = \det a\; e^{rst} \qquad\text{and}\qquad \det a = \frac{1}{3!}\, e_{ijk} e_{rst}\, a^{ir}\, a^{js}\, a^{kt}.$$
Note that since -- unlike the delta
systems -- the permutation systems are not tensors, we are unable to conclude that the determinant
of a doubly-covariant or a doubly-contravariant tensor is an invariant.
Finally, let us generalize the formulas we have just obtained to lower and higher dimensions. In
two dimensions, for second-order systems ,
,
and
we have
as well as the explicit formulas
Meanwhile, in dimensions, we have
as well as the explicit formulas
16.9 The multiplicative property of determinants
The celebrated multiplicative property of determinants states that the determinant of the product
of two
matrices $A$ and $B$ equals the product of the
determinants, i.e.
$$\det (AB) = \det A\, \det B.$$
We will now use the formulas derived
in the previous Section to demonstrate this property with the help of the tensor notation. Our
proof will be specific to three dimensions but it will be clear that it works for any dimension.
Suppose that $a^i_j$,
$b^i_j$,
and $c^i_j$
are second-order systems, where $c^i_j$ is
the "matrix product" of $a^i_j$
and $b^i_j$,
i.e.
$$c^i_j = a^i_k\, b^k_j.$$
Our goal is to show that
$$\det c = \det a\, \det b.$$
We have
$$\det c\; e_{rst} = e_{ijk}\, c^i_r\, c^j_s\, c^k_t = e_{ijk}\, a^i_l\, b^l_r\, a^j_m\, b^m_s\, a^k_n\, b^n_t.$$
Substituting the properly-reindexed
version of the identity
$$e_{ijk}\, a^i_l\, a^j_m\, a^k_n = \det a\; e_{lmn}$$
into the equation above, we find
$$\det c\; e_{rst} = \det a\; e_{lmn}\, b^l_r\, b^m_s\, b^n_t.$$
With the help of the identity
$$e_{lmn}\, b^l_r\, b^m_s\, b^n_t = \det b\; e_{rst},$$
note that
$$\det c\; e_{rst} = \det a\, \det b\; e_{rst}.$$
Therefore, since $e_{123} = 1$,
we arrive at the desired identity
$$\det c = \det a\, \det b.$$
We ought to take a moment to
appreciate the breathtaking straightforwardness of this calculation. It is truly a testament to the
elegance and the effectiveness of the tensor notation.
16.10 The combination for a second-order system in two dimensions
The related combinations
$$a^r_k\, a^s_l - a^r_l\, a^s_k, \qquad a_{ik}\, a_{jl} - a_{il}\, a_{jk}, \qquad a^{ik}\, a^{jl} - a^{il}\, a^{jk}$$
for second-order systems $a^i_j$,
$a_{ij}$,
and $a^{ij}$
in two dimensions play an important role in a number of applications. In particular, a combination
analogous to these
appears in the celebrated Gauss equations
in the context of surfaces. This
amazing equation will be discussed in a future book while our present discussion will help lay the
necessary algebraic groundwork.
Recall the equation
$$e^{ij}\, a^r_i\, a^s_j = \det a\; e^{rs}$$
that we have just derived for a
mixed system $a^i_j$ in
two dimensions. Since
$$e^{rs}\, e_{kl} = \delta^{rs}_{kl},$$
we have
$$\delta^{ij}_{kl}\, a^r_i\, a^s_j = \det a\; \delta^{rs}_{kl}.$$
Multiplying out the left side, we
obtain
$$a^r_k\, a^s_l - a^r_l\, a^s_k = \det a\; \delta^{rs}_{kl}.$$
This relationship, which is as
stunning as it is simple, is the goal of this Section. Note that taking $a^i_j$ to
be the Kronecker delta $\delta^i_j$,
which corresponds to the identity matrix whose determinant is $1$, yields
$$\delta^r_k\, \delta^s_l - \delta^r_l\, \delta^s_k = \delta^{rs}_{kl}.$$
Thus, the equation
$$a^r_k\, a^s_l - a^r_l\, a^s_k = \det a\; \delta^{rs}_{kl}$$
can be seen as a generalization of
the identity
$$\delta^{rs}_{kl} = \delta^r_k\, \delta^s_l - \delta^r_l\, \delta^s_k$$
when interpreted in a
two-dimensional space.
The equation
$$a^r_k\, a^s_l - a^r_l\, a^s_k = \det a\; \delta^{rs}_{kl}$$
can also be used to express $\det a$ for a $2\times2$ matrix in terms of the traces of the matrix itself and
its square. Indeed, since $\delta^{rs}_{rs} = 2$, contracting $k$ with $r$ and $l$ with $s$ yields
$$2\det a = a^r_r\, a^s_s - a^r_s\, a^s_r,$$
and we note that the object $a^r_r$
corresponds to the trace of the matrix while $a^r_s\, a^s_r$
corresponds to the trace of its square. This result can be easily generalized to any number of
dimensions. Namely, the determinant of an $n\times n$ matrix can be expressed in terms of the traces of its
first $n$ powers.
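The two-dimensional trace identity $2\det a = (\operatorname{tr} a)^2 - \operatorname{tr}(a^2)$ is simple to check numerically. A minimal sketch:

```python
import numpy as np

a = np.array([[3., 1.],
              [2., 5.]])
lhs = 2 * np.linalg.det(a)
rhs = np.trace(a) ** 2 - np.trace(a @ a)
print(np.isclose(lhs, rhs))  # True
```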
Entirely analogous to the equation
$$a^r_k\, a^s_l - a^r_l\, a^s_k = \det a\; \delta^{rs}_{kl}$$
is the subscripted version
$$a_{ik}\, a_{jl} - a_{il}\, a_{jk} = \det a\; e_{ij}\, e_{kl}$$
for a second-order system $a_{ij}$,
and the superscripted version
$$a^{ik}\, a^{jl} - a^{il}\, a^{jk} = \det a\; e^{ij}\, e^{kl}$$
for a second-order system $a^{ij}$.
In particular, for the covariant metric tensor $g_{ij}$,
we have
$$g_{ik}\, g_{jl} - g_{il}\, g_{jk} = g\; e_{ij}\, e_{kl},$$
while for the contravariant metric
tensor $g^{ij}$,
we have
$$g^{ik}\, g^{jl} - g^{il}\, g^{jk} = \frac{1}{g}\; e^{ij}\, e^{kl},$$
where, as usual, $g$ is the determinant of the
covariant metric tensor $g_{ij}$.
We must reiterate that all identities derived in this Section are valid only in two dimensions. For
a second-order system $a^i_j$ in
three dimensions, the identity analogous to
$$a^r_k\, a^s_l - a^r_l\, a^s_k = \det a\; \delta^{rs}_{kl}$$
reads
$$\delta^{ijk}_{lmn}\, a^r_i\, a^s_j\, a^t_k = \det a\; \delta^{rst}_{lmn},$$
and it is entirely apparent how to
generalize this identity to any number of dimensions.
16.11 Determinant cofactors
Let us return to a three-dimensional space, where the combination
$$\delta^{ijk}_{rst}\, a^r_i\, a^s_j\, a^t_k$$
produces $3!$ times the determinant of $a^i_j$.
If we drop the term $a^t_k$
from that combination, we are left with
$$\delta^{ijk}_{rst}\, a^r_i\, a^s_j,$$
which is a second-order system
enumerated by the free indices $k$ and $t$. With an additional factor of $\tfrac{1}{2}$, this new combination is known as the cofactor
of $a^t_k$
and is denoted by $A^k_t$, i.e.
$$A^k_t = \tfrac{1}{2}\,\delta^{ijk}_{rst}\, a^r_i\, a^s_j.$$
The cofactor $A^k_t$ has
two remarkable properties. First, it represents the partial derivative of the determinant when the
latter is interpreted as a function of the elements $a^i_j$,
i.e.
$$\frac{\partial \det a}{\partial a^t_k} = A^k_t.$$
Second, it equals the product of $\det a$
and the matrix inverse of $a^i_j$. It
can therefore be used as an explicit algebraic expression for the matrix inverse of a second-order
system. This Section is devoted to demonstrating these properties.
Let us begin by demonstrating the identity
$$\frac{\partial \det a}{\partial a^t_k} = A^k_t.$$
Notice the tensorial consistency of
the above expression. The indicial signature of $a^t_k$ in
the "denominator" is consistent with the signature of $A^k_t$ on
the right. Since the letters $t$ and $k$ already appear as dummy indices in
$$\det a = \tfrac{1}{3!}\,\delta^{ijk}_{rst}\, a^r_i\, a^s_j\, a^t_k,$$
we will use two new letters $p$ and $q$ and differentiate with respect to $a^q_p$,
i.e. we will evaluate $\partial \det a/\partial a^q_p$.
Recall from Chapter 7 the identity
$$\frac{\partial x^i}{\partial x^j} = \delta^i_j,$$
where it was later used for
quadratic form minimization in Section 8.7. In our present
discussion, the independent variables, i.e. the elements $a^i_j$,
are enumerated by two indices instead of one. Nevertheless, we can similarly consider the
derivative of a typical one of them, say, $a^r_i$,
with respect to $a^q_p$:
$$\frac{\partial a^r_i}{\partial a^q_p}.$$
This derivative equals $1$ when $a^r_i$
and $a^q_p$
are one and the same variable (i.e. when $r = q$ and $i = p$) and $0$ otherwise. This observation is effectively captured by
the tensor identity
$$\frac{\partial a^r_i}{\partial a^q_p} = \delta^r_q\, \delta^p_i,$$
with similar identities for $a^s_j$
and $a^t_k$,
i.e.
$$\frac{\partial a^s_j}{\partial a^q_p} = \delta^s_q\, \delta^p_j \qquad\text{and}\qquad \frac{\partial a^t_k}{\partial a^q_p} = \delta^t_q\, \delta^p_k.$$
We are now ready to proceed with the main calculation. Differentiate both sides of the identity
$$\det a = \tfrac{1}{3!}\,\delta^{ijk}_{rst}\, a^r_i\, a^s_j\, a^t_k$$
with respect to $a^q_p$,
i.e.
$$\frac{\partial \det a}{\partial a^q_p} = \tfrac{1}{3!}\,\delta^{ijk}_{rst}\, \frac{\partial}{\partial a^q_p}\!\left(a^r_i\, a^s_j\, a^t_k\right).$$
By the product rule,
$$\frac{\partial \det a}{\partial a^q_p} = \tfrac{1}{3!}\,\delta^{ijk}_{rst} \left(\frac{\partial a^r_i}{\partial a^q_p}\, a^s_j\, a^t_k + a^r_i\, \frac{\partial a^s_j}{\partial a^q_p}\, a^t_k + a^r_i\, a^s_j\, \frac{\partial a^t_k}{\partial a^q_p}\right).$$
Since
$$\frac{\partial a^r_i}{\partial a^q_p} = \delta^r_q\, \delta^p_i,$$
we have
$$\frac{\partial \det a}{\partial a^q_p} = \tfrac{1}{3!}\,\delta^{ijk}_{rst} \left(\delta^r_q \delta^p_i\, a^s_j\, a^t_k + \delta^s_q \delta^p_j\, a^r_i\, a^t_k + \delta^t_q \delta^p_k\, a^r_i\, a^s_j\right).$$
Absorbing the Kronecker deltas into
the delta systems, we find that
$$\frac{\partial \det a}{\partial a^q_p} = \tfrac{1}{3!} \left(\delta^{pjk}_{qst}\, a^s_j\, a^t_k + \delta^{ipk}_{rqt}\, a^r_i\, a^t_k + \delta^{ijp}_{rsq}\, a^r_i\, a^s_j\right).$$
Now, recall the combination
corresponding to the cofactor $A^p_q$, i.e.
$$A^p_q = \tfrac{1}{2}\,\delta^{ijp}_{rsq}\, a^r_i\, a^s_j,$$
and observe that each term in
parentheses in the preceding identity equals $2A^p_q$.
Thus, the sum in parentheses equals $6A^p_q$ and
we can conclude that
$$\frac{\partial \det a}{\partial a^q_p} = A^p_q,$$
as we set out to prove. It is left
as an exercise to generalize this identity to lower and higher dimensions.
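The derivative property can be spot-checked numerically with finite differences. A hedged sketch in which the cofactor matrix is produced as `det(a) * inv(a).T`, under the indexing assumption that `C[i, j]` is the derivative of the determinant with respect to `a[i, j]`:

```python
import numpy as np

a = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
C = np.linalg.det(a) * np.linalg.inv(a).T  # cofactor matrix of an invertible a

# Central-difference approximation of d(det a)/da[i, j], entry by entry.
h = 1e-6
num = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        ap, am = a.copy(), a.copy()
        ap[i, j] += h
        am[i, j] -= h
        num[i, j] = (np.linalg.det(ap) - np.linalg.det(am)) / (2 * h)

print(np.allclose(num, C, atol=1e-6))  # True
```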
Let us now turn our attention to the second property of the cofactor $A^k_t$.
Namely, that it equals the product of $\det a$ and the matrix inverse of $a^i_j$.
In the tensor notation, this relationship is captured by the equation
$$A^k_t\, a^t_m = \det a\; \delta^k_m.$$
To prove this identity, denote the
product $A^k_t\, a^t_m$
by $B^k_m$,
i.e.
$$B^k_m = A^k_t\, a^t_m.$$
Since the cofactor $A^k_t$ is
given by
$$A^k_t = \tfrac{1}{2}\,\delta^{ijk}_{rst}\, a^r_i\, a^s_j,$$
the product $B^k_m$
is given by
$$B^k_m = \tfrac{1}{2}\,\delta^{ijk}_{rst}\, a^r_i\, a^s_j\, a^t_m,$$
and we must show that
$$B^k_m = \det a\; \delta^k_m.$$
We will demonstrate this identity by
considering two specific examples -- one where $k \neq m$ and another where $k = m$. These two examples will convince us
of the general validity of the above identity.
First, consider the case where $k = 1$ and $m = 2$, i.e.
$$B^1_2 = \tfrac{1}{2}\,\delta^{ij1}_{rst}\, a^r_i\, a^s_j\, a^t_2.$$
The nonzero contributions to the sum
on the right come from the combinations for which $ij$ equals $23$
and $32$, i.e.
$$B^1_2 = \tfrac{1}{2}\left(e_{rst}\, a^r_2\, a^s_3\, a^t_2 - e_{rst}\, a^r_3\, a^s_2\, a^t_2\right).$$
However, each of these terms is zero
since, in the first term, $a^r_2\, a^t_2$
is symmetric in $r$ and $t$ and, in the second term, $a^s_2\, a^t_2$
is symmetric in $s$ and $t$, while $e_{rst}$ is skew-symmetric, and, as we demonstrated earlier, a double contraction of
a skew-symmetric system and a symmetric system vanishes. Thus, $B^1_2 = 0$ and, more generally, $B^k_m = 0$ whenever $k \neq m$.
Next, consider the case where $k = m = 1$, i.e.
$$B^1_1 = \tfrac{1}{2}\,\delta^{ij1}_{rst}\, a^r_i\, a^s_j\, a^t_1.$$
Once again, the nonzero
contributions to the sum on the right come from the combinations for which $ij$ equals $23$
and $32$, i.e.
$$B^1_1 = \tfrac{1}{2}\left(e_{rst}\, a^r_2\, a^s_3\, a^t_1 - e_{rst}\, a^r_3\, a^s_2\, a^t_1\right).$$
It is left as an exercise to show
that, this time, each term in parentheses equals $\det a$. Thus, $B^1_1 = \det a$ and, more generally, $B^k_k = \det a$ for every value of $k$ (with no summation implied).
In summary,
$$B^k_m = \det a\; \delta^k_m,$$
or
$$A^k_t\, a^t_m = \det a\; \delta^k_m,$$
as we set out to show.
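This second property is also easy to confirm numerically. A hedged sketch, under the indexing assumption that the cofactor $A^k_t$ corresponds to the matrix `det(a) * inv(a)` for an invertible `a`:

```python
import numpy as np

a = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
A = np.linalg.det(a) * np.linalg.inv(a)  # the cofactor A^k_t as a matrix

# A^k_t a^t_m = det(a) delta^k_m, i.e. A @ a equals det(a) times the identity.
print(np.allclose(A @ a, np.linalg.det(a) * np.eye(3)))  # True
```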
16.12 The derivative of the volume element
In this Section, we will show that the partial derivative
$$\frac{\partial \sqrt{g}}{\partial Z^i}$$
of the volume element $\sqrt{g}$,
where $g$ is the determinant of the covariant
metric tensor $g_{ij}$, is
given by the beautiful formula
$$\frac{\partial \sqrt{g}}{\partial Z^i} = \sqrt{g}\; \Gamma^k_{ki}.$$
This formula is of undeniable intrinsic interest. Furthermore, it will be used on a number of
occasions in our narrative, including the proof of the metrinilic property of the covariant
derivative with respect to the Levi-Civita symbols introduced in the next Chapter, the proof of the
Voss-Weyl formula in Chapter 18, and the
proof of Gauss's theorem, also known as the divergence theorem, in a future book.
To begin our analysis, note that by the chain rule,
$$\frac{\partial \sqrt{g}}{\partial Z^i} = \frac{1}{2\sqrt{g}}\,\frac{\partial g}{\partial Z^i}.$$
Thus, we can focus our attention on $\partial g/\partial Z^i$
itself.
Treat $g$ as a function of the elements $g_{jk}$.
This is similar to the way we treated $\det a$ as a function of the elements of $a^i_j$ in
the previous Section. By the chain rule,
$$\frac{\partial g}{\partial Z^i} = \frac{\partial g}{\partial g_{jk}}\,\frac{\partial g_{jk}}{\partial Z^i}.$$
As we showed in the previous
Section,
$$\frac{\partial g}{\partial g_{jk}} = g\, g^{jk}.$$
Recall that the derivative $\partial g_{jk}/\partial Z^i$
of the metric tensor was calculated in Chapter 12,
where we found that it is given by the identity
$$\frac{\partial g_{jk}}{\partial Z^i} = \Gamma_{j,ki} + \Gamma_{k,ji}.$$
Combining the last two identities,
we find
$$\frac{\partial g}{\partial Z^i} = g\, g^{jk}\left(\Gamma_{j,ki} + \Gamma_{k,ji}\right).$$
Absorbing $g^{jk}$
into the Christoffel symbols yields
$$\frac{\partial g}{\partial Z^i} = g\left(\Gamma^k_{ki} + \Gamma^j_{ji}\right),$$
or, equivalently,
$$\frac{\partial g}{\partial Z^i} = 2g\, \Gamma^k_{ki}.$$
Since, as we noted earlier,
$$\frac{\partial \sqrt{g}}{\partial Z^i} = \frac{1}{2\sqrt{g}}\,\frac{\partial g}{\partial Z^i},$$
we have arrived at the identity
$$\frac{\partial \sqrt{g}}{\partial Z^i} = \sqrt{g}\; \Gamma^k_{ki},$$
as we set out to show.
For illustration purposes, let us confirm this identity in cylindrical coordinates, where
$$\sqrt{g} = r.$$
Since $Z^1$,
$Z^2$, and $Z^3$
correspond to $r$, $\theta$, and $z$, we have
$$\frac{\partial \sqrt{g}}{\partial r} = 1, \qquad \frac{\partial \sqrt{g}}{\partial \theta} = 0, \qquad \frac{\partial \sqrt{g}}{\partial z} = 0.$$
Recall that the nonzero elements of the Christoffel symbol in the cylindrical coordinates are given
by
$$\Gamma^r_{\theta\theta} = -r, \qquad \Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = \frac{1}{r}.$$
Therefore, we have
$$\sqrt{g}\;\Gamma^k_{kr} = r\cdot\frac{1}{r} = 1, \qquad \sqrt{g}\;\Gamma^k_{k\theta} = 0, \qquad \sqrt{g}\;\Gamma^k_{kz} = 0,$$
as we set out to show.
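The same confirmation can be carried out symbolically, computing the Christoffel symbols from the metric rather than quoting them. A hedged sketch using SymPy in cylindrical coordinates:

```python
import sympy as sp

# Symbolic check of d(sqrt(g))/dZ^i = sqrt(g) Gamma^k_{ki} in cylindrical
# coordinates (r, theta, z), where the covariant metric is diag(1, r^2, 1).
r, theta, z = sp.symbols('r theta z', positive=True)
coords = [r, theta, z]
g = sp.diag(1, r**2, 1)
g_inv = g.inv()
sqrt_g = sp.sqrt(g.det())  # simplifies to r

# Christoffel symbols Gamma^k_{ij} = (1/2) g^{kl} (d_i g_lj + d_j g_il - d_l g_ij)
Gamma = [[[sum(sp.Rational(1, 2) * g_inv[k, l]
               * (sp.diff(g[l, j], coords[i]) + sp.diff(g[i, l], coords[j])
                  - sp.diff(g[i, j], coords[l]))
               for l in range(3))
           for j in range(3)] for i in range(3)] for k in range(3)]

for i in range(3):
    lhs = sp.diff(sqrt_g, coords[i])
    rhs = sqrt_g * sum(Gamma[k][k][i] for k in range(3))  # contracted symbol
    print(sp.simplify(lhs - rhs))  # 0 for each coordinate
```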
16.13 Exercises
Exercise 16.1 Show that any of the equations
follows from the other two.
Exercise 16.2 Show that
Exercise 16.3 Show that
Exercise 16.4 Show that
Exercise 16.5 Show that
Exercise 16.6 Describe the values of all the elements of the system
Exercise 16.7 Similarly, show that
is a tensor for a tensor .
Exercise 16.8 Show that for any system ,
Exercise 16.9 Show that if is symmetric in its subscripts, i.e. , then
In particular,
Exercise 16.10 We have shown that if is skew-symmetric, then for any . Show the converse, i.e. if for any , then is skew-symmetric.
Exercise 16.11 Show that
and, more generally,
Exercise 16.12 Evaluate
Exercise 16.13 Show that
Exercise 16.14 Evaluate
Exercise 16.15 Evaluate
Exercise 16.16 Show that
Exercise 16.17 Confirm that in three dimensions,
and
Exercise 16.18 Show that
Generalize this formula to higher dimensions.
Exercise 16.19 In three dimensions, consider a first-order system and let
Show that is skew-symmetric and that can be recovered from by contraction with , i.e.
Exercise 16.20 Similarly, consider a skew-symmetric second-order system and let
Show that can be recovered from by contraction with , i.e.
Exercise 16.21 If is a tensor, show that
is also a tensor.
Exercise 16.22 Use the formula
to show that the determinant of is .
Exercise 16.23 Confirm the identity
by evaluating the quantities (which equals ) and for a general matrix
Exercise 16.24 Derive the formula for the determinant of a matrix in terms of the traces of , , and .
Exercise 16.25 In three dimensions, show that the determinant of a skew-symmetric system is zero. Show that this result extends to any odd dimension.
Exercise 16.26 Demonstrate the identity
in two, as well as , dimensions.
Exercise 16.27 Confirm the identity
in spherical coordinates.
Exercise 16.28 Demonstrate the identity
and confirm it in cylindrical and spherical components.