
Proving A Rotation Matrix Is What It Purports to Be: EM II Notes 2014_08_15

Summary:  The one that took four days.  A detector that worked finally arrived for the experiment, so work on EM II has been somewhat slower.  Also, the example here uses a lot of material from prior examples and requires being on your toes.  This example is all about showing that a rather abstruse-looking rotation matrix is in fact a rotation matrix.  It involves recognizing dot and cross products when they're written in tensor index notation and having rock-solid index skills.  At the end of the day though, it's pretty cool, but it still seems like there should be an even simpler way to do this than the one shown here.

The game is to show that the following is a rotation matrix, in the sense that multiplying it by its transpose gives the identity matrix:
$M_{ij} = \delta_{ij}\cos\alpha + n_i n_j \left(1 - \cos\alpha\right) + \epsilon_{ijk}n_k \sin\alpha$

Keep in mind that $n_i$ is defined to be a unit vector.  The transpose relation that we're supposed to show can be written down as $M_{ij}M_{ik} = \delta_{jk}$
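Before diving into the index gymnastics, the claim is easy to sanity-check numerically.  Here's a pure-Python sketch that builds $M$ straight from the formula above and checks $M_{ij}M_{il} = \delta_{jl}$; the particular unit vector and angle are arbitrary choices for illustration:

```python
import math

def levi_civita(i, j, k):
    # +1 for even permutations of (0, 1, 2), -1 for odd, 0 for repeated indices
    return (j - i) * (k - i) * (k - j) // 2

def delta(i, j):
    return 1.0 if i == j else 0.0

def rotation_matrix(n, alpha):
    # M_ij = delta_ij cos(a) + n_i n_j (1 - cos(a)) + eps_ijk n_k sin(a)
    return [[delta(i, j) * math.cos(alpha)
             + n[i] * n[j] * (1 - math.cos(alpha))
             + sum(levi_civita(i, j, k) * n[k] for k in range(3)) * math.sin(alpha)
             for j in range(3)] for i in range(3)]

# An arbitrary unit vector and angle for the check
n = [2 / 7, 3 / 7, 6 / 7]   # 2^2 + 3^2 + 6^2 = 49, so this has length 1
alpha = 0.8
M = rotation_matrix(n, alpha)

# M_ij M_il, summed over i, should reproduce delta_jl
product = [[sum(M[i][j] * M[i][l] for i in range(3)) for l in range(3)]
           for j in range(3)]
```

Checking `product` against the identity matrix (to within floating-point roundoff) confirms the claim for this axis and angle; the derivation below shows it holds for every axis and angle.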

Multiplying the matrix by its transpose will result in 9 terms.  We can make two of them go away identically and two others subtract from each other to disappear.

We'll make one change to the indices just for clarity's sake and rename $M_{ik}$ to $M_{il}$; the dummy summation index that was called $k$ inside $M_{il}$ will be renamed $m$.

The nine terms are
$\delta_{ij}\delta_{il}\cos^2\alpha = \delta_{jl}\cos^2\alpha$
For this term, keep in mind that the sum over $i$ is really a sum of three 3x3 matrices.  Only the $j=l$ entries survive, but they survive in three different locations in index space: 1, 1; 2, 2; and 3, 3.

$\delta_{il}n_in_j \cos\alpha\left(1-\cos\alpha\right)$
$= n_ln_j \cos\alpha\left(1 - \cos\alpha\right)$
The trick here was to absorb the $\delta_{il}$ into the $n_i$, changing the index of $n_i$ to $n_l$ in the process.

$\delta_{il}\epsilon_{ijk}n_k \sin\alpha \cos\alpha$
$=\epsilon_{ljk}n_k\,\sin\alpha\,\cos\alpha$
The trick in this first step was to apply the $\delta$ to the $\epsilon$ and switch the appropriate index name as above.  There's one more trick that can be played, and we'll see it when we get to the seventh term.  For now, suffice it to say that this term is going to go away in the end.

$\delta_{ij} n_i n_l \cos\alpha\left(1 - \cos\alpha\right)$
First, keep in mind that we're still multiplying the $M_{il}$ terms by the $M_{ij}$ terms; they're just written out of order above.  We can play the same game we played in term number two to get a final answer of
$n_j n_l \cos\alpha \left(1 - \cos\alpha\right)$
These two terms, 2 and 4, are the same, so we just wind up with 2 times term 2.

$n_in_ln_in_j\left(1 - \cos\alpha\right)^2$
$= n_l n_j \left(1 - \cos\alpha\right)^2$
The trick here is to group the two $n_i$ factors next to each other and recognize that group as the dot product of a unit vector with itself.  Since $n$ is a unit vector, the dot product evaluates to one.

$n_i n_l \epsilon_{ijk} n_k \sin\alpha \left(1 - \cos\alpha\right)$
$= 0$
The trick here is to spot the cross product of two unit vectors.  Since the unit vector is of course identical to itself, it is also parallel to itself, so the cross product, $\epsilon_{ijk}n_i n_k$, vanishes.
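The vanishing of $\epsilon_{ijk}n_i n_k$ is just the index-notation statement that $n \times n = 0$.  A small sketch makes it concrete; the particular unit vector is an arbitrary choice:

```python
def levi_civita(i, j, k):
    # +1 for even permutations of (0, 1, 2), -1 for odd, 0 for repeated indices
    return (j - i) * (k - i) * (k - j) // 2

n = [2 / 7, 3 / 7, 6 / 7]  # an arbitrary unit vector

# Component j of n x n, written in index notation as eps_ijk n_i n_k
cross_with_self = [sum(levi_civita(i, j, k) * n[i] * n[k]
                       for i in range(3) for k in range(3))
                   for j in range(3)]
# Every component vanishes: eps_ijk is antisymmetric in i and k,
# while n_i n_k is symmetric, so the terms cancel in pairs.
```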

This is the one that winds up canceling term three.
$\delta_{ij} \epsilon_{ilm} n_m \sin\alpha \cos\alpha$
$= \epsilon_{jlm} n_m \sin\alpha \cos\alpha$
Here's the trick.  The $k$ from term 3, rewritten here for convenience:
$\epsilon_{ljk}n_k\,\sin\alpha\,\cos\alpha$,
and the $m$ from term 7 are summation variables.  Also note that while you're free to choose any values for $l$ and $j$ you like, they have to be the same values in terms 3 and 7.  So, when I run my sum through either $k$ or $m$, choosing a value for, let's say, $k$ fixes the values of $j$ and $l$ (they have to differ from the chosen $k$ to avoid making the Levi-Civita symbol zero).  Since $j$ and $l$ are now fixed, there's only one value of $m$ that gives a non-zero symbol.  Essentially, this makes $k$ and $m$ move in lock step with each other.  Now, for the antisymmetry trick.  If I switch any two indices in a Levi-Civita symbol, I change its sign.  If you check, you'll see that $j$ and $l$ are reversed in order between terms 3 and 7.  This means that the two terms always have opposite signs but the same magnitudes, and will sum to 0.  Terms 3 and 7 are removed from the rest of the process.
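The cancellation can be checked directly: for every fixed pair $j, l$, the sums $\epsilon_{ljk}n_k$ and $\epsilon_{jlm}n_m$ come out equal and opposite (the common $\sin\alpha\cos\alpha$ factor is omitted here).  A short sketch, again with an arbitrary unit vector:

```python
def levi_civita(i, j, k):
    # +1 for even permutations of (0, 1, 2), -1 for odd, 0 for repeated indices
    return (j - i) * (k - i) * (k - j) // 2

n = [2 / 7, 3 / 7, 6 / 7]  # an arbitrary unit vector

for j in range(3):
    for l in range(3):
        term3 = sum(levi_civita(l, j, k) * n[k] for k in range(3))
        term7 = sum(levi_civita(j, l, m) * n[m] for m in range(3))
        # Swapping two indices flips the sign of the Levi-Civita symbol,
        # so for every (j, l) the two terms sum to zero exactly.
        assert term3 + term7 == 0
```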

Term eight is similar to term six but with different indices.  It is equal to 0 for the same reason having to do with the cross product of parallel vectors.

$\epsilon_{ijk} n_k \epsilon_{ilm} n_m \sin^2\alpha$
First, we need to rearrange the indices to make use of the identity that transforms a product of $\epsilon$s into a difference of products of $\delta$s.
$\epsilon_{ijk} n_k \epsilon_{ilm} n_m \sin^2\alpha = \epsilon_{jki} n_k \epsilon_{lmi} n_m \sin^2\alpha$
The operation above cycled the indices on the $\epsilon$s to make them look like the identity as recorded in the notes.  Cycling the indices on a Levi-Civita symbol is an even permutation and introduces no sign change.  Once this is done, we can use the identity,
$\epsilon_{jki} \epsilon_{lmi} = \delta_{jl} \delta_{km} - \delta_{jm} \delta_{kl}$,
to turn term 9 into
$\left(\delta_{jl} \delta_{km}n_k n_m - \delta_{jm} \delta_{kl}n_k n_m\right) \sin^2\alpha$
I'll refer to the first subterm of 9 as 9a and to the second subterm as 9b.  It's easy to get carried away and try to think through or write out all the different combinations of indices in this one.  The term can be evaluated much more quickly, though, by turning off all original thinking and just using the pre-existing tricks.  First, the 9a term contains a dot product, so,
$\delta_{jl} \delta_{km}n_k n_m \sin^2\alpha = \delta_{jl} \sin^2\alpha$
Second, 9b can be simplified mechanically by running the same, '$\delta_{ij}$ renames an index of what it's applied to' trick we've been running throughout.  Here's the result,
$\delta_{jm} \delta_{kl}n_k n_m \sin^2\alpha = n_l n_j \sin^2\alpha$
The final result is
$\delta_{jl} \sin^2\alpha - n_l n_j \sin^2\alpha$
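The $\epsilon$–$\delta$ identity itself is easy to verify by brute force over all 81 index combinations:

```python
def levi_civita(i, j, k):
    # +1 for even permutations of (0, 1, 2), -1 for odd, 0 for repeated indices
    return (j - i) * (k - i) * (k - j) // 2

def delta(a, b):
    return 1 if a == b else 0

# eps_jki eps_lmi (summed over i) = delta_jl delta_km - delta_jm delta_kl
for j in range(3):
    for k in range(3):
        for l in range(3):
            for m in range(3):
                lhs = sum(levi_civita(j, k, i) * levi_civita(l, m, i)
                          for i in range(3))
                rhs = delta(j, l) * delta(k, m) - delta(j, m) * delta(k, l)
                assert lhs == rhs
```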

Summing the terms

The remaining five (or six, depending on how you count) terms sum to give:
$\delta_{jl} \cos^2\alpha + \delta_{jl} \sin^2\alpha + n_j n_l \cos\alpha \left(1 - \cos\alpha\right)
+ n_l n_j \left(1 - \cos\alpha\right)^2 + n_l n_j \cos\alpha \left(1 - \cos\alpha\right) - n_l n_j \sin^2\alpha$
The first two terms combine via $\cos^2\alpha + \sin^2\alpha = 1$, and the third and fifth terms just double to give,
$\delta_{jl} + 2 n_j n_l \cos\alpha \left(1 - \cos\alpha\right)
+ n_l n_j \left(1 - \cos\alpha\right)^2 - n_l n_j \sin^2\alpha$
This leaves a few cosine terms to expand and evaluate.  The doubled term expands to
$2\cos\alpha - 2\cos^2\alpha$,
and the squared term expands to
$-2\cos\alpha + \cos^2\alpha + 1$.
Adding the two leaves us with $-\cos^2\alpha + 1 = \sin^2\alpha$
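As a quick sanity check on the algebra, the coefficient $2\cos\alpha\left(1 - \cos\alpha\right) + \left(1 - \cos\alpha\right)^2$ really does reduce to $\sin^2\alpha$ at any angle (the sample angles below are arbitrary):

```python
import math

# 2 cos(a)(1 - cos(a)) + (1 - cos(a))^2 should equal sin(a)^2 for every a
for step in range(100):
    a = 0.0629 * step  # sample angles spanning several periods
    c = math.cos(a)
    coefficient = 2 * c * (1 - c) + (1 - c) ** 2
    assert abs(coefficient - math.sin(a) ** 2) < 1e-12
```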
Plugging this back into the original sum of terms we get,
$\delta_{jl} + n_l n_j \sin^2\alpha - n_l n_j \sin^2\alpha = \delta_{jl}$
It's done!

Picture of the Day
Sunset over Sound Beach, NY.

