

Showing posts with the label emII

EM II Notes 2014_11_23: Homework sketches

Just a few notes on how to proceed on the penultimate homework of the semester. We're to show that the solutions for the 30/60/90 triangular waveguide given in the last homework set will also work for a waveguide formed from an equilateral triangle.  The three corners of the equilateral triangle are located at $\left(x,y\right) = \left(0, 0\right)$, $\left(x,y\right) = \left(a, a/\sqrt{3}\right)$, and $\left(x,y\right) = \left(a, -a/\sqrt{3}\right)$. This falls out immediately from last week's homework.  Because the zeros of the sine function repeat with period $\pi$ over the entire domain $\left(-\infty, \infty\right)$, the solution given last week in terms of sines will still evaluate to zero on the wall that falls at negative $y$ coordinates.  Do the positive $x$ coordinates of the functions evaluate to 0 on that wall in the same manner they did before?  There's an issue here.  It's products of the $x$ and $y$ sinusoids that all sum to zero.  These will need t...

Proper Velocity!!! and Getting Index Notation Worked Out: EM II Notes 2014_09_09

Summary:  It looks like I'll finally get a good understanding of the gamma notation for moving proper velocities to lab velocities and back.  It'll be nice to know it inside and out, but a little irksome given all that can be done with the hyperbolic notation we're not using.  I want to maintain my fluency in both. There may be a subtle second notation for inverted Lorentz transforms.  As it turns out, the subtle notational difference of moving indices around on the top and the bottom with spaces is meant to keep track of which index comes first when you go back to side-by-side notation. First, we cover Lorentz transforms (which are not in fact tensors) and contractions, and arrive at the interesting result in equation 1.99: $\Lambda^\mu_{\ \rho} \Lambda_\mu^{\ \sigma} T^\rho_{\ \sigma} = \delta_\rho^{\ \sigma} T^\rho_{\ \sigma}$, which indicates that the transpose of the Lorentz transform times itself follows a sort of orthogonality rule making use of contravariant indices. Q: ...
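Not part of the original post: a small numpy sketch of how that orthogonality-style contraction can be spot-checked numerically for a concrete boost along $x$. The boost speed, the metric convention $\eta = \mathrm{diag}(1,-1,-1,-1)$, and $c = 1$ are my own assumptions for the example.

```python
import numpy as np

# Check that Lambda^mu_rho Lambda_mu^sigma = delta_rho^sigma, where the second
# factor has its indices moved with the metric eta = diag(1, -1, -1, -1).
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)

# Boost along x with c = 1: Lambda^mu_nu, mu as the row index.
L = np.array([[gamma, -gamma * v, 0, 0],
              [-gamma * v, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Lambda_mu^sigma = eta_{mu alpha} Lambda^alpha_beta eta^{beta sigma}
L_mixed = eta @ L @ eta

# Contracting on the first (mu) index of both factors should give the identity.
print(np.allclose(L.T @ L_mixed, np.eye(4)))  # True
```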

Showing that SpaceTime Intervals are invariant: EM II notes 2014_09_03

Summary:  Continuing notes on the tensor version of the Lorentz transform.  It's time to start on the second set of examples. The interval in four-space is invariant under Lorentz transforms and is called the Lorentz scalar. The Lorentz transform also applies to differential distances as, $dx^{\prime\mu} = \Lambda^\mu_{\ \nu} dx^\nu$ We were asked in class to work out $x^2+y^2+z^2-t^2 = x^{\prime 2}+y^{\prime 2}+z^{\prime 2}-t^{\prime 2}$ The transforms we'll use (with $c = 1$) are: $x = \gamma\left(x^\prime + vt^\prime\right)$ $t = \gamma\left(t^\prime + vx^\prime\right)$ Substituting these into the l.h.s. gives $\gamma^2\left(x^\prime + vt^\prime\right)^2 - \gamma^2\left(t^\prime + vx^\prime\right)^2 = x^{\prime 2} - t^{\prime 2}$ $ = \gamma^2\left(x^{\prime 2} + 2vx^\prime t^\prime + v^2t^{\prime 2}\right) - \gamma^2\left(t^{\prime 2} + 2vx^\prime t^\prime + v^2x^{\prime 2}\right)= x^{\prime 2} - t^{\prime 2}$ $ = \gamma^2\left(x^{\prime 2} + v^2t^{\prime 2}\right) - \gamma^2\left(t^{\prime 2} + v^2x^{...
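A quick symbolic check of the same substitution, added here as a sketch rather than taken from the notes; the variable names are mine and $c = 1$ is assumed.

```python
import sympy as sp

# Verify that the 1+1 dimensional boost used above leaves x^2 - t^2 unchanged.
v, xp, tp = sp.symbols('v x_prime t_prime', real=True)
gamma = 1 / sp.sqrt(1 - v**2)

x = gamma * (xp + v * tp)
t = gamma * (tp + v * xp)

# The cross terms 2 v x' t' cancel and gamma^2 (1 - v^2) collapses to 1.
print(sp.simplify(x**2 - t**2 - (xp**2 - tp**2)))  # 0
```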

Two More Tensor Identities, and then Special Relativity Next! EM II Notes 2014_08_22

Summary:  Still more tensor identities.  Three more identities with gradients, divergences, Laplacians, and cross products.  Later today, the fun stuff, special relativity, begins! $\vec{A} \cdot \left(\vec{B} \times \vec{C}\right) = \vec{B} \cdot \left(\vec{C} \times \vec{A}\right)$ $= A_i \epsilon_{ijk} B_j C_k$ As long as we only cycle the indices in the Levi-Civita symbol, we won't cause a sign change, so the above is also equal to $= A_k \epsilon_{ijk} B_i C_j$ which we can commute to get $= B_i \epsilon_{ijk} C_j A_k = \vec{B} \cdot \left(\vec{C} \times \vec{A}\right)$ Done! $\vec{\nabla} \cdot \left(\vec{\nabla} \times \vec{A} \right) = 0$ $=\partial_i \epsilon_{ijk} \partial_j A_k$ $= 0$ If $i$ and $j$ are equal, then the Levi-Civita symbol evaluates to zero.  If they are not equal, then swapping the two indices produces the same mixed partial derivative result, but with a negative sign inserted by swapping indices in the Levi-Civita symbol.  ...
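As a sanity check (my addition, not from the post), the cyclic triple-product identity can be spot-checked numerically with random vectors:

```python
import numpy as np

# Spot check of A . (B x C) = B . (C x A) for random vectors.
rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))

lhs = np.dot(A, np.cross(B, C))
rhs = np.dot(B, np.cross(C, A))
print(np.isclose(lhs, rhs))  # True
```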

More Tensor Index Identity Proofs: EM II Notes 2014_08_18

Summary:  Having worked through the examples that looked the most difficult, today's notes contain examples that are pick-up work from the easy problems.  These are simple-ish tensor index identities, including the divergence of the position vector, the curl of the position vector, the Laplacian of one over the displacement squared, and the curl of a gradient. $\nabla \cdot \vec{r} = 3$ $= \dfrac{\partial}{\partial x_i} r_i$ Keep in mind that $r_1 = x$, $r_2 = y$, and $r_3 = z$.  Using the rules of partial differentiation, when the partial operates on the variable it is taken with respect to, it returns 1, and when it operates on any other variable, it returns 0.  The results sum to 3. $\vec{\nabla} \times \vec{r} = 0$ $=\epsilon_{ijk} \partial_j r_k$ $= 0$ For the $\epsilon_{ijk}$ to evaluate to a non-zero result, $j$ and $k$ must not be equal.  However, as discussed above, if $j \ne k$, then the partial derivative evaluates to zero. ...
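A short symbolic sketch of the first two results, added by me rather than taken from the post; the component-by-component curl is written out explicitly.

```python
import sympy as sp

# Check div(r) = 3 and curl(r) = 0 for r = (x, y, z).
x, y, z = sp.symbols('x y z', real=True)
r = sp.Matrix([x, y, z])

div_r = sum(r[i].diff(v) for i, v in enumerate((x, y, z)))
print(div_r)  # 3

curl_r = sp.Matrix([
    r[2].diff(y) - r[1].diff(z),
    r[0].diff(z) - r[2].diff(x),
    r[1].diff(x) - r[0].diff(y),
])
print(curl_r.T)  # Matrix([[0, 0, 0]])
```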

Proving A Rotation Matrix Is What It Purports to Be: EM II Notes 2014_08_15

Summary:  The one that took four days.  A detector that worked finally arrived for the experiment, so work on EM II has been somewhat slower.  Also, the example here uses a lot of material from prior examples and requires being on your toes.  This example is all about showing that a rather abstruse-looking rotation matrix is in fact a rotation matrix.  It involves recognizing dot and cross products when they're written in tensor index notation and having rock-solid index skills.  At the end of the day though, it's pretty cool, but it still seems like there should be an even simpler way to do this than the one shown here. The game is to show that the following is a rotation matrix, in that when it's multiplied by its transpose, the result is the identity matrix: $M_{ij} = \delta_{ij}\cos \alpha + n_i n_j \left(1 - \cos \alpha\right) + \epsilon_{ijk}n_k \sin \alpha$ Keep in mind that $n_i$ is defined to be a unit vector.  The transpose relation that we're su...
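My own numerical sketch of the claim, not from the post: build $M$ from a random unit vector and angle and confirm $MM^T = I$. The explicit Levi-Civita array is just one convenient way to encode $\epsilon_{ijk}$.

```python
import numpy as np

# Check that M_ij = delta_ij cos(a) + n_i n_j (1 - cos(a)) + eps_ijk n_k sin(a)
# satisfies M M^T = I for a random unit axis n and random angle a.
rng = np.random.default_rng(1)
n = rng.standard_normal(3)
n /= np.linalg.norm(n)            # n must be a unit vector
alpha = rng.uniform(0, 2 * np.pi)

# Levi-Civita symbol as a 3x3x3 array.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

M = (np.eye(3) * np.cos(alpha)
     + np.outer(n, n) * (1 - np.cos(alpha))
     + np.einsum('ijk,k->ij', eps, n) * np.sin(alpha))

print(np.allclose(M @ M.T, np.eye(3)))  # True
```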

Rotating Cross Product Inputs Rotates the Outputs, EMII Notes 2014_08_11

What's Gone Before and What Will Ensue:  Yesterday, the first step of an exercise regarding the rotation of cross products was worked out.  Today, the identity proved yesterday will be used to show that when the two vectors in a cross product are rotated by the same rotation matrix, the resulting vector of the cross product is rotated by that same rotation matrix.  In the end, yet another property will be proven using tensor index notation. The identity from yesterday is: $\epsilon_{ijk}W_{iq}W_{jl}W_{km} = \det\left(W\right)\epsilon_{qlm}$ We'll also need the definition of the cross product in index notation, $\left(\vec{A} \times \vec{B}\right)_i = \epsilon_{ijk}A_jB_k$, and the rotation matrix transpose identity $M^TM = 1$, also known as $M_{iq}M_{in} = \delta_{qn}$. We want to prove that if $\vec{A}$ and $\vec{B}$ are both rotated by the same rotation matrix, $M_{in}$, then so is the result of the cross product, $\vec{V}$.  First, rotate the two input vectors $\vec{A^{\prime}...
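A quick numerical sketch of the final statement (my addition), using a simple rotation about the $z$ axis; the angle and random vectors are arbitrary choices.

```python
import numpy as np

# Check that rotating both inputs of a cross product by the same rotation
# matrix rotates the output: (M A) x (M B) = M (A x B).
rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 3))

# A simple rotation about the z axis keeps the example dependency-free.
theta = 0.7
M = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])

print(np.allclose(np.cross(M @ A, M @ B), M @ np.cross(A, B)))  # True
```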

Showing Anti-Symmetries with Levi-Civita: EMII notes 2014_08_11

Summary of what's gone on before.  The use of index notation to indicate a transpose was explained and shown with a concrete example.  Today, work on old homeworks begins.  The big issue of the day was figuring out an elegant way of showing that a contracted product was antisymmetric. The insights about commuting terms in index-based products and how the Levi-Civita symbol works were worth the effort, but it was in fact a lot of effort!  There's a heap of broken attempts to get the short answer at the bottom of the post. The first part of the rotated cross product problem is to show that $W_{il}W_{jm}W_{kn}\epsilon_{lmn} = \det\left(W\right)\epsilon_{ijk}$ We're to do this by first showing that the left hand side is antisymmetric with respect to the $i$, $j$, and $k$ indices and therefore proportional to $\epsilon_{ijk}$, and then by showing that for a concrete example, $i=1$, $j=2$, $k=3$, the left hand side is equal to $\det\left(W\right)$. There's a long way ...
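Here is a numerical spot check of that identity (my sketch, not part of the original post), contracting a random $3\times3$ matrix against an explicit Levi-Civita array:

```python
import numpy as np

# Check W_il W_jm W_kn eps_lmn = det(W) eps_ijk for a random 3x3 matrix W.
rng = np.random.default_rng(3)
W = rng.standard_normal((3, 3))

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

lhs = np.einsum('il,jm,kn,lmn->ijk', W, W, W, eps)
rhs = np.linalg.det(W) * eps
print(np.allclose(lhs, rhs))  # True
```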

Understanding How to Transpose Without Really Trying: EMII Notes 2014_08_09 Part II

Summary of what's gone on before.  Got through the index notation for gradients and whatnot.  I was left a little bit baffled by the notation for the orthogonal transpose identity.  Consequently, I'm digging back into it. In this set of notes, the transpose orthogonality identity, $M_{ki}M_{kj} = \delta_{ij}$, is first hammered out.  It then becomes obvious what's going on.  Dummy-summing over the two row indices gives us the equivalent of a matrix multiply where it's row times row instead of row times column.  The rows of the matrix that should have been transposed, however, are the same as the columns of the one that wasn't.  In other words, by forcing a different type of matrix multiply, they teased out the transpose for free. Here's the hammering-through bit. Let's take as an example the simple rotation matrix about the z axis.  Keep in mind that it has already been explained above why this will work for any orthogonal matrix and that this is j...
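A small numpy sketch (my addition) of the row-on-row contraction for the $z$-axis rotation discussed in the post; the angle is an arbitrary choice.

```python
import numpy as np

# Check that the row-on-row contraction M_ki M_kj reproduces delta_ij
# for a rotation about the z axis.
theta = 0.4
M = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])

# Summing over the first (row) index of both factors is M^T M in matrix language.
print(np.allclose(np.einsum('ki,kj->ij', M, M), np.eye(3)))  # True
```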

Index Notation Partials and Rotations, a Cool Gradient Trick: EMII Notes 2014_08_09 Part I

Summary of what's gone on before.  Finally got the Levi-Civita to Kronecker delta identity down yesterday, 2014/08/08.  Today we're making more use of the index notation.  There are, however, some notational stumbling points. $r^2 = x^2 + y^2 + z^2$ $\vec{r} = \left(x, y, z\right)$ Now, if we want to take the derivative of both sides of the magnitude equation above, first remember that we can write $r^2$ as $r^2 = x_jx_j$ Now, finally taking the derivative of both sides of the above, we get $2r \partial_i r = 2x_j\partial_i x_j$ Remembering the partial differentiation rules and the Kronecker delta, we can write the above down as $2r \partial_i r = 2x_j \delta_{ij} = 2x_i$ which finally gives: $\partial_i r = \dfrac{x_i}{r}$ The Kronecker trick above is crucial. Also remember, not one of the $r$'s is a vector; they're all the magnitude of the vector. Rotations: Any rigid rotation of a vector can be defined as: $\begin{pmatrix} x'\\ y'\...
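Not from the original notes: a short sympy check that $\partial_i r = x_i/r$ falls out of the explicit form of $r$.

```python
import sympy as sp

# Check d(r)/dx_i = x_i / r for r = sqrt(x^2 + y^2 + z^2).
x, y, z = sp.symbols('x y z', real=True, positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

for xi in (x, y, z):
    print(sp.simplify(sp.diff(r, xi) - xi / r))  # 0, 0, 0
```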

Levi-Civita Product to Kronecker Delta Difference of Products: EMII Notes 2014_08_08

Summary of what's gone on before.  In the previous set of notes from the 6th (there were no notes on the 7th), it was pointed out that the 'convenient' comment on page 11 of the notes was too cryptic.  Today's entire half hour was spent figuring out the following derivation that sprang from the convenient comment.  We want to derive: $\epsilon_{ijk}\epsilon_{lmk} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$ Here's what to do.  First, remember that writing out the sum over $k$ gives you $\epsilon_{ijk}\epsilon_{lmk} = \epsilon_{ij1}\epsilon_{lm1} + \epsilon_{ij2}\epsilon_{lm2} +\epsilon_{ij3}\epsilon_{lm3}$ Here's the first use of the big trick for the day.  Because of the properties of the Levi-Civita symbol $\epsilon_{ijk}$, only the index pair 2, 3 on $i$ and $j$ will make the first term non-zero, while only the pairs 1, 3 and 1, 2 will make the second and third terms non-zero.  Once any of these combinations is chosen, however, the other two terms will vanish.  Given that, let's g...
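A brute-force check of the identity over all index values (my sketch, not from the post):

```python
import numpy as np

# Check eps_ijk eps_lmk = delta_il delta_jm - delta_im delta_jl for all i, j, l, m.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

delta = np.eye(3)
lhs = np.einsum('ijk,lmk->ijlm', eps, eps)
rhs = np.einsum('il,jm->ijlm', delta, delta) - np.einsum('im,jl->ijlm', delta, delta)
print(np.allclose(lhs, rhs))  # True
```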