
Posts

Showing posts from January, 2013

Lorentz Special Relativity Violation Notes

I'm trying something new today. I've been doing some background reading on the supposed violation of special relativity reported by M. Mansuripur in PRL and pointed out by +Cliff Harvey earlier this week. The embedded document below is a linked diagram of the background articles I've been reading. An arrow indicates that one article cites the article it points to. My goal in reading about this was to learn about the 'hidden momentum' issue mentioned in Mansuripur's article. The articles listed here are the earliest and, in my opinion, the most basic articles I could find on the subject.

Law of Cosines and the Legendre Polynomials

This is so cool!!! A few days ago I extolled the virtues of the law of cosines, taught in high schools the world over, and claimed that it turned up in all kinds of problems that you run into later in physics. I gave one example of an electrostatics problem I was working on, but I had a nagging feeling in the back of my head that there were even cooler examples I'd forgotten about. It turns out that there is a way cooler example of the importance of the law of cosines! The law of cosines can be used to calculate the Legendre polynomials!!! OK, so what are the Legendre polynomials? They turn up repeatedly in graduate physics classes. First of all, they're used to solve electrostatics problems [4]. The most noticeable place I saw them was in quantum mechanics, where they were derived as a special case of spherical harmonics. Spherical harmonics are used to describe the wavefunction of an electron in a hydrogen atom, and ultimately to come up with graphs of the wave functions…
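For concreteness, here is a sketch of the connection in standard notation (my rendering, not quoted from the truncated excerpt). The law of cosines gives the distance between a field point at radius r and a source point at radius r' separated by angle theta, and expanding the reciprocal of that distance in powers of r'/r produces the Legendre polynomials as the coefficients:

\[ |\mathbf{r}-\mathbf{r}'| = \sqrt{r^2 + r'^2 - 2rr'\cos\theta} \]

\[ \frac{1}{|\mathbf{r}-\mathbf{r}'|} = \frac{1}{r}\sum_{n=0}^{\infty}\left(\frac{r'}{r}\right)^n P_n(\cos\theta) \qquad (r' < r) \]

This expansion is exactly why the Legendre polynomials show up in the electrostatics problems mentioned above.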

Clarifying the Taylor Series

A friendly warning: the following is a bit esoteric, (a fancy word for hazy), and is still very much a work in progress. I'm having fun with it though, and I think it's an insightful review of the Taylor series in any event, so if you're interested, read on. My previous post was about a new way to memorize the Taylor series. It emphasized what the Taylor series did rather than how it was derived. I hadn't completely thought things through and had introduced a new term, "the inverse chain rule". +John Baez pointed out that the explanation as it stood sounded rather mysterious, and he was right. His comment made me rethink the entire Taylor series process again, and I think I, (hopefully), have a much clearer explanation now! My premise is that the easy way to memorize the Taylor series is to understand what is going on in each term. Hopefully, the same simple thing will be happening in each term, making the whole process easy to remember and apply.
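The premise can be made concrete in standard notation (my reading of the truncated excerpt, not a quote from the post): each term of the series is built so that its nth derivative at the expansion point b reproduces that of f and contributes nothing to any other derivative there, since

\[ \frac{d^n}{dx^n}\left[\frac{(x-b)^n}{n!}\right] = 1, \]

so differentiating the full series n times and evaluating at x = b picks out exactly \( f^{(n)}(b) \).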

A Different Way to Memorize the Taylor Series and a Cry For Mathematical Help

As an undergrad, and throughout my graduate career until now, I've always had a hard time remembering how to apply a Taylor series. I knew there were coefficients in front of powers of x and that those coefficients somehow involved derivatives of the function I was trying to approximate with the Taylor series, but that was about it. Even when I looked up the formula, there was still always some initial confusion. I'd arrive at an equation that looked like the following, followed by a little bit of text blithely informing me that if I'd only take the nth derivative of both sides of the equation, and evaluate the derivative at x = b, then of course I'd see that the coefficient a sub n could be written as shown below. This inevitably led to having to remember that the first n-1 terms in the sum would become zero after they were differentiated n times, and that the terms of n+1 and above would become zero once evaluated at x = b…
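The two equations the excerpt refers to are, in standard form (my reconstruction, since they appeared as images in the original post): the expansion itself,

\[ f(x) = \sum_{n=0}^{\infty} a_n (x-b)^n, \]

and the coefficient formula obtained by the nth-derivative argument described above,

\[ a_n = \frac{f^{(n)}(b)}{n!}. \]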

The Importance of the Law of Cosines

Just a quick note on the more generic and far less cited sibling of the Pythagorean theorem, the law of cosines. If you're like me, you were taught this formula sometime between the eighth and the eleventh grades and promptly forgot it after your exam. It's a formula that relates the length of one side of a triangle, (any side), to its opposite angle and the lengths of the other two sides. The formula and its associated diagram, (courtesy of Wikipedia), are shown below. If you keep it in mind, you'll start to see it show up all over the place, honest! For example, it turned up in today's electromagnetism homework. We were tasked with determining the electrostatic potential at any point in a plane due to a ring of charge. Some of us put a significant amount of effort into determining a general formula for the distance from a point in the plane to any point on the circle. An industrious eighth grader, however, could have rattled the formula off by heart!
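The diagram didn't survive the excerpt, but the formula is the standard one: for a triangle with sides a, b, and c, where gamma is the angle opposite side c,

\[ c^2 = a^2 + b^2 - 2ab\cos\gamma. \]

On a plausible reading of the homework setup described above (my notation, not the post's), the same relation immediately gives the distance d from a point at radius r in the plane of a ring of radius R to a point on the ring at angular separation theta:

\[ d^2 = r^2 + R^2 - 2rR\cos\theta. \]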

Quantum Mechanics, Localization, and Electrical Engineering

In electrical engineering circles, it's common knowledge that any stimulus in the time domain can be decomposed into a Fourier series, or a Fourier transform in the frequency domain. In other words, you can build an arbitrary signal in the time domain using sine/cosine waves whose frequencies and amplitudes are specified by the Fourier transform. In physics, a similar concept arises in quantum mechanics. Objects that live in the space we're familiar with, position space, can be described as waves in momentum space, where the wave number, (analogous to frequency), k, of a given wave is related to the momentum, p, of an object as \( p = \hbar k \). This is the principle behind de Broglie wave descriptions of electrons, for example. An illustration of what the terms local vs. non-local mean in quantum mechanics led to a much better understanding of the uncertainty principle and wave functions, for me anyway. Our professor mentioned that in momentum space, a potential that is specified…
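To spell out the analogy in standard form (not quoted from the truncated post): the position-space wavefunction is built from momentum-space waves exactly the way a time-domain signal is built from its frequency components,

\[ \psi(x) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty} \phi(p)\, e^{ipx/\hbar}\, dp, \qquad p = \hbar k, \]

with \( \phi(p) \) playing the role the Fourier transform plays for a circuit signal.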

EM and Complex Analysis

There are an increasing number of apparent correspondences between EM this semester and our section on complex analysis in math methods last semester. These are just notes on a few of them. Uniqueness of the Electrostatic Potential Solution and Liouville's Theorem After stating that we would be solving Poisson's equation to determine electrostatic potentials, our professor launched into a proof that the solutions, once found, would be unique. We first defined a potential psi equal to the difference between two supposedly non-unique solutions, (assuming for the moment, in our proof by contradiction, that there could be more than one solution). We placed psi back into Poisson's equation and ran through the steps sketched below. Ultimately we wound up proving that at best psi is a constant, but that it must be zero everywhere on the surface that defines the Dirichlet boundary conditions that the two 'non-unique' solutions both satisfy, so its constant value…
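A sketch of those steps in standard notation (my reconstruction; the equations appeared as images in the original post): with \( \psi = \Phi_1 - \Phi_2 \) and both potentials satisfying the same Poisson equation and boundary values,

\[ \nabla^2\psi = \nabla^2\Phi_1 - \nabla^2\Phi_2 = 0, \]

\[ \int_V |\nabla\psi|^2\, dV = \oint_S \psi\, \nabla\psi\cdot d\mathbf{a} = 0, \]

where the surface integral vanishes because psi is zero on the Dirichlet boundary. That forces \( \nabla\psi = 0 \), so psi is constant, and the boundary value fixes the constant at zero: \( \Phi_1 = \Phi_2 \).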

The Abraham-Minkowski Controversy

What I thought would be a boring post on EM boundary conditions a few days ago has turned into something interesting. Jonah Miller replied to the post and made a comment that ultimately led to the Abraham-Minkowski controversy on how to interpret the momentum of a photon in a piece of material. Apparently this debate has gone on for a little over 100 years, with the form of the momentum equation in a material proposed by Max Abraham conflicting with the one proposed by Hermann Minkowski, of Minkowski metric fame, (both shown below), where n is the index of refraction for the material, h is Planck's constant, nu is the frequency of the light, and c is the speed of light. The controversy even led to people studying whether or not the discrepancy might be used as the basis for a reactionless drive, with the Air Force and NASA purportedly getting in on the game, according to Wikipedia. It turns out that the answer in all likelihood is that everyone is correct, (OK,…
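The two competing forms of the photon momentum in a medium, in their standard statements, are

\[ p_{\text{Minkowski}} = \frac{n h \nu}{c}, \qquad p_{\text{Abraham}} = \frac{h \nu}{n c}, \]

so for n > 1 the Minkowski momentum is larger than the vacuum value and the Abraham momentum is smaller.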

Bra Ket Notation

Just a few notes here about shiny things that caught my eye regarding bra and ket notation in the quantum mechanics II lecture last night. Inner Products Inner products are the Hilbert space, quantum mechanical, state vector equivalent of the dot product for more standard vectors like position or velocity. The unit basis ket, at least in our class, is written as shown below, where j is the index of the component. Associating back to Cartesian coordinates, 1 would denote x, 2 would denote y, and 3 would denote z. The bra vector is the same symbol in a bra, and when the two are applied to each other we get the inner product shown below. In other words, the inner product only produces contributions from like basis vectors, just like the dot product. So here's the cool bit: the following all accomplish about the same thing, they find a number proportional to the magnitude of the component of one vector lying along another vector, whether those vectors are what we most typically call vectors…
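The missing expressions are presumably the standard ones; in common notation the basis ket, its bra, and their inner product are

\[ |e_j\rangle, \qquad \langle e_i|, \qquad \langle e_i | e_j \rangle = \delta_{ij}, \]

so that for general states \( \langle \phi | \psi \rangle = \sum_j \phi_j^* \psi_j \), the direct analogue of the dot product, with only like basis components contributing.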