Saturday, May 23, 2015

Orbits continued

In my last post I mentioned that I've been busy setting up the Lagrangian for a particle in a gravitational field and solving the associated equations of motion. This is my first blog post where I use LaTeX, so bear with me. The Lagrangian for this system is reproduced below. Since the gravitational potential energy depends only on the radius, I thought it best to set this up in polar coordinates. Note that solving this numerically in Cartesian coordinates is no big deal either!
$$ L = T - V $$
$$ L = \frac{1}{2} m (r^2\dot{\theta}^2 + \dot{r}^2) + \frac{GMm}{r}$$
$$ \dot{r} = \frac{dr}{dt} $$
$$ \dot{\theta} = \frac{d\theta}{dt} $$
We arrive at the associated equations of motion for $\theta$ and $r$ simply by applying the Euler-Lagrange equations, which make the action stationary.
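For each generalized coordinate $q$ (here $r$ and $\theta$), the Euler-Lagrange equation reads
$$ \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = 0 $$
Plugging the Lagrangian above into this for $r$ and $\theta$ gives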
$$m\ddot{r} = mr\dot{\theta}^2 - \frac{GMm}{r^2}$$
$$mr^2\ddot{\theta} + 2mr\dot{r}\dot{\theta} = 0 $$
Holy moly, this looks like a mess. I don't even know how to start solving this analytically. Luckily, I have a powerful laptop at my disposal, so I can solve this system of coupled ODEs numerically. Before I do anything, though, I need to turn this into a system of first-order ODEs. This is a clever little trick that simplifies the problem quite a bit. The way I'll go about it is by creating new variables that correspond to my generalized coordinates $r$ and $\theta$ and their first time derivatives $\dot{r}$ and $\dot{\theta}$. I'll give everything a subscript just to keep track.
$$ r_1 = r, \qquad r_2 = \dot{r} $$
$$ \theta_1 = \theta, \qquad \theta_2 = \dot{\theta} $$
Now, instead of two second-order ODEs, I have a system of four first-order ODEs. I can thus rewrite my original equations of motion in terms of these new variables $r_1, r_2, \theta_1$ and $\theta_2$. I've divided the $m$ out of both equations to make things look nice.
$$ \begin{array}{rcl} \dot{r_1} & = & r_2 \\
\dot{r_2} & = & r_1\theta_2^2 - \frac{GM}{r_1^2} \\
\dot{\theta_1} & = & \theta_2 \\
\dot{\theta_2} & = & \frac{-2r_2\theta_2}{r_1}\end{array} $$
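For the numerical work below, here's a minimal sketch of these right-hand sides as a Python function. The names `deriv`, `state`, and the module-level `GM` constant are my own choices for illustration, not anything from the original code:

```python
import numpy as np

GM = 1.0  # gravitational parameter G*M (placeholder value; choose units to suit)

def deriv(state):
    """Right-hand side of the first-order system.

    state = [r1, r2, theta1, theta2] = [r, dr/dt, theta, dtheta/dt]
    """
    r1, r2, theta1, theta2 = state
    return np.array([
        r2,                            # dr1/dt
        r1 * theta2**2 - GM / r1**2,   # dr2/dt
        theta2,                        # dtheta1/dt
        -2.0 * r2 * theta2 / r1,       # dtheta2/dt
    ])
```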
With expressions for the first derivatives, I can now march my initial conditions through time using a simple numerical method for ODEs. I implemented this in Python using both the forward and backward Euler methods. Backward Euler is much harder to implement, but it ended up being faster than forward Euler (despite having to solve a system of nonlinear equations at each time step) because it allowed a considerably larger time step, by about two or three orders of magnitude. Instead of writing out $r_1, r_2, \theta_1$ and $\theta_2$, I'll define a vector $\vec{r}$ whose elements are the variables I'm solving for.
$$ \vec{r} = \left[ \begin{array}{c} r_1 \\ r_2 \\ \theta_1 \\ \theta_2 \end{array} \right] $$
Similarly,
$$ \dot{\vec{r}} = \left[ \begin{array}{c} \dot{r_1} \\ \dot{r_2} \\ \dot{\theta_1} \\ \dot{\theta_2} \end{array} \right] $$
With this notation, forward Euler becomes
$$ \vec{r}_{n+1} = \vec{r}_n + h\:\dot{\vec{r}}_n$$
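As a sketch of what that looks like in code (building on the `deriv` function above, with a made-up near-circular-orbit initial condition rather than anything from the original script):

```python
def forward_euler(state0, h, n_steps):
    """March the initial state through time with forward Euler."""
    history = np.empty((n_steps + 1, 4))
    history[0] = state0
    for n in range(n_steps):
        history[n + 1] = history[n] + h * deriv(history[n])
    return history

# Example: with GM = 1, a circular orbit at r = 1 has dtheta/dt = 1.
orbit = forward_euler(np.array([1.0, 0.0, 0.0, 1.0]), h=1e-4, n_steps=200000)
```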
and backward Euler becomes
$$\vec{r}_{n+1} = \vec{r}_n + h\:\dot{\vec{r}}_{n+1}$$
where $h$ is the time step. Note the implication of the $n+1$ subscript on the right-hand side of the backward Euler equation: we now have to solve a system of nonlinear equations at every time step. This amounts to a lot of calculations! Luckily, SciPy has a really fast nonlinear solver. All I have to do is feed it the Jacobian for the system and a good initial guess, and it returns the solution in a very small number of iterations. That's no doubt because the solver is based on Newton's method, and the initial guess I use is the $\vec{r}_{n+1}$ I would have calculated with forward Euler. If you want to see the code, just hit me up and I'll send it!
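In the meantime, here's a rough sketch of a single backward Euler step built on the `deriv` function above. I'm letting SciPy's `fsolve` estimate the Jacobian numerically instead of feeding it the analytic Jacobian mentioned above, so treat this as a simplified stand-in for the actual code:

```python
from scipy.optimize import fsolve

def backward_euler_step(state_n, h):
    """Solve state_{n+1} = state_n + h * deriv(state_{n+1}) for state_{n+1}."""
    def residual(state_next):
        return state_next - state_n - h * deriv(state_next)

    # A forward Euler predictor makes a good initial guess, so the
    # Newton-type solver converges in only a few iterations.
    guess = state_n + h * deriv(state_n)
    return fsolve(residual, guess)
```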

