\chapter{Algorithm Overview} \label{algorithm}
In this chapter, we review the execution of the algorithm that was developed. As an
overview, the routine was developed to determine an optimized spacecraft trajectory
from the selection of a few basic mission parameters. Those parameters include:
\begin{itemize}
\item Spacecraft dry mass
\item Thruster specific impulse
\item Thruster maximum thrusting force
\item Thruster duty cycle percentage
\item Number of thrusters on the spacecraft
\item Total starting mass of the spacecraft
\item A maximum acceptable $v_\infty$ at arrival and $C_3$ at launch
\item The launch window timing and the latest acceptable arrival date
\item A cost function relating the mass usage, $v_\infty$ at arrival, and $C_3$ at
launch to a cost
\item A list of flyby planets starting with Earth and ending with the destination
\end{itemize}
These inputs allow for highly automated optimization of the trajectory, while still
providing the mission designer with the flexibility to choose the particular flyby
planets to investigate.
This is achieved via an optimal control problem in which the ``inner loop'' is a
non-linear programming problem to determine the optimal low-thrust control law and flyby
parameters given a suitable initial guess. Then an ``outer loop'' monotonic basin hopping
algorithm is used to traverse the search space and more carefully optimize the solutions
found by the inner loop.
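For concreteness, these user inputs can be pictured as a single parameter structure.
The following Julia sketch is purely illustrative; the type name, field names, and
types are assumptions and do not reflect the actual implementation:
\begin{singlespacing}
\begin{verbatim}
# Illustrative container for the user-supplied mission parameters
# (names are hypothetical, not those of the real code)
struct MissionParameters
    dry_mass::Float64                      # kg
    isp::Float64                           # thruster specific impulse, s
    max_thrust::Float64                    # maximum thrust per thruster, N
    duty_cycle::Float64                    # fraction of each arc spent thrusting
    n_thrusters::Int
    wet_mass::Float64                      # total starting mass, kg
    max_v_inf::Float64                     # max acceptable arrival v-infinity, km/s
    max_c3::Float64                        # max acceptable launch C3, km^2/s^2
    launch_window::Tuple{Float64,Float64}  # earliest and latest launch epochs
    latest_arrival::Float64                # latest acceptable arrival epoch
    cost::Function                         # (mass used, v_inf, C3) -> scalar cost
    flybys::Vector{Symbol}                 # e.g. [:Earth, :Venus, :Jupiter]
end
\end{verbatim}
\end{singlespacing}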
\section{Trajectory Composition}
In this thesis, a specific nomenclature will be adopted to define the stages of an
interplanetary mission in order to standardize the discussion about which aspects of the
software affect which phases of the mission.
A mission is considered to be the entire end-to-end trajectory. In the context of
this software procedure, a mission always begins at the Earth, with some initial
launch $C_3$ intended to be provided by an external launch vehicle. This $C_3$ is not
fully specified by the mission designer, but is instead optimized as a part of the
overall cost function (and normalized by a designer-specified maximum allowable value).
This overall mission can then be broken down into a variable number of ``phases''
defined as beginning at one planetary body with some excess hyperbolic velocity and
ending at another. The first phase of the mission is from the Earth to the first flyby
planet. The final phase is from the last flyby planet to the planet of interest.
Each of these phases is then connected to the next by a flyby event at the boundary.
Each flyby event must satisfy the following conditions:
\begin{enumerate}
\item The planet at the end of one phase must match the planet at the beginning of
the next phase.
\item The magnitude of the excess hyperbolic velocity coming into the planet (at the
end of the previous phase) must equal the magnitude of the excess hyperbolic
velocity leaving the planet (at the beginning of the next phase).
\item The flyby ``turning angle'' must be such that the craft maintains a safe
minimum altitude above the surface or atmosphere of the flyby planet.
\end{enumerate}
These conditions then effectively stitch the separate mission phases into a single
coherent mission, allowing for the optimization of both individual phases and the entire
mission as a whole. This nomenclature is similar to that adopted by Jacob
Englander in his Hybrid Optimal Control Problem paper, but it does not allow for
missions with multiple targets, which simplifies the optimization.
\section{Inner Loop Implementation}\label{inner_loop_section}
The optimization routine can be reasonably separated into two ``loops.'' The inner
loop is used, given an initial guess, to find valid trajectories within the region of
that guess and return the best of them. The outer loop is then used to traverse the
search space and supply the inner loop with a number of well-chosen initial guesses.
Figure~\ref{nlp} provides an overview of the process of breaking a mission guess down
into an NLP, but there are essentially three primary routines involved in the inner
loop. A given state is propagated forward using the Laguerre-Conway Kepler solution
algorithm, which in turn is used to generate powered trajectory arcs via the
Sims-Flanagan transcribed propagator. Finally, these powered arcs are connected via a
multiple-shooting non-linear optimization problem. The trajectories describing each
phase complete one ``Mission Guess'' which is fed to the non-linear solver to generate
one valid trajectory within the vicinity of the original Mission Guess.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{flowcharts/nlp}
\caption{A flowchart of the Non-Linear Problem Solving Formulation}
\label{nlp}
\end{figure}
\subsection{Laguerre-Conway Kepler Solver}\label{conway_pseudocode}
The most basic building block of any trajectory is a physical model for propagating
a natural trajectory forward in time from a given state. The approach taken by this
paper uses the solution to Kepler's equation put forward by
Conway\cite{laguerre_conway} in 1986 in order to provide simple and very
processor-efficient propagation without the use of numerical integration. The code
logic itself is quite simple: a Laguerre-based iteration, similar in spirit to the
Newton-Raphson method, is used to find the root of the Battin form of Kepler's
equation.
The following code outlines the approach taken for the elliptical case; the
approach is quite similar when $a<0$:
\begin{singlespacing}
\begin{verbatim}
using LinearAlgebra: norm, dot

# Propagate a two-body state (position, velocity) forward by `time`
# using the Laguerre-Conway solution of Kepler's equation
# (elliptical case shown; the hyperbolic case, a < 0, is analogous)
function laguerre_conway(position, velocity, mu, time)
    n = 5  # order of the Laguerre iteration
    # First declare some useful variables from the state
    sig0 = dot(position, velocity) / sqrt(mu)
    a = 1 / (2/norm(position) - norm(velocity)^2/mu)
    coeff = 1 - norm(position)/a
    # This loop is essentially a Laguerre (second-order Newton-like)
    # solver for dE, the change in eccentric anomaly
    dM = dE_new = sqrt(mu/a^3) * time
    dE = 1000.0
    while abs(dE - dE_new) > 1e-10
        dE = dE_new
        F = dE - dM + sig0/sqrt(a) * (1-cos(dE)) - coeff * sin(dE)
        dF = 1 + sig0/sqrt(a) * sin(dE) - coeff * cos(dE)
        d2F = sig0/sqrt(a) * cos(dE) + coeff * sin(dE)
        dE_new = dE - n*F / (dF + sign(dF) *
                 sqrt(abs((n-1)^2*dF^2 - n*(n-1)*F*d2F)))
    end
    # dE can then be used to determine the F/Ft and G/Gt coefficients
    F = 1 - a/norm(position) * (1-cos(dE))
    G = a*sig0/sqrt(mu) * (1-cos(dE)) +
        norm(position)*sqrt(a)/sqrt(mu) * sin(dE)
    r = a + (norm(position) - a)*cos(dE) + sig0*sqrt(a)*sin(dE)
    Ft = -sqrt(a)*sqrt(mu) / (r*norm(position)) * sin(dE)
    Gt = 1 - a/r * (1-cos(dE))
    # These provide the transformation from the original position and
    # velocity to the final state
    final_position = F*position + G*velocity
    final_velocity = Ft*position + Gt*velocity
    return final_position, final_velocity
end
\end{verbatim}
\end{singlespacing}
This approach was validated by generating known good orbits in the two-body problem.
For example, from the orbital elements of a given state, the orbital period can be
determined. If the system is then propagated for an integer multiple of that period,
the final state should match the initial state exactly. An example of such an orbit
is provided in Figure~\ref{laguerre_plot}.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/laguerre_plot}
\caption{Example of a natural trajectory propagated via the Laguerre-Conway
approach to solving Kepler's Problem}
\label{laguerre_plot}
\end{figure}
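Such a closed-orbit check is straightforward to express against the propagator above.
The following sketch assumes the \texttt{laguerre\_conway} function defined earlier
and uses rough, illustrative heliocentric values:
\begin{singlespacing}
\begin{verbatim}
using LinearAlgebra: norm

mu = 1.327e11                       # Sun's gravitational parameter, km^3/s^2
r0 = [1.496e8, 0.0, 0.0]            # roughly 1 AU, km
v0 = [0.0, sqrt(mu/norm(r0)), 0.0]  # circular orbit speed, km/s

T = 2pi * sqrt(norm(r0)^3 / mu)     # orbital period, s
rf, vf = laguerre_conway(r0, v0, mu, T)

# After one full period the state should return to its initial value
@assert isapprox(rf, r0, rtol=1e-8)
@assert isapprox(vf, v0, rtol=1e-8)
\end{verbatim}
\end{singlespacing}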
% TODO: Consider adding a paragraph about the improvements in processor time
\subsection{Sims-Flanagan Propagator}
Until this point, we have not discussed how best to model the low-thrust
trajectory arcs themselves. The Laguerre-Conway algorithm efficiently determines
natural trajectories given an initial state, but we still need a way to determine,
given a control law, the trajectory of a spacecraft under continuous input thrust.
For this, we leverage the Sims-Flanagan transcription mentioned earlier. This allows
us to break a single phase into some number $n$ of arcs. At the center of each of
these arcs we place a small impulsive burn, scaled appropriately for the thruster
configured on the spacecraft of interest. Therefore, for any given phase, we actually
split the trajectory into $2n$ sub-trajectories, with $n$ scaled impulsive thrust
events. As $n$ is increased, the trajectory becomes an increasingly accurate model of
low-thrust propulsion in the 2BP. This allows the mission designer to trade off
propagation speed against the fidelity of the results quite effectively.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/spiral_plot}
\caption{An example trajectory showing that classic continuous-thrust orbit
shapes, such as this orbit spiral, are easily achievable using a Sims-Flanagan
model}
\label{sft_plot}
\end{figure}
Figure~\ref{sft_plot} shows that the Sims-Flanagan transcription model can be used
to effectively model these types of orbit trajectories. In fact, the Sims-Flanagan
model is capable of modeling nearly any low-thrust trajectory given a sufficiently
large number of segments $n$.
Finally, it should be noted that, in any proper propagation scheme, mass should be
decremented proportionally to the thrust used. The Sims-Flanagan Transcription
assumes that the thrust for a given sub-trajectory is constant across the entirety
of that sub-trajectory. Therefore, the mass used by that particular thrusting event
can be determined by knowledge of the percentage of maximum thrust being provided
and the mass flow rate (a function of the duty cycle percentage ($d$), thrust ($f$),
and the specific impulse of the thruster ($I_{sp}$), commonly used to measure
efficiency)\cite{sutton2016rocket}:
\begin{equation}
\Delta m = \Delta t \frac{f d}{I_{sp} g_0}
\end{equation}
where $\Delta m$ is the fuel used in the sub-trajectory, $\Delta t$ is the time of
flight of the sub-trajectory, and $g_0$ is the standard gravity at the surface of the
Earth.
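As a concrete illustration, a single Sims-Flanagan phase can be propagated with the
Laguerre-Conway routine above. The following sketch is a simplified assumption-laden
version (the function and variable names are hypothetical, and thrust is taken in
kilonewtons so the impulses come out in km/s), not the actual implementation:
\begin{singlespacing}
\begin{verbatim}
using LinearAlgebra: norm

g0 = 9.80665e-3    # standard gravity, km/s^2

# Propagate one phase with the Sims-Flanagan transcription: n coast
# segments, each with a scaled impulsive burn at its midpoint.
# `controls` is an n-by-3 matrix of unit-thrust components.
function sims_flanagan_phase(r, v, mass, tof, controls, mu, fmax, duty, isp)
    n = size(controls, 1)
    dt = tof / n
    for i in 1:n
        # Coast to the midpoint of the segment
        r, v = laguerre_conway(r, v, mu, dt/2)
        # Impulsive burn scaled by the thrust available over the segment
        u = controls[i, :]
        v = v + u * fmax * duty * dt / mass          # km/s if fmax is in kN
        # Decrement the mass according to the thrust actually used
        mass -= norm(u) * fmax * duty * dt / (isp * g0)
        # Coast through the remaining half of the segment
        r, v = laguerre_conway(r, v, mu, dt/2)
    end
    return r, v, mass
end
\end{verbatim}
\end{singlespacing}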
\subsection{Non-Linear Problem Solver}
Now that we have the basic building blocks of a continuous-thrust trajectory, we can
leverage one of the many non-linear optimization packages to find solutions near a
proposed trajectory; this proposed trajectory need not itself be valid.
For the purposes of discussion in this section, we will assume that the inner-loop
algorithm starts with just such a ``Mission Guess,'' which represents the proposed
trajectory. The quantities needed for this input are briefly summarized below.
A Mission Guess object contains:
\begin{singlespacing}
\begin{itemize}
\item The spacecraft and thruster parameters for the mission
\item A launch date
\item A launch $v_\infty$ vector representing excess Earth velocity
\item For each phase of the mission:
\begin{itemize}
\item The planet that the spacecraft will encounter at the end of the phase
(either for a flyby or to complete the mission)
\item The $v_{\infty,out}$ vector representing excess velocity at the
planetary flyby (or launch, for the first phase) at the beginning of the phase
\item The $v_{\infty,in}$ vector representing excess velocity at the
planetary flyby (or completion of the mission) at the end of the phase
\item The time of flight for the phase
\item The unit-thrust profile in a sun-fixed frame represented by a
series of vectors with each element ranging from 0 to 1.
\end{itemize}
\end{itemize}
\end{singlespacing}
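In code, such a guess might be organized along the following lines. As with the
earlier parameter structure, this is only an illustrative sketch; the type and field
names are assumptions rather than the actual implementation:
\begin{singlespacing}
\begin{verbatim}
# One phase of the guess: the body encountered at the end of the phase,
# the hyperbolic excess velocities at both ends, the time of flight,
# and the n-by-3 unit-thrust profile in a sun-fixed frame
struct PhaseGuess
    target::Symbol               # planet at the end of the phase
    v_inf_out::Vector{Float64}   # km/s, leaving the previous body
    v_inf_in::Vector{Float64}    # km/s, arriving at `target`
    tof::Float64                 # time of flight, s
    controls::Matrix{Float64}    # n x 3 unit-thrust components
end

struct MissionGuess
    params::MissionParameters    # spacecraft and thruster parameters
    launch_date::Float64
    launch_v_inf::Vector{Float64}
    phases::Vector{PhaseGuess}
end
\end{verbatim}
\end{singlespacing}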
From this information, as can be seen in Figure~\ref{nlp}, we can formulate the
mission in terms of a non-linear problem. Specifically, the Mission Guess object can
be represented as a vector, $x$, the propagation function as a function $F$, and the
constraints as another function $G$ such that $G(x) = \vec{0}$.
This is a format that we can apply directly to the IPOPT solver, which Julia (the
programming language used) can utilize via bindings supplied by the SNOW.jl
package\cite{snow}.
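As a rough sketch of what flattening such a guess into the decision vector $x$ might
look like (the ordering shown is illustrative; the actual layout used by the software
may differ):
\begin{singlespacing}
\begin{verbatim}
# Flatten a MissionGuess into the NLP decision vector x
# (the ordering shown here is illustrative only)
function pack(guess::MissionGuess)
    x = Float64[guess.launch_date; guess.launch_v_inf]
    for p in guess.phases
        append!(x, p.v_inf_out)
        append!(x, p.v_inf_in)
        push!(x, p.tof)
        append!(x, vec(p.controls))
    end
    return x
end
\end{verbatim}
\end{singlespacing}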
IPOPT also requires the derivatives of both the $F$ and $G$ functions in the
formulation above. Generally speaking, a designer has two traditional options for
determining derivatives. The first is to determine the derivatives analytically,
which guarantees accuracy but requires significant processor time if done
algorithmically and can be extremely laborious (or practically impossible) to do by
hand. The second is to approximate the derivatives numerically, using a technique
such as finite differencing. This limits the achievable accuracy, but can be faster
than algorithmic symbolic manipulation and does not require lengthy manual
derivations.
However, the Julia language has an excellent interface to a third technique, known
as automatic differentiation\cite{RevelsLubinPapamarkou2016}. Automatic
differentiation takes advantage of the fact that any algorithmic function, no matter
how complicated, can be broken down into a sequence of elementary arithmetic
operations. Since each of these elementary operations has a known derivative, we can
define a new datatype that carries both the floating-point value and a second number
representing its derivative through the function. By applying the chain rule to the
derivative component each time an elementary operation is applied to the value
component, the derivative of the overall function can be determined, accurate to the
machine precision of the float type being used, at a cost roughly equivalent to two
function calls (depending, of course, on the complexity of the chained derivatives
relative to the function pieces themselves). Generally speaking, this is much faster
than the three or more function calls necessary for accurate finite differencing,
and it removes the need for the designer to tune the step size in order to achieve
maximum precision.
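The idea can be illustrated with a toy forward-mode dual number; the automatic
differentiation package cited above implements this idea with far more generality,
but the principle is the same:
\begin{singlespacing}
\begin{verbatim}
# A toy forward-mode dual number: `val` carries the function value and
# `der` carries the derivative; every operation applies the chain rule
struct Dual
    val::Float64
    der::Float64
end

Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der*b.val + a.val*b.der)
Base.sin(a::Dual) = Dual(sin(a.val), cos(a.val) * a.der)

# Seed x with derivative 1 to differentiate with respect to x
x = Dual(2.0, 1.0)
y = sin(x) * x + x
# y.val is f(2) and y.der is f'(2) for f(x) = x*sin(x) + x,
# i.e. f'(x) = sin(x) + x*cos(x) + 1, accurate to machine precision
\end{verbatim}
\end{singlespacing}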
\section{Outer Loop Implementation}
We now have the tools in place to take a potential ``mission guess'' in the vicinity
of a valid trajectory and attempt to find a valid, optimal solution in that vicinity.
What remains is to develop a routine for efficiently generating these random mission
guesses in a way that thoroughly searches the entire solution space with enough
granularity that every region is considered by the inner-loop solver.
Once that has been accomplished, all that remains is an ``outer loop'' that can
generate new guesses or perturb existing valid missions as needed in order to drill
down into a specific local minimum. In this thesis, that is accomplished through the use of a
Monotonic Basin Hopping algorithm. This will be described more thoroughly in
Section~\ref{mbh_subsection}, but Figure~\ref{mbh_flow} outlines the process steps of
the algorithm.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{flowcharts/mbh}
\caption{A flowchart visualizing the steps in the monotonic basin hopping
algorithm}
\label{mbh_flow}
\end{figure}
\subsection{Random Mission Generation}\label{random_gen_section}
At a basic level, the algorithm needs to produce a mission guess (represented by all
of the values described in Section~\ref{inner_loop_section}) that contains random
values within reasonable bounds in the search space. This leaves a number of choices
open to the implementation: for instance, which distribution to draw the random
values from for each variable, which bounds to use, and whether any improvements can
be made over a purely random search.
Currently, the first value set for the mission guess is $n$, the number of
sub-trajectories that each phase will be broken into for the Sims-Flanagan-based
propagator. For this implementation, that value was chosen to be 20, based upon a
number of tests in which the calculation time for the propagation was compared
against the accuracy of a much higher $n$ value for some known thrust controls, such
as a simple spiral orbit trajectory. This value of 20 tends to perform well and
provide reasonable accuracy, without producing too many variables for the NLP
optimizer to control for (since the impulsive thrust at the center of each of the
sub-trajectories is a control variable). This leaves some room for future
improvements, as will be discussed in Section~\ref{improvement_section}.
The bounds for the launch date are provided by the user in the form of a launch
window, so the initial launch date is simply drawn from a uniform distribution
within those bounds.
A launch direction is then chosen by generating a 3-vector of uniform random numbers
and normalizing it. This unit vector is then multiplied by a uniform random number
between 0 and the square root of the maximum launch $C_3$ specified by the user to
generate an initial $\vec{v}_\infty$ vector at launch.
Next, the time of flight of each phase of the mission is decided. Since the launch
date has already been selected, the maximum total time of flight can be calculated by
subtracting the launch date from the latest arrival date provided by the mission
designer. Each leg is then chosen from a uniform distribution with bounds currently
set to a minimum flight time of 30 days and a maximum of 70\% of the maximum time of
flight. These leg flight times are then iteratively re-generated until the total
time of flight (the sum of the leg flight times) is less than the maximum time of
flight. This allows for a great deal of flexibility in the leg flight times, but it
does tend to produce longer missions more often, particularly for missions with more
flybys.
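A sketch of this seeding logic, using the hypothetical \texttt{MissionParameters}
structure from earlier, might look like the following (the bounds mirror the text,
but the code is illustrative rather than the actual implementation):
\begin{singlespacing}
\begin{verbatim}
using LinearAlgebra: normalize

DAY = 86400.0   # seconds

# Randomly seed a launch date, launch v-infinity, and leg flight times
function random_launch_and_legs(params::MissionParameters, n_phases::Int)
    t0, t1 = params.launch_window
    launch_date = t0 + rand() * (t1 - t0)        # uniform within the window
    # Uniform random direction scaled by a speed up to sqrt(max C3)
    launch_v_inf = normalize(rand(3)) * rand() * sqrt(params.max_c3)
    # Re-draw the leg flight times until the total fits the latest arrival
    max_tof = params.latest_arrival - launch_date
    legs = fill(Inf, n_phases)
    while sum(legs) > max_tof
        legs = [30DAY + rand() * (0.7max_tof - 30DAY) for _ in 1:n_phases]
    end
    return launch_date, launch_v_inf, legs
end
\end{verbatim}
\end{singlespacing}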
Then, the internal components for each phase are generated. It is at this step that
the mission guess generator splits into two separate outputs. The first is meant to
be truly random, as is generally used as the input for a monotonic basin hopping
algorithm. The second utilizes a Lambert solver to determine the appropriate
hyperbolic excess velocities (both incoming and outgoing) at each flyby that would
produce a natural trajectory arc. For this Lambert case, the mission guess is simply
seeded with zero thrust controls and passed to the monotonic basin hopper. The
intention here is that, if the times of flight are randomly chosen such that a
trajectory is possible with a control law in the vicinity of a natural trajectory,
we want to be sure to find that trajectory. More detail on how this is handled is
available in Section~\ref{mbh_subsection}.
For the truly random mission guess, however, the $v_\infty$ values and the initial
thrust guesses still need to be generated. For each of the phases, the incoming
excess hyperbolic velocity is calculated in much the same way that the launch
velocity was calculated. However, instead of multiplying the randomly generated unit
direction vector by a random number between 0 and the square root of the maximum
launch $C_3$, bounds of 0 and 10 kilometers per second are used, to provide
realistic flyby values.
The outgoing excess hyperbolic velocity is then calculated by again choosing a
uniform random unit direction vector and multiplying it by the magnitude of the
incoming $v_{\infty}$, since equal magnitudes are a constraint of an unpowered
flyby.
From these two velocity vectors, the turning angle, and thus the periapsis radius of
the flyby, can be calculated with the following equations:
\begin{align}
\delta &= \arccos \left( \frac{\vec{v}_{\infty,in} \cdot
\vec{v}_{\infty,out}}{|\vec{v}_{\infty,in}| \, |\vec{v}_{\infty,out}|} \right) \\
r_p &= \frac{\mu}{|\vec{v}_{\infty,in}| \, |\vec{v}_{\infty,out}|} \left(
\frac{1}{\sin(\delta/2)} - 1 \right)
\end{align}
If this periapsis radius is found to be less than the minimum safe radius
(currently set to the radius of the planet plus 100 kilometers), the process is
repeated with new random flyby velocities until a valid seed flyby is found. These
checks are also performed each time a mission is perturbed or generated by the NLP
solver.
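These checks can be expressed compactly; the sketch below mirrors the equations
above, with hypothetical names and with the planetary gravitational parameter and
radius passed in explicitly:
\begin{singlespacing}
\begin{verbatim}
using LinearAlgebra: norm, dot

# Periapsis radius of an unpowered flyby given the incoming and
# outgoing hyperbolic excess velocity vectors
function flyby_periapsis(v_inf_in, v_inf_out, mu_planet)
    delta = acos(dot(v_inf_in, v_inf_out) /
                 (norm(v_inf_in) * norm(v_inf_out)))
    return mu_planet / (norm(v_inf_in) * norm(v_inf_out)) *
           (1 / sin(delta/2) - 1)
end

# Reject the candidate flyby if it dips below the minimum safe radius
safe_flyby(v_in, v_out, mu_p, r_planet) =
    flyby_periapsis(v_in, v_out, mu_p) >= r_planet + 100.0   # km
\end{verbatim}
\end{singlespacing}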
The final requirement, then, is the thrust controls, which are actually quite
simple. Since the thrust is defined as a 3-vector of values between $-1$ and $1$,
representing some percentage of the full thrust producible by the spacecraft
thrusters in each direction, the initial thrust controls can be generated as a
$20 \times 3$ matrix of uniform random numbers within those bounds.
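For example, with $n = 20$ sub-trajectories, this initial control matrix can be drawn
in a single line (a sketch, using the same convention as the text):
\begin{singlespacing}
\begin{verbatim}
# 20 x 3 matrix of uniform random unit-thrust components in [-1, 1]
controls = 2 .* rand(20, 3) .- 1
\end{verbatim}
\end{singlespacing}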
\subsection{Monotonic Basin Hopping}\label{mbh_subsection}
Now that a generator has been developed for mission guesses, we can build the
monotonic basin hopping algorithm. Since the implementation of the MBH algorithm
used in this paper differs from the standard implementation, the standard version
will not be described here. Instead, the variation used in this paper, with some
performance improvements, will be considered.