Files
thesis/LaTeX/thesis.tex
2022-02-27 23:56:22 -07:00

1632 lines
94 KiB
TeX

\documentclass[defaultstyle,11pt]{thesis}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{float}
\usepackage{xfrac}
\title{Designing Optimal Low-Thrust Interplanetary Trajectories Utilizing Monotonic Basin Hopping}
\author{Richard C.}{Johnstone}
\otherdegrees{B.S., Unviersity of Kentucky, Mechanical Engineering, 2016 \\
B.S., University of Kentucky, Physics, 2016}
\degree{Master of Science}{M.S., Aerospace Engineering}
\dept{Department of}{Aerospace Engineering}
\advisor{Prof.}{Natasha Bosanac}
\reader{Kathryn Davis}
\readerThree{Daniel Scheeres}
\abstract{ \OnePageChapter
There are a variety of approaches to finding and optimizing low-thrust trajectories in
interplanetary space. This thesis analyzes one such approach, namely the application of a
Monotonic Basin Hopping (MBH) algorithm to a series of Sims-Flanagan transcribed trajectory arcs
and its applications in a multiple-shooting non-linear solver for the purpose of finding valid
low-thrust trajectories between planets given poor initial conditions. These valid arcs are then
fed into the MBH algorithm, which combines them in order to find and optimize interplanetary
trajectories, given a set of flyby planets. This allows for a fairly rapid searching of a very
large solution space of low-thrust profiles via a medium fidelity inner-loop solver and a
well-suited optimization routine. The trajectories found by this method can then be optimized
further by feeding the solutions back, once again, into the non-linear solver, this time
allowing the solver to perform optimization.
}
\dedication[Dedication]{
Dedicated to some people.
}
\acknowledgements{ \OnePageChapter
This will be an acknowledgement.
}
\emptyLoT
\begin{document}
\input macros.tex
\chapter{Introduction}
Continuous low-thrust arcs utilizing technologies such as Ion propulsion, Hall thrusters, and
others can be a powerful tool in the design of interplanetary space missions. They tend to be
particularly suited to missions which require very high total change in velocity ($\Delta
V$) values and take place over a particularly long duration. Traditional impulsive thrusting
techniques can achieve these changes in velocity, but typically have a far lower specific
impulse and, as such, are much less fuel efficient, costing the mission valuable
financial resources that could instead be used for science. Because of their inherently high
specific impulse (and thus efficiency), low-thrust techniques are well-suited to interplanetary
missions.
The first such mission by NASA to use an electric ion-thruster for an interplanetary mission
was the Deep Space 1 mission\cite{brophy2002}, which tested the ``new'' technology, first
appearing as a concept in science fiction stories of the early 1900's and first tested
successfully during NASA's Space Electric Rocket Test (SERT) mission of
1964\cite{cybulski1965results}, on an interplanetary mission for the first time. The Ion
thruster used on Deep Space 1 allowed the mission to rendezvous with both an asteroid (9969
Braille) and later with a comet (Borrelly), when the technologies being tested, such as the
ion thruster, proved robust enough and efficient enough to allow for two mission extensions.
After this initial successful test, ion thrusters and other forms of low-thrust electric
propulsion have been used in a variety of missions. The NASA Dawn \cite{rayman2006dawn}
spacecraft in 2015 became the first spacecraft to successfully orbit two planetary bodies,
thanks in large part to the efficiency of its ion propulsion system. Also notable is the
joint ESA and JAXA spacecraft Bepi-Colombo\cite{benkhoff2010bepicolombo}, which was launched
in October 2018 and is projected to perform a flyby of Earth, two of Venus, and six of
Mercury before inserting into an orbit around that planet.
In general, whenever an interplanetary trajectory is posed, it is advisable to
first explore the possibility of low-thrust technologies. In an interplanetary mission, the
primary downside to low-thrust orbits (that they require significant time to achieve large
$\Delta V$ changes) is made irrelevant by the fact that interplanetary trajectories take such a
long time as a matter of course.
Another technique often leveraged by interplanetary trajectory designers is the gravity assist.
Gravity assists cleverly utilize the inertia of a large planetary body to ``slingshot'' a
spacecraft, modifying the direction of its velocity with respect to the central body, the Sun.
This technique lends itself very well to impulsive trajectories. The gravity assist maneuver
itself can be modeled very effectively by an impulsive maneuver with certain constraints, placed
right at the moment of closest approach to the (flyby) target body. Because of this,
optimization with impulsive trajectories and gravity assists are common.
However, there is no physical reason why low-thrust trajectories can't also incorporate gravity
assists. The optimization problem simply becomes much more complicated. The separate problems of
optimizing flyby parameters (planet, flyby date, etc.) and optimizing the low-thrust control
arcs don't combine very easily. This concept has been explored heavily by Dr. Jacob
Englander \cite{englander2014tuning}, \cite{englander2017automated},
\cite{englander2012automated} recently in an effort to develop a generalized and automated
routine for producing unconstrained, globally optimal trajectories for realistic
interplanetary mission development that utilizes both planetary flybys and efficient
low-thrust electric propulsion techniques.
This thesis will attempt to recreate the techniques developed by Dr Englander in his 2012
paper, which explored a Hybrid Optimal Control Algorithm for the optimization of
interplanetary missions\cite{englander2012automated}. Several changes have been made to the
approach presented in that paper. Most notably, the Evolutionary Algorithm used to automate
flyby selection has been removed in favor of manual designer flyby selection and the
addition of a Lambert's Solver was used to improved robustness during the Monotonic Basin
Hopping step. However, in large, the intention of this thesis is to explore the capabilities
of this algorithm as an initial design tool for an example mission to Saturn in the near
future.
This thesis will explore these concepts in a number of different sections. Section
\ref{traj_opt} will explore the basic principles of trajectory optimization in a manner agnostic
to the differences between continuous low-thrust and impulsive high-thrust techniques. Section
\ref{low_thrust} will then delve into the different aspects to consider when optimizing a low
thrust mission profile over an impulsive one. Section \ref{interplanetary} provides more detail
on the interplanetary considerations, including force models and gravity assists. Section
\ref{algorithm} will cover the implementation details of the optimization algorithm
developed for this paper. Finally, section \ref{results} will explore the results of some
hypothetical missions to Saturn.
\chapter{Trajectory Optimization} \label{traj_opt}
Trajectory optimization is concerned with finding a narrow solution (namely, optimizing a
spaceflight trajectory to an end state) with a wide range of possible techniques,
approaches, and controls making up a very large solution space. In this section, the
foundations for direct optimization of these sorts of problems will be explored by first
introducing the Two-Body Problem, then an algorithm for directly solving for states in that
system, then exploring approaches to Non-Linear Problem (NLP) solving in general and how
they apply to spaceflight trajectories.
\section{The Two-Body Problem}
The motion of a spacecraft in space is governed by a large number of forces. When planning and
designing a spacecraft trajectory, we often want to use the most complete (and often complex)
model of these forces that is available. However, in the process of designing these
trajectories, we often have to compute the path of the spacecraft many hundreds, thousands, or
even millions of times. Utilizing very high-fidelity force models that account for aerodynamic
pressures, solar radiation pressures, multi-body effects, and many others may be infeasible
for the method being used if the computations take too long.
Therefore, a common approach (and the one utilized in this implementation) is to first look
simply at the single largest force governing the spacecraft in motion, the gravitational force
due to the primary body around which it is orbiting. This can provide an excellent
low-to-medium fidelity model that can be extremely useful in categorizing the optimization
space as quickly as possible. In many cases, including the algorithm used in this paper, it is
unlikely that local cost-function minima would be missed due to the lack of fidelity of the
Two Body Problem.
In order to explore the Two Body Problem, we must first examine the full set of
assumptions associated with the force model\cite{vallado2001fundamentals}. Firstly, we
are only concerned with the nominative two bodies: the spacecraft and the planetary body
around which it is orbiting. Secondly, both of these bodies are modeled as simple point
masses. This removes the need to account for non-uniform densities and asymmetry. The
third assumption is that the mass of the spacecraft ($m_2$) is much much smaller than
the mass of the planetary body ($m_1$) and enough so as to be considered negligible. The
only force acting on this system is then the force of gravity that the primary body
enacts upon the secondary. Lastly, we'll assume a fixed inertial frame. This isn't
necessary for the formulation of a solution, but will simplify the derivation.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{fig/2bp}
\caption{Figure representing the positions of the bodies relative to each other and
the center of mass in the two body problem}
\label{2bp_fig}
\end{figure}
Reducing the system to two point masses with a single gravitational force acting between them
(and only in one direction) we can model the opposite forces on the two bodies as:
\begin{align}
m_2 \ddot{\vec{r}}_2 &= \frac{G m_1 m_2}{r^2} \frac{\vec{r}}{\left| r \right|} \\
m_1 \ddot{\vec{r}}_1 &= - \frac{G m_2 m_1}{r^2} \frac{\vec{r}}{\left| r \right|}
\end{align}
Where $\vec{r}$ is the position of the spacecraft relative to the primary body,
$\vec{r}_1$ is the position of the primary body relative to the center of the inertial
frame, and $\vec{r}_2$ is the position of the spacecraft relative to the center of the
inertial frame. $G$ is the universal gravitational parameter, $m_1$ is the mass of the
planetary body, and $m_2$ is the mass of the spacecraft. From these equations, we can
then determine the acceleration of the spacecraft relative to the primary planet of the
system:
\begin{equation}
\ddot{\vec{r}} = \ddot{\vec{r}}_2 - \ddot{\vec{r}}_1 =
- \frac{G \left( m_1 + m_2 \right)}{r^2} \frac{\vec{r}}{\left| r \right|}
\end{equation}
Due to our assumption that the mass of the spacecraft is significantly smaller than the
mass of the primary body ($m_1 >> m_2$) we can simplify the problem by removing the
negligible $m_2$ term. We can also introduce, for convenience, a number $\mu$ which
represents, for a given planet, the universal gravitational parameter multiplied by the
mass of the planet. Doing so and simplifying produces:
\begin{equation}
\ddot{\vec{r}} = - \frac{\mu}{r^2} \hat{r}
\end{equation}
Where $\mu = G m_1$ is the specific gravitational parameter for our primary body of interest.
\subsection{Kepler's Laws and Equations}
Now that we've fully qualified the forces acting within the Two Body Problem, we can concern
ourselves with more practical applications of it as a force model. It should be noted,
firstly, that the spacecraft's position and velocity (given an initial position and velocity
and of course the $\mu$ value of the primary body) is actually analytically solvable for all
future points in time. This can be easily observed by noting that there are three
one-dimensional equations (one for each component of the three-dimensional space) and
three unknowns (the three components of the second derivative of the position).
In the early 1600s, Johannes Kepler produced just such a solution, by taking advantages of
what is also known as ``Kepler's Laws'' which are\cite{murray1999solar}:
\begin{enumerate}
\item Each planet's orbit is an ellipse with the Sun at one of the foci. This can be
expanded to any orbit by re-wording as ``all orbital paths follow a conic section
(circle, ellipse, parabola, or hyperbola) with a primary mass at one of the foci''.
Specifically the path of the orbit follows the trajectory equation:
\begin{equation}
r = \frac{\sfrac{h^2}{\mu}}{1 + e \cos(\theta)}
\end{equation}
Where $h$ is the angular momentum of the satellite, $e$ is the
eccentricity of the orbit, and $\theta$ is the true anomaly, or simply
the angular distance the satellite has traversed along the orbit path.
\item The area swept out by the imaginary line connecting the primary and secondary
bodies increases linearly with respect to time. This implies that the magnitude of the
orbital speed is not constant. For the moment, we'll just take this
value to be a constant:
\begin{equation}\label{swept}
\frac{\Delta t}{T} = \frac{k}{\pi a b}
\end{equation}
Where $k$ is the constant value, $a$ and $b$ are the semi-major and
semi-minor axis of the conic section, and $T$ is the period. In the
following section, we'll derive the value for $k$.
\item The square of the orbital period is proportional to the cube of the semi-major
axis of the orbit, regardless of eccentricity. Specifically, the relationship is:
\begin{equation}
T = 2 \pi \sqrt{\frac{a^3}{\mu}}
\end{equation}
Where $T$ is the period and $a$ is the semi-major axis.
\end{enumerate}
\section{Analytical Solutions to Kepler's Equations}
Kepler was able to produce an equation to represent the angular displacement of an
orbiting body around a primary body as a function of time, which we'll derive now
for the elliptical case\cite{vallado2001fundamentals}. Since the total area of an
ellipse is the product of $\pi$, the semi-major axis, and the semi-minor axis ($\pi
a b$), we can relate (by Kepler's second law) the area swept out by an orbit as a
function of time, as we did in Equation~\ref{swept}.
This leaves just one unknown variable $k$, which we can determine through use of the
geometric auxiliary circle, which is a circle with radius equal to the ellipse's semi-major
axis and center directly between the two foci, as in Figure~\ref{aux_circ}.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{fig/kepler}
\caption{Geometric Representation of Auxiliary Circle}\label{aux_circ}
\end{figure}
In order to find the area swept by the spacecraft, $k$, we can take advantage of the fact
that that area is the triangle $k_1$ subtracted from the elliptical segment $PCB$:
\begin{equation}\label{areas_eq}
k = area(seg_{PCB}) - area(k_1)
\end{equation}
Where the area of the triangle $k_1$ can be found easily using geometric formulae:
\begin{align}
area(k_1) &= \frac{1}{2} \left( ae - a \cos E \right) \left( \frac{b}{a} a \sin E \right) \\
&= \frac{ab}{2} \left(e \sin E - \cos E \sin E \right)
\end{align}
Now we can find the area for the elliptical segment $PCB$ by first finding the circular
segment $POB'$, subtracting the triangle $COB'$, then applying the fact that an ellipse is
merely a vertical scaling of a circle by the amount $\frac{b}{a}$.
\begin{align}
area(PCB) &= \frac{b}{a} \left( area(POB') - area(COB') \right) \\
&= \frac{b}{a} \left( \frac{a^2 E}{2} - \frac{1}{2} \left( a \cos E \right)
\left( a \sin E \right) \right) \\
&= \frac{abE}{2} - \frac{ab}{2} \left( \cos E \sin E \right) \\
&= \frac{ab}{2} \left( E - \cos E \sin E \right)
\end{align}
By substituting the two areas back into Equation~\ref{areas_eq} we can get the $k$ area
swept out by the spacecraft:
\begin{equation}
k = \frac{ab}{2} \left( E - e \sin E \right)
\end{equation}
Which we can then substitute back into the equation for the swept area as a function of
time (Equation~\ref{swept}):
\begin{equation}
\frac{\Delta t}{T} = \frac{E - e \sin E}{2 \pi}
\end{equation}
Which is, effectively, Kepler's equation. It is commonly known by a different form:
\begin{equation}
M = \sqrt{\frac{\mu}{a^3}} \Delta t = E - e \sin E
\end{equation}
Where we've defined the mean anomaly as $M$ and used the fact that $T =
\sqrt{\frac{a^3}{\mu}}$. This provides us a useful relationship between Eccentric Anomaly
($E$) which can be related to spacecraft position, and time, but we still need a useful
algorithm for solving this equation.
\subsection{LaGuerre-Conway Algorithm}\label{laguerre}
For this application, I used an algorithm known as the LaGuerre-Conway
algorithm\cite{laguerre_conway}, which was presented in 1986 as a faster and more
robust algorithm for directly solving Kepler's equation and has been in use in many
applications since. This algorithm is known for its convergence robustness and also
its speed of convergence when compared to higher order Newton methods.
This thesis will omit a step-through of the algorithm itself, but psuedo-code for
the algorithm will be discussed briefly in Section~\ref{conway_pseudocode}.
\section{Non-Linear Problem Optimization}
Now we can consider the formulation of the problem in a more useful way. For instance, given a
desired final state in position and velocity we can relatively easily determine the initial
state necessary to end up at that desired state over a pre-defined period of time by solving
Kepler's equation. In fact, this is often how impulsive trajectories are calculated since,
other than the impulsive thrusting event itself, the trajectory is entirely natural.
However, often in trajectory design we want to consider a number of other inputs. For
instance, a low thrust profile, a planetary flyby, the effects of rotating a solar panel on
solar radiation pressure, etc. Once these inputs have been accepted as part of the model, the
system is generally no longer analytically solvable, or, if it is, is too complex to calculate
directly.
Therefore an approach is needed, in trajectory optimization and many other fields, to optimize
highly non-linear, unpredictable systems such as this. The field that developed to approach
this problem is known as Non-Linear Problem (NLP) Optimization.
There are, however, two categories of approaches to solving an NLP. The first category,
indirect methods, involve declaring a set of necessary and/or sufficient conditions for declaring
the solution optimal. These conditions then allow the non-linear problem (generally) to be
reformulated as a two point boundary value problem. Solving this boundary value problem can
provide a control law for the optimal path. Indirect approaches for spacecraft trajectory
optimization have given us the Primer Vector Theory\cite{jezewski1975primer}.
The other category is the direct methods. In a direct optimization problem, the cost
function itself is calculated to provide the optimal solution. The problem is usually
thought of as a collection of dynamics and controls. Then these controls can be modified
to minimize the cost function. A number of tools have been developed to optimize NLPs
via this direct method in the general case. For this particular problem, direct
approaches were used as the low-thrust interplanetary system dynamics adds too much
complexity to quickly optimize indirectly and the individual optimization routines
needed to proceed as quickly as possible.
\subsection{Non-Linear Solvers}
For these types of non-linear, constrained problems, a number of tools have been developed
that act as frameworks for applying a large number of different algorithms. This allows for
simple testing of many different algorithms to find what works best for the nuances of the
problem in question.
One of the most common of these NLP optimizers is SNOPT\cite{gill2005snopt}, which
is a proprietary package written primarily using a number of Fortran libraries by
the Systems Optimization Laboratory at Stanford University. It uses a sparse
sequential quadratic programming approach.
Another common NLP optimization packages (and the one used in this implementation)
is the Interior Point Optimizer or IPOPT\cite{wachter2006implementation}. It can be
used in much the same way as SNOPT and uses an Interior Point Linesearch Filter
Method and was developed as an open-source project by the organization COIN-OR under
the Eclipse Public License.
Both of these methods utilize similar approaches to solve general constrained non-linear
problems iteratively. Both of them can make heavy use of derivative Jacobians and Hessians
to improve the convergence speed and both have been ported for use in a number of
programming languages, including in Julia, which was used for this project.
This is by no means an exhaustive list, as there are a number of other optimization
libraries that utilize a massive number of different algorithms. For the most part, the
libraries that port these are quite modular in the sense that multiple algorithms can be
tested without changing much source code.
\subsection{Linesearch Method}
As mentioned above, this project utilized IPOPT which leveraged an Interior Point
Linesearch method. A linesearch algorithm is one which attempts to find the optimum
of a non-linear problem by first taking an initial guess $x_k$. The algorithm then
determines a step direction (in this case through the use of either automatic
differentiation or finite differencing to calculate the derivatives of the
non-linear problem) and a step length. The linesearch algorithm then continues to
step the initial guess, now labeled $x_{k+1}$ after the addition of the ``step''
vector and iterates this process until predefined termination conditions are met.
In this case, the IPOPT algorithm was used, not as an optimizer, but as a solver. For
reasons that will be explained in the algorithm description in Section~\ref{algorithm} it
was sufficient merely that the non-linear constraints were met, therefore optimization (in
the particular step in which IPOPT was used) was unnecessary.
\subsection{Multiple-Shooting Algorithms}
Now that we have software defined to optimize non-linear problems, what remains is
determining the most effective way to define the problem itself. The most simple
form of a trajectory optimization might employ a single shooting algorithm, which
propagates a state, given some control variables forward in time to the epoch of
interest. The controls over this time period are then modified in an iterative
process, using the NLP optimizer, until the target state and the propagated state
matches. This technique can be visualized in Figure~\ref{single_shoot_fig}.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/single_shoot}
\caption{Visualization of a single shooting technique over a trajectory arc}
\label{single_shoot_fig}
\end{figure}
In this example, the initial trajectory is the green arc, which contains a certain
control thrust $\Delta V_{init}$ and is propagated for a certain amount of time and
results in the end state $x_{init}$. The target state $x_{final}$ can be achieved by
varying the control and propagating forward in time until this final state is
achieved. This type of shooting algorithm can be quite useful for simple cases such
as this one.
However, some problems require the use of a more flexible algorithm. In these cases,
sometimes a multiple-shooting algorithm can provide that flexibility and allow the
NLP solver to find the optimal control faster. In a multiple shooting algorithm,
rather than having a single target point at which the propagated state is compared,
the target orbit is broken down into multiple arcs, then end of each of which can be
seen as a separate target. At each of these points we can then define a separate
control. The end state of each arc and the beginning state of the next must then be
equal for a valid arc, as well as the final state matching the target final state.
This changes the problem to have far more constraints, but also increased freedom
due to having more control variables.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/multiple_shoot}
\caption{Visualization of a multiple shooting technique over a trajectory arc}
\label{multiple_shoot_fig}
\end{figure}
In this example, it can be seen that there are now more constraints (places where
the states need to match up, creating an $x_{error}$ term) as well as control
variables (the $\Delta V$ terms in the figure). This technique actually lends itself
very well to low-thrust arcs and, in fact, Sims-Flanagan Transcribed low-thrust arcs
in particular, because there actually are control thrusts to be optimized at a
variety of different points along the orbit. This is, however, not an exhaustive
description of ways that multiple shooting can be used to optimize a trajectory,
simply the most convenient for low-thrust arcs.
\chapter{Low-Thrust Considerations} \label{low_thrust}
Thus far, the techniques that have been discussed can be equally useful for both impulsive and
continuous thrust mission profiles. In this section, we'll discuss the intricacies of continuous
low-thrust trajectories in particular. There are many methods for optimizing such profiles and
we'll briefly discuss the difference between a direct and indirect optimization of a low-thrust
trajectory as well as introduce the concept of a control law and the notation used in this
thesis for modelling low-thrust trajectories more simply.
\section{Low-Thrust Control Laws}
In determining a low-thrust arc, a number of variables must be accounted for and, ideally,
optimized. Generally speaking, this means that a control law must be determined for the
thruster. This control law functions in exactly the same way that an impulsive thrust
control law might function. However, instead of determining the proper moments at which
to thrust, a low-thrust control law must determine the appropriate direction, magnitude,
and presence of a thrust at each point along its continuous orbit.
\subsection{Angle of Thrust}
Firstly, we can examine the most important quality of the low-thrust control law, the
direction at which to point the thrusters while they are on. The methods for determining this
direction varies greatly depending on the particular control law chosen for that
mission. Often, this process involves first determining a useful frame to think about
the kinematics of the spacecraft. In this case, we'll use a frame often used in these
low-thrust control laws: the spacecraft $\hat{R} \hat{\theta} \hat{H}$ frame. In this
frame, the $\hat{R}$ direction is the radial direction from the center of the primary to
the center of the spacecraft. The $\hat{H}$ hat is perpendicular to this, in the
direction of orbital momentum (out-of-plane) and the $\hat{\theta}$ direction completes
the right-handed orthonormal frame.
This frame is useful because, for a given orbit, especially a nearly circular one, the
$\hat{\theta}$ direction is nearly aligned with the velocity direction for that orbit at
that moment. This allows us to define a set of two angles, which we'll call $\alpha$ and
$\beta$, to represent the in and out of plane pointing direction of the thrusters. This
convention is useful because a $(0,0)$ set represents a thrust force more or less
directly in line with the direction of the velocity, a commonly useful thrusting
direction for most effectively increasing (or decreasing if negative) the angular
momentum and orbital energy of the trajectory.
Therefore, at each point, the first controls of a control-law, whichever frame or
convention is used to define them, need to represent a direction in 3-dimensional space
that the force of the thrusters will be applied.
\subsection{Thrust Magnitude}
However, there is actually another variable that can be varied by the majority of
electric thrusters. Either by controlling the input power of the thruster or the duty
cycle, the thrust magnitude can also be varied in the direction of thrust, limited by
the maximum thrust available to the thruster. Not all control laws allow for this
fine-tuned control of the thruster. Generally speaking, it's most efficient either to
thrust or not to thrust. Therefore, controlling the thrust magnitude may provide too
much complexity at too little benefit.
The algorithm used in this thesis, however, does allow the magnitude of the thrust
control to be varied. In certain cases it actually can be useful to have some fine-tuned
control over the magnitude of the thrust. Since the optimization in this algorithm is
automatic, it is relatively straightforward to consider the control thrust as a
3-dimensional vector in space limited in magnitude by the maximum thrust, which allows
for that increased flexibility.
\subsection{Thrust Presence}
The alternative to this approach of modifying the thrust magnitude, is simply to modify
the presence or absence of thrust. At certain points along an arc, the efficiency of
thrusting, even in the most advantageous direction, may be such that a thrust is
undesirable (in that it will lower the overall efficiency of the mission too much) or,
in fact, be actively harmful.
For instance, we can consider the case of a simple orbit raising. Given an initial orbit
with some eccentricity and some semi-major axis, we can define a new orbit that we'd
like to achieve that simply has a higher semi-major axis value, regardless of the
eccentricity of the new orbit. It is well known by analysis of the famous Hohmann
Transfer\cite{hohmann1960attainability}, that thrusting for orbit raising is most
effective near the periapsis of an orbit, where changes in velocity will have a higher
impact on total orbital energy. Therefore, for a given low-thrust control law that
allows for the presence or absence of thrusting at different efficiency cutoffs, we can
easily come up with two different orbits, each of which achieve the same semi-major
axis, but in two different ways at two different rates, both in time and fuel use, as
can be seen in Figures~\ref{low_efficiency_fig} and \ref{high_efficiency_fig}.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/low_efficiency}
\caption{Graphic of an orbit-raising with a low efficiency cutoff}
\label{low_efficiency_fig}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/high_efficiency}
\caption{Graphic of an orbit-raising with a high efficiency cutoff}
\label{high_efficiency_fig}
\end{figure}
All of this is, of course, also true for impulsive trajectories. However, since the
thrust presence for those trajectories are generally taken to be impulse functions, the
control laws can afford to be much less complicated for a given mission goal, by simply
thrusting only at the moment on the orbit when the transition will be most efficient.
For a low-thrust mission, however, the control law must be continuous rather than
discrete and therefore the control law inherently gains a lot of complexity.
\section{Direct vs Indirect Optimization}
As previously mentioned, there are two different approaches to optimizing non-linear
problems such as trajectory optimizations in interplanetary space. These methods are the
direct method, in which a cost function is developed and used by numerical root-finding
schemes to drive cost to the nearest local minimum, and the indirect method, in which a
set of sufficient and necessary conditions are developed that constrain the optimal
solution and used to solve a boundary-value problem to find the optimal solution.
Both of these methods have been applied to the problem of low-thrust interplanetary
trajectory optimization \cite{Casalino2007IndirectOM}. The common opinion of the
difference between these two methods is that the indirect methods are more difficult to
converge and require a better initial guess than the direct methods. However, they also
require less parameters to describe the trajectory, since the solution of a boundary
value problem doesn't require discretization of the control states.
In this implementation, robustness is incredibly valuable, as the Monotonic Basin
Hopping algorithm is leveraged to attempt to find all minima basins in the solution
space by ``hopping'' around with different initial guesses. Since these initial guesses
are not guaranteed to be close to any particular valid trajectory, it is important that
the optimization routine be robust to poor initial guesses. Therefore, a direct
optimization method was leveraged by transcribing the problem into an NLP and using
IPOPT to find the local minima.
\section{Sims-Flanagan Transcription}
The major problem with optimizing low thrust paths is that the control law must necessarily be
continuous. Also, since indirect optimization approaches are, in the context of
interplanetary trajectories including flybys, quite difficult the problem must
necessarily be reformulated as a discrete one in order to apply a direct approach. Therefore,
this thesis chose to use a model well suited for discretizing low-thrust paths: the
Sims-Flanagan transcription (SFT)\cite{sims1999preliminary}.
The SFT is actually quite a simple method for discretizing low-thrust arcs. First the
continuous arc is subdivided into a number ($N$) of individual consistent timesteps of length
$\frac{tof}{N}$. The control thrust is then applied at the center of each of these time
steps. This approach can be seen visualized in Figure~\ref{sft_fig}.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{fig/sft}
\caption{Example of an orbit raising using the Sims-Flanagan Transcription with 7
Sub-Trajectories}
\label{sft_fig}
\end{figure}
Using the SFT, it is relatively straightforward to propagate a state (in the context of the
Two-Body Problem) that utilizes a continuous low-thrust control, without the need for
computationally expensive numeric integration algorithms, by simply solving Kepler's equation
(using the LaGuerre-Conway algorithm introduced in Section~\ref{laguerre}) $N$ times. This
greatly reduces the computation complexity, which is particularly useful for cases in which
low-thrust trajectories need to be calculated many millions of times, as is the case in this
thesis. The fidelity of the model can also be easily fine-tuned. By simply increasing the
number of sub-arcs, one can rapidly approach a fidelity equal to a continuous low-thrust
trajectory within the Two-Body Problem, with only linearly-increasing computation time.
\chapter{Interplanetary Trajectory Considerations} \label{interplanetary}
The question of interplanetary travel opens up a host of additional new complexities. While
optimizations for simple single-body trajectories are far from simple, it can at least be
said that the assumptions of the Two Body Problem remain fairly valid. In interplanetary
travel, the primary body most responsible for gravitational forces might be a number of
different bodies, dependent on the phase of the mission. In fact, at some points along the
trajectory, there may not be a ``primary'' body, but instead a number of different forces of
roughly equal magnitude vying for ``primary'' status.
In the ideal case, every relevant body would be considered as an ``n-body'' perturbation
during the entire trajectory. For some approaches, this method is sufficient and preferred.
However, for most uses, a more efficient model is necessary. The method of patched conics
can be applied in this case to simplify the model.
Interplanetary travel does not merely complicate trajectory optimization. The increased
complexity of the search space also opens up new opportunities for orbit strategies. The
primary strategy investigated by this thesis will be the gravity assist, a technique for
utilizing the gravitational energy of a planet to modify the direction of solar velocity.
Finally, the concept of multiple gravity assists and the techniques used to visualize the
space in which we might accomplish stringing together multiple flybys will be analyzed. A
number of tools have been developed to assist mission designers in manually visualizing the
search space, but some of these tools can also be leveraged by the automated optimization
algorithm.
\section{Launch Considerations}
Before considering the dynamics and techniques that interplanetary travel imposes upon
the trajectory optimization problem we must first concern ourself with getting to
interplanetary space. Generally speaking, interplanetary trajectories require a lot of
orbital energy and the simplest and quickest way to impart orbital energy to a satellite
is by using the entirety of the launch energy that a launch vehicle can provide.
In practice, this value, for a particular mission, is actually determined as a parameter
of the mission trajectory to be optimized. The excess velocity at infinity of the
hyperbolic orbit of the spacecraft that leaves the Earth can be used to derive the
launch energy. This is usually qualified as the quantity $C_3$, which is actually double
the kinetic orbital energy with respect to the Sun, or simply the square of the excess
hyperbolic velocity at infinity\cite{wie1998space}.
This algorithm and many others will take, essentially for granted, that the initial
orbit at the beginning of the mission will be some hyperbolic orbit with velocity enough
to leave the Earth. That initial $v_\infty$ will be used as a tunable parameter in the
NLP solver. This allows the mission designer to include the launch $C_3$ in the cost
function and, hopefully, determine the mission trajectory that includes the least
initial launch energy. This can then be fed back into a mass-$C_3$ curve for prospective
launch providers to determine what the maximum mass any launch provider is capable of
imparting that specific $C_3$ to.
A similar approach is taken at the end of the mission. This algorithm, and many others,
doesn't attempt to exactly match the velocity of the planet at the end of the mission.
Instead, the excess hyperbolic velocity is also treated as a parameter that can be
minimized by the cost function. If a mission is to then end in insertion, a portion of
the mass budget can then be used for an impulsive thrust engine, which can provide a
final insertion burn at the end of the mission. This approach also allows flexibility
for missions that might end in a flyby rather than insertion.
\section{Patched Conics}
The first hurdle to deal with in interplanetary space is the problem of reconciling
Two-Body dynamics with the presence of multiple and varying planetary bodies. The most
common method for approaching this is the method of patched
conics\cite{bate2020fundamentals}. In this model, we break the interplanetary trajectory
up into a series of smaller sub-trajectories. During each of these sub-trajectories, a
single primary is considered to be responsible for the trajectory of the orbit, via the
Two-Body problem.
The transition point can be calculated a variety of ways. The most typical method is to
calculate the gravitational force due to the two bodies separately, via the Two-Body
models. Whichever primary is a larger influence on the motion of the spacecraft is
considered to be the primary at that moment. In other words, the spacecraft, at that
epoch, is within the Sphere of Influence of that primary. Generally for missions in this
Solar System, the spacecraft is either within the Sphere of Influence of a planetary
body or the Sun. However, there are points in the Solar System where the gravitational
influence of two planetary bodies are roughly equivalent to each other and to the
influence of the Sun. These are considered LaGrange points\cite{euler1767motu}, but are
beyond the scope of this initial analysis of interplanetary mission feasibility.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{fig/patched_conics}
\caption{Patched Conics Example Figure}
\label{patched_conics_fig}
\end{figure}
This effectively breaks the trajectory into a series of orbits defined by the Two-Body
problem (conics), patched together by distinct transition points. These transition
points occur along the spheres of influence of the planets nearest to the spacecraft.
Generally speaking, for the orbits handled by this algorithm, the speeds involved are
enough that the orbits are always elliptical around the Sun and hyperbolic around the
planets.
\section{Gravity Assist Maneuvers}
As previously mentioned, there are methods for utilizing the orbital energy of the other
planets in the Solar System. This is achieved via a technique known as a Gravity Assist,
or a Gravity Flyby. During a gravity assist, the spacecraft enters into the
gravitational sphere of influence of the planet and, because of its excess velocity,
proceeds to exit the sphere of influence. Relative to the planet, the speed of the
spacecraft increases as it approaches, then decreases as it departs. From the
perspective of the planet, the velocity of the spacecraft is unchanged. However, the
planet is also orbiting the Sun.
From the perspective of a Sun-centered frame, though, this is effectively an elastic
collision. The overall momentum remains the same, with the spacecraft either gaining or
losing some in the process (dependent on the directions of travel). The planet also
loses or gains momentum enough to maintain the overall system momentum, but this amount
is negligible compared to the total momentum of the planet. The overall effect is that
the spacecraft arrives at the planet from one direction and, because of the influence of
the planet, leaves in a different direction\cite{negri2020historical}.
This effect can be used strategically. The ``bend'' due to the flyby is actually
tunable via the exact placement of the fly-by in the b-frame, or the frame centered at
the planet, from the perspective of the spacecraft at $v_\infty$. By modifying the
turning angle of this bend. In doing so, one can effectively achieve a (restricted) free
impulsive thrust event.
\section{Flyby Periapsis}
Now that we understand gravity assists, the natural question is then how to leverage
them for achieving certain velocity changes. This can be achieved via a technique called
``B-Plane Targeting''\cite{cho2017b}. But first, we must consider mathematically the
effect that a gravity flyby can have on the velocity of a spacecraft as it orbits the
Sun. Specifically, we can determine the turning angle of the bend mentioned in the
previous section, given an excess hyperbolic velocity entering the planet's sphere of
influence ($v_{\infty, in}$) and a target excess hyperbolic velocity as the spacecraft
leaves the sphere of influence ($v_{\infty, out}$):
\begin{equation}
\delta = \arccos \left( \frac{v_{\infty,in} \cdot v_{\infty,out}}{|v_{\infty,in}|
|v_{\infty,out}|} \right)
\end{equation}
From this turning angle, we can also determine, importantly, the periapsis of the flyby
that we must target in order to achieve the required turning angle. The actual location
of the flyby point can also be determined by B-Plane Targeting, but this technique was
not necessary in this implementation as a preliminary feasibility tool, and so is beyond
the scope of this thesis. The periapsis of the flyby, however, can provide a useful
check on what turning angles are possible for a given flyby, since the periapsis:
\begin{equation}
r_p = \frac{\mu}{v_\infty^2} \left[ \frac{1}{\sin\left(\frac{\delta}{2}\right)} - 1 \right]
\end{equation}
Cannot be lower than some safe value that accounts for the radius of the planet and
perhaps its atmosphere if applicable.
\section{Multiple Gravity Assist Techniques}
Now that we can leverage gravity flybys for their change in velocity direction, the
final remaining question is that of stringing together flybys. How, for instance, do we
know which planets can provide feasible flyby opportunities given a known hyperbolic
energy leaving the previous planet?
\subsection{Lambert's Problem}
The answer comes from the application of the solution to the problem, posed by
Johann Lambert in the 18th century\cite{blanchard1969unified}, of how to determine,
given an initial position, a final position (the ephemeris of the two planets), and
a time of flight between the two positions, what velocity was necessary to connect
the two states.
The actual numerical solution to this boundary value problem is not important to
include here, but there have been a large number of algorithms written to solve
Lambert's problem quickly and robustly for given inputs\cite{jordan1964application}.
\subsection{Planetary Ephemeris}
Applying Lambert's problem to interplanetary travel requires knowing the positions
of the planets in the inertial reference frame at a specific epoch. Fortunately,
many packages have been developed for this purpose. The most commonly used for this
is the SPICE package, developed by NASA in the 1980's. This software package, which
has ports that are widely available in a number of languages, including Julia,
contains many useful functions for astrodynamics.
The primary use of SPICE in this thesis, however, was to determine the planetary
ephemeris at a known epoch. Using the NAIF0012 and DE430 kernels, ephemeris in the
ecliptic plane J2000 frame could be easily determined. A method for quickly
determining the ephemeris using a polynomial fit was also employed as an option for
faster ephemeris-finding, but ultimately not used.
\subsection{Porkchop Plots}
Armed with a routine for quickly determining the outgoing velocity necessary to
reach a planet at a given time, as well as the ephemeris of the planets in question
at any given time, one can produce a grid of departure and arrival times between two
planetary encounters. Within this grid, one can then plot a variety of useful
values.
The solution to Lambert's equation provides both the velocity vectors at departure
and the velocity vectors at arrival. Often, these will be overlayed on the gridded
time plots, as normalized values, or sometimes converted to characteristic energy
$C_3$. This ``porkchop plot'' allows for a quick and concise view of what orbital
energies are required to reach a planet at a given time from a given location, as
well as an idea of what outgoing velocities one can expect.
Using porkchop plots such as the one in Figure~\ref{porkchop}, mission designers can
quickly visualize which natural trajectories are possible between planets. Using the
fact that incoming and outgoing $v_\infty$ magnitudes must be the same for a flyby,
a savvy mission designer can even begin to work out what combinations of flybys
might be possible for a given timeline, spacecraft state, and planet selection.
%TODO: Create my own porkchop plot
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/porkchop}
\caption{A sample porkchop plot of an Earth-Mars transfer}
\label{porkchop}
\end{figure}
However, this is an impulsive thrust-centered approach. The solution to Lambert's
problem assumes a natural trajectory. However, to the low-thrust designer, this is
needlessly limiting. A natural trajectory is unnecessary when the trajectory can be
modified by a continuous thrust profile along the arc. Therefore, for the hybrid problem
of optimizing both flyby selection and thrust profiles, porkchop plots are less helpful,
and an algorithmic approach is preferred.
\chapter{Algorithm Overview} \label{algorithm}
In this section, we will review the actual execution of the algorithm developed. As an
overview, the routine was developed to enable the determination of an optimized spacecraft
trajectory from the selection of some very basic mission parameters. Those parameters
include:
\begin{itemize}
\item Spacecraft dry mass
\item Thruster Specific Impulse
\item Thruster Maximum Thrusting Force
\item Thruster Duty Cycle Percentage
\item Number of Thruster on Spacecraft
\item Total Starting Weight of the Spacecraft
\item A Maximum Acceptable $V_\infty$ at arrival and $C_3$ at launch
\item The Launch Window Timing and the Latest Arrival
\item A cost function relating the mass usage, $v_\infty$ at arrival, and $C_3$ at
launch to a cost
\item A list of flyby planets starting with Earth and ending with the destination
\end{itemize}
Which allows for extremely automated optimization of the trajectory, while still providing
the mission designer with the flexibility to choose the particular flyby planets to
investigate.
This is achieved via an optimal control problem in which the ``inner loop'' is a
non-linear programming problem to determine the optimal low-thrust control law and flyby
parameters given a suitable initial guess. Then an ``outer loop'' monotonic basin hopping
algorithm is used to traverse the search space and more carefully optimize the solutions
found by the inner loop.
\section{Trajectory Composition}
In this thesis, a specific nomenclature will be adopted to define the stages of an
interplanetary mission in order to standardize the discussion about which aspects of the
software affect which phases of the mission.
Overall, a mission is considered to be the entire overall trajectory. In the context of
this software procedure, a mission is taken to always begin at the Earth, with some
initial launch C3 intended to be provided by an external launch vehicle. This C3 is not
fully specified by the mission designer, but instead is optimized as a part of the
overall cost function (and normalized by a designer-specified maximum allowable value).
This overall mission can then be broken down into a variable number of ``phases''
defined as beginning at one planetary body with some excess hyperbolic velocity and
ending at another. The first phase of the mission is from the Earth to the first flyby
planet. The final phase is from the last flyby planet to the planet of interest.
Each of these phases are then connected by a flyby event at the boundary. Each flyby
event must satisfy the following conditions:
\begin{enumerate}
\item The planet at the end of one phase must match the planet at the beginning of
the next phase.
\item The magnitude of the excess hyperbolic velocity coming into the planet (at the
end of the previous phase) must equal the magnitude of the excess hyperbolic
velocity leaving the planet (at the beginning of the next phase).
\item The flyby ``turning angle'' must be such that the craft maintains a safe
minimum altitude above the surface or atmosphere of the flyby planet.
\end{enumerate}
These conditions then effectively stitch the separate mission phases into a single
coherent mission, allowing for the optimization of both individual phases and the entire
mission as a whole. This nomenclature is similar to the nomenclature adopted by Jacob
Englander in his Hybrid Optimal Control Problem paper, but does not allow for missions
with multiple targets, simplifying the optimization.
\section{Inner Loop Implementation}\label{inner_loop_section}
The optimization routine can be reasonable separated into two separate ``loops'' wherein
the first loop is used, given an initial guess, to find valid trajectories within the
region of the initial guess and submit the best. The outer loop is then used to traverse
the search space and supply the initial loop with a number of well chosen initial
guesses.
Figure~\ref{nlp} provides an overview of the process of breaking a mission guess down
into an NLP, but there are essentially three primary routines involved in the inner
loop. A given state is propagated forward using the LaGuerre-Conway Kepler solution
algorithm, which itself is used to generate powered trajectory arcs via the
Sims-Flanagan transcribed propagator. Finally, these powered arcs are connected via a
multiple-shooting non-linear optimization problem. The trajectories describing each
phase complete one ``Mission Guess'' which is fed to the non-linear solver to generate
one valid trajectory within the vicinity of the original Mission Guess.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{flowcharts/nlp}
\caption{A flowchart of the Non-Linear Problem Solving Formulation}
\label{nlp}
\end{figure}
\subsection{LaGuerre-Conway Kepler Solver}\label{conway_pseudocode}
The most basic building block of any trajectory is a physical model for simulating
natural trajectories from one point forward in time. The approach taken by this
paper uses the solution to Kepler's equation put forward by
Conway\cite{laguerre_conway} in 1986 in order to provide simple and very
processor-efficient propagation without the use of integration. The code logic
itself is actually quite simple, providing an approach similar to the Newton-Raphson
approach for finding the roots of the Battin form of Kepler's equation.
The following pseudo-code outlines the approach taken for the elliptical case. The
approach is quite similar when $a<0$:
% TODO: Some symbols here aren't recognized by the font
\begin{singlespacing}
\begin{verbatim}
i = 0
# First declare some useful variables from the state
sig0 = (position ⋅ velocity) / √(mu)
a = 1 / ( 2/norm(position) - norm(velocity)^2/mu )
coeff = 1 - norm(position)/a
# This loop is essentially a second-order Newton solver for ΔE
ΔM = ΔE_new = √(mu/a^3) * time
ΔE = 1000
while abs(ΔE - ΔE_new) > 1e-10
ΔE = ΔE_new
F = ΔE - ΔM + sig0 / √(a) * (1-cos(ΔE)) - coeff * sin(ΔE)
dF = 1 + sig0 / √(a) * sin(ΔE) - coeff * cos(ΔE)
d2F = sig0 / √(a) * cos(ΔE) + coeff * sin(ΔE)
ΔE_new = ΔE - n*F / ( dF + sign(dF) * √(abs((n-1)^2*dF^2 - n*(n-1)*F*d2F )))
i += 1
end
# ΔE can then be used to determine the F/Ft and G/Gt coefficients
F = 1 - a/norm(position) * (1-cos(ΔE))
G = a * sig0/ √(mu) * (1-cos(ΔE)) + norm(position) * √(a) / √(μ) * sin(ΔE)
r = a + (norm(position) - a) * cos(ΔE) + sig0 * √(a) * sin(ΔE)
Ft = -√(a)*√(mu) / (r*norm(position)) * sin(ΔE)
Gt = 1 - a/r * (1-cos(ΔE))
# Which provide transformations from the original position and velocity to the
# final
final_position = F*position + G*velocity
final_velocity = Ft*position + Gt*velocity
\end{verbatim}
\end{singlespacing}
This approach was validated by generating known good orbits in the 2 Body Problem.
For example, from the orbital parameters of a certain state, the orbital period can
be determined. If the system is then propagated for an integer multiple of the orbit
period, the state should remain exactly the same as it began. In
Figure~\ref{laguerre_plot} an example of such an orbit is provided.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/laguerre_plot}
\caption{Example of a natural trajectory propagated via the Laguerre-Conway
approach to solving Kepler's Problem}
\label{laguerre_plot}
\end{figure}
% TODO: Consider adding a paragraph about the improvements in processor time
\subsection{Sims-Flanagan Propagator}
Until this point, we've not yet discussed how best to model the low-thrust
trajectory arcs themselves. The Laguerre-Conway algorithm efficiently determines
natural trajectories given an initial state, but it still remains, given a control
law, that we'd like to determine the trajectory of a system with continuous input
thrust.
For this, we leverage the Sims-Flanagan transcription mentioned earlier. This allows
us to break a single phase into a number of ($n$) different arcs. At the center of
each of these arcs we can place a small impulsive burn, scaled appropriately for the
thruster configured on the spacecraft of interest. Therefore, for any given phase,
we actually split the trajectory into $2n$ sub-trajectories, with $n$ scaled
impulsive thrust events. As $n$ is increased, the trajectory becomes increasingly
accurate as a model of low-thrust propulsion in the 2BP. This allows the mission
designer to trade-off speed of propagation and the fidelity of the results quite
effectively.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/spiral_plot}
\caption{An example trajectory showing that classic continuous-thrust orbit
shapes, such as this orbit spiral, are easily achievable using a Sims-Flanagan
model}
\label{sft_plot}
\end{figure}
Figure~\ref{sft_plot} shows that the Sims-Flanagan transcription model can be used
to effectively model these types of orbit trajectories. In fact, the Sims-Flanagan
model is capable of modeling nearly any low-thrust trajectory with a sufficiently
high number of $n$ samples.
Finally, it should be noted that, in any proper propagation scheme, mass should be
decremented proportionally to the thrust used. The Sims-Flanagan Transcription
assumes that the thrust for a given sub-trajectory is constant across the entirety
of that sub-trajectory. Therefore, the mass used by that particular thrusting event
can be determined by knowledge of the percentage of maximum thrust being provided
and the mass flow rate (a function of the duty cycle percentage ($d$), thrust ($f$),
and the specific impulse of the thruster ($I_{sp}$), commonly used to measure
efficiency)\cite{sutton2016rocket}:
\begin{equation}
\Delta m = \Delta t \frac{f d}{I_{sp} g_0}
\end{equation}
Where $\Delta m$ is the fuel used in the sub-trajectory, $\Delta t$ is the time of
flight of the sub-trajectory, and $g_0$ is the standard gravity at the surface of
Earth.
\subsection{Non-Linear Problem Solver}
Now that we have the basic building blocks of a continuous-thrust trajectory, we can
leverage one of the many non-linear optimization packages to find solutions near to
a (proposed) trajectory. This trajectory need not be valid.
For the purposes of discussion in this Section, we will assume that the inner-loop
algorithm starts with just such a ''Mission Guess``, which represents the proposed
trajectory. However, we'll briefly mention what quantities are needed for this
input:
A Mission Guess object contains:
\begin{singlespacing}
\begin{itemize}
\item The spacecraft and thruster parameters for the mission
\item A launch date
\item A launch $v_\infty$ vector representing excess Earth velocity
\item For each phase of the mission:
\begin{itemize}
\item The planet that the spacecraft will encounter (either flyby or
complete the mission) at the end of the phase
\item The $v_{\infty,out}$ vector representing excess velocity at the
planetary flyby (or launch if phase 1) at the beginning of the phase
\item The $v_{\infty,in}$ vector representing excess velocity at the
planetary flyby (or completion of mission) at the end of the phase
\item The time of flight for the phase
\item The unit-thrust profile in a sun-fixed frame represented by a
series of vectors with each element ranging from 0 to 1.
\end{itemize}
\end{itemize}
\end{singlespacing}
From this information, as can be seen in Figure~\ref{nlp}, we can formulate the
mission in terms of a non-linear problem. Specifically, the Mission Guess object can
be represented as a vector, $x$, the propagation function as a function $F$, and the
constraints as another function $G$ such that $G(x) = \vec{0}$.
This is a format that we can apply directly to the IPOPT solver, which Julia (the
programming language used) can utilize via bindings supplied by the SNOW.jl
package\cite{snow}.
IPOPT also requires the derivatives of both the $F$ and $G$ functions in the
formulation above. Generally speaking, a project designer has two options for
determining derivatives. The first option is to analytically determine the
derivatives, guaranteeing accuracy, but requiring processor time if determined
algorithmically and sometimes simply impossible or mathematically very rigorous to
determine manually. The second option is to numerically derive the derivatives,
using a technique such as finite differencing. This limits the accuracy, but can be
faster than algorithmic symbolic manipulation and doesn't require rigorous manual
derivations.
However, the Julia language has an excellent interface to a new technique, known as
automatic differentiation\cite{RevelsLubinPapamarkou2016}. Automatic differentiation
takes a slightly different approach to numerical derivation. It takes advantage of
the fact that any algorithmic function, no matter how complicated, can be broken
down into a series of smaller arithmetic functions, down to the level of simple
arithmetic. Since all of these simple arithmetic functions have a known derivative,
we can define a new datatype that carries through the function both the float and a
second number representing the derivative. Then, by applying (to the derivative) the
chain rule for every minute arithmetic function derivative as that arithmetic
function is applied to the main float value, the derivative can be determined,
accurate to the machine precision of the float type being used, with a processing
equivalent of two function calls (this of course depends on the simplicity of the
chained derivatives compared to the function pieces themselves). Generally speaking
this is much faster than the three or more function calls necessary for accurate
finite differencing and removes the need for the designer to tweak the epsilon value
in order to achieve maximum precision.
\section{Outer Loop Implementation}
Now we have the tools in place for, given a potential ''mission guess`` in the
vicinity of a valid guess, attempting to find a valid and optimal solution in that
vicinity. Now what remains is to develop a routine for efficiently generating these
random mission guesses in such a way that thoroughly searches the entirety of the
solution space with enough granularity that all spaces are considered by the inner loop
solver.
Once that has been accomplished, all that remains is an ''outer loop`` that can generate
new guesses or perturb existing valid missions as needed in order to drill down into a
specific local minimum. In this thesis, that is accomplished through the use of a
Monotonic Basin Hopping algorithm. This will be described more thoroughly in
Section~\ref{mbh_subsection}, but Figure~\ref{mbh_flow} outlines the process steps of
the algorithm.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{flowcharts/mbh}
\caption{A flowchart visualizing the steps in the monotonic basin hopping
algorithm}
\label{mbh_flow}
\end{figure}
\subsection{Random Mission Generation}\label{random_gen_section}
At a basic level, the algorithm needs to produce a mission guess (represented by all
of the values described in Section~\ref{inner_loop_section}) that contains random
values within reasonable bounds in the space. This leaves a number of variables open
to for implementation. For instance, it remains to be determined which distribution
function to use for the random values over each of those variables, which bounds to
use, as well as the possibilities for any improvements to a purely random search.
Currently, the first value set for the mission guess is that of $n$, which is the
number of sub-trajectories that each arc will be broken into for the Sims-Flanagan
based propagator. For this implementation, that was chosen to be 20, based upon a
number of tests in which the calculation time for the propagation was compared
against the accuracy of a much higher $n$ value for some known thrust controls, such
as a simple spiral orbit trajectory. This value of 20 tends to perform well and
provide reasonable accuracy, without producing too many variables for the NLP
optimizer to control for (since the impulsive thrust at the center of each of the
sub-trajectories is a control variable). This leaves some room for future
improvements, as will be discussed in Section~\ref{improvement_section}.
The bounds for the launch date are provided by the user in the form of a launch
window, so the initial launch date is just chosen as a standard random value from a
uniform distribution within those bounds.
A unit launch direction is then also chosen as a 3-length vector of uniform random
numbers, then normalized. This unit vector is then multiplied by a uniform random
number between 0 and the square root of the maximum launch $C_3$ specified by the
user to generate an initial $\vec{v_\infty}$ vector at launch.
Next, the times of flight of each phase of the mission is then decided. Since launch
date has already been selected, the maximum time of flight can be calculated by
subtracting the launch date from the latest arrival date provided by the mission
designer. Then, each leg is chosen from a uniform distribution with bounds currently
set to a minimum flight time of 30 days and a maximum of 70\% of the maximum time of
flight. These leg flight times are then iteratively re-generated until the total
time of flight (represented by the sum of the leg flight times) is less than the
maximum time of flight. This allows for a lot of flexibility in the leg flight
times, but does tend toward more often producing longer missions, particularly for
missions with more flybys.
Then, the internal components for each phase are generated. It is at this step, that
the mission guess generator splits the outputs into two separate outputs. The first
is meant to be truly random, as is generally used as input for a monotonic basin
hopping algorithm. The second utilizes a Lambert's solver to determine the
appropriate hyperbolic velocities (both in and out) at each flyby to generate a
natural trajectory arc. For this Lambert's case, the mission guess is simply seeded
with zero thrust controls and outputted to the monotonic basin hopper. The intention
here is that if the time of flights are randomly chosen so as to produce a
trajectory that is possible with a control in the vicinity of a natural trajectory,
we want to be sure to find that trajectory. More detail on how this is handled is
available in Section~\ref{mbh_subsection}.
However, for the truly random mission guess, there are still the $v_\infty$ values
and the initial thrust guesses to generate. For each of the phases, the incoming
excess hyperbolic velocity is calculated in much the same way that the launch
velocity was calculated. However, instead of multiplying the randomly generate unit
direction vector by a random number between 0 and the square root of the maximum
launch $C_3$, bounds of 0 and 10 kilometers per second are used instead, to provide
realistic flyby values.
The outgoing excess hyperbolic velocity at infinity is then calculated by again
choosing a uniform random unit direction vector, then by multiplying this value by
the magnitude of the incoming $v_{\infty}$ since this is a constraint of a
non-powered flyby.
From these two velocity vectors the turning angle, and thus the periapsis of the
flyby, can then be calculated by the following equations:
\begin{align}
\delta &= \arccos \left( \frac{\vec{v}_{\infty,in} \cdot
\vec{v}_{\infty,out}}{|v_{\infty,in}| \cdot {|v_{\infty,out}}|} \right) \\
r_p &= \frac{\mu}{\vec{v}_{\infty,in} \cdot \vec{v}_{\infty,out}} \cdot \left(
\frac{1}{\sin(\delta/2)} - 1 \right)
\end{align}
If this radius of periapse is then found to be less than the minimum safe radius
(currently set to the radius of the planet plus 100 kilometers), then the process is
repeated with new random flyby velocities until a valid seed flyby is found. These
checks are also performed each time a mission is perturbed or generated by the nlp
solver.
The final requirement then, is the thrust controls, which are actually quite simple.
Since the thrust is defined as a 3-vector of values between -1 and 1 representing
some percentage of the full thrust producible by the spacecraft thrusters in that
direction, the initial thrust controls can then be generated as a $20 \times 3$
matrix of uniform random numbers within that bound.
\subsection{Monotonic Basin Hopping}\label{mbh_subsection}
Now that a generator has been developed for mission guesses, we can build the
monotonic basin hopping algorithm. Since the implementation of the MBH algorithm
used in this paper differs from the standard implementation, the standard version
won't be described here. Instead, the variation used in this paper, with some
performance improvements, will be considered.
The aim of a monotonic basin hopping algorithm is to provide an efficient method for
completely traversing a large search space and providing many seed values within the
space for an ''inner loop`` solver or optimizer. These solutions are then perturbed
slightly, in order to provide higher fidelity searching in the space near valid
solutions in order to fully explore the vicinity of discovered local minima. This
makes it an excellent algorithm for problems with a large search space, including
several clusters of local minima, such as this application.
The algorithm contains two loops, the size of each of which can be independently
modified (generally by specifying a ''patience value``, or number of loops to
perform, for each) to account for trade-offs between accuracy and performance depending on
mission needs and the unique qualities of a certain search space.
The first loop, the ''search loop``, first calls the random mission generator. This
generator produces two random missions as described in
Section~\ref{random_gen_section} that differ only in that one contains random flyby
velocities and control thrusts and the other contains Lambert's-solved flyby
velocities and zero control thrusts. For each of these guesses, the NLP solver is
called. If either of these mission guesses have converged onto a valid solution, the
lower loop, the ''drill loop`` is entered for the valid solution. After the
convergence checks and potentially drill loops are performed, if a valid solution
has been found, this solution is stored in an archive. If the solution found is
better than the current best solution in the archive (as determined by a
user-provided cost function of fuel usage, $C_3$ at launch, and $v-\infty$ at
arrival) then the new solution replaces the current best solution and the loop is
repeated. Taken by itself, the search loop should quickly generate enough random
mission guesses to find all ''basins`` or areas in the solution space with valid
trajectories, but never attempts to more thoroughly explore the space around valid
solutions within these basins.
The drill loop, then, is used for this purpose. For the first step of the drill
loop, the current solution is saved as the ''basin solution``. If it's better than
the current best, it also replaces the current best solution. Then, until the
stopping condition has been met (generally when the ''drill counter`` has reached
the ''drill patience`` value) the current solution is perturbed slightly by adding
or subtracting a small random value to the components of the mission.
The performance of this perturbation in terms of more quickly converging upon the
true minimum of that particular basin, as described in detail by
Englander\cite{englander2014tuning}, is highly dependent on the distribution
function used for producing these random perturbations. While the intuitive choice
of a simple Gaussian distribution would make sense to use, it has been found that a
long-tailed distribution, such as a Cauchy distribution or a Pareto distribution is
more robust in terms of well chose boundary conditions and initial seed solutions as
well as more performant in time required to converge upon the minimum for that basin.
Because of this, the perturbation used in this implementation follows a
bi-directional, long-tailed Pareto distribution generated by the following
probability density function:
\begin{equation}
1 +
\left[ \frac{s}{\epsilon} \right] \cdot
\left[ \frac{\alpha - 1}{\frac{\epsilon}{\epsilon + r}^{-\alpha}} \right]
\end{equation}
Where $s$ is a random array of signs (either plus one or minus one) with dimension
equal to the perturbed variable and bounds of -1 and 1, $r$ is a uniformly
distributed random array with dimension equal to the perturbed variable and bounds
of 0 and 1, $\epsilon$ is a small value (nominally set to $1e-10$), and $\alpha$ is
a tuning parameter to determine the size of the tails and width of the distribution
set to $1.01$, but easily tunable.
The perturbation function, then steps through each parameter of the mission,
generating a new mission guess with the parameters modified by the above Pareto
distribution. After this perturbation, the NLP solver is then called again to find
a valid solution in the vicinity of this new guess. If the solution is better than
the current basin solution, it replaces that value and the drill counter is reset to
zero. If it is better than the current total best, it replaces that value as well.
Otherwise, the drill counter increments and the process is repeated. Therefore, the
drill patience allows the mission designer to determine a maximum number of
iterations to perform without any improvements in a row before ending a given drill
loop. This process can be repeated essentially ''search patience`` number of times
in order to fully traverse all basins.
\chapter{Results Analysis} \label{results}
The algorithm described in this thesis is quite flexible in its design and could be used as
a tool for a mission designer on a variety of different mission types. However, to consider
a relatively simple but representative mission design objective, a sample mission to Saturn
was investigated.
\section{Mission Constraints}
The sample mission was defined to represent a general case for a near-future low-thrust
trajectory to Saturn. No constraints were placed on the flyby planets, but a number of
constraints were placed on the algorithm to represent a realistic mission scenario.
The first choice required by the application is one not necessarily designable to the
initial mission designer (though not necessarily fixed in the design either) and is that
of the spacecraft parameters. The application accepts as input a spacecraft object
containing: the dry mass of the craft, the fuel mass at launch, the number of onboard
thrusters, and the specific impulse, maximum thrust and duty cycle of each thruster.
For this mission, the spacecraft was chosen to have a dry mass of only 200 kilograms for
a fuel mass of 3300 kilograms. This was chosen in order to have an overall mass roughly
in the same zone as that of the Cassini spacecraft, which launched with 5712 kilograms
of total mass, with the fuel accounting for 2978 of those kilograms\cite{cassini}. The
dry mass of the craft was chosen to be extremely low in order to allow for a variety of
''successful`` missions in which the craft didn't run out of fuel. That way, the
delivered dry mass to Saturn could be thought of as a metric of success, without
discounting mission that may have delivered just under whatever more realistic dry mass
one might set, in case those missions are in the vicinity of actually valid missions.
The thruster was chosen to have a specific impulse of 3200 seconds, a maximum thrust of
250 millinewtons, and a 100\% duty cycle. This puts the thruster roughly in line with
having an array of three NSTAR ion thrusters, which were used on the Dawn and Deep Space
1 missions\cite{polk2001performance}.
Also of relevance to the mission were the maximum $C_3$ at launch and $v_\infty$ at
arrival values. In order to not exclude the possibility of a non-capture flyby mission,
it was decided to not include the arrival $v_\infty$ term in the cost function and,
because of this, the maximum value was set to be extremely high at 500 kilometers per
second, in order to fully explore the space. In practice, though, the algorithm only
looks at flybys below 10 kilometers per second in magnitude. The maximum launch $C_3$
energy was set conservatively to 200 kilometers per second squared. This is upper limit
is only possible, for the given start mass, using a heavy launch system such as the
SLS\cite{stough2021nasa} or the comparable SpaceX Starship, though at values below about
half of this maximum, it begins to become possible to use existing launch solutions.
Finally, the mission is meant to represent a near future mission. Therefore the launch
window was set to allow for a launch in any day in 2023 or 2024 and a maximum total time
of flight of 20 years. This is longer than most typical Saturn missions, but allows for
some creative trajectories for higher efficiency.
It should be noted that each of these trajectories was found using an $n$ value of 20 as
mentioned previously, but in post-processing, the trajectory was refined to utilize a
slightly higher fidelity model that uses 60 sub-trajectories per orbit. This serves to
provide better plots for display, higher fidelity analyses, as well as to highlight the
efficacy of the lower fidelity method. Orbits can be found quickly in the lower fidelity
model and easily refined later by re-running the NLP solver at a higher $n$ value.
Finally, the relevant values for the two selected missions are listed below for
reference:
\begin{table}[h!]
\begin{small}
\centering
\begin{tabular}{ | c c c c c c | }
\hline
\bfseries Flyby Selection &
\bfseries Launch Date &
\bfseries Mission Length &
\bfseries Launch $C_3$ &
\bfseries Arrival $V_\infty$ &
\bfseries Fuel Usage \\
& & (years) & $\left( \frac{km}{s} \right)^2$ & ($\frac{km}{s}$) & (kg) \\
\hline
EMS & 2024-06-27 & 7.9844 & 60.41025 & 5.816058 & 446.9227 \\
EMJS & 2023-11-08 & 14.1072 & 40.43862 & 3.477395 & 530.6683 \\
\hline
\end{tabular}
\end{small}
\caption{Comparison of the two most optimal trajectories}
\label{results_table}
\end{table}
\subsection{Cost Function}
Each mission optimization also allows for the definition of a cost function. This
cost function accepts as inputs all parameters of the mission, the maximum $C_3$ at
launch and the maximum excess hyperbolic velocity at arrival.
The cost function used for this mission first generated normalized values for fuel
usage and launch energy. The fuel usage number is determined by dividing the fuel
used by the mass at launch and the $C_3$ number is determined by dividing the $C_3$
at launch by the maximum allowed. These two numbers are then weighted, with the fuel
usage value getting a weight of three and the launch energy value getting a weight
of one. The values are summed and returned as the cost value.
\subsection{Flybys Analyzed}
Since the algorithm itself makes no decisions on the actual choice of flybys, that
leaves the mission designer to determine which flyby planets would make good
potential candidates. A mission designer can then re-run the algorithm for each of
these flyby plans and determine which optimized trajectories best fit the needs of
the mission.
For this particular mission scenario, the following flyby profiles were
investigated:
\begin{itemize}
\item EJS
\item EMJS
\item EMMJS
\item EMS
\item ES
\item EVMJS
\item EVMS
\item EVVJS
\end{itemize}
\section{Faster, Less Efficient Trajectory}
In order to showcase the flexibility of the optimization algorithm (and the chosen cost
function), two different missions were chosen to highlight. One of these missions is a
slower, more efficient trajectory more typical of common low-thrust trajectories. The
other is a faster trajectory, quite close to a natural trajectory, but utilizing more
launch energy to arrive at the planet.
It is the faster trajectory that we'll analyze first. Most interesting about this
particular trajectory is that it's actually quite efficient despite its speed, in
contrast to the usual dichotomy of low-thrust travel. The cost function used for this
analysis did not include the time of flight as a component of the overall cost, and yet
this trajectory still managed to be the lowest cost trajectory of all trajectories found
by the algorithm.
The mission begins in late June of 2024 and proceeds first to an initial gravity assist
with Mars after three and one half years to rendezvous in mid-December 2027.
Unfortunately, the launch energy required to effectively used the gravity assist with
Mars at this time is quite high. The $C_3$ value was found to be $60.4102$ kilometers
per second squared. However, for this phase, the thrusters are almost entirely turned
off, allowing for a nearly-natural trajectory to Mars rendezvous.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/EMS_plot}
\caption{Depictions of the faster Earth-Mars-Saturn trajectory found by the
algorithm to be most efficient; planetary ephemeris arcs are shown during the phase
in which the spacecraft approached them}
\label{ems}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/EMS_plot_noplanets}
\caption{Another depiction of the EMS trajectory, without the planetary ephemeris
arcs}
\label{ems_nop}
\end{figure}
The second and final leg of this trip exits the Mars flyby and, initially burns quite
heavily along the velocity vector in order to increase it's semi-major axis. After an
initial period of thrusting, though, the spacecraft effectively coasts with minor
adjustments until its rendezvous with Saturn just four and a half years later in June of
2032. The arrival $v_\infty$ is not particularly small, at $5.816058$ kilometers per
second, but this is to be expected as the arrival excess velocity was not considered as
a part of the cost function. If capture was not the final intention of the mission, this
may be of little concern. Otherwise, the low fuel usage of $446.92$ kilograms for a
$3500$ kilogram launch mass leaves much margin for a large impulsive thrust to enter
into a capture orbit at Saturn.
In this case the algorithm effectively realized that a higher-powered launch from
the Earth, then a natural coasting arc to Mars flyby would provide the spacecraft with
enough velocity that a short but efficient powered-arc to Saturn was possible with
effective thrusting. It also determined that the most effective way to achieve this
flyby was to increase orbital energy in the beginning of the arc, when increasing
the semi-major axis value is most efficient. All of these concepts are known to skilled
mission designers, but finding a trajectory that combined all of these concepts would
have required much time-consuming analysis of porkchop plots and combinations of
mission-design techniques. This approach is far more automatic than the traditional
approach.
The final quality to note with this trajectory is that it shows a tangible benefit of
the addition of the Lambert's solver in the monotonic basin hopping algorithm. Since the
initial arc is almost entirely natural, with very little thrust, it is extremely likely
that the trajectory was found in the Lambert's Solution half of the MBH algorithm
procedure.
\section{Slower, More Efficient Trajectory}
Next we'll analyze the nominally second-best trajectory. While the cost function
provided to the algorithm can be a useful tool for narrowing down the field of search
results, it can also be very useful to explore options that may or may not be of similar
"efficiency" in terms of the cost function, but beneficial for other reasons. By
outputting many different optimal trajectories, the MBH algorithm can allow for this
type of mission design flexibility.
To highlight the flexibility, a second trajectory has been selected, which has nearly
equal value by the cost function, coming in slightly lower. However, this trajectory
appears to offer some benefits to the mission designer who would like to capture into
the gravitational field of Saturn or minimize launch energy requirements, perhaps for a
smaller mission, at the expense of increased speed.
The first leg of this three-leg trajectory is quite similar to the first leg of the
previous trajectory. However, this time the launch energy is considerably lower, with a
$C_3$ value of only $40.4386$ kilometer per second squared. Rather than employ an almost
entirely natural coasting arc to Mars, however, this trajectory performs some thrusting
at about the apoapsis point of its orbit in order to raise the periapsis enough to
rendezvous at roughly the same incidence angle in Mars' orbit, but one revolution ahead.
In this case, the launch was a bit earlier, occurring in November of 2023, with the Mars
flyby occurring in mid-April of 2026. This will prove to be helpful in comparison with
the other result, as this mission profile is much longer.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/EMJS_plot}
\caption{Depictions of the slower Earth-Mars-Jupiter-Saturn trajectory found by the
algorithm to be the second most efficient; planetary ephemeris arcs are shown during
the phase in which the spacecraft approached them}
\label{emjs}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/EMJS_plot_noplanets}
\caption{Another depiction of the EMJS trajectory, without the planetary ephemeris
arcs}
\label{emjs_nop}
\end{figure}
The second phase of this trajectory also functions quite similarly to the second phase
of the previous trajectory. In this case, there is a little bit more thrusting necessary
simply for steering to the Jupiter flyby than was necessary for Saturn rendezvous in the
previous trajectory. However, most of this thrusting is for orbit raising in the
beginning of the phase, very similarly to the previous result. In this trajectory, the
Jupiter flyby occurs late July of 2029.
Finally, this mission also has a third phase. The Jupiter flyby provides quite a strong
$\Delta V$ for the spacecraft, allowing the following phase to largely be a coasting arc
to Saturn almost one revolution later. Because of this long coasting period, the mission
length increases considerably during this leg, arriving at Saturn in December of 2037,
over 8 years after the Jupiter flyby.
However, there are many advantages to this approach relative to the other trajectory.
While the fuel use is also slightly higher at $530.668$ kilograms, plenty of payload
mass is still capable of delivery into the vicinity of Saturn. Also, it should be noted
that the incoming excess hyperbolic velocity at arrival to Saturn is significantly
lower, at only $3.4774$ kilometers per second, meaning that less of the delivered
payload mass would need to be taken up by impulsive thrusters and fuel for Saturn orbit
capture, should the mission designer desire this.
Also, as mentioned before, the launch energy requirements are quite a bit lower. Having
a second mission trajectory capable of launching on a smaller vehicle could be valuable
to a mission designer presenting possibilities. According to an analysis of the Delta IV
and Atlas V launch configurations\cite{c3capabilities} in Figure~\ref{c3}, this
reduction of $C_3$ from around 60 to around 40 brings the sample mission to just within
range of both the Delta IV Heavy and the Atlas V in its largest configuration, neither
of which are possible for the other result, meaning that either different launch
vehicles must be found or mission specifications must change.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig/c3}
\caption{Plot of Delta IV and Atlas V launch vehicle capabilities as they relate to
payload mass}
\label{c3}
\end{figure}
\chapter{Conclusion} \label{conclusion}
\section{Overview of Results}
A mission designer's job is quite a difficult one and it can be very useful to have
tools to automate some of the more complex analysis. This paper attempted to explore one
such tool, meant for automating the initial analysis and discovery of useful
interplanetary, low-thrust trajectories including the difficult task of optimizing the
flyby parameters. This makes the mission designer's job significantly simpler in that
they can simply explore a number of different flyby selection options in order to get a
good understanding of the mission scope and search space for a given spacecraft, launch
window, and target.
In performing this examination, two results were selected for further analysis. These
results are outlined in Table~\ref{results_table}. As can be seen in the table, both
resulting trajectories have trade-offs in mission length, launch energy, fuel usage, and
more. However, both results should be considered very useful low-thrust trajectories in
comparison to other missions that have launched on similar interplanetary trajectories,
using both impulsive and low-thrust arcs with planetary flybys. Each of these missions
should be feasible or nearly feasible (feasible with some modifications) using existing
launch vehicle and certainly even larger missions should be reasonable with advances in
launch capabilities currently being explored.
\section{Recommendations for Future Work}\label{improvement_section}
In the course of producing this algorithm, a large number of improvement possibilities
were noted. This work was based, in large part, on the work of Jacob Englander in a
number of papers\cite{englander2014tuning}\cite{englander2017automated}
\cite{englander2012automated} in which he explored the hybrid optimal control problem of
multi-objective low-thrust interplanetary trajectories.
In light of this, there are a number of additional approaches that Englander took in
preparing his algorithm that were not implemented here in favor of reducing complexity
and time constraints. For instance, many of the Englander papers explore the concept of
an outer loop that utilizes a genetic algorithm to compare many different flyby planet
choice against each other. This would create a truly automated approach to low-thrust
interplanetary mission planning. However, a requirement of this approach is that the
monotonic basin hopping algorithm algorithm must converge on optimal solutions very
quickly. Englander typically runs his for 20 minutes each for evolutionary fitness
evaluation, which is over an order of magnitude faster than the implementation in this
paper to achieve satisfactory results.
Further improvements to performance stem from the field of computer science. An
evolutionary algorithm such as the one proposed by Englander would benefit from high
levels of parallelization. Therefore, it would be worth considering a GPU-accelerated or
even cluster-computing capable implementation of the monotonic basin hopping algorithm.
These cluster computing concepts scale very well with new cloud infrastructures such as
that provided by AWS or DigitalOcean.
Finally, the monotonic basin hopping algorithm as currently written provides no
guarantees of actual global optimization. Generally optimization is achieved by running
the algorithm until it fails to produce newer, better trajectories for a sufficiently
long time. But it would be worth investigating the robustness of the NLP solver as well
as the robustness of the MBH algorithm basin drilling procedures in order to quantify
the search granularity needed to completely traverse the search space. From this
information, a new MBH algorithm could be written that is guaranteed to explore the
entire space.
\bibliographystyle{plain}
\nocite{*}
\bibliography{thesis}
\appendix
\end{document}