She did. Now I'm done!

Connor
2022-03-15 22:20:45 -06:00
parent d00b977581
commit 298eb38ff1
7 changed files with 198 additions and 299 deletions


@@ -6,10 +6,11 @@
highly non-linear, unpredictable systems such as this. The field that developed to
approach this problem is known as Non-Linear Programming (NLP) Optimization.
A Non-Linear Programming Problem involves finding a solution that optimizes a function
$f(\vec{x})$, subject to constraints $\vec{g}(\vec{x}) \le 0$ and $\vec{h}(\vec{x}) = 0$,
where $n$ is a positive integer, $\vec{x}$ is a vector in $R^n$, $\vec{g}$ and $\vec{h}$ can be
vector-valued functions of any size, and at least one of $f$, $\vec{g}$, and $\vec{h}$ must be
non-linear.
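As a small illustrative example (not drawn from the trajectory problem itself), one could
seek to minimize
\begin{equation}
f(\vec{x}) = (x_1 - 1)^2 + (x_2 - 2)^2
\end{equation}
\noindent
subject to the single inequality constraint
\begin{equation}
g(\vec{x}) = x_1^2 + x_2^2 - 4 \le 0
\end{equation}
\noindent
Both $f$ and $g$ are non-linear here, so this qualifies as an NLP even though it contains
no equality constraints.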
There are, however, two categories of approaches to solving an NLP. The first category,
indirect methods, involves declaring a set of necessary and/or sufficient conditions for
@@ -20,10 +21,10 @@
The other category is the direct methods. In a direct optimization problem, the cost
function itself provides a value that an iterative numerical optimizer can measure
itself against. The optimal solution is then found by varying the inputs $\vec{x}$ until
the cost function is reduced to a minimum value, often identified through its derivative
(Jacobian). A number of tools have been developed to formulate NLPs for optimization via
this direct method in the general case.
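Purely as an illustration of this workflow, the small NLP above could be handed to an
off-the-shelf direct solver. The sketch below uses SciPy's \texttt{minimize} routine with
the SLSQP algorithm (not the solver used in this work, but the calling pattern is
representative):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Cost function and inequality constraint from the small example above.
def cost(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def ineq(x):
    # SciPy expects inequality constraints as c(x) >= 0,
    # so the g(x) <= 0 convention is negated here.
    return -(x[0]**2 + x[1]**2 - 4.0)

result = minimize(cost, x0=np.array([0.0, 0.0]), method="SLSQP",
                  constraints=[{"type": "ineq", "fun": ineq}])
print(result.x)  # roughly [0.894, 1.789], on the constraint boundary
\end{verbatim}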
Both of these methods have been applied to the problem of low-thrust interplanetary
trajectory optimization \cite{Casalino2007IndirectOM} to find local optima over
@@ -40,7 +41,7 @@
Therefore, a direct optimization method was leveraged by transcribing the problem into
an NLP and using IPOPT to find the local minima.
\subsection{Non-Linear Solvers}
One of the most common packages for the optimization of NLP problems is
SNOPT \cite{gill2005snopt}, which is a proprietary package written primarily using a
@@ -63,7 +64,7 @@
libraries that port these are quite modular in the sense that multiple algorithms can be
tested without changing much source code.
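As a sketch of what that modularity can look like in practice (reusing the cost and
constraint definitions from the earlier SciPy example, which again only stands in for the
solver libraries discussed here), the algorithm can be swapped without touching the
problem definition:
\begin{verbatim}
# The problem definition stays fixed; only the algorithm name changes.
for algorithm in ("SLSQP", "trust-constr", "COBYLA"):
    result = minimize(cost, x0=np.array([0.0, 0.0]), method=algorithm,
                      constraints=[{"type": "ineq", "fun": ineq}])
    print(algorithm, result.x)
\end{verbatim}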
\subsection{Interior Point Linesearch Method}
As mentioned above, this project utilized IPOPT, which leverages an Interior Point
Linesearch method. A linesearch algorithm is one which attempts to find the optimum
@@ -74,7 +75,7 @@
step the initial guess, now labeled $x_{k+1}$ after the addition of the ``step''
vector, and iterates this process until predefined termination conditions are met.
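In the standard notation for linesearch methods (generic notation, not tied to IPOPT's
particular implementation), each iteration takes a step of the form
\begin{equation}
x_{k+1} = x_k + \alpha_k p_k
\end{equation}
\noindent
where $p_k$ is a search direction (for example, a Newton or quasi-Newton descent
direction) and $\alpha_k$ is a scalar step length chosen so that the new iterate makes
sufficient progress toward the optimum.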
\subsection{Shooting Schemes for Solving a Two-Point Boundary Value Problem}
One straightforward approach to trajectory corrections is a single shooting
algorithm, which propagates a state, given some control variables, forward in time to
@@ -82,31 +83,22 @@
iterative process, using the correction scheme, until the target state and the
propagated state match.
As an example, we can consider the one-dimensional Two-Point Boundary Value Problem
(TPBVP) defined by:
\begin{equation}
y''(t) = f(t, y(t), y'(t)), \quad y(t_0) = y_0, \quad y(t_f) = y_f
\end{equation}
\noindent
We can then redefine the problem as an initial-value problem:
\begin{equation}
y''(t) = f(t, y(t), y'(t)), \quad y(t_0) = y_0, \quad y'(t_0) = \dot{y}_0
\end{equation}
\noindent
Let $y(t, \dot{y}_0)$ denote the solution to that problem for a given guess of the
initial derivative $\dot{y}_0$. If $y(t_f, \dot{y}_0) = y_f$, then the solution to the
initial-value problem is also a solution to the TPBVP.
Therefore, we can use a root-finding algorithm, such as the bisection method,
Newton's method, or even Laguerre's method, to find a root of:
\begin{equation}
F(\dot{y}_0) = y(t_f, \dot{y}_0) - y_f
\end{equation}
\noindent
The root, labeled $\dot{y}_0^*$, then yields a solution to the initial-value problem,
$y(t, \dot{y}_0^*)$, which is also a solution to the TPBVP. This technique for solving
a Two-Point Boundary Value Problem can be visualized in Figure~\ref{single_shoot_fig}.
@@ -114,7 +106,7 @@
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/single_shoot}
\caption{Single shooting over a trajectory arc}
\label{single_shoot_fig}
\end{figure}
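A minimal numerical sketch of this single shooting procedure, applied to an arbitrary
illustrative ODE rather than the spacecraft dynamics used elsewhere in this work, can be
assembled from standard SciPy routines (here Brent's method plays the role of the
bracketing root-finder):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Illustrative TPBVP: y'' = -y, y(0) = 0, y(pi/2) = 1 (true solution y = sin(t)).
t0, tf, y0, yf = 0.0, np.pi / 2.0, 0.0, 1.0

def propagate(ydot0):
    # Integrate the equivalent IVP for a guessed initial derivative.
    def rhs(t, s):
        return [s[1], -s[0]]              # s = [y, y']
    sol = solve_ivp(rhs, (t0, tf), [y0, ydot0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]                   # y(tf) for this guess

def shooting_error(ydot0):
    # F(ydot0) = y(tf, ydot0) - yf; its root solves the TPBVP.
    return propagate(ydot0) - yf

root = brentq(shooting_error, 0.0, 5.0)   # bracketing root-finder
print(root)                               # approximately 1.0, i.e. y'(0) = 1
\end{verbatim}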
@@ -133,8 +125,8 @@
each of these points we can then define a separate control, which may include the
states themselves. The end state of each arc and the beginning state of the next
must then be equal for a valid arc (with the exception of velocity discontinuities
if allowed for maneuvers or gravity assists at that point), and the final state must
match the target final state.
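In one common notation (the symbols here are illustrative rather than taken from the
figure), the matching condition between the end of arc $i$ and the start of arc $i+1$
can be written as a defect constraint that the optimizer drives to zero:
\begin{equation}
\vec{x}_{error,i} = \vec{x}_i(t_{i+1}) - \vec{x}_{i+1}(t_{i+1}) = \vec{0}
\end{equation}
\noindent
where $\vec{x}_i(t_{i+1})$ is the state propagated to the end of arc $i$ and
$\vec{x}_{i+1}(t_{i+1})$ is the state defining the beginning of arc $i+1$.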
\begin{figure}[H]
\centering
@@ -144,7 +136,7 @@
\end{figure}
In this example, it can be seen that there are now more constraints (places where
the states need to match up, creating an $\vec{x}_{error}$ term) as well as control
variables (the $\Delta V$ terms in the figure). This technique lends itself
very well to low-thrust arcs and, in fact, Sims-Flanagan Transcribed low-thrust arcs
in particular, because there actually are control thrusts to be optimized at a