diff --git a/LaTeX/approach.tex b/LaTeX/approach.tex index bc094c4..6a8d7b7 100644 --- a/LaTeX/approach.tex +++ b/LaTeX/approach.tex @@ -1,47 +1,50 @@ \chapter{Algorithm Overview} \label{algorithm} - In this section, we will review the actual execution of the algorithm developed. As an - overview, the routine was developed to enable the determination of an optimized spacecraft - trajectory from the selection of some very basic mission parameters. Those parameters + This thesis will attempt to develop an algorithm for the preliminary analysis of feasibility in + designing a low-thrust interplanetary mission to an outer planet by leveraging a monotonic basin + hopping algorithm. In this section, we will review the actual execution of the algorithm + developed. As an overview, the routine was designed to enable the determination of an optimized + spacecraft trajectory from the selection of some very basic mission parameters. Those parameters include: \begin{itemize} - \item Spacecraft dry mass - \item Thruster Specific Impulse - \item Thruster Maximum Thrusting Force + \setlength\itemsep{-0.5em} + \item Spacecraft dry mass in kilograms + \item Thruster Specific Impulse in seconds + \item Thruster Maximum Thrusting Force in Newtons \item Thruster Duty Cycle Percentage - \item Number of Thruster on Spacecraft - \item Total Starting Weight of the Spacecraft - \item A Maximum Acceptable $V_\infty$ at arrival and $C_3$ at launch - \item The Launch Window Timing and the Latest Arrival + \item Number of Thrusters on Spacecraft + \item Total starting mass of the Spacecraft in kilograms + \item A Maximum Acceptable $V_\infty$ at arrival in kilometers per second + \item A Maximum Acceptable $C_3$ at launch in kilometers per second squared + \item The Launch Window Boundaries + \item The Latest Arrival Date \item A cost function relating the mass usage, $v_\infty$ at arrival, and $C_3$ at launch to a cost \item A list of flyby planets starting with Earth and ending with the destination \end{itemize} - Which allows for extremely automated optimization of the trajectory, while still providing - the mission designer with the flexibility to choose the particular flyby planets to - investigate. + Which allows for an automated approach to optimization of the trajectory, while still providing + the mission designer with the flexibility to choose the particular flyby planets to investigate. - This is achieved via an optimal control problem in which the ``inner loop'' is a - non-linear programming problem to determine the optimal low-thrust control law and flyby - parameters given a suitable initial guess. Then an ``outer loop'' monotonic basin hopping - algorithm is used to traverse the search space and more carefully optimize the solutions - found by the inner loop. + This is achieved via an optimal control problem in which the ``inner loop'' involves solving a + TPBVP to find the optimal solution given a suitable initial guess. Then an ``outer loop'' + monotonic basin hopping algorithm is used to traverse the search space and determine the global + optima by repeated use of control perturbation and the inner loop. \section{Trajectory Composition} In this thesis, a specific nomenclature will be adopted to define the stages of an - interplanetary mission in order to standardize the discussion about which aspects of the - software affect which phases of the mission. + interplanetary mission in order to standardize the discussion about which aspects affect + which phases of the mission. 
- Overall, a mission is considered to be the entire overall trajectory. In the context of - this software procedure, a mission is taken to always begin at the Earth, with some - initial launch C3 intended to be provided by an external launch vehicle. This C3 is not - fully specified by the mission designer, but instead is optimized as a part of the - overall cost function (and normalized by a designer-specified maximum allowable value). + Overall, an end-to-end trajectory is considered to be the entire overall trajectory. In this + context a trajectory begins at the Earth, with some initial launch C3 intended to be + provided by an external launch vehicle. This C3 is not fully specified by the trajectory + designer, but instead can be considered a part of the overall cost function for optimization + of the Two-Point Boundary Value Problem. - This overall mission can then be broken down into a variable number of ``phases'' + This overall trajectory can then be broken down into a variable number of ``phases'' defined as beginning at one planetary body with some excess hyperbolic velocity and ending at another. The first phase of the mission is from the Earth to the first flyby planet. The final phase is from the last flyby planet to the planet of interest. @@ -59,11 +62,11 @@ minimum altitude above the surface or atmosphere of the flyby planet. \end{enumerate} - These conditions then effectively stitch the separate mission phases into a single - coherent mission, allowing for the optimization of both individual phases and the entire - mission as a whole. This nomenclature is similar to the nomenclature adopted by Jacob - Englander in his Hybrid Optimal Control Problem paper, but does not allow for missions - with multiple targets, simplifying the optimization. + These conditions then effectively stitch the separate phases into a single coherent mission, + allowing for the optimization of both individual phases and the entire trajectory as a whole. + This nomenclature is similar to the nomenclature adopted by Jacob Englander in his Hybrid + Optimal Control Problem paper, but does not allow for trajectories with multiple targets, + simplifying the optimization. \section{Inner Loop Implementation}\label{inner_loop_section} @@ -77,15 +80,36 @@ into an NLP, but there are essentially three primary routines involved in the inner loop. A given state is propagated forward using the LaGuerre-Conway Kepler solution algorithm, which itself is used to generate powered trajectory arcs via the - Sims-Flanagan transcribed propagator. Finally, these powered arcs are connected via a - multiple-shooting non-linear optimization problem. The trajectories describing each - phase complete one ``Mission Guess'' which is fed to the non-linear solver to generate - one valid trajectory within the vicinity of the original Mission Guess. + Sims-Flanagan transcribed propagator. Finally, these powered arcs are connected using a + multiple-shooting approach driven optimization. The trajectories describing each + phase complete one ``Guess'' which is fed to the non-linear solver to generate + one valid trajectory within the vicinity of the original Guess. + + In this formulation the cost function $F$ is a user provided function of the input Guess. + The constraint function $G$ defines the following conditions that must be met: + + \begin{itemize} + \item For every phase other than the final: + \begin{itemize} + \item The minimum periapsis of the hyperbolic flyby arc must be above some + user-specified minimum safe altitude. 
+ \item The magnitude of the incoming hyperbolic velocity must match the magnitude + of the outgoing hyperbolic velocity. + \item The spacecraft position must match the planet's position (within bounds) + at the end of the phase. + \end{itemize} + \item For the final phase: + \begin{itemize} + \item The spacecraft position must match the planet's position (within bounds) + at the end of the phase. + \item The final mass must be greater than the dry mass of the craft. + \end{itemize} + \end{itemize} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{flowcharts/nlp} - \caption{A flowchart of the Non-Linear Problem Solving Formulation} + \caption{A flowchart of the TPBVP Solution Approach} \label{nlp} \end{figure} @@ -141,7 +165,9 @@ For example, from the orbital parameters of a certain state, the orbital period can be determined. If the system is then propagated for an integer multiple of the orbit period, the state should remain exactly the same as it began. In - Figure~\ref{laguerre_plot} an example of such an orbit is provided. + Figure~\ref{laguerre_plot} an example of such an orbit is provided in which the final + state was tested against the initial state and found to be equal to the original to + within $1 \times 10^{-12}$ in magnitude. \begin{figure}[H] \centering @@ -150,7 +176,6 @@ approach to solving Kepler's Problem} \label{laguerre_plot} \end{figure} - % TODO: Consider adding a paragraph about the improvements in processor time \subsection{Sims-Flanagan Propagator} @@ -179,10 +204,14 @@ \label{sft_plot} \end{figure} - Figure~\ref{sft_plot} shows that the Sims-Flanagan transcription model can be used - to effectively model these types of orbit trajectories. In fact, the Sims-Flanagan - model is capable of modeling nearly any low-thrust trajectory with a sufficiently - high number of $n$ samples. + Figure~\ref{sft_plot} shows that the Sims-Flanagan transcription model can be used to + effectively model these types of orbit trajectories by plotting a very common ``spiral'' + trajectory in which the thrust is always on and the thrust direction is always in line + with the direction of the velocity vector. As can be seen, this produces a spiraling + trajectory in which the distance between spirals becomes increasingly larger as the + trajectory achieves higher and higher distances from the Sun. In fact, the Sims-Flanagan + model is capable of modeling nearly any low-thrust trajectory with a sufficiently high + number of $n$ samples. Finally, it should be noted that, in any proper propagation scheme, mass should be decremented proportionally to the thrust used. The Sims-Flanagan Transcription @@ -199,7 +228,8 @@ Where $\Delta m$ is the fuel used in the sub-trajectory, $\Delta t$ is the time of flight of the sub-trajectory, and $g_0$ is the standard gravity at the surface of - Earth. + Earth. From knowledge of the mass flow rate, we can then decrement the mass + appropriately based on the magnitude of the thrust vector at each point. \subsection{Non-Linear Problem Solver} @@ -208,11 +238,11 @@ a (proposed) trajectory. This trajectory need not be valid. For the purposes of discussion in this Section, we will assume that the inner-loop - algorithm starts with just such a ''Mission Guess``, which represents the proposed + algorithm starts with just such a ''Guess``, which represents the proposed trajectory. 
However, we'll briefly mention what quantities are needed for this input: - A Mission Guess object contains: + A Guess object contains: \begin{singlespacing} \begin{itemize} \item The spacecraft and thruster parameters for the mission @@ -233,24 +263,25 @@ \end{itemize} \end{singlespacing} - From this information, as can be seen in Figure~\ref{nlp}, we can formulate the - mission in terms of a non-linear problem. Specifically, the Mission Guess object can - be represented as a vector, $x$, the propagation function as a function $F$, and the - constraints as another function $G$ such that $G(x) = \vec{0}$. + From this information, as can be seen in Figure~\ref{nlp}, we can formulate the mission + in terms of a non-linear programming problem. Specifically, the variables describing the + trajectory contained within the Guess object can be represented as an input vector, + $\vec{x}$, the cost function produced by an entire trajectory propagation as $F$, and + the constraints that the trajectory must satisfy as another function $\vec{G}$ such that + $\vec{G}(\vec{x}) = \vec{0}$. This is a format that we can apply directly to the IPOPT solver, which Julia (the programming language used) can utilize via bindings supplied by the SNOW.jl package\cite{snow}. - IPOPT also requires the derivatives of both the $F$ and $G$ functions in the - formulation above. Generally speaking, a project designer has two options for - determining derivatives. The first option is to analytically determine the - derivatives, guaranteeing accuracy, but requiring processor time if determined - algorithmically and sometimes simply impossible or mathematically very rigorous to - determine manually. The second option is to numerically derive the derivatives, - using a technique such as finite differencing. This limits the accuracy, but can be - faster than algorithmic symbolic manipulation and doesn't require rigorous manual - derivations. + IPOPT also requires the derivatives of both the $F$ and $G$ functions in the formulation + above with respect to the input $\vec{x}$ vector. There are two options for determining + derivatives. The first option is to analytically determine the derivatives, guaranteeing + accuracy, but requiring processor time if determined algorithmically and sometimes + simply impossible or mathematically very rigorous to determine manually. The second + option is to numerically approximate the derivatives, using a technique such as finite + differencing. This limits the accuracy, but can be faster than algorithmic symbolic + manipulation and doesn't require rigorous manual derivations. However, the Julia language has an excellent interface to a new technique, known as automatic differentiation\cite{RevelsLubinPapamarkou2016}. Automatic differentiation @@ -271,10 +302,10 @@ \section{Outer Loop Implementation} - Now we have the tools in place for, given a potential ''mission guess`` in the + Now we have the tools in place for, given a potential ''guess`` in the vicinity of a valid guess, attempting to find a valid and optimal solution in that vicinity. Now what remains is to develop a routine for efficiently generating these - random mission guesses in such a way that thoroughly searches the entirety of the + random guesses in such a way that thoroughly searches the entirety of the solution space with enough granularity that all spaces are considered by the inner loop solver. 
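    Before turning to the generation of these guesses, a brief illustration of the automatic
    differentiation technique mentioned above may be helpful. The snippet below is a minimal,
    self-contained sketch rather than the solver code used in this work: it differentiates a toy
    stand-in for the cost function with ForwardDiff.jl (the package described by the citation
    above) to show how exact gradients are obtained without manual derivation or finite
    differencing.

\begin{verbatim}
using ForwardDiff

# Toy stand-in for the scalar cost F(x); the real F is the full trajectory
# propagation, but the differentiation call is identical in form.
toy_cost(x) = (x[1] - 1.0)^2 + 10.0 * x[2]^2 + x[2] * x[3]

x0 = [0.5, -0.2, 3.0]

g = ForwardDiff.gradient(toy_cost, x0)   # exact gradient via dual numbers
H = ForwardDiff.hessian(toy_cost, x0)    # exact second derivatives
\end{verbatim}

    \noindent
    The same pattern extends to the vector-valued constraint function via
    \texttt{ForwardDiff.jacobian}.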
@@ -293,14 +324,14 @@ \label{mbh_flow} \end{figure} - \subsection{Random Mission Generation}\label{random_gen_section} + \subsection{Random Trajectory Generation}\label{random_gen_section} - At a basic level, the algorithm needs to produce a mission guess (represented by all - of the values described in Section~\ref{inner_loop_section}) that contains random - values within reasonable bounds in the space. This leaves a number of variables open - to for implementation. For instance, it remains to be determined which distribution - function to use for the random values over each of those variables, which bounds to - use, as well as the possibilities for any improvements to a purely random search. + At a basic level, the algorithm needs to produce a guess (represented by all of the + values described in Section~\ref{inner_loop_section}) that contains random values within + reasonable bounds in the space. However, that still leaves the determination of which + distribution function to use for the random values over each of those variables, which + bounds to use, as well as the possibilities for any improvements to a purely random + search. Currently, the first value set for the mission guess is that of $n$, which is the number of sub-trajectories that each arc will be broken into for the Sims-Flanagan @@ -351,22 +382,20 @@ velocity was calculated. However, instead of multiplying the randomly generate unit direction vector by a random number between 0 and the square root of the maximum launch $C_3$, bounds of 0 and 10 kilometers per second are used instead, to provide - realistic flyby values. + realistic flyby values\cite{englander2014tuning}. The outgoing excess hyperbolic velocity at infinity is then calculated by again choosing a uniform random unit direction vector, then by multiplying this value by the magnitude of the incoming $v_{\infty}$ since this is a constraint of a non-powered flyby. - From these two velocity vectors the turning angle, and thus the periapsis of the - flyby, can then be calculated by the following equations: + From these two velocity vectors the turning angle, and thus the periapsis of the flyby, + can then be calculated by Equation~\ref{turning_angle_eq} and the following equation: - \begin{align} - \delta &= \arccos \left( \frac{\vec{v}_{\infty,in} \cdot - \vec{v}_{\infty,out}}{|v_{\infty,in}| \cdot {|v_{\infty,out}}|} \right) \\ - r_p &= \frac{\mu}{\vec{v}_{\infty,in} \cdot \vec{v}_{\infty,out}} \cdot \left( + \begin{equation} + r_p = \frac{\mu}{\vec{v}_{\infty,in} \cdot \vec{v}_{\infty,out}} \cdot \left( \frac{1}{\sin(\delta/2)} - 1 \right) - \end{align} + \end{equation} If this radius of periapse is then found to be less than the minimum safe radius (currently set to the radius of the planet plus 100 kilometers), then the process is @@ -375,17 +404,96 @@ solver. The final requirement then, is the thrust controls, which are actually quite simple. - Since the thrust is defined as a 3-vector of values between -1 and 1 representing - some percentage of the full thrust producible by the spacecraft thrusters in that - direction, the initial thrust controls can then be generated as a $20 \times 3$ - matrix of uniform random numbers within that bound. + Since the thrust is defined as a 3-vector of values between -1 and 1 representing some + percentage of the full thrust producible by the spacecraft thrusters in that direction, + the initial thrust controls can then be generated as a $20 \times 3$ matrix of uniform + random numbers within that bound. 
The number 20 was chosen as the number of
+        subtrajectories per phase to provide reasonable fidelity for allowing phases to run
+        longer (on the order of 2 or 3 orbits) without sacrificing speed per
+        Englander\cite{englander2012automated}. One possible improvement would be to choose
+        the number more intelligently based on the expected number of revolutions.

    \subsection{Monotonic Basin Hopping}\label{mbh_subsection}

-        Now that a generator has been developed for mission guesses, we can build the
+        Now that a generator has been developed for guesses, we can build the
         monotonic basin hopping algorithm. Since the implementation of the MBH algorithm
         used in this paper differs from the standard implementation, the standard version
         won't be described here. Instead, the variation used in this paper, with some
         performance improvements, will be considered.

+        The aim of a monotonic basin hopping algorithm is to provide an efficient method for
+        completely traversing a large search space and providing many seed values within the
+        space for an ``inner loop'' solver or optimizer. These solutions are then perturbed
+        slightly in order to search with higher fidelity near valid solutions and to fully
+        explore the vicinity of discovered local minima. This makes it an excellent algorithm
+        for problems whose large search space contains several clusters of local minima, such
+        as this application.
+
+        The algorithm contains two loops, the size of each of which can be independently
+        modified (generally by specifying a ``patience value'', or number of loops to
+        perform, for each) to account for trade-offs between accuracy and performance
+        depending on mission needs and the unique qualities of a certain search space.
+
+        The first loop, the ``search loop'', first calls the random guess generator. This
+        generator produces two random guesses as described in
+        Section~\ref{random_gen_section} that differ only in that one contains random flyby
+        velocities and control thrusts and the other contains Lambert-solved flyby
+        velocities and zero control thrusts. For each of these guesses, the NLP solver is
+        called. If either of these guesses has converged onto a valid solution, the
+        lower loop, the ``drill loop'', is entered for that solution. After the
+        convergence checks and any drill loops are performed, if a valid solution
+        has been found, this solution is stored in an archive. If the solution found is
+        better than the current best solution in the archive (as determined by a
+        user-provided cost function of fuel usage, $C_3$ at launch, and $v_\infty$ at
+        arrival) then the new solution replaces the current best solution and the loop is
+        repeated. Taken by itself, the search loop should quickly generate enough random
+        guesses to find all ``basins'', or areas in the solution space with valid
+        trajectories, but it never attempts to more thoroughly explore the space around
+        valid solutions within these basins.
+
+        The drill loop, then, is used for this purpose. For the first step of the drill
+        loop, the current solution is saved as the ``basin solution''. If it's better than
+        the current best, it also replaces the current best solution. Then, until the
+        stopping condition has been met (generally when the ``drill counter'' has reached
+        the ``drill patience'' value) the current solution is perturbed slightly by adding
+        or subtracting a small random value to the components of the mission.
+ + The performance of this perturbation in terms of more quickly converging upon the + true minimum of that particular basin, as described in detail by + Englander\cite{englander2014tuning}, is highly dependent on the distribution + function used for producing these random perturbations. While the intuitive choice + of a simple Gaussian distribution would make sense to use, it has been found that a + long-tailed distribution, such as a Cauchy distribution or a Pareto distribution is + more robust in terms of well chose boundary conditions and initial seed solutions as + well as more performant in time required to converge upon the minimum for that basin. + + Because of this, the perturbation used in this implementation follows a + bi-directional, long-tailed Pareto distribution generated by the following + probability density function\cite{englander2014tuning}: + + \begin{equation} + 1 + + \left[ \frac{s}{\epsilon} \right] \cdot + \left[ \frac{\alpha - 1}{\frac{\epsilon}{\epsilon + r}^{-\alpha}} \right] + \end{equation} + + \noindent + Where $s$ is a random array of signs (either plus one or minus one) with dimension + equal to the perturbed variable and bounds of -1 and 1, $r$ is a uniformly + distributed random array with dimension equal to the perturbed variable and bounds + of 0 and 1, $\epsilon$ is a small value (nominally set to $1e-10$), and $\alpha$ is + a tuning parameter to determine the size of the tails and width of the distribution + set to $1.01$, but easily tunable. + + The perturbation function then steps through each parameter of the mission, + generating a new guess with the parameters modified by the Pareto distribution. + After this perturbation, the NLP solver is then called again to find a valid + solution in the vicinity of this new guess. If the solution is better than the + current basin solution, it replaces that value and the drill counter is reset to + zero. If it is better than the current total best, it replaces that value as well. + Otherwise, the drill counter increments and the process is repeated. Therefore, the + drill patience allows the mission designer to determine a maximum number of + iterations to perform without improvement in a row before ending the drill loop. + This process can be repeated essentially ''search patience`` number of times in + order to fully traverse all basins. 
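    To summarize the control flow described in this section, the listing below sketches the
    search and drill loops in Julia. It is illustrative pseudocode, not the implementation used
    in this work: \texttt{random\_guess}, \texttt{solve\_nlp}, and \texttt{cost} are hypothetical
    placeholders for the guess generator of Section~\ref{random_gen_section}, the inner-loop
    solver, and the user cost function, the guess is treated as a plain parameter vector, and the
    perturbation uses a shifted Pareto draw from Distributions.jl as one simple way to realize a
    bi-directional, long-tailed step.

\begin{verbatim}
using Distributions

# Bi-directional, long-tailed perturbation of a parameter vector.
# alpha and epsilon follow the tuning values quoted above (1.01 and 1e-10).
function perturb(x; alpha = 1.01, epsilon = 1e-10)
    s    = rand([-1.0, 1.0], length(x))                  # random signs
    step = rand(Pareto(alpha, epsilon), length(x)) .- epsilon  # long-tailed magnitudes
    return x .+ s .* step
end

function mbh(search_patience, drill_patience)
    best, archive = nothing, Any[]
    for _ in 1:search_patience                  # search loop
        # (the implementation also seeds a second, Lambert-initialized guess;
        #  only one guess per iteration is shown here for brevity)
        converged, sol = solve_nlp(random_guess())
        converged || continue
        basin, drill = sol, 0                   # enter the drill loop
        while drill < drill_patience
            ok, trial = solve_nlp(perturb(basin))
            if ok && cost(trial) < cost(basin)
                basin, drill = trial, 0         # improvement: reset the drill counter
            else
                drill += 1
            end
        end
        push!(archive, basin)
        if best === nothing || cost(basin) < cost(best)
            best = basin                        # new incumbent solution
        end
    end
    return best, archive
end
\end{verbatim}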
[Figure asset changes omitted: the following hunks of this patch update binary PNGs and Inkscape SVG sources rather than prose. New figures: fig/flyby.png and fig/flyby.svg (gravity-assist vector diagram with labels v∞,in/p, v∞,out/p, vp/sun, δ, and Δv_sun), fig/lamberts.png, and fig/type1.svg (panels ``Type I/II, Clockwise/Counterclockwise''). Regenerated figures: laguerre_plot.png, spiral_plot.png, multiple_shoot.png and multiple_shoot.svg (ΔV and x labels renumbered per segment), sft.png and sft.svg (central body relabeled from ``Earth'' to ``Sun''; Δv1–Δv7 and x1–x8 labels added), and flowcharts/nlp.png and nlp.svg (convergence check relabeled from ``G == 0?'' to ``|G| < ϵ?'', ``Mission Guess'' shortened to ``Guess'').]
diff --git a/LaTeX/thesis.bib b/LaTeX/thesis.bib
index c31f443..9cd2fab 100644
--- a/LaTeX/thesis.bib
+++ b/LaTeX/thesis.bib
@@ -277,4 +277,23 @@
   URL = {https://arc.aiaa.org/doi/abs/10.2514/6.2006-6746},
   eprint = {https://arc.aiaa.org/doi/pdf/10.2514/6.2006-6746}
 }
+@article{battin1984elegant,
+  title={An elegant Lambert algorithm},
+  author={Battin, Richard H and Vaughan, Robin M},
+  journal={Journal of Guidance, Control, and Dynamics},
+  volume={7},
+  number={6},
+  pages={662--670},
+  year={1984}
+}
+@article{wales1997global,
+  title={Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms},
+  author={Wales, David J and Doye, Jonathan PK},
+  journal={The Journal of Physical Chemistry A},
+  volume={101},
+  number={28},
+  pages={5111--5116},
+  year={1997},
+  publisher={ACS Publications}
+}
diff --git a/LaTeX/thesis.tex b/LaTeX/thesis.tex
index 2ed7edc..cc1a09c 100644
--- a/LaTeX/thesis.tex
+++ b/LaTeX/thesis.tex
@@ -4,6 +4,7 @@
 \usepackage{amssymb}
 \usepackage{hyperref}
 \usepackage{amsmath}
+\usepackage{amsfonts}
 \usepackage{float}
 \usepackage{xfrac}
diff --git a/LaTeX/trajectory_design.tex b/LaTeX/trajectory_design.tex
index aee2060..f892a2e 100644
--- a/LaTeX/trajectory_design.tex
+++ b/LaTeX/trajectory_design.tex
@@ -81,6 +81,59 @@
     \ddot{\vec{r}} = - \frac{\mu}{r^2} \hat{r}
   \end{equation}
+
+  We may also wish to utilize the total orbital energy for a spacecraft within this model.
+  Since the spacecraft is acting only under the gravitational influence of the planet and
+  no other forces, we can define the total specific mechanical energy as
+  \cite{vallado2001fundamentals}:
+
+  \begin{equation} \label{energy}
+    \xi = \frac{v^2}{2} - \frac{\mu}{r}
+  \end{equation}
+
+  \noindent
+  Where the first term represents the kinetic energy of the spacecraft and the second term
+  represents the gravitational potential energy.
+
   \subsection{Kepler's Laws}

   Now that we've fully qualified the forces acting within the Two Body Problem, we can concern
@@ -282,13 +335,10 @@
   \section{Interplanetary Considerations}\label{interplanetary}

-  The question of interplanetary travel opens up a host of additional new complexities. While
-  optimizations for simple single-body trajectories are far from simple, it can at least be
-  said that the assumptions of the Two Body Problem remain fairly valid. In interplanetary
-  travel, the primary body most responsible for gravitational forces might be a number of
-  different bodies, dependent on the phase of the mission. In fact, at some points along the
-  trajectory, there may not be a ``primary'' body, but instead a number of different forces of
-  roughly equal magnitude vying for ``primary'' status.
+  In interplanetary travel, the primary body most responsible for gravitational forces might
+  be a number of different bodies, dependent on the phase of the mission. In fact, at some
+  points along the trajectory, there may not be a ``primary'' body, but instead a number of
+  different forces of roughly equal magnitude vying for ``primary'' status.

   In the ideal case, every relevant body would be considered as an ``n-body'' perturbation
   during the entire trajectory. For some approaches, this method is sufficient and preferred.
@@ -296,7 +346,7 @@
   can be applied in this case to simplify the model.

   Interplanetary travel does not merely complicate trajectory optimization.
The increased - complexity of the search space also opens up new opportunities for orbit strategies. The + complexity of the search space also opens up new opportunities for mission designers. The primary strategy investigated by this thesis will be the gravity assist, a technique for utilizing the gravitational energy of a planet to modify the direction of solar velocity. @@ -306,45 +356,13 @@ search space, but some of these tools can also be leveraged by the automated optimization algorithm. - \subsection{Launch Considerations} - - Before considering the dynamics and techniques that interplanetary travel imposes upon - the trajectory optimization problem we must first concern ourself with getting to - interplanetary space. Generally speaking, interplanetary trajectories require a lot of - orbital energy and the simplest and quickest way to impart orbital energy to a satellite - is by using the entirety of the launch energy that a launch vehicle can provide. - - In practice, this value, for a particular mission, is actually determined as a parameter - of the mission trajectory to be optimized. The excess velocity at infinity of the - hyperbolic orbit of the spacecraft that leaves the Earth can be used to derive the - launch energy. This is usually qualified as the quantity $C_3$, which is actually double - the kinetic orbital energy with respect to the Sun, or simply the square of the excess - hyperbolic velocity at infinity\cite{wie1998space}. - - This algorithm and many others will take, essentially for granted, that the initial - orbit at the beginning of the mission will be some hyperbolic orbit with velocity enough - to leave the Earth. That initial $v_\infty$ will be used as a tunable parameter in the - NLP solver. This allows the mission designer to include the launch $C_3$ in the cost - function and, hopefully, determine the mission trajectory that includes the least - initial launch energy. This can then be fed back into a mass-$C_3$ curve for prospective - launch providers to determine what the maximum mass any launch provider is capable of - imparting that specific $C_3$ to. - - A similar approach is taken at the end of the mission. This algorithm, and many others, - doesn't attempt to exactly match the velocity of the planet at the end of the mission. - Instead, the excess hyperbolic velocity is also treated as a parameter that can be - minimized by the cost function. If a mission is to then end in insertion, a portion of - the mass budget can then be used for an impulsive thrust engine, which can provide a - final insertion burn at the end of the mission. This approach also allows flexibility - for missions that might end in a flyby rather than insertion. - \subsection{Patched Conics} The first hurdle to deal with in interplanetary space is the problem of reconciling Two-Body dynamics with the presence of multiple and varying planetary bodies. The most - common method for approaching this is the method of patched - conics\cite{bate2020fundamentals}. In this model, we break the interplanetary trajectory - up into a series of smaller sub-trajectories. During each of these sub-trajectories, a + common method for approaching this is the method of patched conics + \cite{bate2020fundamentals}. In this model, we break the interplanetary trajectory up + into a series of smaller sub-trajectories. During each of these sub-trajectories, a single primary is considered to be responsible for the trajectory of the orbit, via the Two-Body problem. 
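    The transition points between one primary and the next are conventionally taken at each
    planet's sphere of influence. For reference (this expression is standard and supplements the
    discussion here), the Laplace sphere-of-influence radius of a planet orbiting the Sun is
    commonly approximated as

    \begin{equation}
        r_{SOI} \approx a_{p} \left( \frac{m_{p}}{m_{Sun}} \right)^{2/5}
    \end{equation}

    \noindent
    Where $a_{p}$ is the planet's semi-major axis about the Sun and $m_{p}$ is its mass. For the
    Earth this works out to roughly $9 \times 10^{5}$ kilometers.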
@@ -356,8 +374,7 @@ Solar System, the spacecraft is either within the Sphere of Influence of a planetary body or the Sun. However, there are points in the Solar System where the gravitational influence of two planetary bodies are roughly equivalent to each other and to the - influence of the Sun. These are considered LaGrange points\cite{euler1767motu}, but are - beyond the scope of this initial analysis of interplanetary mission feasibility. + influence of the Sun. \begin{figure}[H] \centering @@ -366,12 +383,74 @@ \label{patched_conics_fig} \end{figure} - This effectively breaks the trajectory into a series of orbits defined by the Two-Body - problem (conics), patched together by distinct transition points. These transition - points occur along the spheres of influence of the planets nearest to the spacecraft. - Generally speaking, for the orbits handled by this algorithm, the speeds involved are - enough that the orbits are always elliptical around the Sun and hyperbolic around the - planets. + This effectively breaks the trajectory into a series of arcs each governed by a distinct + Two-Body problem patched together by distinct transition points. These transition points + occur along the spheres of influence of the planets nearest to the spacecraft. + + Therefore, we must understand how to convert our spacecraft's state from the Sun frame + to the planetary frame as it crosses this boundary. An elliptical orbit about the sun + will have enough orbital energy to represent a hyperbolic orbit around the planet. So we + first need to determine the velocity of the spacecraft relative to the planet as it + crosses the SOI, which we can determine by subtraction \cite{vallado2001fundamentals}: + + \begin{equation} + \vec{v}_{sc/p} = \vec{v}_{sc/sun} - \vec{v}_{planet/sun} + \end{equation} + + Since the orbit around the planet is hyperbolic, in order to characterize the hyperbola + we must determine the velocity of the spacecraft when it has infinite distance relative + to the planet. Since this never occurs, a further approximation is made that the + velocity that the spacecraft has (relative to the planet) as it crosses the SOI can be + modeled as the $\vec{v}_\infty$ of that hyperbolic arc. + + As an example, we may wish to determine the velocity relative to the planet that the + spacecraft has at the periapsis of its hyperbolic trajectory during the flyby. This + could be useful, perhaps, for sizing the $\Delta V<$ required during the insertion stage + of the mission if the spacecraft is intended to be captured into an elliptical orbit + around its target planet. For a given incoming hyperbolic $\vec{v}_\infty$, we can first + determine the specific mechanical energy of the hyperbola at infinite distance by using + Equation~\ref{energy}: + + \begin{equation} + \xi = \frac{v^2}{2} - \frac{\mu}{r} = \frac{v_\infty^2}{2} + \end{equation} + + We can then leverage the conservation of energy to determine the velocity at a + particular point, $r_{ins}$: + + \begin{align} + \xi_{ins} &= \frac{v_{ins}^2}{2} - \frac{\mu}{r_{ins}} \\ + \xi_{ins} &= \xi_\infty = \frac{v_\infty^2}{2} \\ + v_{ins} &= \sqrt{\frac{2\mu}{r_{ins}} + v_\infty^2} + \end{align} + + \subsection{Launch Considerations} + + Generally speaking, an interplanetary mission begins with launch. For a satellite of + given size, a certain amount of orbital energy can be imparted to the satellite by the + launch vehicle. 
In practice, this value, for a particular mission, is actually + determined as a parameter of the mission trajectory to be optimized. The excess velocity + at infinity of the hyperbolic orbit of the spacecraft that leaves the Earth can be used + to derive the launch energy. This is usually qualified as the quantity $C_3$, which is + actually double the kinetic orbital energy with respect to the Sun, or simply the square + of the excess hyperbolic velocity at infinity\cite{wie1998space}. + + This algorithm will assume that the initial trajectory at the beginning of the mission + will be some hyperbolic orbit with velocity enough to leave the Earth. That initial + $v_\infty$ will be used as a tunable parameter in the NLP solver. This allows the + mission designer to include the launch $C_3$ in the cost function and, hopefully, + determine the mission trajectory that includes the least initial launch energy. This can + then be fed back into a mass-$C_3$ curve for prospective launch providers to determine + what the maximum mass any launch provider is capable of imparting that specific $C_3$ + to. + + A similar approach is taken at the end of the mission. This algorithm doesn't attempt to + exactly match the velocity of the planet at the end of the mission. Instead, the excess + hyperbolic velocity is also treated as a parameter that can be minimized by the cost + function. If a mission is to then end in insertion, a portion of the mass budget can + then be used for an impulsive thrust engine, which can provide a final insertion burn at + the end of the mission. This approach also allows flexibility for missions that might + end in a flyby rather than insertion. \subsection{Gravity Assist Maneuvers} @@ -392,6 +471,20 @@ the spacecraft arrives at the planet from one direction and, because of the influence of the planet, leaves in a different direction\cite{negri2020historical}. + This can be visualized in Figure~\ref{grav_assist_fig}, which shows the bend in the + spacecraft's velocity due to the hyperbolic arc as it passes the planet. This turns the + direction of the spacecraft's velocity relative to the planet, which has an overall + effect on kinetic energy that can be seen by adding the two vectors to the velocity of + the planet relative to the sun. By passing in front of the planet or behind it (relative + to its velocity), energy can be removed or added to the spacecraft by the maneuver. + + \begin{figure}[H] + \centering + \includegraphics[width=0.8\textwidth]{fig/flyby} + \caption{Visualization of velocity changes during a gravity assist} + \label{grav_assist_fig} + \end{figure} + This effect can be used strategically. The ``bend'' due to the flyby is actually tunable via the exact placement of the fly-by in the b-frame, or the frame centered at the planet, from the perspective of the spacecraft at $v_\infty$. By modifying the @@ -401,25 +494,22 @@ \subsection{Flyby Periapsis} Now that we understand gravity assists, the natural question is then how to leverage - them for achieving certain velocity changes. This can be achieved via a technique called - ``B-Plane Targeting''\cite{cho2017b}. But first, we must consider mathematically the - effect that a gravity flyby can have on the velocity of a spacecraft as it orbits the - Sun. 
Specifically, we can determine the turning angle of the bend mentioned in the - previous section, given an excess hyperbolic velocity entering the planet's sphere of - influence ($v_{\infty, in}$) and a target excess hyperbolic velocity as the spacecraft - leaves the sphere of influence ($v_{\infty, out}$): + them for achieving certain velocity changes\cite{cho2017b}. But first, we must consider + mathematically the effect that a gravity flyby can have on the velocity of a spacecraft + as it orbits the Sun. Specifically, we can determine the turning angle of the bend + mentioned in the previous section, given an excess hyperbolic velocity entering the + planet's sphere of influence ($\vec{v}_{\infty, in}$) and a target excess hyperbolic + velocity as the spacecraft leaves the sphere of influence ($\vec{v}_{\infty, out}$): - \begin{equation} - \delta = \arccos \left( \frac{v_{\infty,in} \cdot v_{\infty,out}}{|v_{\infty,in}| - |v_{\infty,out}|} \right) + \begin{equation}\label{turning_angle_eq} + \delta = \arccos \left( \frac{\vec{v}_{\infty,in} \cdot + \vec{v}_{\infty,out}}{|\vec{v}_{\infty,in}| |\vec{v}_{\infty,out}|} \right) \end{equation} From this turning angle, we can also determine, importantly, the periapsis of the flyby - that we must target in order to achieve the required turning angle. The actual location - of the flyby point can also be determined by B-Plane Targeting, but this technique was - not necessary in this implementation as a preliminary feasibility tool, and so is beyond - the scope of this thesis. The periapsis of the flyby, however, can provide a useful - check on what turning angles are possible for a given flyby, since the periapsis: + that we must target in order to achieve the required turning angle. The periapsis of the + flyby, however, can provide a useful check on what turning angles are possible for a + given flyby, since the periapsis: \begin{equation} r_p = \frac{\mu}{v_\infty^2} \left[ \frac{1}{\sin\left(\frac{\delta}{2}\right)} - 1 \right] @@ -443,9 +533,121 @@ a time of flight between the two positions, what velocity was necessary to connect the two states. - The actual numerical solution to this boundary value problem is not important to - include here, but there have been a large number of algorithms written to solve - Lambert's problem quickly and robustly for given inputs\cite{jordan1964application}. + There are many algorithms developed to solve Lambert's problem, but the universal + variable method is used here for its robustness in finding trajectories regardless + of geometry. This method is concerned with the determination of the variables $y$ + and $A$ by a method of iterating $\psi$, which represent the square root of the + distance traveled between the two points. These variables can then be used to build + $f$ and $g$ functions, which can completely constrain the initial and final states. + This problem can be solved by any root-finding method, with bisection being used + here for its robustness given any initial guess \cite{battin1984elegant}. + + Firstly, some geometric considerations must be accounted for. For any initial + position, $\vec{r}_0$, and final position, $\vec{r}_f$, and time of flight $\Delta + t$, there are actually two separate transfer orbits that can connect the two points + with paths that traverse less than one full orbit. For each of these, there are + actually then two trajectories that can connect the points + \cite{vallado2001fundamentals}. 
The first of the two will have a $\Delta \theta$ of + less than 180 degrees, which we classify as a Type I trajectory, and the second will + have a $\Delta \theta$ of greater than 180 degrees, which we call a Type II + trajectory. They will also differ in their direction of motion (clockwise or + counter-clockwise about the focus). This can be seen in Figure~\ref{type1type2}. + + \begin{figure}[H] + \centering + \includegraphics[width=0.8\textwidth]{fig/lamberts} + \caption{Visualization of the possible solutions to Lambert's Problem} + \label{type1type2} + \end{figure} + + The iteration used in this thesis will start by first calculating the change in true + anomaly, $\Delta \theta$, as well as the cosine of this value, which can be found + by: + + \begin{align} + \cos (\Delta \theta) &= \frac{\vec{r}_1 \cdot \vec{r}_2}{|\vec{r}_1| |\vec{r}_2|} \\ + \Delta \theta &= \arctan(y_2/x_2) - \arctan(y_1/x_1) + \end{align} + + The direction of motion is then chosen such that counter-clockwise orbits are + considered, as travelling in the same direction as the planets is generally more + efficient. Next, the variable $A$ is defined: + + \begin{equation} + A = DM \sqrt{|r_1| |r_2| (1 - \cos(\Delta \theta))} + \end{equation} + + A is independent of $\psi$, and therefore won't need updating as the iteration + proceeds. Then $\psi$ is initialized to any number within its bounds + ($[-4\pi,4\pi^2]$), arbitrarily set to 0, representing a parabolic arc as a starting + point. + + From here, the iteration loop can begin. Specifically, time of flight is calculated + at each step and compared to the expected value. The iteration proceeds until the + time of flight matches the expected value to within a provided tolerance. In order + to calculate the time of flight at each step, we must first calculate some useful + coefficients: + + \begin{equation}\label{loop_start} + c_2 = \begin{cases} + \frac{1-\cos(\sqrt{\psi})}{\psi} \quad &\text{if} \, \psi > 10^{-6} \\ + \frac{1-\cosh(\sqrt{-\psi})}{\psi} \quad &\text{if} \, \psi < -10^{-6} \\ + 1/2 \quad &\text{if} \, 10^{-6} > \psi > -10^{-6} + \end{cases} + \end{equation} + + \begin{equation} + c_3 = \begin{cases} + \frac{\sqrt{\psi} - \sin sqrt{\psi}}{\psi^{3/2}} \quad &\text{if} \, \psi > 10^{-6} \\ + \frac{\sinh\sqrt{-\psi} - \sqrt{-\psi}}{(-\psi)^{3/2}} \quad &\text{if} \, \psi < -10^{-6} \\ + 1/6 \quad &\text{if} \, 10^{-6} > \psi > -10^{-6} + \end{cases} + \end{equation} + + \noindent + Where the conditions of this piecewise function represent the elliptical, + hyperbolic, and parabolic cases, respectively. Once we have these, we can calculate + another variable, $y$: + + \begin{equation} + y = |r_1| + |r_2| + \frac{A (c_3 \psi - 1)}{\sqrt{c_2}} + \end{equation} + + We can then finally calculate the variable $\chi$, and from that, the time of + flight: + + \begin{equation} + \chi = sqrt{\frac{y}{c_2}} + \end{equation} + + \begin{equation} + \Delta t = \frac{c_3 \chi^3 + A \sqrt{y}}{\sqrt{c_2}} + \end{equation} + + Based on the value of this time of flight and how it compares to the expected value, + the bounds on $\psi$ are adjusted, a new $\psi$ is calculated at the midpoint + between the bounds, and the iteration begins again at Equation~\ref{loop_start}. If + the time of flight is sufficiently close to the expected value, the algorithm is + allowed to complete. 
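    The iteration described above can be condensed into a short sketch. The listing below is
    illustrative Julia rather than the implementation used in this work: $A$, the target time of
    flight, and $\mu$ are taken as inputs, the expressions follow the standard universal-variable
    formulation (with the time of flight normalized by $\sqrt{\mu}$), and the guard that keeps
    $y$ positive for some long-way geometries is omitted for brevity.

\begin{verbatim}
# Bisection on psi for the universal-variable Lambert formulation (sketch).
function stumpff(psi)
    if psi > 1e-6
        c2 = (1 - cos(sqrt(psi))) / psi
        c3 = (sqrt(psi) - sin(sqrt(psi))) / psi^1.5
    elseif psi < -1e-6
        c2 = (1 - cosh(sqrt(-psi))) / psi
        c3 = (sinh(sqrt(-psi)) - sqrt(-psi)) / (-psi)^1.5
    else
        c2, c3 = 1/2, 1/6                    # parabolic limit
    end
    return c2, c3
end

function lambert_psi(r1, r2, A, dt_target, mu; tol = 1e-6, maxiter = 1000)
    psi, psi_low, psi_up = 0.0, -4pi, 4pi^2  # start from the parabolic case
    y = 0.0
    for _ in 1:maxiter
        c2, c3 = stumpff(psi)
        y   = r1 + r2 + A * (psi * c3 - 1) / sqrt(c2)
        chi = sqrt(y / c2)
        dt  = (chi^3 * c3 + A * sqrt(y)) / sqrt(mu)
        abs(dt - dt_target) < tol && break   # converged on the time of flight
        # time of flight grows with psi, so bisect accordingly
        dt < dt_target ? (psi_low = psi) : (psi_up = psi)
        psi = (psi_low + psi_up) / 2
    end
    return psi, y                            # y then feeds the f and g functions
end
\end{verbatim}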
+ + The resulting $f$ and $g$ functions (and the derivative of $g$, $\dot{g}$) can then + be calculated: + + \begin{align} + f &= 1 - \frac{y}{|r_1|} \\ + g &= A \sqrt{\frac{y}{\mu}} \\ + \dot{g} &= 1 - \frac{y}{|r_2|} + \end{align} + + And from these, we can calculate the velocities of the transfer points as: + + \begin{align} + \vec{v}_1 &= \frac{\vec{r}_1 - f \vec{r}_2}{g} \\ + \vec{v}_2 &= \frac{\dot{g} \vec{r}_2 - \vec{r}_1}{g} + \end{align} + + \noindent + Fully constraining the connecting orbit. \subsubsection{Planetary Ephemeris} @@ -454,13 +656,12 @@ many packages have been developed for this purpose. The most commonly used for this is the SPICE package, developed by NASA in the 1980's. This software package, which has ports that are widely available in a number of languages, including Julia, - contains many useful functions for astrodynamics. + contains many useful functions for astrodynamics. The primary use of SPICE in this thesis, however, was to determine the planetary ephemeris at a known epoch. Using the NAIF0012 and DE430 kernels, ephemeris in the - ecliptic plane J2000 frame could be easily determined. A method for quickly - determining the ephemeris using a polynomial fit was also employed as an option for - faster ephemeris-finding, but ultimately not used. + ecliptic plane J2000 frame could be easily determined for a given epoch, provided as + a decimal Julian Day since the J2000 epoch. \subsubsection{Porkchop Plots} @@ -492,42 +693,123 @@ \end{figure} However, this is an impulsive thrust-centered approach. The solution to Lambert's - problem assumes a natural trajectory. However, to the low-thrust designer, this is - needlessly limiting. A natural trajectory is unnecessary when the trajectory can be - modified by a continuous thrust profile along the arc. Therefore, for the hybrid problem - of optimizing both flyby selection and thrust profiles, porkchop plots are less helpful, - and an algorithmic approach is preferred. + problem assumes a natural trajectory. A natural trajectory is unnecessary when the + trajectory can be modified by a continuous thrust profile along the arc. Therefore, + for the hybrid problem of optimizing both flyby selection and thrust profiles, + porkchop plots are less helpful, and an algorithmic approach is preferred. \section{Low Thrust Considerations} \label{low_thrust} - Thus far, the techniques that have been discussed can be equally useful for both impulsive and - continuous thrust mission profiles. In this section, we'll discuss the intricacies of continuous - low-thrust trajectories in particular. There are many methods for optimizing such profiles and - we'll briefly discuss the difference between a direct and indirect optimization of a low-thrust - trajectory as well as introduce the concept of a control law and the notation used in this - thesis for modelling low-thrust trajectories more simply. + In this section, we'll discuss the intricacies of continuous low-thrust trajectories in + particular. There are many methods for optimizing such profiles and we'll briefly discuss + the difference between a direct and indirect optimization of a low-thrust trajectory as well + as introduce the concept of a control law and the notation used in this thesis for modelling + low-thrust trajectories more simply. + + \subsection{Specific Impulse} + + The primary advantage of continuous thrust methods over their impulsive counterparts is + in their fuel-efficiency in generating changes in velocity. 
Put specifically, all + thrusters are capable of translating a mass flow (the rate of mass ejection from the + thruster during operation) to a thrust imparted to the craft. Low thrust techniques + suffer from limitations in the amount of thrust they can produce, but benefit from high + efficiency by means of achieving that thrust by means of very low mass ejection rates. + + This efficiency is often captured in a single variable called specific impulse, often + denoted as $I_{sp}$. We can derive the specific impulse by starting with the rocket + thrust equation\cite{sutton2016rocket}: + + \begin{equation} + F = \dot{m} v_e + \Delta p A_e + \end{equation} + + \noindent + Where $F$ is the thrust imparted, $\dot{m}$ is the fuel mass rate, $v_e$ is the exhaust + velocity of the fuel, $\Delta p$ is the change in pressure across the exhaust opening, + and $A_e$ is the area of the exhaust opening. We can then define a new variable + $v_{eq}$, such that the thrust equation becomes: + + \begin{align} + v_{eq} &= v_e - \frac{\Delta p A_e}{\dot{m}} \\ + F &= \dot{m} v_{eq} \label{isp_1} + \end{align} + + \noindent + And we can then take the integral of this value with respect to time to find the total + impulse, dividing by the weight of the fuel to derive the specific impulse: + + \begin{align} + I &= \int F dt = \int \dot{m} v_{eq} dt = m_e v_{eq} \\ + I_{sp} &= \frac{I}{m_e g_0} = \frac{m_e v_{eq}}{m_e g_0} = \frac{v_{eq}}{g_0} + \end{align} + + Plugging Equation~\ref{isp_1} into the previous equation we can derive the following + formula for $I_{sp}$: + + \begin{equation} \label{isp_real} + I_{sp} = \frac{F}{\dot{m} g_0} + \end{equation} + + \noindent + Which is generally taken to be a value with units of seconds and effectively represents + the efficiency with which a thruster converts mass to thrust. + + \subsection{Sims-Flanagan Transcription} + + this thesis chose to use a model well suited for modeling low-thrust paths: the + Sims-Flanagan transcription (SFT)\cite{sims1999preliminary}. The SFT allows for + flexibility in the trade-off between fidelity and performance, which makes it very + useful for this sort of preliminary analysis. + + First the continuous arc is subdivided into a number ($N$) of individual consistent + timesteps of length $\frac{tof}{N}$ where the $tof$ represents the total length of time + for that particular mission phase. The control thrust is then applied as an impulsive + maneuver at the center of each of these time steps. This approach can be seen visualized + in Figure~\ref{sft_fig}. + + \begin{figure}[H] + \centering + \includegraphics[width=0.6\textwidth]{fig/sft} + \caption{Example of an orbit raising using the Sims-Flanagan Transcription with 7 + Sub-Trajectories} + \label{sft_fig} + \end{figure} + + Using the SFT, it is relatively straightforward to propagate a state (in the context of + the Two-Body Problem) that utilizes a continuous low-thrust control, without the need + for computationally expensive numerical integration algorithms, by simply solving + Kepler's equation (using the LaGuerre-Conway algorithm introduced in + Section~\ref{laguerre}) $N+1$ times. First, the state is propagated to the middle of the + first arc. Then a discontinuity is allowed in the velocity at that point and the state + is propagated again to the middle of the next arc. That process is repeated $N-1$ times, + and then finally, the last half-arc is propagated after applying the final velocity + change. 
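    This propagation scheme can be sketched compactly. The listing below is illustrative only:
    \texttt{kepler\_propagate} is a hypothetical stand-in for the LaGuerre-Conway solver of
    Section~\ref{laguerre}, and the impulses are assumed to have already been converted from the
    commanded thrust, step length, and current mass.

\begin{verbatim}
# Sims-Flanagan propagation of a single phase (illustrative sketch).
# kepler_propagate(r, v, dt, mu) is a hypothetical stand-in for the
# LaGuerre-Conway Kepler solver; dvs is a vector of N impulsive 3-vectors.
function sims_flanagan(r, v, dvs, tof, mu)
    N  = length(dvs)
    dt = tof / N
    r, v = kepler_propagate(r, v, dt / 2, mu)   # coast to middle of first arc
    for k in 1:N-1
        v = v + dvs[k]                          # impulsive dv at the arc midpoint
        # (the full model also decrements the mass here from the mass flow rate)
        r, v = kepler_propagate(r, v, dt, mu)   # coast to the next midpoint
    end
    v = v + dvs[N]                              # final impulse
    r, v = kepler_propagate(r, v, dt / 2, mu)   # final half-arc to the phase end
    return r, v
end
\end{verbatim}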
+
+        This greatly reduces the computational complexity, which is particularly useful for cases
+        in which low-thrust trajectories need to be calculated many millions of times, as is the
+        case in this thesis. The fidelity of the model can also be easily fine-tuned. By simply
+        increasing the number of sub-arcs, one can rapidly approach a fidelity equal to a
+        continuous low-thrust trajectory within the Two-Body Problem, with only
+        linearly-increasing computation time\cite{sims1999preliminary}.

    \subsection{Low-Thrust Control Laws}

-        In determining a low-thrust arc, a number of variables must be accounted for and, ideally,
-        optimized. Generally speaking, this means that a control law must be determined for the
-        thruster. This control law functions in exactly the same way that an impulsive thrust
-        control law might function. However, instead of determining the proper moments at which
-        to thrust, a low-thrust control law must determine the appropriate direction, magnitude,
-        and presence of a thrust at each point along its continuous orbit.
+        In determining a low-thrust arc, a number of variables must be accounted for and,
+        ideally, optimized. Generally speaking, this means that a control law must be determined
+        for the thruster. This involves determining the appropriate direction, magnitude, and
+        presence of a thrust at each point along its continuous orbit.

        \subsubsection{Angle of Thrust}

-            Firstly, we can examine the most important quality of the low-thrust control law, the
-            direction at which to point the thrusters while they are on. The methods for determining this
-            direction varies greatly depending on the particular control law chosen for that
-            mission. Often, this process involves first determining a useful frame to think about
-            the kinematics of the spacecraft. In this case, we'll use a frame often used in these
-            low-thrust control laws: the spacecraft $\hat{R} \hat{\theta} \hat{H}$ frame. In this
-            frame, the $\hat{R}$ direction is the radial direction from the center of the primary to
-            the center of the spacecraft. The $\hat{H}$ hat is perpendicular to this, in the
-            direction of orbital momentum (out-of-plane) and the $\hat{\theta}$ direction completes
-            the right-handed orthonormal frame.
+            Firstly, we can examine the direction at which to point the thrusters while they are on.
+            The method for determining this direction varies greatly depending on the particular
+            control law chosen for that mission. Often, this process involves first determining a
+            useful frame to think about the kinematics of the spacecraft. In this case, we'll use a
+            frame often used in these low-thrust control laws: the spacecraft $\hat{R} \hat{\theta}
+            \hat{H}$ frame. In this frame, the $\hat{R}$ direction is the radial direction from the
+            center of the primary to the center of the spacecraft. The $\hat{H}$ direction is
+            perpendicular to this, in the direction of orbital momentum (out-of-plane), and the
+            $\hat{\theta}$ direction completes the right-handed orthonormal frame.

        This frame is useful because, for a given orbit, especially a nearly circular one, the
        $\hat{\theta}$ direction is nearly aligned with the velocity direction for that orbit at
@@ -538,34 +820,28 @@
            direction for most effectively increasing (or decreasing if negative) the angular
            momentum and orbital energy of the trajectory.
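As a small illustration (not taken from the thesis source), the $\hat{R} \hat{\theta} \hat{H}$ frame described above can be built directly from a Cartesian state with two cross products:

```julia
using LinearAlgebra

# Construct the spacecraft R-θ-H unit vectors from an inertial Cartesian state.
function rth_frame(r::AbstractVector, v::AbstractVector)
    R_hat = r / norm(r)                        # radial: center of primary -> spacecraft
    h     = cross(r, v)                        # orbital angular momentum vector
    H_hat = h / norm(h)                        # out-of-plane direction
    θ_hat = cross(H_hat, R_hat)                # completes the right-handed frame
    return R_hat, θ_hat, H_hat
end

# Example state in km and km/s (roughly a 1 AU heliocentric orbit).
r = [1.496e8, 0.0, 1.0e4]
v = [0.0, 29.8, 0.1]
R_hat, θ_hat, H_hat = rth_frame(r, v)
```

For a near-circular orbit, `θ_hat` returned here is nearly aligned with the velocity direction, which is why the frame is convenient for reasoning about energy-raising thrust.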
- Therefore, at each point, the first controls of a control-law, whichever frame or - convention is used to define them, need to represent a direction in 3-dimensional space - that the force of the thrusters will be applied. + Using these conventions, we can then redefine our thrust vector in terms of the $\alpha$ + and $\beta$ angles in the chosen frame: + + \begin{align} + F_r &= F \cos(\beta) \sin (\alpha) \\ + F_\theta &= F \cos(\beta) \cos (\alpha) \\ + F_h &= F \sin(\beta) + \end{align} \subsubsection{Thrust Magnitude} However, there is actually another variable that can be varied by the majority of electric thrusters. Either by controlling the input power of the thruster or the duty - cycle, the thrust magnitude can also be varied in the direction of thrust, limited by - the maximum thrust available to the thruster. Not all control laws allow for this - fine-tuned control of the thruster. Generally speaking, it's most efficient either to - thrust or not to thrust. Therefore, controlling the thrust magnitude may provide too - much complexity at too little benefit. + cycle, the thrust magnitude can also be varied, limited by the maximum thrust available + to the thruster. Not all control laws allow for this fine-tuned control of the thruster. - The algorithm used in this thesis, however, does allow the magnitude of the thrust - control to be varied. In certain cases it actually can be useful to have some fine-tuned - control over the magnitude of the thrust. Since the optimization in this algorithm is - automatic, it is relatively straightforward to consider the control thrust as a - 3-dimensional vector in space limited in magnitude by the maximum thrust, which allows - for that increased flexibility. - - \subsubsection{Thrust Presence} - - The alternative to this approach of modifying the thrust magnitude, is simply to modify - the presence or absence of thrust. At certain points along an arc, the efficiency of - thrusting, even in the most advantageous direction, may be such that a thrust is - undesirable (in that it will lower the overall efficiency of the mission too much) or, - in fact, be actively harmful. + The algorithm used in this thesis does vary the magnitude of the thrust control. In + certain cases it actually can be useful to have some fine-tuned control over the + magnitude of the thrust. Since the optimization in this algorithm is automatic, it is + relatively straightforward to consider the control thrust as a 3-dimensional vector in + space limited in magnitude by the maximum thrust, which allows for that increased + flexibility. For instance, we can consider the case of a simple orbit raising. Given an initial orbit with some eccentricity and some semi-major axis, we can define a new orbit that we'd @@ -599,61 +875,3 @@ thrusting only at the moment on the orbit when the transition will be most efficient. For a low-thrust mission, however, the control law must be continuous rather than discrete and therefore the control law inherently gains a lot of complexity. - - \subsection{Direct vs Indirect Optimization} - - As previously mentioned, there are two different approaches to optimizing non-linear - problems such as trajectory optimizations in interplanetary space. 
These methods are the - direct method, in which a cost function is developed and used by numerical root-finding - schemes to drive cost to the nearest local minimum, and the indirect method, in which a - set of sufficient and necessary conditions are developed that constrain the optimal - solution and used to solve a boundary-value problem to find the optimal solution. - - Both of these methods have been applied to the problem of low-thrust interplanetary - trajectory optimization \cite{Casalino2007IndirectOM}. The common opinion of the - difference between these two methods is that the indirect methods are more difficult to - converge and require a better initial guess than the direct methods. However, they also - require less parameters to describe the trajectory, since the solution of a boundary - value problem doesn't require discretization of the control states. - - In this implementation, robustness is incredibly valuable, as the Monotonic Basin - Hopping algorithm is leveraged to attempt to find all minima basins in the solution - space by ``hopping'' around with different initial guesses. Since these initial guesses - are not guaranteed to be close to any particular valid trajectory, it is important that - the optimization routine be robust to poor initial guesses. Therefore, a direct - optimization method was leveraged by transcribing the problem into an NLP and using - IPOPT to find the local minima. - - \subsection{Sims-Flanagan Transcription} - - The major problem with optimizing low thrust paths is that the control law must necessarily be - continuous. Also, since indirect optimization approaches are, in the context of - interplanetary trajectories including flybys, quite difficult the problem must - necessarily be reformulated as a discrete one in order to apply a direct approach. Therefore, - this thesis chose to use a model well suited for discretizing low-thrust paths: the - Sims-Flanagan transcription (SFT)\cite{sims1999preliminary}. - - The SFT is actually quite a simple method for discretizing low-thrust arcs. First the - continuous arc is subdivided into a number ($N$) of individual consistent timesteps of length - $\frac{tof}{N}$. The control thrust is then applied at the center of each of these time - steps. This approach can be seen visualized in Figure~\ref{sft_fig}. - - \begin{figure}[H] - \centering - \includegraphics[width=0.6\textwidth]{fig/sft} - \caption{Example of an orbit raising using the Sims-Flanagan Transcription with 7 - Sub-Trajectories} - \label{sft_fig} - \end{figure} - - Using the SFT, it is relatively straightforward to propagate a state (in the context of the - Two-Body Problem) that utilizes a continuous low-thrust control, without the need for - computationally expensive numeric integration algorithms, by simply solving Kepler's equation - (using the LaGuerre-Conway algorithm introduced in Section~\ref{laguerre}) $N$ times. This - greatly reduces the computation complexity, which is particularly useful for cases in which - low-thrust trajectories need to be calculated many millions of times, as is the case in this - thesis. The fidelity of the model can also be easily fine-tuned. By simply increasing the - number of sub-arcs, one can rapidly approach a fidelity equal to a continuous low-thrust - trajectory within the Two-Body Problem, with only linearly-increasing computation time. 
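Looking back at the thrust-angle decomposition given in the control-law discussion above, the mapping from a commanded in-plane angle $\alpha$, out-of-plane angle $\beta$, and magnitude $F$ to an inertial thrust vector can be sketched as follows. This is an illustrative snippet, not the thesis code; it interprets $\alpha$ as measured from $\hat{\theta}$ toward $\hat{R}$, which is consistent with the component equations shown earlier.

```julia
using LinearAlgebra

# Convert an (α, β, F) thrust command into an inertial thrust vector using the
# component equations from the Angle of Thrust discussion:
#   F_r = F cos(β) sin(α),   F_θ = F cos(β) cos(α),   F_h = F sin(β)
function thrust_vector(α, β, F, r::AbstractVector, v::AbstractVector)
    R_hat = r / norm(r)                          # radial direction
    H_hat = cross(r, v) / norm(cross(r, v))      # out-of-plane (angular momentum) direction
    θ_hat = cross(H_hat, R_hat)                  # completes the right-handed frame

    F_r = F * cos(β) * sin(α)
    F_θ = F * cos(β) * cos(α)
    F_h = F * sin(β)

    return F_r * R_hat + F_θ * θ_hat + F_h * H_hat
end

# Example: a purely along-track 0.25 N thrust on a circular 1 AU orbit (km, km/s).
F_vec = thrust_vector(0.0, 0.0, 0.25, [1.496e8, 0.0, 0.0], [0.0, 29.8, 0.0])
```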
-
-
diff --git a/LaTeX/trajectory_optimization.tex b/LaTeX/trajectory_optimization.tex
index e901733..7d016ec 100644
--- a/LaTeX/trajectory_optimization.tex
+++ b/LaTeX/trajectory_optimization.tex
@@ -1,61 +1,57 @@
 \chapter{Trajectory Optimization} \label{traj_optimization}

-    \section{Solving Boundary Value Problems}
+    \section{Optimization of Boundary Value Problems}

-    This section probably needs more work.
+    An approach is necessary, in trajectory optimization and many other fields, to optimize
+    highly non-linear, unpredictable systems such as the trajectory problems considered in this
+    thesis. The field that developed to approach this problem is known as Non-Linear
+    Programming (NLP) Optimization.

-    \section{Optimization}
-
-    \subsection{Non-Linear Problem Optimization}
-
-    Now we can consider the formulation of the problem in a more useful way. For instance, given a
-    desired final state in position and velocity we can relatively easily determine the initial
-    state necessary to end up at that desired state over a pre-defined period of time by solving
-    Kepler's equation. In fact, this is often how impulsive trajectories are calculated since,
-    other than the impulsive thrusting event itself, the trajectory is entirely natural.
-
-    However, often in trajectory design we want to consider a number of other inputs. For
-    instance, a low thrust profile, a planetary flyby, the effects of rotating a solar panel on
-    solar radiation pressure, etc. Once these inputs have been accepted as part of the model, the
-    system is generally no longer analytically solvable, or, if it is, is too complex to calculate
-    directly.
-
-    Therefore an approach is needed, in trajectory optimization and many other fields, to optimize
-    highly non-linear, unpredictable systems such as this. The field that developed to approach
-    this problem is known as Non-Linear Problem (NLP) Optimization.
+    A Non-Linear Programming Problem is defined by an attempt to optimize a function
+    $f(\vec{x})$, subject to constraints $\vec{g}(\vec{x}) \le 0$ and $\vec{h}(\vec{x}) = 0$,
+    where $n$ is a positive integer, $\vec{x}$ is a vector in some subset of $R^n$, $\vec{g}$
+    and $\vec{h}$ can be vector-valued functions of any size, and at least one of $f$,
+    $\vec{g}$, and $\vec{h}$ must be non-linear.

     There are, however, two categories of approaches to solving an NLP. The first category,
-    indirect methods, involve declaring a set of necessary and/or sufficient conditions for declaring
-    the solution optimal. These conditions then allow the non-linear problem (generally) to be
-    reformulated as a two point boundary value problem. Solving this boundary value problem can
-    provide a control law for the optimal path. Indirect approaches for spacecraft trajectory
+    indirect methods, involves declaring a set of necessary and/or sufficient conditions for
+    optimality. These conditions then allow the problem to be reformulated as a two point
+    boundary value problem. Solving this boundary value problem involves determining a
+    control law for the optimal path. Indirect approaches for spacecraft trajectory
     optimization have given us the Primer Vector Theory\cite{jezewski1975primer}.

     The other category is the direct methods. In a direct optimization problem, the cost
-    function itself is calculated to provide the optimal solution. The problem is usually
-    thought of as a collection of dynamics and controls. Then these controls can be modified
-    to minimize the cost function. A number of tools have been developed to optimize NLPs
-    via this direct method in the general case.
For this particular problem, direct
-    approaches were used as the low-thrust interplanetary system dynamics adds too much
-    complexity to quickly optimize indirectly and the individual optimization routines
-    needed to proceed as quickly as possible.
+    function itself provides a value that an iterative numerical optimizer can measure
+    itself against. The optimal solution is then found by varying the inputs $\vec{x}$ until
+    the cost function is reduced to a minimum value, typically with the help of gradient and
+    constraint Jacobian information. A number of tools have been developed to optimize NLPs
+    via this direct method in the general case.
+
+    Both of these methods have been applied to the problem of low-thrust interplanetary
+    trajectory optimization \cite{Casalino2007IndirectOM} to find local optima over
+    low-thrust control laws. It has often been found that indirect methods are more
+    difficult to converge and require a better initial guess than the direct methods.
+    However, they also require fewer parameters to describe the trajectory, since the
+    solution of a boundary value problem doesn't require discretization of the control
+    states.
+
+    In this implementation, robustness is incredibly valuable, as the Monotonic Basin
+    Hopping algorithm, discussed later, is leveraged to attempt to find all minima basins in
+    the solution space by ``hopping'' around with different initial guesses. It is,
+    therefore, important that the optimization routine be robust to poor initial guesses.
+    As a result, a direct optimization method was leveraged by transcribing the problem into
+    an NLP and using IPOPT to find the local minima.

     \subsubsection{Non-Linear Solvers}

-        For these types of non-linear, constrained problems, a number of tools have been developed
-        that act as frameworks for applying a large number of different algorithms. This allows for
-        simple testing of many different algorithms to find what works best for the nuances of the
-        problem in question.

-        One of the most common of these NLP optimizers is SNOPT\cite{gill2005snopt}, which
-        is a proprietary package written primarily using a number of Fortran libraries by
-        the Systems Optimization Laboratory at Stanford University. It uses a sparse
-        sequential quadratic programming approach.
+        One of the most common packages for the optimization of NLP problems is
+        SNOPT\cite{gill2005snopt}, which is a proprietary package written primarily using a
+        number of Fortran libraries by the Systems Optimization Laboratory at Stanford
+        University. It uses a sparse sequential quadratic programming algorithm as its
+        back-end optimization scheme.

        Another common NLP optimization package (and the one used in this implementation)
-        is the Interior Point Optimizer or IPOPT\cite{wachter2006implementation}. It can be
-        used in much the same way as SNOPT and uses an Interior Point Linesearch Filter
-        Method and was developed as an open-source project by the organization COIN-OR under
-        the Eclipse Public License.
+        is the Interior Point Optimizer or IPOPT\cite{wachter2006implementation}. It uses
+        an Interior Point Linesearch Filter Method and was developed as an open-source
+        project by the organization COIN-OR under the Eclipse Public License.

        Both of these methods utilize similar approaches to solve general constrained
        non-linear problems iteratively. Both of them can make heavy use of derivative Jacobians and Hessians
@@ -67,14 +63,14 @@
        libraries that port these are quite modular in the sense that multiple algorithms can be
        tested without changing much source code.
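For a sense of how such a solver is typically driven from Julia, the sketch below hands a small constrained NLP to IPOPT through the JuMP modelling package. This is a generic, illustrative example rather than the transcription used in this thesis, and it assumes the JuMP and Ipopt packages are installed.

```julia
using JuMP, Ipopt

# Minimal constrained NLP handed to Ipopt through JuMP (illustrative only).
model = Model(Ipopt.Optimizer)
set_silent(model)

@variable(model, x[1:2] >= 0)                         # design variables
@objective(model, Min, (x[1] - 3)^2 + (x[2] - 2)^2)   # smooth, non-linear cost
@constraint(model, x[1] * x[2] >= 1)                  # non-linear (quadratic) constraint
@constraint(model, x[1] + x[2] <= 4)                  # linear constraint

optimize!(model)
value.(x), objective_value(model)                      # local minimizer and cost
```

In the actual trajectory problem, the decision vector plays the role of `x` (flyby states, thrust controls, times of flight), and the defect and flyby constraints take the place of the two toy constraints above.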
-        \subsubsection{Linesearch Method}
+        \subsubsection{Interior Point Linesearch Method}

        As mentioned above, this project utilized IPOPT which leveraged an Interior Point
        Linesearch method. A linesearch algorithm is one which attempts to find the optimum
        of a non-linear problem by first taking an initial guess $x_k$. The algorithm then
        determines a step direction (in this case through the use of either automatic
        differentiation or finite differencing to calculate the derivatives of the
-        non-linear problem) and a step length. The linesearch algorithm then continues to
+        cost function) and a step length. The linesearch algorithm then continues to
        step the initial guess, now labeled $x_{k+1}$ after the addition of the ``step''
        vector and iterates this process until predefined termination conditions are met.

@@ -83,15 +79,42 @@
        was sufficient merely that the non-linear constraints were met, therefore optimization
        (in the particular step in which IPOPT was used) was unnecessary.

-        \subsubsection{Multiple-Shooting Algorithms}
+        \subsubsection{Shooting Schemes for Solving a Two-Point Boundary Value Problem}

-            Now that we have software defined to optimize non-linear problems, what remains is
-            determining the most effective way to define the problem itself. The most simple
-            form of a trajectory optimization might employ a single shooting algorithm, which
-            propagates a state, given some control variables forward in time to the epoch of
-            interest. The controls over this time period are then modified in an iterative
-            process, using the NLP optimizer, until the target state and the propagated state
-            matches. This technique can be visualized in Figure~\ref{single_shoot_fig}.
+            One straightforward approach to trajectory corrections is a single shooting
+            algorithm, which propagates a state, given some control variables, forward in time
+            to the epoch of interest. The controls over this time period are then modified in an
+            iterative process, using the correction scheme, until the target state and the
+            propagated state match.
+
+            As an example, we can consider the Two-Point Boundary Value Problem (TPBVP) defined
+            by:
+
+            \begin{equation}
+                y''(t) = f(t, y(t), y'(t)), \quad y(t_0) = y_0, \quad y(t_f) = y_f
+            \end{equation}
+
+            \noindent
+            We can then redefine the problem as an initial-value problem:
+
+            \begin{equation}
+                y''(t) = f(t, y(t), y'(t)), \quad y(t_0) = y_0, \quad y'(t_0) = x
+            \end{equation}
+
+            \noindent
+            With $y(t, x)$ denoting the solution to that problem. If $y(t_f, x) = y_f$, then the
+            solution to the initial-value problem is also a solution to the TPBVP. Therefore, we
+            can use a root-finding algorithm, such as the bisection method, Newton's Method, or
+            even Laguerre's method, to find the roots of:
+
+            \begin{equation}
+                F(x) = y(t_f, x) - y_f
+            \end{equation}
+
+            \noindent
+            Once a root $x_0$ has been found, the corresponding initial-value solution
+            $y(t, x_0)$ also solves the TPBVP. This technique for solving a Two-Point Boundary
+            Value Problem can be visualized in Figure~\ref{single_shoot_fig}.

        \begin{figure}[H]
            \centering
@@ -108,15 +131,15 @@
        as this one.

        However, some problems require the use of a more flexible algorithm. In these cases,
-            sometimes a multiple-shooting algorithm can provide that flexibility and allow the
-            NLP solver to find the optimal control faster.
In a multiple shooting algorithm, - rather than having a single target point at which the propagated state is compared, - the target orbit is broken down into multiple arcs, then end of each of which can be - seen as a separate target. At each of these points we can then define a separate - control. The end state of each arc and the beginning state of the next must then be - equal for a valid arc, as well as the final state matching the target final state. - This changes the problem to have far more constraints, but also increased freedom - due to having more control variables. + sometimes a multiple-shooting algorithm can provide that flexibility and reduced + sensitivity. In a multiple shooting algorithm, rather than having a single target + point at which the propagated state is compared, the target orbit is broken down + into multiple arcs, then end of each of which can be seen as a separate target. At + each of these points we can then define a separate control, which may include the + states themselves. The end state of each arc and the beginning state of the next + must then be equal for a valid arc (with the exception of velocity discontinuities + if allowed for maneuvers at that point), as well as the final state matching the + target final state. \begin{figure}[H] \centering @@ -136,81 +159,21 @@ \section{Monotonic Basin Hopping Algorithms} - % TODO: This needs to be rewritten to be general, then add the appropriate specific - % implementation details to the approach chapter + The techniques discussed thus far are useful for finding local optima. However, we would + also like to traverse the search space in an attempt to determine the global optima over the + entire search space. One approach to this would be to discretize the search space and test + each point as an input to a local optimization scheme. In order to achieve sufficient + coverage of the search space, however, this often requires long processing times in a + high-dimensional environment. - The aim of a monotonic basin hopping algorithm is to provide an efficient method for - completely traversing a large search space and providing many seed values within the - space for an ''inner loop`` solver or optimizer. These solutions are then perturbed - slightly, in order to provide higher fidelity searching in the space near valid - solutions in order to fully explore the vicinity of discovered local minima. This - makes it an excellent algorithm for problems with a large search space, including - several clusters of local minima, such as this application. - - The algorithm contains two loops, the size of each of which can be independently - modified (generally by specifying a ''patience value``, or number of loops to - perform, for each) to account for trade-offs between accuracy and performance depending on - mission needs and the unique qualities of a certain search space. - - The first loop, the ''search loop``, first calls the random mission generator. This - generator produces two random missions as described in - Section~\ref{random_gen_section} that differ only in that one contains random flyby - velocities and control thrusts and the other contains Lambert's-solved flyby - velocities and zero control thrusts. For each of these guesses, the NLP solver is - called. If either of these mission guesses have converged onto a valid solution, the - lower loop, the ''drill loop`` is entered for the valid solution. 
After the - convergence checks and potentially drill loops are performed, if a valid solution - has been found, this solution is stored in an archive. If the solution found is - better than the current best solution in the archive (as determined by a - user-provided cost function of fuel usage, $C_3$ at launch, and $v-\infty$ at - arrival) then the new solution replaces the current best solution and the loop is - repeated. Taken by itself, the search loop should quickly generate enough random - mission guesses to find all ''basins`` or areas in the solution space with valid - trajectories, but never attempts to more thoroughly explore the space around valid - solutions within these basins. - - The drill loop, then, is used for this purpose. For the first step of the drill - loop, the current solution is saved as the ''basin solution``. If it's better than - the current best, it also replaces the current best solution. Then, until the - stopping condition has been met (generally when the ''drill counter`` has reached - the ''drill patience`` value) the current solution is perturbed slightly by adding - or subtracting a small random value to the components of the mission. - - The performance of this perturbation in terms of more quickly converging upon the - true minimum of that particular basin, as described in detail by - Englander\cite{englander2014tuning}, is highly dependent on the distribution - function used for producing these random perturbations. While the intuitive choice - of a simple Gaussian distribution would make sense to use, it has been found that a - long-tailed distribution, such as a Cauchy distribution or a Pareto distribution is - more robust in terms of well chose boundary conditions and initial seed solutions as - well as more performant in time required to converge upon the minimum for that basin. - - Because of this, the perturbation used in this implementation follows a - bi-directional, long-tailed Pareto distribution generated by the following - probability density function: - - \begin{equation} - 1 + - \left[ \frac{s}{\epsilon} \right] \cdot - \left[ \frac{\alpha - 1}{\frac{\epsilon}{\epsilon + r}^{-\alpha}} \right] - \end{equation} - - Where $s$ is a random array of signs (either plus one or minus one) with dimension - equal to the perturbed variable and bounds of -1 and 1, $r$ is a uniformly - distributed random array with dimension equal to the perturbed variable and bounds - of 0 and 1, $\epsilon$ is a small value (nominally set to $1e-10$), and $\alpha$ is - a tuning parameter to determine the size of the tails and width of the distribution - set to $1.01$, but easily tunable. - - The perturbation function then steps through each parameter of the mission, - generating a new guess with the parameters modified by the Pareto distribution. - After this perturbation, the NLP solver is then called again to find a valid - solution in the vicinity of this new guess. If the solution is better than the - current basin solution, it replaces that value and the drill counter is reset to - zero. If it is better than the current total best, it replaces that value as well. - Otherwise, the drill counter increments and the process is repeated. Therefore, the - drill patience allows the mission designer to determine a maximum number of - iterations to perform without improvement in a row before ending the drill loop. - This process can be repeated essentially ''search patience`` number of times in - order to fully traverse all basins. 
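The basin-hopping loop described in this section can be sketched in a few lines of Julia. The snippet below is illustrative only: Optim.jl's Nelder-Mead and a Gaussian perturbation stand in for the IPOPT-based inner loop and the problem-specific perturbation used in this work, and the test function is a generic multi-basin cost.

```julia
using Optim

# Minimal basin-hopping sketch: perturb, locally optimize, keep the best.
# `f` is the scalar cost, `x0` the starting guess, `σ` the perturbation scale.
function basin_hop(f, x0; σ=1.0, hops=50)
    best_x = copy(x0)
    best_f = f(x0)
    x      = copy(x0)

    for _ in 1:hops
        # Random perturbation of the current point (the "hop").
        trial = x .+ σ .* randn(length(x))

        # Local optimization from the perturbed point (stand-in for the inner loop).
        res   = optimize(f, trial, NelderMead())
        x_loc = Optim.minimizer(res)
        f_loc = Optim.minimum(res)

        # Monotonic acceptance: only move when the new basin is strictly better.
        if f_loc < best_f
            best_x, best_f = x_loc, f_loc
            x = x_loc
        end
    end
    return best_x, best_f
end

# Example: a cost function with many local basins.
rastrigin(x) = 10 * length(x) + sum(xi^2 - 10 * cos(2π * xi) for xi in x)
xbest, fbest = basin_hop(rastrigin, [3.0, -2.5]; σ=0.8, hops=100)
```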
+ To solve this problem, a technique was described by Wales and Doye in 1997 + \cite{wales1997global} called a basin-hopping algorithm. This algorithm performs a random + perturbation of the input states, optimizes using a local optimizer, then either accepts the + new coordinates and performs a further perturbation, or rejects them and tests a new set of + randomly-generated inputs. + This allows the algorithm to test many different regions of the search space in order to + determine which ``basins'' contain local optima. If these local optima are found, the + algorithm can attempt to improve the local optima based on parameters defined by the + algorithm designer (by perturbation and re-applying the local optimization scheme) or search + for a new basin in the search space. diff --git a/julia/lamberts_fig.jl b/julia/lamberts_fig.jl new file mode 100644 index 0000000..f689993 --- /dev/null +++ b/julia/lamberts_fig.jl @@ -0,0 +1,17 @@ +using PlotlyJS: savefig + +r1 = oe_to_xyz([Earth.a, 0.6, 0.001, 0., 0., deg2rad(10.)], Sun.μ)[1:3] +r2 = oe_to_xyz([Earth.a, 0.6, 0.001, 0., 0., deg2rad(140.)], Sun.μ)[1:3] +tof = 0.5year +v1 = Thesis.lamberts1(r1,r2,tof)[1] +v2 = Thesis.lamberts2(r1,r2,tof)[1] +state1 = [ r1; v1; 1000. ] +state2 = [ r1; v2; 1000. ] +path1 = prop(state1, tof) +path2 = prop(state2, tof) +plot([path2, path1], + title="Lambert's Problem Solutions", + colors=["#F00", "#0FF"], + labels=["Type I", "Type II"], + mode="light") + diff --git a/julia/src/utilities/lamberts.jl b/julia/src/utilities/lamberts.jl index 67bb787..fdb7136 100644 --- a/julia/src/utilities/lamberts.jl +++ b/julia/src/utilities/lamberts.jl @@ -71,6 +71,120 @@ function lamberts(planet1::Body,planet2::Body,leave::DateTime,arrive::DateTime) end +function lamberts1(r1::Vector{Float64},r2::Vector{Float64},tof_req::Float64) + μ = Sun.μ + r1mag = norm(r1) + r2mag = norm(r2) + + cos_dθ = dot(r1,r2)/(r1mag*r2mag) + dθ = atan(r2[2],r2[1]) - atan(r1[2],r1[1]) + dθ = dθ > 2π ? dθ-2π : dθ + dθ = dθ < 0.0 ? dθ+2π : dθ + DM = -1 + A = DM * √(r1mag * r2mag * (1 + cos_dθ)) + dθ == 0 || A == 0 && error("Can't solve Lambert's Problem") + + ψ, c2, c3 = 0, 1//2, 1//6 + ψ_down = -4π ; ψ_up = 4π^2 + y = r1mag + r2mag + (A*(ψ*c3 - 1)) / √(c2) ; χ = √(y/c2) + tof = ( χ^3*c3 + A*√(y) ) / √(μ) + + i = 0 + while abs(tof-tof_req) > 1e-2 + y = r1mag + r2mag + (A*(ψ*c3 - 1)) / √(c2) + while y/c2 <= 0 + # println("You finally hit that weird issue... ") + ψ += 0.1 + if ψ > 1e-6 + c2 = (1 - cos(√(ψ))) / ψ ; c3 = (√(ψ) - sin(√(ψ))) / √(ψ^3) + elseif ψ < -1e-6 + c2 = (1 - cosh(√(-ψ))) / ψ ; c3 = (-√(-ψ) + sinh(√(-ψ))) / √((-ψ)^3) + else + c2 = 1//2 ; c3 = 1//6 + end + y = r1mag + r2mag + (A*(ψ*c3 - 1)) / √(c2) + end + χ = √(y/c2) + + tof = ( c3*χ^3 + A*√(y) ) / √(μ) + tof < tof_req ? ψ_down = ψ : ψ_up = ψ + ψ = (ψ_up + ψ_down) / 2 + + if ψ > 1e-6 + c2 = (1 - cos(√(ψ))) / ψ ; c3 = (√(ψ) - sin(√(ψ))) / √(ψ^3) + elseif ψ < -1e-6 + c2 = (1 - cosh(√(-ψ))) / ψ ; c3 = (-√(-ψ) + sinh(√(-ψ))) / √((-ψ)^3) + else + c2 = 1//2 ; c3 = 1//6 + end + + i += 1 + i > 500 && return [NaN,NaN,NaN],[NaN,NaN,NaN] + end + + f = 1 - y/r1mag ; g_dot = 1 - y/r2mag ; g = A * √(y/μ) + v0t = (r2 - f*r1)/g ; vft = (g_dot*r2 - r1)/g + return v0t, vft, tof_req + +end + +function lamberts2(r1::Vector{Float64},r2::Vector{Float64},tof_req::Float64) + μ = Sun.μ + r1mag = norm(r1) + r2mag = norm(r2) + + cos_dθ = dot(r1,r2)/(r1mag*r2mag) + dθ = atan(r2[2],r2[1]) - atan(r1[2],r1[1]) + dθ = dθ > 2π ? dθ-2π : dθ + dθ = dθ < 0.0 ? 
dθ+2π : dθ + DM = 1 + A = DM * √(r1mag * r2mag * (1 + cos_dθ)) + dθ == 0 || A == 0 && error("Can't solve Lambert's Problem") + + ψ, c2, c3 = 0, 1//2, 1//6 + ψ_down = -4π ; ψ_up = 4π^2 + y = r1mag + r2mag + (A*(ψ*c3 - 1)) / √(c2) ; χ = √(y/c2) + tof = ( χ^3*c3 + A*√(y) ) / √(μ) + + i = 0 + while abs(tof-tof_req) > 1e-2 + y = r1mag + r2mag + (A*(ψ*c3 - 1)) / √(c2) + while y/c2 <= 0 + # println("You finally hit that weird issue... ") + ψ += 0.1 + if ψ > 1e-6 + c2 = (1 - cos(√(ψ))) / ψ ; c3 = (√(ψ) - sin(√(ψ))) / √(ψ^3) + elseif ψ < -1e-6 + c2 = (1 - cosh(√(-ψ))) / ψ ; c3 = (-√(-ψ) + sinh(√(-ψ))) / √((-ψ)^3) + else + c2 = 1//2 ; c3 = 1//6 + end + y = r1mag + r2mag + (A*(ψ*c3 - 1)) / √(c2) + end + χ = √(y/c2) + + tof = ( c3*χ^3 + A*√(y) ) / √(μ) + tof < tof_req ? ψ_down = ψ : ψ_up = ψ + ψ = (ψ_up + ψ_down) / 2 + + if ψ > 1e-6 + c2 = (1 - cos(√(ψ))) / ψ ; c3 = (√(ψ) - sin(√(ψ))) / √(ψ^3) + elseif ψ < -1e-6 + c2 = (1 - cosh(√(-ψ))) / ψ ; c3 = (-√(-ψ) + sinh(√(-ψ))) / √((-ψ)^3) + else + c2 = 1//2 ; c3 = 1//6 + end + + i += 1 + i > 500 && return [NaN,NaN,NaN],[NaN,NaN,NaN] + end + + f = 1 - y/r1mag ; g_dot = 1 - y/r2mag ; g = A * √(y/μ) + v0t = (r2 - f*r1)/g ; vft = (g_dot*r2 - r1)/g + return v0t, vft, tof_req + +end + function porkchop(planet1::Body, planet2::Body, departures::Vector{DateTime}, arrivals::Vector{DateTime}) v∞_in = [ norm(lamberts(planet1, planet2, depart, arrive)[2]) for depart in departures, arrive in arrivals ] v∞_out = [ norm(lamberts(planet1, planet2, depart, arrive)[1]) for depart in departures, arrive in arrivals ] diff --git a/julia/src/utilities/plotting.jl b/julia/src/utilities/plotting.jl index 3333c9d..ae41a22 100644 --- a/julia/src/utilities/plotting.jl +++ b/julia/src/utilities/plotting.jl @@ -89,14 +89,22 @@ function standard_layout(limit::Float64, title::AbstractString; mode="dark") plot_bgcolor="rgba(255,255,255,0.0)", scene_aspectmode = "data", scene_xaxis = attr( + title = "X (km)", + exponentformat = "power", autorange = true, color="rgb(0,0,0)" ), scene_yaxis = attr( + title = "Y (km)", + exponentformat = "power", autorange = true, color="rgb(0,0,0)" ), - scene_zaxis_visible = false + scene_zaxis = attr( + title = "Z (km)", + exponentformat = "power", + visible = false, + ), ) end end @@ -287,7 +295,7 @@ function plot(paths::Vector{Matrix{Real}}; traces = [ trace... ] for i = 2:length(paths) color = colors === nothing ? random_color() : colors[i] - trace, new_limit = gen_plot(paths[i],label=labels[i],color=colors[i],markers=markers) + trace, new_limit = gen_plot(paths[i],label=labels[i],color=color,markers=markers) push!(traces, trace...) limit = max(limit, new_limit) end