%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%2345678901234567890123456789012345678901234567890123456789012345678901234567890
% 1 2 3 4 5 6 7 8
\documentclass[letterpaper, 10 pt, conference]{ieeeconf} % Comment this line out if you need a4paper
%\documentclass[a4paper, 10pt, conference]{ieeeconf} % Use this line for a4 paper
\IEEEoverridecommandlockouts % This command is only needed if
% you want to use the \thanks command
\overrideIEEEmargins % Needed to meet printer requirements.
%In case you encounter the following error:
%Error 1010 The PDF file may be corrupt (unable to open PDF file) OR
%Error 1000 An error occurred while parsing a contents stream. Unable to analyze the PDF file.
%This is a known problem with pdfLaTeX conversion filter. The file cannot be opened with acrobat reader
%Please use one of the alternatives below to circumvent this error by uncommenting one or the other
%\pdfobjcompresslevel=0
%\pdfminorversion=4
% See the \addtolength command later in the file to balance the column lengths
% on the last page of the document
% The following packages can be found on http://www.ctan.org
%\usepackage{graphics} % for pdf, bitmapped graphics files
%\usepackage{epsfig} % for postscript graphics files
%\usepackage{mathptmx} % assumes new font selection scheme installed
%\usepackage{times} % assumes new font selection scheme installed
\usepackage{amsmath} % assumes amsmath package installed
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{epstopdf}
%\usepackage{amssymb} % assumes amsmath package installed
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{booktabs}
\title{\LARGE \bf
Real-time Trajectory Planning Framework for UAVs \\
in Unknown and Dynamic Environments
}
\author{Zheyuan Mei% <-this % stops a space
\thanks{The author is with the State Key Laboratory of Mechanical System and Vibration,
School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
(email: \tt\small chushengbajinban@sjtu.edu.cn)}%
}
\begin{document}
\maketitle
\thispagestyle{empty}
\pagestyle{empty}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
This paper introduces a novel trajectory planning framework for unmanned aerial vehicles (UAVs)
navigating in unknown and dynamic environments. Our approach uses an enhanced Simple Online and
Realtime Tracking (SORT) algorithm adapted for three-dimensional space to track dynamic obstacles
in real time. Future trajectories of these obstacles are predicted by Bezier curve fitting,
with an analytical solution obtained through the Lagrange multiplier method.
This allows for fast computation, making the method well suited to scenarios with multiple
dynamic obstacles.
The framework also extends traditional path planning and trajectory optimization techniques to
handle dynamic collisions effectively. A key feature is its ability to detect and respond
to hazardous areas created by blind spots, enabling proactive avoidance of potential collisions
with unseen dynamic obstacles.
Both simulations and real-world experiments validate the effectiveness of our approach,
demonstrating its ability to navigate complex and dynamic environments efficiently and safely.
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{INTRODUCTION}
Trajectory planning for unmanned aerial vehicles (UAVs) in unknown and dynamic environments is a critical challenge,
particularly as UAVs are increasingly used in applications such as surveillance, delivery, search and rescue,
and environmental monitoring. These environments often contain both static and dynamic obstacles, as shown in
Fig.\ref{fig:realworld}, which appear unpredictably, making real-time path planning essential to ensure the
safety and operational efficiency of UAVs. Effective trajectory planning must not only account for the current
positions of obstacles but also anticipate their potential future movements to proactively avoid collisions.
Recent advancements in dynamic obstacle avoidance have significantly enhanced UAV capabilities.
Traditional path planning methods, such as the A* algorithm, are designed for static environments and
lack the flexibility needed for dynamic conditions.
To overcome these limitations, hybrid methods have been developed.
For example, Ghambari et al.\cite{ghambariUAVPathPlanning2020} integrated A* with evolutionary algorithms like
differential evolution to enable efficient path recalculation when encountering moving obstacles.
Chen et al.\cite{chenDynamicPathPlanning2020} introduced a rolling planning algorithm with A* for
continuous path adjustments based on real-time feedback.
Zammit et al.\cite{zammit3DRealtimePath2021,zammitRealtime3DUAV2023} further improved these approaches
by combining A* with ripple reduction and sampling methods to boost computational efficiency and responsiveness.
However, these methods often struggle with predicting future states and proactively avoiding potential risks.
The Artificial Potential Field (APF) method is another widely adopted approach due to its simplicity and suitability
for real-time application. For instance, Budiyanto et al.\cite{budiyantoUAVObstacleAvoidance2015a} modified APF to
dynamically alter repulsive forces based on obstacle speed and distance.
However, APF methods face inherent challenges such as local minima traps and instability when dealing with dynamic obstacles.
To mitigate these issues, enhancements have been made by introducing steering forces and modified potential fields to better
handle the dynamic nature of real-world environments\cite{fengUAVDynamicPath2021},
while Rao et al.\cite{raoIntegratedRealtimeUAV2023}
combined APF with trajectory optimization using the Chebyshev pseudospectral method.
Despite these improvements, challenges such as scalability and uncertainty in predictions persist.
The application of Model Predictive Control (MPC) in dynamic obstacle avoidance has also been explored in recent studies.
Yao et al.\cite{yaoRealtimePathPlanning2015} proposed a 3D path planning method using varying receding-horizon optimization
inspired by MPC, although the effectiveness of the approach in significantly larger or more cluttered
environments is not fully addressed.
Xu et al.\cite{xuDPMPCPlannerRealtimeUAV2022} developed a two-layer DPMPC framework that combines static trajectory planning
with real-time obstacle avoidance, effectively managing uncertainties but facing potential scalability issues
in cluttered environments.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/fig_realworld.jpg}
\caption{The drone navigates in an environment with unknown dynamic and static obstacles}
\label{fig:realworld}
\end{figure}
This paper addresses these gaps by proposing a comprehensive framework for UAV trajectory planning in unknown
and dynamic environments that emphasizes computational efficiency and robust obstacle avoidance.
The proposed method enables UAVs to effectively track multiple dynamic obstacles and predict their future trajectories
analytically, reducing computational overhead and enhancing real-time performance. Compared to existing methods,
our approach improves safety and reliability by not only reacting to immediate collision risks but also proactively
avoiding potential future collisions, even with obstacles emerging from blind spots. We demonstrate the efficiency
and robustness of our method in various complex and dynamic environments through simulations. Additionally,
real-world experiments validate its capability to handle challenging scenarios.
Our contributions are summarized as follows:
\begin{enumerate}
\item We introduce a robust and efficient framework that supports continuous tracking and analytical future
trajectory prediction for multiple dynamic obstacles, using Bezier curves and Bernstein polynomials.
\item We develop a comprehensive approach to manage and avoid both imminent and potential collision risks,
ensuring safer UAV navigation in unpredictable and cluttered environments.
\end{enumerate}
\section{PROBLEM STATEMENT AND SYSTEM OVERVIEW}
\subsection{Problem Statement}
For a drone navigating in an unknown and dynamic environment, the mission is to reach a target while avoiding
collisions with both static and dynamic obstacles.
After the perception module constructs a local static map $\mathcal{M}$ of the environment and
detects dynamic obstacles, the objective is to plan a safe trajectory that avoids currently
detected obstacles while also accounting for potential collisions in the future.
\subsection{System Overview}
The system architecture, illustrated in Fig.\ref{fig:system_overview},
consists of three main components: perception, planning, and control.
The drone's onboard sensors gather information about both the static environment and dynamic obstacles.
The static data is used to construct a static grid map, while the processed dynamic obstacle information,
along with the static map, is fed into the path searching and trajectory optimization modules.
The optimized trajectory is then sent to the control module to guide the drone's flight.
This work primarily focuses on the post-processing of dynamic obstacles,
path searching, and trajectory optimization for dynamic environments.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{fig/fig_system.png}
\caption{A diagram of the system architecture}
\label{fig:system_overview}
\end{figure}
\section{METHOD}
% TODO: unify the notation for M and B
In this section, we begin by discussing the real-time tracking and prediction of dynamic obstacles.
Based on this, we present a framework for path searching and trajectory optimization designed for
dynamic obstacles to ensure safe navigation in complex and ever-changing environments.
\subsection{Real-time Tracking of Dynamic Obstacles}
\label{sec:tracking}
In environments with multiple dynamic obstacles, it is essential not only to detect and recognize
these obstacles in a single frame, but also to track them across multiple frames,
which enhances the accuracy and reliability of decision-making.
Object detection is performed using YOLOv5 on RGBD images, with depth data used to determine
obstacle positions in 3D space through simple transformations.
To support tracking in 3D space, we modified the SORT
algorithm \cite{bewleySimpleOnlineRealtime2016a} accordingly.
Each dynamic obstacle is tracked by a Kalman filter, which operates on a six-dimensional
state vector representing the obstacle's 3D position \([p_x, p_y, p_z]\) and velocity \([v_x, v_y, v_z]\).
The state transition matrix \(F\) and measurement matrix \(H\) are defined as:
\begin{equation}
F = \begin{pmatrix}
I_{3 \times 3} & I_{3 \times 3} \\
0_{3 \times 3} & I_{3 \times 3}
\end{pmatrix}, \quad
H = \begin{pmatrix}
I_{3 \times 3} & 0_{3 \times 3}
\end{pmatrix}
\end{equation}
To associate new detections with the Kalman filter predictions, we construct a cost matrix based on
the Euclidean distance between predicted tracker positions and new detections.
The Hungarian method is then used to solve this assignment problem,
ensuring optimal matching and accurate tracking of each dynamic obstacle.
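For concreteness, the following is a minimal NumPy/SciPy sketch of this tracking step, assuming a constant-velocity model with unit time step; the function names and the noise matrices \texttt{Q} and \texttt{R} are illustrative placeholders rather than the exact implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

I3, Z3 = np.eye(3), np.zeros((3, 3))
F = np.block([[I3, I3], [Z3, I3]])  # constant-velocity transition
H = np.block([I3, Z3])              # only position is measured

def kf_predict(x, P, Q):
    # propagate the 6D state and covariance one frame ahead
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, R):
    # correct the prediction with a new 3D detection z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ (z - H @ x), (np.eye(6) - K @ H) @ P

def associate(pred_pos, det_pos):
    # cost matrix of Euclidean distances between predicted
    # tracker positions and detections; the Hungarian method
    # returns the optimal matching
    cost = np.linalg.norm(pred_pos[:, None] - det_pos[None],
                          axis=2)
    return linear_sum_assignment(cost)
\end{verbatim}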
\subsection{Prediction of Dynamic Obstacles}
\label{sec:prediction}
Predicting the trajectories of multiple dynamic obstacles in real time is also crucial for safe navigation.
Our prediction is based on Bezier curves, which can be expressed in matrix form
(using a fifth-order Bezier curve as an example):
\begin{equation}
\label{eq:bezier}
B(t)=\mathbf{T}(t) \cdot \mathbf{M} \cdot \mathbf{c}
\end{equation}
where \( \mathbf{T}(t) = [1, t, t^2, t^3, t^4, t^5] \) is the time vector,
\( \mathbf{c} = [c_0, c_1, c_2, c_3, c_4, c_5]^T \) stacks the control points \( c_i \) of the Bezier curve,
and \( \mathbf{M} \) is the transformation matrix derived from the Bernstein polynomial coefficients:
\begin{equation}
\mathbf{M} = \left[(-1)^{i-j} \binom{n}{i} \binom{i}{j} \, \middle| \, i \geq j\right]_{6 \times 6}
\end{equation}
where $n=5$ and the row and column indices $i, j$ run from $0$ to $n$.
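As a sanity check, $\mathbf{M}$ can be generated programmatically for any order $n$; a short sketch (the function name is ours):
\begin{verbatim}
from math import comb
import numpy as np

def bezier_matrix(n=5):
    # M such that B(t) = T(t) @ M @ c,
    # with T(t) = [1, t, ..., t^n]
    M = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(i + 1):
            M[i, j] = (-1) ** (i - j) * comb(n, i) * comb(i, j)
    return M
\end{verbatim}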
In the dynamic object prediction problem, $\mathbf{p}_t \in \mathbb{R}^3$ denotes the position of a
dynamic obstacle at time $t$ in the world coordinate system.
A queue of length $l$, obtained from the tracking module in \ref{sec:tracking},
is maintained to store the historical positions over $l$ frames, represented as
$\mathbf{P} = [\mathbf{p}_{t_1}, \mathbf{p}_{t_2}, \dots, \mathbf{p}_{t_l}]^T$.
As mentioned in \cite{hanFasttrackerRobustAerial2021}, assuming the desired prediction time interval
is $[t_l, t_p]$, a loss function is constructed to fit a
Bezier curve over the interval $[t_1, t_p]$, yielding the predicted trajectory for $[t_l, t_p]$
(with a slight abuse of notation regarding dimensions):
\begin{equation}
\label{eq:bezier_loss_init}
J_{\text {pred}}=\sum_{i=1}^l w_{t_i}\left\|\hat{B}\left(t_i\right)
-\mathbf{p}_{t_i}\right\|_2^2+w_r \int_{t_1}^{t_p}\left\|\hat{B}^{(2)}(t)\right\|_2^2 d t
\end{equation}
where $\hat{B}$ is the predicted trajectory; the first term minimizes the distance residuals,
while the second term is a regularizer that prevents over-fitting.
By substituting Eq(\ref{eq:bezier}), the first term of Eq(\ref{eq:bezier_loss_init})
can be transformed into a matrix form:
\begin{equation}
\label{eq:first_term_of_loss}
\sum_{i=1}^l w_{t_i}\left\|\hat{B}\left(t_i\right) -\mathbf{p}_{t_i}\right\|_2^2 =
(\tilde{\mathbf{T}} \mathbf{M C}-\mathbf{P})^T W_t(\tilde{\mathbf{T}} \mathbf{M C}-\mathbf{P})
\end{equation}
where $\tilde{\mathbf{T}} = [\mathbf{T}(t_1), \mathbf{T}(t_2), \ldots, \mathbf{T}(t_l)]^T$.
Due to the convex hull property of Bezier curves, points on the curve always lie within the
convex hull formed by its control points. Thus, the second term in the loss function
can be approximated as:
\begin{equation}
\label{eq:second_term_of_loss}
\int_{t_1}^{t_p}\left\|\hat{B}^{(2)}(t)\right\|_2^2 d t
\simeq\left\|\mathbf{D}_{n-1} \mathbf{D}_n \mathbf{C}\right\|_2^2
\end{equation}
where $\mathbf{C}=\left[\begin{array}{llllll}\mathbf{c}_0 & \mathbf{c}_1 & \mathbf{c}_2 &
\mathbf{c}_3 & \mathbf{c}_4 & \mathbf{c}_5\end{array}\right]^T$
stacks the Bezier control points row-wise,
and $\mathbf{D}_n$ is the difference matrix, for example:
\begin{equation}
\mathbf{D}_3 = \begin{pmatrix}
-1 & 1 & 0 \\
0 & -1 & 1
\end{pmatrix}
\end{equation}
Based on Eq(\ref{eq:first_term_of_loss}) and Eq(\ref{eq:second_term_of_loss}), the loss function can
be formulated in matrix form:
\begin{equation}
\label{eq:bezier_loss_matrix_constraints}
\begin{array}{ll}
\min _{\mathbf{C}} & \mathbf{C}^T \mathbf{Q C}-2 \mathbf{P}^T \mathbf{U}^T \mathbf{C} \\
\text { s.t. } & \mathbf{A C}=\mathbf{p}_{t_l}
\end{array}
\end{equation}
where
\begin{equation}
\begin{aligned}
\mathbf{Q} & =\mathbf{M}^T \tilde{\mathbf{T}}^T W_t \tilde{\mathbf{T}} \mathbf{M}
+\mathbf{D}_n^T \mathbf{D}_{n-1}^T W_r \mathbf{D}_{n-1} \mathbf{D}_n \\
\mathbf{U} & =\mathbf{M}^T \tilde{\mathbf{T}}^T W_t \\
\mathbf{A} & =\mathbf{T}\left(t_l\right) \mathbf{M}
\end{aligned}
\end{equation}
Eq(\ref{eq:bezier_loss_matrix_constraints}) is a quadratic programming problem with equality constraints,
which can be transformed into an unconstrained optimization problem using the Lagrange multiplier method.
Define the Lagrangian as:
\begin{equation}
\mathcal{L}(\mathbf{C}, \lambda)=\mathbf{C}^T \mathbf{Q C}-2 \mathbf{P}^T \mathbf{U}^T \mathbf{C}
+2\lambda^T\left(\mathbf{A C}-\mathbf{p}_{t_l}\right)
\end{equation}
where $\lambda$ is the vector of Lagrange multipliers. Taking the partial derivatives of $\mathcal{L}$
with respect to $\mathbf{C}$ and $\lambda$ and setting them to zero, we obtain:
\begin{equation}
\begin{aligned}
& \frac{\partial \mathcal{L}}{\partial \mathbf{C}}=2 \mathbf{Q C}-2 \mathbf{U} \mathbf{P}+2 \mathbf{A}^T \lambda=0 \\
& \frac{\partial \mathcal{L}}{\partial \lambda}=\mathbf{A C}-\mathbf{p}_{t_l}=0
\end{aligned}
\end{equation}
which can be written in matrix form:
\begin{equation}
\label{eq:bezier_optimization_matrix}
\left(\begin{array}{cc}
\mathbf{Q} & \mathbf{A}^T \\
\mathbf{A} & \mathbf{0}
\end{array}\right)\left(\begin{array}{l}
\mathbf{C} \\
\lambda
\end{array}\right)=\left(\begin{array}{l}
\mathbf{U P} \\
\mathbf{p}_{t_l}
\end{array}\right)
\end{equation}
The analytical solution for the Bezier control points $\mathbf{C}$ is given by:
\begin{equation}
\hat{\mathbf{C}}=\mathbf{E U P}-\mathbf{Q}^{-1} \mathbf{A}^T \mathbf{S}^{-1} \mathbf{p}_{t_l}
\end{equation}
where $\mathbf{S}=-\mathbf{A} \mathbf{Q}^{-1} \mathbf{A}^T$ is the Schur complement of $\mathbf{Q}$
in the left-hand-side matrix of Eq(\ref{eq:bezier_optimization_matrix}), and
$\mathbf{E}=\mathbf{Q}^{-1}+\mathbf{Q}^{-1} \mathbf{A}^T \mathbf{S}^{-1} \mathbf{A} \mathbf{Q}^{-1}$.
Finally, the predicted position of dynamic obstacles within the interval $[t_l, t_p]$ is:
\begin{equation}
\hat{\mathbf{p}}_t=\hat{B}(t)=\mathbf{T}(t) \mathbf{M} \hat{\mathbf{C}}
\end{equation}
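In practice, the control points can equivalently be obtained by one linear solve of the KKT system in Eq(\ref{eq:bezier_optimization_matrix}), without forming the Schur complement explicitly. A compact NumPy sketch follows; the uniform weights and the regularization value are illustrative assumptions.
\begin{verbatim}
def diff_matrix(m):
    # (m-1) x m forward differences, e.g. m = 3 gives
    # [[-1, 1, 0], [0, -1, 1]]
    return np.eye(m)[1:] - np.eye(m)[:-1]

def predict_control_points(ts, P, n=5, w_r=1e-2):
    # ts: (l,) observation times; P: (l, 3) positions
    M = bezier_matrix(n)                       # sketch above
    T = np.vander(ts, n + 1, increasing=True)  # stacked T(t_i)
    Wt = np.eye(len(ts))                       # uniform w_{t_i}
    R = diff_matrix(n) @ diff_matrix(n + 1)    # D_{n-1} D_n
    Q = M.T @ T.T @ Wt @ T @ M + w_r * R.T @ R
    U = M.T @ T.T @ Wt
    A = np.vander(ts[-1:], n + 1, increasing=True) @ M
    K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
    rhs = np.vstack([U @ P, P[-1:]])           # [UP; p_{t_l}]
    return np.linalg.solve(K, rhs)[: n + 1]

# predicted positions at query times tq:
# np.vander(tq, 6, increasing=True) @ bezier_matrix(5) @ C
\end{verbatim}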
\subsection{Path Planning}
\label{sec:path_planning}
The primary goal of the path planning module is to find a safe path without considering dynamic
obstacles, while still accounting for potential threats from blind spots.
Dynamic obstacle avoidance will be covered in \ref{sec:trajectory_optimization}.
Before path searching, we update the \textbf{hazardous area} caused by \textbf{blind regions},
based on the current position $\mathbf{p}$ and orientation $\theta$.
A \textbf{blind region} is an area hidden from the drone's view by obstacles, while a
\textbf{hazardous area} is one with a high risk of collision with dynamic obstacles emerging
from these blind spots. Attention to hazardous areas is crucial for UAV safety.
\begin{algorithm}
\caption{Update Hazardous Areas} \label{alg:update_blind_region}
\begin{algorithmic}[1]
\STATE \textbf{Input}: current position $\mathbf{p}$, orientation $\theta$, map $\mathcal{M}$
\STATE \textbf{Param}: maximum visibility range $L$, half of the field of view $\Theta$,
maximum hazardous-area radius $R$
\STATE $\mathcal{M}.\textbf{clearHazardousArea}()$
\FOR{$\Delta \theta$ in $[\theta -\pi/2, \theta + \pi/2]$}
\STATE $\mathbf{p}_{end} \leftarrow \mathbf{p}+\mathbf{n}_{\Delta \theta}L$
\STATE $\mathbf{p}_{block} \leftarrow \textbf{checkLineVisibility}(\mathbf{p},\mathbf{p}_{end})$
\IF{$\mathbf{p}_{block}\neq \text{NULL}$ and $\textbf{isBoundaryBlock}(\mathbf{p}_{block})$}
\IF{$\textbf{isRightBlock}(\mathbf{p}_{block})$}
\STATE $\alpha \leftarrow \Delta \theta-\Theta$
\ELSE
\STATE $\alpha \leftarrow \Delta \theta+\Theta$
\ENDIF
\STATE $r \leftarrow \textbf{min}(\textbf{getObstacleWidth}(\mathbf{p}_{block}),R)$
\STATE $\mathbf{p}_{hazardous} \leftarrow \mathbf{p}_{block}+0.5\,r\,\mathbf{n}_{\alpha}$
\STATE $\mathcal{M}.\textbf{updateHazardousArea}(\mathbf{p}_{hazardous},r)$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
As shown in Algorithm \ref{alg:update_blind_region} and Fig.\ref{fig:visibility_obstruction},
we check and evaluate each angle within the interval $[\theta -\pi/2, \theta + \pi/2]$.
For each angle, \textbf{checkLineVisibility()} checks whether the line segment between $\mathbf{p}$
and $\mathbf{p}_{end}$, which is $L$ units away in the direction $\Delta \theta$, is visible.
If an obstacle blocks this line, \textbf{isBoundaryBlock()} checks if the blocked point
$\mathbf{p}_{block}$ is on the obstacle's boundary, as indicated by the black squares and circles in
Fig.\ref{fig:visibility_obstruction}. Based on $\mathbf{p}_{block}$'s position relative
to the obstacle, we determine the direction $\alpha$ of the hazardous area's center.
The hazardous area's radius is set based on the obstacle's width and a maximum radius $R$.
Finally, \textbf{updateHazardousArea()} updates the map with the center
$\mathbf{p}_{hazardous}$ and radius $r$ of the hazardous area for use in path planning.
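A direct Python transcription of Algorithm \ref{alg:update_blind_region} is sketched below; the map interface mirrors the pseudocode names, and the parameter defaults are illustrative rather than the values used on the drone.
\begin{verbatim}
import numpy as np

def update_hazardous_areas(p, theta, grid_map,
                           L=5.0, Theta=0.4, R=1.5):
    grid_map.clear_hazardous_area()
    # sweep the half-plane of headings around theta
    for dth in np.linspace(theta - np.pi / 2,
                           theta + np.pi / 2, 90):
        n = np.array([np.cos(dth), np.sin(dth)])
        p_block = grid_map.check_line_visibility(p, p + L * n)
        if p_block is None or \
           not grid_map.is_boundary_block(p_block):
            continue
        if grid_map.is_right_block(p_block):
            alpha = dth - Theta
        else:
            alpha = dth + Theta
        r = min(grid_map.get_obstacle_width(p_block), R)
        n_a = np.array([np.cos(alpha), np.sin(alpha)])
        grid_map.update_hazardous_area(p_block + 0.5 * r * n_a,
                                       r)
\end{verbatim}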
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig/fig_visibility_obstruction.png}
\caption{Diagram for updating hazardous areas caused by blind spots}
\label{fig:visibility_obstruction}
\end{figure}
% Path search
After updating the hazardous areas, the path search process proceeds in three stages:
initial extension, A* search, and visibility-based pruning, as shown in Fig.\ref{fig:path_search}.
We use a minimum-jerk approach \cite{mellingerMinimumSnapTrajectory2011}
to extend the initial trajectory by connecting the endpoint of the previous
segment with the next local target. If no prior segment exists, the minimum-jerk curve connects the current
position to the target.
Collision detection is then performed along the initial trajectory within the static grid map. If collisions are detected,
an A* search is initiated between the start and end points of the colliding segment. The resulting A* path is then connected
to the collision-free segments of the initial trajectory.
During this process, we adjust the cost calculation $g(\mathbf{p}_{start},\mathbf{p}_{i})$ within the A* algorithm:
\begin{equation}
\begin{aligned}
\widehat{g}(\mathbf{p}_{start},\mathbf{p}_{i}) & =
\widehat{g}(\mathbf{p}_{start},\mathbf{p}_{i-1})
+ g(\mathbf{p}_{i-1},\mathbf{p}_{i}) \\
& + \lambda_g \textbf{getHazardousAreaCost}(\mathbf{p}_{i})
\end{aligned}
\end{equation}
where $\widehat{g}$ is the modified actual cost, $g(\mathbf{p}_{i-1},\mathbf{p}_{i})$ is
the original cost from the previous to the current position,
$\lambda_g$ is the weight for the hazardous area cost,
and \textbf{getHazardousAreaCost}() calculates the cost of passing through a hazardous area by querying the
map $\mathcal{M}$, updated in Algorithm \ref{alg:update_blind_region}.
This additional cost makes the A* algorithm more sensitive to hazardous areas caused by blind spots,
helping to avoid potential dynamic obstacles.
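Inside a standard A* node expansion, this change amounts to a single extra term when scoring a neighbor; a minimal sketch, with the map query name hypothetical and $\lambda_g$ taken from Table \ref{tab:Parameters}:
\begin{verbatim}
import numpy as np

def g_hat(g_hat_prev, p_prev, p_i, grid_map, lambda_g=4.0):
    # modified actual cost of reaching p_i via p_prev
    step = np.linalg.norm(np.asarray(p_i) - np.asarray(p_prev))
    return (g_hat_prev + step
            + lambda_g * grid_map.get_hazardous_area_cost(p_i))
\end{verbatim}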
Lastly, we prune the path based on visibility, as shown in Fig.\ref{fig:path_search},
with details described in Algorithm 2 of \cite{zhouRobustRealtimeUav2020}.
This pruning straightens the composite path, reducing complexity and facilitating further optimization.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig/fig_path_search.png}
\caption{Diagram of the path search process}
\label{fig:path_search}
\end{figure}
\subsection{Trajectory Optimization in Dynamic Environments}
\label{sec:trajectory_optimization}
Trajectory optimization utilizes B-spline optimization, as outlined in
\cite{zhouRobustEfficientQuadrotor2019}.
This approach optimizes trajectory smoothness and dynamic feasibility
while ensuring effective obstacle avoidance in static environments.
Our method extends this framework for dynamic
environments by incorporating specific loss functions into the optimization process.
According to \cite{zhouRobustEfficientQuadrotor2019}, the trajectory is parameterized by a 3D B-spline curve,
defined by its degree $p_b$, knot vector, and $N_c$ control points $\left\{\mathbf{Q}_i\right\} \in \mathbb{R}^3$.
The goal of the optimization problem is to adjust these control points,
leveraging the properties of B-splines to influence the entire curve.
Since all the objective functions below are based on the waypoints of the B-spline,
rather than directly on the control points, we will first define the expression for the waypoints
$\mathbf{p}_k$ when $p_b = 3$:
\begin{equation}
\mathbf{p}_k=\frac{1}{6}\left(\mathbf{Q}_{k-1}+4 \mathbf{Q}_{k}+ \mathbf{Q}_{k+1}\right)
\end{equation}
where we only sample the waypoint $\mathbf{p}_k$ that is most influenced by $\mathbf{Q}_{k}$.
\subsubsection{Dynamic Obstacle Avoidance Term}
The dynamic obstacle avoidance term ensures that the trajectory maintains a safe distance
from dynamic obstacles. The loss function is:
\begin{equation}
J_{\text {dyn}}=\sum_{i=1}^{N_{\text {obs}}} \sum_{j=p_b}^{N_c-p_b}
\max \left(0, d_{\text {min}}-||\mathbf{p}^{\text {obs}}_i(t_j)-\mathbf{p}_j||\right)^3
\end{equation}
where $N_{\text {obs}}$ is the number of dynamic obstacles, $p_b$ is the order of the B-spline curve,
$d_{\text {min}}$ is the minimum safe distance,
$t_j$ is the time for waypoint $\mathbf{p}_j$, and $\mathbf{p}^{\text {obs}}_i(t_j)$ is the predicted
position of the $i$-th obstacle at $t_j$, obtained in \ref{sec:prediction}.
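A minimal sketch of this penalty for a cubic uniform B-spline is given below; the predictor interface \texttt{obs\_pred(i, t)} stands in for the prediction of \ref{sec:prediction} and is an assumed name, with $d_{\text{min}}$ from Table \ref{tab:Parameters}.
\begin{verbatim}
import numpy as np

def J_dyn(Q, times, obs_pred, n_obs, d_min=1.0, p_b=3):
    # Q: (N_c, 3) control points; times[j]: time of waypoint j
    cost = 0.0
    for j in range(p_b, len(Q) - p_b):
        # waypoint formula above (p_b = 3)
        p_j = (Q[j - 1] + 4.0 * Q[j] + Q[j + 1]) / 6.0
        for i in range(n_obs):
            gap = d_min - np.linalg.norm(
                obs_pred(i, times[j]) - p_j)
            cost += max(0.0, gap) ** 3  # cubic hinge penalty
    return cost
\end{verbatim}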
\subsubsection{Hazardous Area Term}
This term ensures the trajectory avoids regions near blind spots,
shown in Fig.\ref{fig:visibility_obstruction}, defined as:
\begin{equation}
J_{\text {blind}}=\sum_{i=p_b}^{N_c-p_b} \textbf{getHazardousAreaCost}(\mathbf{p}_{i})
\end{equation}
the gradient of which is calculated by querying the costs of surrounding points on the map $\mathcal{M}$.
Both $J_{\text {dyn}}$ and $J_{\text {blind}}$ are weighted by $\lambda_{\text {dyn}}$ and $\lambda_{\text {blind}}$
respectively and integrated into the final loss function. Other terms, like smoothness, feasibility,
and static obstacle avoidance, follow \cite{zhouRobustEfficientQuadrotor2019} and are not elaborated here.
\section{EXPERIMENT}
\subsection{Experiment Settings}
We designed a quadrotor drone as the aerial experimental platform, as shown in Fig.\ref{fig:drone}.
The platform is equipped with an Intel RealSense LiDAR Camera L515 providing RGBD images, which
are used for mapping the static environment and detecting dynamic obstacles,
while an Intel RealSense T265 tracking camera is used for localization.
All modules run on an onboard NVIDIA Jetson Orin NX computer with an 8-core Arm Cortex-A78 CPU.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{fig/fig_drone.jpg}
\caption{Quadrotor platform used in real-world experiments}
\label{fig:drone}
\end{figure}
\subsection{Real-World Experiments}
\begin{figure*}[ht]
\centering
\subfigure[
Dense and randomly placed static obstacles combined with erratically flying dynamic obstacles]{
\label{Fig.sub.1}
\includegraphics[height=9cm]{fig/fig_exp1.jpg}}
\hfill
\subfigure[
A high wall obstructs the field of view, with dynamic obstacles emerging from blind spots.]{
\label{Fig.sub.2}
\includegraphics[height=9cm]{fig/fig_exp2.jpg}}
\caption{Real-world experiments. \textbf{Top}: Snapshot of the entire scene.
\textbf{Middle}: Visualization of several extracted moments.
\textbf{Bottom}: First-person view of the ego drone.
The red curve represents the ego drone's trajectory.
The green curve (in Top) and the series of green spheres (in Middle)
represent the actual and predicted trajectories
of dynamic obstacles, respectively.
The white boxes highlight dynamic obstacles identified from the first-person perspective.}
\label{fig:realworld_exp}
\end{figure*}
In real-world experiments, we tested the framework in two different environments. In each scenario,
the ego drone started from the origin and flew towards a designated target at 1.5 $m/s$,
performing dynamic obstacle avoidance. A manually operated red drone acted as a dynamic obstacle,
flying randomly to interfere with the ego drone's path, as shown in Fig.\ref{fig:realworld}.
In the scenario shown in Fig.\ref{Fig.sub.1}, several static obstacles were randomly placed to
create a dense obstacle environment, with dynamic obstacles moving through the gaps.
Two specific moments from the flight process were highlighted:
At moment \normalsize{\textcircled{\scriptsize{1}}}\normalsize,
the ego drone detected a dynamic obstacle and predicted its future trajectory,
opting to pass underneath it.
At moment \normalsize{\textcircled{\scriptsize{2}}}\normalsize , the ego drone was directly
beneath the dynamic obstacle. Although the obstacle was no longer in view,
the drone relied on prior observations and predictions to maneuver safely,
successfully avoiding the obstacle.
In the scenario shown in Fig.\ref{Fig.sub.2}, a high wall completely obstructs the field of view,
with dynamic obstacles flying in the blind spot behind the wall.
The ego drone's destination is set within this blind spot.
Initially, the area behind the wall is unknown, requiring the drone to navigate leftward,
as shown at moment \normalsize{\textcircled{\scriptsize{1}}}\normalsize.
Due to the presence of the blind spot, the planned trajectory maintains a safe distance from
the wall's edge, effectively preventing collisions with dynamic obstacles emerging from behind the wall;
this behavior results from the \textbf{hazardous area} mechanism described in
\ref{sec:path_planning} and \ref{sec:trajectory_optimization}.
Moreover, this wide detour also gives the ego drone a better observational angle of the area
behind the wall. Consequently, at moment \normalsize{\textcircled{\scriptsize{2}}}\normalsize,
the drone detects a dynamic obstacle and adjusts its path,
moving closer to the wall to stay clear of the obstacle, as shown in moments
\normalsize{\textcircled{\scriptsize{2}}}\normalsize \ and
\normalsize{\textcircled{\scriptsize{3}}}\normalsize.
In the end, the drone successfully avoids the dynamic obstacle emerging from the blind spot and
reaches its destination safely.
\subsection{Simulation Tests}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig/fig_sim.jpg}
\caption{Snapshot of the simulation environment}
\label{fig:simulation}
\end{figure}
We conducted ablation experiments on our method using the Rviz simulation environment
described in \cite{wangVisibilityawareTrajectoryOptimization2021}, as shown in Fig.\ref{fig:simulation}.
In a $40 \times 20$ $m$ area, we randomly placed 55 cylindrical obstacles of varying sizes and
introduced 12 drones flying unpredictably to simulate dynamic obstacles.
The ego drone's task was to navigate from the far left end to the far right end of the area
at a target speed of 3 $m/s$. Additional parameters are detailed in Table \ref{tab:Parameters}.
\begin{table}[h]
\caption{Parameters in simulation tests}
\label{tab:Parameters}
\centering
\begin{tabular}{cccc}
\toprule
Field Size ($m$) & \# Static Obs & \# Dynamic Obs & Desired Vel ($m/s$) \\
\midrule
$40 \times 20 \times 3$ & 55 & 12 & 3 \\
\midrule
$\lambda_g$ & $\lambda_{\text{dyn}}$ & $\lambda_{\text{blind}}$ & $d_{\text{min}}$ ($m$) \\
\midrule
4.0 & 3.0 & 0.1 & 1.0 \\
\bottomrule
\end{tabular}
\end{table}
We tested dynamic obstacle flight speeds ranging from 1.0 $m/s$ to 3.0 $m/s$ and
compared the success rates of the ego drone reaching its destination without collisions.
The ablation group refers to our method without considering the \textbf{hazardous area}.
The results are presented in Table \ref{tab:result}.
Overall, our proposed dynamic obstacle avoidance framework performed well,
even in environments with multiple dynamic obstacles.
For instance, when the ego drone's speed was 3 $m/s$ and the dynamic obstacles moved at 1 $m/s$,
the success rate reached 94.1\%.
Notably, the group that accounted for \textbf{hazardous areas} performed better
at lower dynamic obstacle speeds. However, at higher obstacle speeds ($>$2.0 $m/s$),
the ablation group, which did not consider \textbf{hazardous areas}, showed better results.
This could be attributed to excessive avoidance maneuvers in dense, dynamic environments,
which may increase the likelihood of collisions behind the drone, opposite to its direction of travel.
\begin{table}[h]
\caption{Comparison of success rates in ablation experiments}
\label{tab:result}
\centering
\begin{tabular}{cccccc}
\toprule
Vel of Obs($m/s$) & 1.0 & 1.5 & 2.0 & 2.5 & 3.0 \\
\midrule
Proposed & \textbf{94.1\%} & \textbf{90.2\%} & \textbf{76.5\%} & 60.8\% & 41.2\% \\
Ablation & 88.2\% & 84.3\% & 74.5\% & \textbf{68.6\%} & \textbf{47.1\%} \\
\bottomrule
\end{tabular}
\end{table}
\section{CONCLUSIONS}
This paper presents a new trajectory planning framework for UAVs in dynamic environments.
By adapting the Simple Online and Realtime Tracking algorithm for 3D space and utilizing Bezier curves
with analytical solutions for real-time obstacle prediction, our approach facilitates efficient
navigation among multiple dynamic obstacles.
This framework improves traditional path planning and trajectory optimization by actively managing
potential collisions, including those from unseen obstacles in blind spots.
Experimental results demonstrate that our method significantly enhances UAV safety and operational
efficiency, effectively navigating complex and dynamic environments by predicting and avoiding
collisions with both static and moving obstacles.
However, some limitations were observed, particularly in scenarios involving high-speed dynamic
obstacles. These challenges suggest
the need for more robust prediction models and improved perception accuracy to handle such cases
effectively. Additionally, future work will focus on optimizing the framework's performance in
denser obstacle environments, incorporating advanced perception techniques to improve adaptability
and responsiveness in even more unpredictable and high-speed dynamic settings.
\addtolength{\textheight}{-12cm} % This command serves to balance the column lengths
% on the last page of the document manually. It shortens
% the textheight of the last page by a suitable amount.
% This command does not take effect until the next page
% so it should come on the page before the last. Make
% sure that you do not shorten the textheight too much.
\bibliographystyle{IEEEtran} % IEEEtran is the bibliography style provided by the template
\bibliography{ref} % ref is the name of the .bib file
\end{document}