Our aircraft, whose main task is to follow opponent UAVs autonomously, is designed to perform the dogfighting task without assistance under all conditions. It will also be capable of autonomous landing, take-off and flight.
If you are interested, you can read our detailed 35-page report here.
● Autonomous landing, takeoff and stable flight
● Sufficient hardware and software to detect and lock onto competitor UAVs with image processing
● Following rival UAVs by combining design, hardware and software techniques
● Instant communication capability with the ground station during the mission (image and data transfer)
● Necessary safety measures (failsafe) in case of problems
Determining the Target to Follow
Telemetry data provided by the competition server will be used to determine the aircraft to be tracked. During target selection, the opponent drone closest to our UAV, with a relatively stable or predictable movement pattern, will be chosen as the lock-on target. To achieve this, software running on the ground station will be designed.
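The selection rule above can be sketched as follows. This is a minimal illustration, not the final ground-station software: the telemetry field names, the local metric coordinate frame, and the weight `W` trading distance against motion variability are all assumptions to be tuned in flight tests.

```python
import math

def select_target(own_pos, opponents):
    """Pick the lock-on target: the nearest opponent, penalized by how
    erratic its recent movement has been.

    own_pos   -- (x, y, z) of our UAV in a local metric frame (assumed)
    opponents -- list of dicts: {"id": ..., "track": [(x, y, z), ...]}
                 where "track" holds the last few telemetry positions,
                 newest last (field names are hypothetical).
    """
    def erraticness(track):
        # Mean change of the displacement vector between consecutive
        # telemetry samples; low values mean predictable motion.
        if len(track) < 3:
            return 0.0
        deltas = [tuple(b[i] - a[i] for i in range(3))
                  for a, b in zip(track, track[1:])]
        jerk = [math.dist(d1, d2) for d1, d2 in zip(deltas, deltas[1:])]
        return sum(jerk) / len(jerk)

    # Assumed weight: how strongly unpredictable motion is penalized
    # relative to one metre of extra distance.
    W = 5.0

    def score(op):
        return math.dist(own_pos, op["track"][-1]) + W * erraticness(op["track"])

    return min(opponents, key=score)
```

A drone hovering 10 m away would thus be preferred over one 100 m away, and also over a nearby drone whose displacement vector changes sharply between telemetry samples.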
The Algorithm To Be Used To Detect The Target
Object recognition and the ability to track the recognized object constitute the most important and most complex part of this category. For the UAV we designed to perform tasks such as autonomous flight, target detection and tracking, the image processing algorithm must be chosen carefully. In this context, two evaluation criteria played an important role in the selection: detection speed and detection accuracy. A wide range of image processing algorithms exists today; among these, it was decided to use the YOLOv4 algorithm, which runs in relatively short time without sacrificing accuracy.
As the name suggests, the YOLO (You Only Look Once) algorithm processes the whole frame at once. (Redmon et al., n.d.) As a single-stage object detector it is similar in this respect to methods such as SSD and RetinaNet, and differs from two-stage object detection methods such as R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and Libra R-CNN. (Bochkovskiy et al., 2020) The YOLO method divides the source image into an SxS grid, predicts bounding boxes and a class probability map over it, and in the last step combines the two to complete the object recognition process. (Redmon et al., n.d.)
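To make the SxS grid idea concrete, the sketch below computes which grid cell is "responsible" for an object, i.e. the cell containing the centre of its bounding box. The grid size S=7 and the image dimensions are illustrative assumptions, not values fixed by our design.

```python
def responsible_cell(cx, cy, img_w, img_h, S=7):
    """Return the (row, col) of the S x S grid cell responsible for an
    object whose bounding-box centre is (cx, cy) in pixel coordinates.
    In YOLO, only this cell's predictors answer for the object."""
    col = min(int(cx * S / img_w), S - 1)  # clamp centres on the right/bottom edge
    row = min(int(cy * S / img_h), S - 1)
    return row, col
```

For a 640x480 frame with S=7, an object centred at (320, 240) falls into the middle cell of the grid.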
Target Tracking Algorithm
By their nature, drones are fast aircraft, and the tracking algorithm used must follow the detected object at high speed and accuracy. After the drone is detected with the YOLO algorithm, the KCF (Kernelized Correlation Filter) method is used to track it. The KCF method is a recent tracking approach that increases processing speed by exploiting the properties of circulant matrices. (OpenCV: cv::TrackerKCF Class Reference, n.d.) Once the object is marked, the method tracks it in real time, maintaining high FPS values without compromising accuracy.