13
Drone Technologies and Applications
DOI: http://dx.doi.org/10.5772/intechopen.1001987
parameters such as sensor type (CCD/CMOS), sensor size, dynamic range, aspect
ratio, field of view (FOV), latency, and low-light performance are the features to be
considered.
Two types of sensor are used in cameras: CCD and CMOS. A CCD sensor reads all of
its pixels simultaneously (a global shutter). Compared to CMOS, it shows less of the
jelly effect thanks to the global shutter, has high dynamic range performance, performs
well in mixed-light and low-light (WDR) environments, and renders black/white
transitions better. CMOS sensors read pixel data line by line along the horizontal
and vertical axes. This readout can introduce delays compared to CCD, and CMOS
sensors may generally experience rolling-shutter image distortion, the so-called jelly
effect, which occurs because the sensor collects data line by line from the pixels. On
the other hand, CMOS sensors have good color fidelity and performance, low power
consumption, and lower cost.
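The timing difference between the two shutter types can be sketched as follows. This is an illustrative model, not a real camera API: in a rolling shutter, each row starts its exposure slightly later than the previous one, so a fast-moving object shifts between rows and produces the jelly effect.

```python
# Illustrative sketch of rolling- vs. global-shutter readout timing.
# (Toy model; the row delay "line_readout_us" is a made-up parameter.)

def row_exposure_starts(n_rows: int, line_readout_us: float, rolling: bool):
    """Return the exposure start time (in microseconds) of each sensor row."""
    if rolling:
        # CMOS rolling shutter: rows are read one after another,
        # each delayed by one line-readout interval.
        return [r * line_readout_us for r in range(n_rows)]
    # CCD-style global shutter: every row starts (and ends) together.
    return [0.0] * n_rows

rolling_rows = row_exposure_starts(4, 10.0, rolling=True)   # [0.0, 10.0, 20.0, 30.0]
global_rows = row_exposure_starts(4, 10.0, rolling=False)   # [0.0, 0.0, 0.0, 0.0]
```

The growing per-row offset in the rolling case is exactly the skew that distorts fast motion in CMOS footage.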
Another factor affecting image quality in cameras is sensor size. Low-light
performance and dynamic range depend on it: the larger the sensor, the larger the
lens can be, the wider the field of view, and the better the low-light performance.
A wide dynamic range preserves the image in both bright and dark parts of the
scene according to the light intensity and allows the user to see the desired object
comfortably. Cameras typically offer two aspect ratios, 16:9 and 4:3, and many
cameras today allow the aspect ratio to be adjusted. Field of view (FOV) and viewing
angle are important features for the user. The wider the angle, the more difficult it is
to view distant objects due to the fisheye appearance; the narrower the field of view,
the closer and clearer the image, which is preferred for close-range flights.
On the other hand, decreasing the focal length or increasing the sensor size
widens the field of view. The camera's lux sensitivity is an important specification
when shooting in low-light environments. Sensor size also matters greatly in low
light: as the sensor grows, its light-gathering area increases, more light reaches the
pixels, and low-light performance improves.
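The relationship between sensor size, focal length, and field of view described above follows the standard pinhole-camera formula FOV = 2·atan(d / 2f), where d is the sensor dimension and f the focal length. A minimal sketch, with illustrative sensor widths rather than the specs of any particular drone camera:

```python
import math

def fov_degrees(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view (degrees) from sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Same 24 mm lens, two sensor sizes: the larger sensor yields a wider FOV,
# and a shorter lens would do the same.
small_sensor_fov = fov_degrees(6.17, 24.0)   # roughly a 1/2.3" sensor width
large_sensor_fov = fov_degrees(13.2, 24.0)   # roughly a 1" sensor width
```

Doubling the sensor width (or halving the focal length) roughly doubles the angle subtended, which is why wide-FOV drone cameras pair short lenses with the fisheye look noted above.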
3.9.1 Sensor features of drone cameras
Drone cameras and sensors detect differences, support modeling and classification,
and track development and change, especially in agricultural applications. Imaging
sensors such as red-green-blue (RGB), near-infrared, multispectral, hyperspectral,
and thermal sensors, and ranging sensors such as LiDAR and synthetic-aperture
radar (SAR), are widely used. RGB sensors capture the image as the human eye sees
it, in the red, green, and blue (RGB) colors, a narrow band of the electromagnetic
spectrum.
A multispectral image sensor makes it easy to detect differences in the target area
by using detectors sensitive to specific wavelengths along the electromagnetic
spectrum. The bands used for this purpose are the visible green, red, and blue bands,
the red edge, and the near-infrared.
Visible light has wavelengths in the range of 400 to 700 nm and is used for
determining regional features, elevation modeling, and object-counting applications,
especially in agriculture.
Red edge: a band centered at 717 nm with a 12 nm bandwidth. This band provides
information on photosynthetic activity and chlorophyll and is accordingly used in
plant health assessment, plant counting, and water management.
Near-infrared (NIR): a band centered at 842 nm with a 57 nm bandwidth. Its
reflection is used in soil and moisture analysis and in crop health and stress analysis,
since it depends on the chlorophyll level in the plant.
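A common way these bands are combined in practice is the Normalized Difference Vegetation Index, NDVI = (NIR − Red) / (NIR + Red): healthy vegetation reflects strongly in NIR and absorbs red, giving values near 1, while bare soil sits near 0. A minimal sketch with illustrative reflectance values (not field measurements):

```python
def ndvi(nir: float, red: float) -> float:
    """NDVI from per-pixel NIR and red reflectances (each in 0..1)."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on dark pixels
    return (nir - red) / (nir + red)

healthy_plant = ndvi(nir=0.50, red=0.08)  # high NIR, low red -> NDVI near 1
bare_soil = ndvi(nir=0.30, red=0.25)      # similar bands -> NDVI near 0
```

Applying this per pixel over a multispectral orthomosaic yields the vegetation-health maps used in the agricultural applications listed above.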
Like other spectral imaging techniques, hyperspectral imaging collects information
from across the electromagnetic spectrum and processes it. However, whereas the
human eye perceives visible light in only three bands (red, green, and blue),
hyperspectral imaging examines objects using a wide portion of the electromagnetic
spectrum. In other words, by dividing the image into many bands, this technique
makes it possible to grasp objects and their properties over a much wider band range
than a single visible-light camera can capture. It is a technology that can be used for
detecting underground resources in mining, for preventing diseases and pests in
agriculture, in the military field, in thermal infrared hyperspectral imaging, in the
chemical field for detecting colorless and odorless harmful substances in the air, and
in environmental issues such as detecting leaking toxic wastes. Despite its
advantages, such as imaging over a wide spectrum, hyperspectral imaging is very
expensive, and its complex processing pipeline poses a significant problem [6].
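The "many bands per pixel" idea can be sketched as follows: a hyperspectral image is a (rows × cols × bands) cube, and each pixel's per-band reflectances form a spectral signature that can be matched against a library of known materials. The cube, band count, and library values below are toy data for illustration, not a real processing pipeline:

```python
from typing import Dict, List

Cube = List[List[List[float]]]  # rows x cols x bands of reflectance values

def spectral_signature(cube: Cube, row: int, col: int) -> List[float]:
    """Reflectance across all bands for a single pixel."""
    return cube[row][col]

def closest_material(signature: List[float],
                     library: Dict[str, List[float]]) -> str:
    """Nearest library material by sum of squared differences over all bands."""
    def dist(name: str) -> float:
        return sum((a - b) ** 2 for a, b in zip(signature, library[name]))
    return min(library, key=dist)

# A 1x2 toy image with 4 bands per pixel, and a 2-entry signature library.
cube = [[[0.1, 0.2, 0.1, 0.6], [0.4, 0.4, 0.4, 0.4]]]
library = {"vegetation": [0.1, 0.2, 0.1, 0.6],
           "concrete": [0.4, 0.4, 0.4, 0.4]}
print(closest_material(spectral_signature(cube, 0, 0), library))  # vegetation
```

Real systems use hundreds of narrow bands and far more robust matching, which is part of the processing cost noted above.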
Thermal imaging is an imaging system based on invisible IR energy (heat): the
general structure of the image, its colors, and its shapes are formed according to the
emitted IR energy. Whereas normal cameras form the image from light, thermal
cameras form it from heat, much as the human brain and eye use colors and light to
create an image, so color differences in the thermal image are important.
Thermal cameras are used to map soil water content (SWC) based on land surface
temperature (LST) [7]. They have limited spatial resolution, which often causes
difficulties in homogeneous areas such as farmland with bare soil [8]. In agriculture,
thermal cameras are used especially to determine plant water needs, detect disease,
and support phenotyping.
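One standard way thermal canopy temperature is turned into a plant water-need indicator is the Crop Water Stress Index, CWSI = (T_canopy − T_wet) / (T_dry − T_wet), where T_wet and T_dry are the temperatures of a fully transpiring and a non-transpiring reference; 0 indicates no stress and 1 maximum stress. A minimal sketch with illustrative temperatures, not field measurements:

```python
def cwsi(t_canopy_c: float, t_wet_c: float, t_dry_c: float) -> float:
    """Crop Water Stress Index from canopy and reference temperatures (deg C),
    clamped to the [0, 1] range."""
    raw = (t_canopy_c - t_wet_c) / (t_dry_c - t_wet_c)
    return max(0.0, min(1.0, raw))

stressed = cwsi(t_canopy_c=33.0, t_wet_c=25.0, t_dry_c=35.0)    # 0.8
unstressed = cwsi(t_canopy_c=26.0, t_wet_c=25.0, t_dry_c=35.0)  # 0.1
```

A stressed plant transpires less, so its canopy warms toward the dry reference; computing this index per pixel of a drone thermal map highlights zones needing irrigation.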