
Satellite imagery plays a vital role in a wide range of applications, including disaster response, police surveillance, and environmental monitoring. Traditionally, analyzing this imagery requires trained experts, which is time-consuming and often infeasible given the vast scale of the data and the limited availability of analysts. Automation has been slow to arrive because many existing algorithms lack the required accuracy. One promising approach, Convolutional Neural Networks (CNNs), has shown remarkable accuracy in image recognition. The results are discussed in this 2024 paper.
Here’s a simple breakdown of how a CNN works:
Convolution: This is where the CNN starts scanning the image. Small filters slide across the image to detect features like edges, textures, or shapes.
ReLU (Rectified Linear Unit): This step introduces non-linearity by setting negative values to zero. Without it, the stacked layers would collapse into a single linear transformation, and the network could only capture straight-line relationships in the data.
Pooling: Here, the CNN reduces the size of the feature maps it’s working with, making them easier to process. It does this by keeping only the strongest responses in each small region.
Fully Connected Layer: This is the final step, where all the features the CNN has identified are combined to make a decision about what the image shows.
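The stages above can be sketched in plain NumPy. This is a minimal illustration, not the architecture from the paper: the toy image, the edge-detecting filter, and the randomly initialised fully connected weights are all hypothetical.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over the image (stride 1, no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the filter's response at this position
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Zero out negative responses, introducing non-linearity."""
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Keep only the strongest response in each size x size window."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "image" and a simple vertical-edge filter (illustrative values)
image = np.arange(36, dtype=float).reshape(6, 6)
edge_filter = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])

features = relu(conv2d(image, edge_filter))   # convolution + ReLU -> 4x4 map
pooled = max_pool(features)                   # pooling -> 2x2 map

# Fully connected layer: flatten the pooled features and apply a weight
# matrix (randomly initialised here; a real network learns these weights)
rng = np.random.default_rng(0)
weights = rng.normal(size=(pooled.size, 2))   # two hypothetical classes
logits = pooled.flatten() @ weights
print(logits.shape)
```

Each function mirrors one stage of the breakdown: `conv2d` scans with a filter, `relu` discards negative responses, `max_pool` shrinks the feature map, and the final matrix product plays the role of the fully connected layer.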