Drone Detection and Classification using an Acoustic Camera
Drone Detection and Classification using an Acoustic Camera // STO MEETING PROCEEDINGS MP-SET-275 Cooperative Navigation in GNSS Degraded and Denied Environments
Split, Croatia, 2021. MP-SET-275-14, 8 (lecture, international peer review, abstract, scientific)
CROSBI ID: 1179427
Title
Drone Detection and Classification using an Acoustic Camera
Authors
Grubeša, Sanja ; Stamać, Jasna ; Orlić, Nikša ; Grubeša, Tomislav
Type, subtype and category of work
Conference abstracts, abstract, scientific
Source
STO MEETING PROCEEDINGS MP-SET-275 Cooperative Navigation in GNSS Degraded and Denied Environments
2021
ISBN
978-92-837-2380-6
Conference
STO Sensors and Electronics Technology Panel Symposium "Cooperative Navigation in GNSS Degraded and Denied Environments" (SET-275)
Place and date
Split, Croatia, 29.09.2021 - 30.09.2021.
Type of participation
Lecture
Type of peer review
International peer review
Keywords
drone, acoustic camera, MEMS microphones, microphone array, classification, convolutional neural networks
Abstract
In our research, as part of the 4D Acoustic Camera project, an acoustic detector, i.e. a prototype of an acoustic camera, was developed. In order to achieve our goal of designing a robust yet small acoustic camera that can be used in different security systems, 72 MEMS microphones were used to form a microphone array in the shape of a hemisphere. The acoustic camera prototype records the sound in a protected area and classifies it. The classification is carried out by comparing the recorded sound from the protected area with the sounds existing in a database; in this way the acoustic camera either eliminates or confirms the sound source as the actual target. Thus, the acoustic camera eliminates as potential targets the expected sounds typical of human movement, such as walking through low vegetation, because these are normal sounds present in the environment. The first step of classification was to create a database of different sounds. We divided the samples into the following four categories: noise, walking, speech, and drone flight. For each of the sounds in the database it is necessary to obtain a spectrogram. When recording sounds, i.e. obtaining spectrograms with our acoustic camera prototype, each sound is recorded as a multi-channel (72-channel) signal. The acoustic camera prototype uses the "Delay and Sum" (DAS) algorithm, which maximizes the array's sensitivity to sound waves arriving from a particular direction. In this way, sound from directions which are not of interest is attenuated, making it possible to reduce the influence of noise and to extract the useful signal more easily. The spectrograms from the database represent the training set for a convolutional neural network that we use as a classification tool, with which the sounds from a protected area are classified. Since convolutional neural networks are often used to classify images, the input to the convolutional network can be treated as a grayscale image representing the spectrogram. Preliminary results show high classification accuracy on the samples, which indicates that the method and the architecture of the selected neural network are appropriate.
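The beamforming step described above can be illustrated with a minimal delay-and-sum sketch in Python/NumPy. This is not the prototype's implementation: the function name delay_and_sum, the plane-wave model, the integer-sample delays, and the default speed of sound are illustrative assumptions, and the actual 72-channel hemispherical geometry and processing chain are not given in this record.

import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Steer the array toward `direction` and sum the aligned channels.

    signals       : (n_mics, n_samples) time-domain recordings
    mic_positions : (n_mics, 3) microphone coordinates in metres
    direction     : 3-vector pointing from the array toward the assumed source
    fs            : sampling rate in Hz
    c             : speed of sound in m/s
    """
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)

    # Arrival time of a plane wave from `direction` at each microphone,
    # relative to the array origin (microphones closer to the source hear it earlier).
    arrival = -(mic_positions @ direction) / c            # seconds
    # Delay every channel so all arrivals line up with the latest one.
    lags = np.round((arrival.max() - arrival) * fs).astype(int)

    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for m, k in enumerate(lags):
        out[k:] += signals[m, :n_samples - k]
    return out / n_mics                                   # beamformed mono signal

# Example: steer a (synthetic) 72-microphone array toward the zenith.
rng = np.random.default_rng(0)
mics = rng.normal(size=(72, 3)) * 0.1                     # stand-in geometry, not the prototype's
audio = rng.normal(size=(72, 48_000))                     # stand-in recordings
beam = delay_and_sum(audio, mics, direction=[0.0, 0.0, 1.0], fs=48_000)

The spectrogram-as-grayscale-image classification stage can likewise be sketched, here assuming SciPy for the spectrogram and PyTorch for the network. The window length, normalization, and layer layout are illustrative guesses rather than the architecture reported by the authors; only the four class labels come from the abstract.

import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

CLASSES = ["noise", "walking", "speech", "drone flight"]  # categories from the abstract

def to_grayscale_spectrogram(signal, fs, n_fft=1024):
    """Turn a beamformed mono signal into a normalised log-spectrogram image."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=n_fft, noverlap=n_fft // 2)
    img = 10.0 * np.log10(sxx + 1e-12)                    # dB scale
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return torch.from_numpy(img).float().unsqueeze(0)     # (1, freq, time)

class SpectrogramCNN(nn.Module):
    """Small convolutional classifier over single-channel spectrogram images."""
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                                 # x: (batch, 1, freq, time)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: classify one second of (synthetic) beamformed audio.
fs = 48_000
audio = np.random.randn(fs)                               # stand-in for real data
image = to_grayscale_spectrogram(audio, fs).unsqueeze(0)  # add batch dimension
model = SpectrogramCNN()
print(CLASSES[model(image).argmax(dim=1).item()])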
Original language
English
RELATED INFORMATION
Projects:
EK-EFRR-KK.01.2.1.01.0103 - 4D Akustička kamera [4D Acoustic Camera] (Geolux) (Đurek, Ivan, EK) (CroRIS)
Institutions:
Fakultet elektrotehnike i računarstva, Zagreb [Faculty of Electrical Engineering and Computing, Zagreb]