---
title: <font size="7"><b><i>ohun</i>: optimizing acoustic signal detection</b></font>
toc: true
toc-depth: 2
toc-location: left
number-sections: true
highlight-style: pygments
format:
html:
df-print: kable
code-fold: show
code-tools: true
css: styles.css
link-external-icon: true
link-external-newwindow: true
editor_options:
chunk_output_type: console
---
::: {.alert .alert-info}
## **Objectives** {.unnumbered .unlisted}
- Get familiar with concepts and data formatting practices related to the automatic detection of acoustic signals
- Learn how to run automatic detection using the package ohun
:::
[ohun](https://github.com/maRce10/ohun) is intended to facilitate the automated detection of sound events, providing functions to diagnose and optimize detection routines. Detections from other software can also be explored and optimized.
::: {.alert .alert-info}
<font size = "4">The main features of the package are: </font>
- The use of reference annotations for detection optimization and diagnostics
- The use of signal detection theory indices to evaluate detection performance
<font size = "4">The package offers functions for: </font>
- Curate references and acoustic data sets
- Diagnose detection performance
- Optimize detection routines based on reference annotations
- Energy-based detection
- Template-based detection
:::
All functions allow the parallelization of tasks, which distributes tasks among several processors to improve computational efficiency. The package works on sound files in '.wav', '.mp3', '.flac' and '.wac' formats.
------------------------------------------------------------------------
To install the latest development version from [github](https://github.com/) you will need the R package [remotes](https://cran.r-project.org/package=remotes):
```{r, eval = FALSE}
# install package
remotes::install_github("maRce10/ohun")
# load package
library(ohun)
```
```{r global options, echo = FALSE, message=FALSE, warning=FALSE}
#load package
library(ohun)
data(lbh2)
data(lbh1)
data(lbh_reference)
# for spectrograms
par(mar = c(5, 4, 2, 2) + 0.1)
stopifnot(require(knitr))
options(width = 90)
opts_chunk$set(
comment = NA,
message = FALSE,
warning = FALSE,
# eval = if (isTRUE(exists("params"))) params$EVAL else FALSE,
dev = "jpeg",
dpi = 100,
fig.asp = 0.4,
fig.width = 6.5,
out.width = "100%",
fig.align = "center"
)
```
------------------------------------------------------------------------
# Automatic sound event detection
Finding the position of sound events in a sound file is a challenging task. [ohun](https://github.com/maRce10/ohun) offers two methods for automated sound event detection: template-based and energy-based detection. These methods are better suited for highly stereotyped sounds or sounds with a good signal-to-noise ratio (SNR), respectively. If the target sound events don't fit these requirements, more elaborate methods (e.g. machine learning approaches) are warranted:
<figure>
<center><img src="images/analysis_workflow.png" alt="automated signal detection diagram" width="500" height="450"/></center>
<figcaption><i>Diagram depicting how target sound event features can be used to choose the most adequate detection approach. Steps in which 'ohun' can be helpful are shown in color. (SNR = signal-to-noise ratio)</i></figcaption>
</figure>
Still, a detection run using other software can be optimized with the tools provided in [ohun](https://github.com/maRce10/ohun).
# Signal detection theory applied to bioacoustics
Broadly speaking, signal detection theory deals with the process of recovering signals (i.e. target signals) from background noise (not necessarily acoustic noise), and it is widely used to optimize this decision-making process in the presence of uncertainty. During a detection routine, the detected 'items' can be classified into four classes:
- **True positives (TPs)**: signals correctly identified as 'signal'
- **False positives (FPs)**: background noise incorrectly identified as 'signal'
- **False negatives (FNs)**: signals incorrectly identified as 'background noise'
- **True negatives (TNs)**: background noise correctly identified as 'background noise'
Several additional indices derived from these categories are used to evaluate the performance of a detection routine. These are three useful indices in the context of sound event detection included in [ohun](https://github.com/maRce10/ohun) (a short worked example follows the list):
- **Recall**: correct detections relative to total references (a.k.a. true positive rate or sensitivity; *TPs / (TPs + FNs)*)
- **Precision**: correct detections relative to total detections (*TPs / (TPs + FPs)*).
- **F1 score**: combines recall and precision as the harmonic mean of these two, so it provides a single value for evaluating performance (a.k.a. F-measure or Dice similarity coefficient).
<font size = "2">*(Metrics that make use of 'true negatives' cannot be easily applied in the context of sound event detection as noise cannot always be partitioned in discrete units)*</font>
A perfect detection will have no false positives or false negatives, which will result in both recall and precision equal to 1. However, perfect detection cannot always be reached, and some compromise between detecting all target signals plus some noise (recall = 1 & precision \< 1) and detecting only target signals but not all of them (recall \< 1 & precision = 1) is warranted. The right balance between these two extremes will be given by the relative costs of missing signals and of mistaking noise for signals. Hence, these indices provide a useful framework for diagnosing and optimizing the performance of a detection routine.
The package [ohun](https://github.com/maRce10/ohun) provides a set of tools to evaluate the performance of a sound event detection routine based on the indices described above. To accomplish this, the result of a detection routine is compared against a reference table containing the time position of all target sound events in the sound files. The package comes with an example reference table containing annotations of long-billed hermit hummingbird songs from two sound files (also supplied as example data: 'lbh1' and 'lbh2'), which can be used to illustrate detection performance evaluation. The example data can be explored as follows:
```{r, eval = TRUE}
# load example data
data("lbh1", "lbh2", "lbh_reference")
lbh_reference
```
All [ohun](https://github.com/maRce10/ohun) functions that work with this kind of data can take both selection tables and data frames. Spectrograms with highlighted sound events from a selection table can be plotted with the function `label_spectro()` (this function only plots one wave object at a time):
```{r, eval = TRUE}
# save sound file
writeWave(lbh1, file.path(tempdir(), "lbh1.wav"))
# save sound file
writeWave(lbh2, file.path(tempdir(), "lbh2.wav"))
# print spectrogram
label_spectro(
wave = lbh1,
reference = lbh_reference[lbh_reference$sound.files == "lbh1.wav",],
hop.size = 10,
ovlp = 50,
flim = c(1, 10)
)
# print spectrogram
label_spectro(
wave = lbh2,
reference = lbh_reference[lbh_reference$sound.files == "lbh2.wav",],
hop.size = 10,
ovlp = 50,
flim = c(1, 10)
)
```
The function `diagnose_detection()` evaluates the performance of a detection routine by comparing it to a reference table. For instance, a perfect detection is given by comparing `lbh_reference` to itself:
```{r}
lbh1_reference <-
lbh_reference[lbh_reference$sound.files == "lbh1.wav",]
# diagnose
diagnose_detection(reference = lbh1_reference,
detection = lbh1_reference)[, c(1:3, 7:9)]
```
We will work mostly with a single sound file for convenience, but the functions can work on several sound files at a time. The files should be located in a single directory. Although the above example is a bit silly, it shows the basic diagnostic indices, which include the signal detection theory indices mentioned above ('true.positives', 'false.positives', 'false.negatives', 'recall' and 'precision'). We can play around with the reference table to see how these indices can be used to spot imperfect detection routines (and hopefully improve them!). For instance, we can remove some sound events to see how this is reflected in the diagnostics. Getting rid of some rows in 'detection', simulating a detection with some false negatives, will affect the recall but not the precision:
```{r}
# create new table
lbh1_detection <- lbh1_reference[3:9, ]
# print spectrogram
label_spectro(
wave = lbh1,
reference = lbh1_reference,
detection = lbh1_detection,
hop.size = 10,
ovlp = 50,
flim = c(1, 10)
)
# diagnose
diagnose_detection(reference = lbh1_reference,
detection = lbh1_detection)[, c(1:3, 7:9)]
```
Having some additional sound events not in the reference will do the opposite, reducing precision but not recall. We can simulate this simply by swapping the tables:
```{r}
# print spectrogram
label_spectro(
wave = lbh1,
detection = lbh1_reference,
reference = lbh1_detection,
hop.size = 10,
ovlp = 50,
flim = c(1, 10)
)
# diagnose
diagnose_detection(reference = lbh1_detection,
detection = lbh1_reference)[, c(1:3, 7:9)]
```
The function offers three additional diagnostic metrics:
- **Split positives**: target sound events overlapped by more than one detection
- **Merged positives**: number of cases in which two or more target sound events in 'reference' were overlapped by the same detection
- **Proportional overlap of true positives**: ratio of the time overlap of each true positive with its corresponding sound event in the reference table
In a perfect detection routine, split and merged positives should be 0 while proportional overlap should be 1. We can shift the start of sound events a bit to simulate a detection in which there is some mismatch with the reference table regarding the time location of sound events:
```{r}
# create new table
lbh1_detection <- lbh1_reference
# add 'noise' to start
set.seed(18)
lbh1_detection$start <-
lbh1_detection$start + rnorm(nrow(lbh1_detection),
mean = 0, sd = 0.1)
## print spectrogram
label_spectro(
wave = lbh1,
reference = lbh1_reference,
detection = lbh1_detection,
hop.size = 10,
ovlp = 50,
flim = c(1, 10)
)
# diagnose
diagnose_detection(reference = lbh1_reference,
detection = lbh1_detection)
```
In addition, the following diagnostics related to the duration of the sound events can also be returned by setting `time.diagnostics = TRUE`. Here we tweak the reference and detection data just to have some false positives and false negatives:
```{r}
# diagnose with time diagnostics
diagnose_detection(reference = lbh1_reference[-1, ], detection = lbh1_detection[-10, ], time.diagnostics = TRUE)
```
These additional metrics can be used to further filter out undesired sound events based on their duration (for instance in an energy-based detection as in `energy_detector()`, explained below).
Diagnostics can also be detailed by sound file:
```{r}
# diagnose by sound file
diagnostic <-
diagnose_detection(reference = lbh1_reference,
detection = lbh1_detection,
by.sound.file = TRUE)
diagnostic
```
These diagnostics can be summarized (as in the default `diagnose_detection()` output) with the function `summarize_diagnostic()`:
```{r}
# summarize
summarize_diagnostic(diagnostic)
```
# Detecting sound events with *ohun*
## Energy-based detection
This detector uses amplitude envelopes to infer the position of sound events. Amplitude envelopes are representations of the variation in energy through time. The following code plots an amplitude envelope along with the spectrogram for the example data `lbh1`:
```{r}
# plot spectrogram and envelope
label_spectro(
wave = cutw(
lbh1,
from = 0,
to = 1.5,
output = "Wave"
),
ovlp = 90,
hop.size = 10,
flim = c(0, 10),
envelope = TRUE
)
```
This type of detector doesn't require highly stereotyped sound events, although it works better on high-quality recordings in which the amplitude of target sound events is higher than that of the background noise (i.e. high signal-to-noise ratio). The function `energy_detector()` performs this type of detection.
### How it works
We can understand how to use `energy_detector()` with simulated sound events, generated with the function `simulate_songs()` from [warbleR](https://CRAN.R-project.org/package=warbleR). In this example we simulate a recording with 10 sounds with two different frequency ranges and durations:
```{r}
# install this package first if not installed
# install.packages("Sim.DiffProc")
#Creating vector for duration
durs <- rep(c(0.3, 1), 5)
#Creating simulated song
set.seed(12)
simulated_1 <-
warbleR::simulate_songs(
n = 10,
durs = durs,
freqs = 5,
sig2 = 0.01,
gaps = 0.5,
harms = 1,
bgn = 0.1,
path = tempdir(),
file.name = "simulated_1",
selec.table = TRUE,
shape = "cos",
fin = 0.3,
fout = 0.35,
samp.rate = 18
)$wave
```
The function call saves a '.wav' sound file to a temporary directory (`tempdir()`) and also returns a `wave` object in the R environment. These outputs will be used to run the energy-based detection and to create plots, respectively. This is what the spectrogram and amplitude envelope of the simulated recording look like:
```{r, fig.height=4, fig.width=10}
# plot spectrogram and envelope
label_spectro(wave = simulated_1,
env = TRUE,
fastdisp = TRUE)
```
Note that the amplitude envelope shows a high signal-to-noise ratio for the sound events, which is ideal for energy-based detection. The detection can be conducted using `energy_detector()` as follows:
```{r, fig.height=4, fig.width=10}
# run detection
detection <-
energy_detector(
files = "simulated_1.wav",
bp = c(2, 8),
threshold = 50,
smooth = 150,
path = tempdir()
)
# plot spectrogram and envelope
label_spectro(
wave = simulated_1,
envelope = TRUE,
detection = detection,
threshold = 50
)
```
The output is a selection table:
```{r}
detection
```
Now we will make use of some additional arguments to filter out specific sound events based on their structural features. For instance, we can use the argument `min.duration` to provide a time threshold (in ms) that excludes short sound events and keeps only the longest ones:
```{r eval = TRUE, echo = TRUE, fig.height=4, fig.width=10}
# run detection
detection <-
energy_detector(
files = "simulated_1.wav",
bp = c(1, 8),
threshold = 50,
min.duration = 500,
smooth = 150,
path = tempdir()
)
# plot spectrogram
label_spectro(wave = simulated_1, detection = detection)
```
We can use the argument `max.duration` (also in ms) to exclude long sound events and keep the short ones:
```{r eval = TRUE, echo = TRUE, fig.height=4, fig.width=10}
# run detection
detection <-
energy_detector(
files = "simulated_1.wav",
bp = c(1, 8),
threshold = 50,
smooth = 150,
max.duration = 500,
path = tempdir()
)
# plot spectrogram
label_spectro(wave = simulated_1, detection = detection)
```
We can also focus the detection on specific frequency ranges using the argument `bp` (bandpass). By setting `bp = c(5, 8)` only those sound events found within that frequency range (5-8 kHz) will be detected, which excludes sound events below 5 kHz:
```{r, fig.height=4, fig.width=10, eval = TRUE, echo = TRUE}
# Detecting
detection <-
energy_detector(
files = "simulated_1.wav",
bp = c(5, 8),
threshold = 50,
smooth = 150,
path = tempdir()
)
# plot spectrogram
label_spectro(wave = simulated_1, detection = detection)
```
::: {.alert .alert-info}
<font size="5">Exercise</font>
- Detect only the short sounds below 5 kHz
```{r, fig.height=4, fig.width=10, eval = FALSE, echo = FALSE}
# Detect
detection <-
energy_detector(
files = "simulated_1.wav",
bp = c(0, 6),
threshold = 50,
max.duration = 500,
smooth = 150,
path = tempdir()
)
# plot spectrogram
label_spectro(wave = simulated_1, detection = detection)
```
:::
Amplitude modulation (variation in amplitude across a sound event) can be problematic for detection based on amplitude envelopes. We can simulate sound events with amplitude modulation using `warbleR::simulate_songs()`:
```{r, eval = TRUE, warning=FALSE, message=FALSE}
#Creating simulated song
set.seed(12)
#Creating vector for duration
durs <- rep(c(0.3, 1), 5)
sim_2 <-
warbleR::simulate_songs(
n = 10,
durs = durs,
freqs = 5,
sig2 = 0.01,
gaps = 0.5,
harms = 1,
bgn = 0.1,
path = tempdir(),
file.name = "simulated_2",
selec.table = TRUE,
shape = "cos",
fin = 0.3,
fout = 0.35,
samp.rate = 18,
am.amps = c(1, 2, 3, 2, 0.1, 2, 3, 3, 2, 1)
)
# extract wave object and selection table
simulated_2 <- sim_2$wave
sim2_sel_table <- sim_2$selec.table
# plot spectrogram
label_spectro(wave = simulated_2, envelope = TRUE)
```
When sound events have strong amplitude modulation they can be split during detection:
```{r, eval = TRUE}
# detect sounds
detection <- energy_detector(files = "simulated_2.wav", threshold = 50, path = tempdir())
# plot spectrogram
label_spectro(wave = simulated_2, envelope = TRUE, threshold = 50, detection = detection)
```
There are two arguments that can deal with this: `hold.time` and `smooth`. `hold.time` merges split sound events that occur within a given time range (in ms) of each other. This time range should be long enough to merge segments belonging to the same sound event, but not so long that it merges distinct sound events. For this example a `hold.time` of 200 ms does the trick (we know gaps between sound events are \~0.5 s long):
```{r, eval = TRUE}
# detect sounds
detection <-
energy_detector(
files = "simulated_2.wav",
threshold = 50,
min.duration = 1,
path = tempdir(),
hold.time = 200
)
# plot spectrogram
label_spectro(
wave = simulated_2,
envelope = TRUE,
threshold = 50,
detection = detection
)
```
`smooth` works by merging the amplitude envelope 'hills' of the split sound events themselves. It smooths envelopes by applying a sliding window that averages amplitude values; the argument value is the window size in ms. A `smooth` of 350 ms can merge back the split sound events in our example:
```{r, eval = TRUE}
# detect sounds
detection <-
energy_detector(
files = "simulated_2.wav",
threshold = 50,
min.duration = 1,
path = tempdir(),
smooth = 350
)
# plot spectrogram
label_spectro(
wave = simulated_2,
envelope = TRUE,
threshold = 50,
detection = detection,
smooth = 350
)
```
The function has some additional arguments for further filtering detections (`peak.amplitude`) and speeding up analysis (`thinning` and `parallel`), as sketched below.
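A rough sketch combining these arguments on the simulated data (the argument values here are arbitrary, chosen only for illustration):
```{r, eval = FALSE}
# sketch combining filtering and speed-up arguments
detection <-
  energy_detector(
    files = "simulated_2.wav",
    threshold = 50,
    smooth = 350,
    peak.amplitude = 60, # keep only events whose peak amplitude reaches 60 dB
    thinning = 0.5,      # keep half of the envelope samples to speed things up
    parallel = 2,        # distribute tasks across 2 cores
    path = tempdir()
  )
```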
### Optimizing energy-based detection
This last example using `smooth` can be used to showcase how tuning parameters can be optimized. As explained above, to do this we need a reference table that contains the time position of the target sound events. The function `optimize_energy_detector()` can be used to find the optimal parameter values. We must provide the range of parameter values that will be evaluated:
```{r}
optim_detection <-
optimize_energy_detector(
reference = sim2_sel_table,
files = "simulated_2.wav",
threshold = 50,
min.duration = 1,
path = tempdir(),
smooth = c(100, 250, 350)
)
optim_detection[, c(1, 2:5, 7:12, 17:18)]
```
The output contains the combination of parameters used at each iteration as well as the corresponding diagnostic indices. In this case all combinations generate a good detection (recall & precision = 1). However, only the routine with the highest `smooth` (last row) has no split sound events ('split.positives' column). It also shows a better overlap with the reference sound events ('overlap.to.true.positives' closer to 1).
In addition, there are two complementary functions for optimizing energy-based detection routines: `feature_reference()` and `merge_overlaps()`. `feature_reference()` allows users to get a sense of the time and frequency characteristics of a reference table. This information can be used to determine the range of tuning parameter values to explore during optimization. This is the output of the function applied to `lbh_reference`:
```{r}
feature_reference(reference = lbh_reference, path = tempdir())
```
Features related to selection duration can be used to set the 'max.duration' and 'min.duration' values, frequency-related features can inform bandpass values, gap-related features inform hold time values, and duty cycle can be used to evaluate performance. Peak amplitude can be used to keep only those sound events with the highest intensity, which is mostly useful for routines in which only a subset of the target sound events present in the recordings is needed.
`merge_overlaps()` finds time-overlapping selections in reference tables and collapses them into a single selection. Overlapping selections would most likely appear as a single amplitude 'hill' and thus be detected as a single sound event. Hence, `merge_overlaps()` can be useful to prepare references in a format that represents a more realistic expectation of what a perfect energy-based detection routine would look like.
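For instance, a minimal sketch applying it to the example reference table (the object name is illustrative):
```{r, eval = FALSE}
# collapse time-overlapping annotations into single selections
merged_reference <- merge_overlaps(lbh_reference)

# the merged table can then be used as the reference for diagnostics
merged_reference
```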
## Template-based detection
This detection method is better suited for highly stereotyped sound events. As it doesn't depend on the signal-to-noise ratio, it's more robust to higher levels of background noise. The procedure is divided into three steps:
- Choosing the right template (`get_templates()`)
- Estimating the cross-correlation scores of templates along sound files (`template_correlator()`)\
- Detecting sound events by applying a correlation threshold (`template_detector()`)
The function `get_templates()` can help you find a template close to the average acoustic structure of the sound events in a reference table. This is done by finding the sound events closest to the centroid of the acoustic space. When the acoustic space is not supplied ('acoustic.space' argument), the function estimates it by measuring several acoustic parameters using the function [`spectro_analysis()`](https://marce10.github.io/warbleR/reference/spectro_analysis.html) from [`warbleR`](https://CRAN.R-project.org/package=warbleR) and summarizing them with Principal Component Analysis (after z-transforming the parameters). If only one template is required, the function returns the sound event closest to the acoustic space centroid. The rationale here is that a sound event close to the average structure is more likely to share structural features with most sounds across the acoustic space than a sound event in the periphery of the space. These 'mean structure' templates can be obtained as follows:
```{r, eval = FALSE, echo = TRUE}
# get mean structure template
template <-
get_templates(reference = lbh1_reference, path = tempdir())
```
```{r, fig.width=6, fig.height=5, eval = TRUE, echo = FALSE}
par(mar = c(5, 4, 1, 1), bg = "white")
# get mean structure template
template <-
get_templates(reference = lbh1_reference, path = tempdir())
```
The graph above shows the overall acoustic space, in which the sound closest to the space centroid is highlighted. That sound is selected as the template and can be used to detect similar sound events. The function `get_templates()` can also select several templates, which can be helpful when working with sounds that are only moderately stereotyped. This is done by dividing the acoustic space into sub-spaces defined as equal-size slices of a circle centered at the centroid of the acoustic space:
```{r, eval = FALSE, echo = TRUE}
# get 3 templates
get_templates(reference = lbh_reference,
n.sub.spaces = 3, path = tempdir())
```
```{r, fig.width=6, fig.height=5, eval = TRUE, echo = FALSE}
par(mar = c(5, 4, 1, 1), bg = "white")
# get 3 templates
templates <- get_templates(reference = lbh_reference,
n.sub.spaces = 3, path = tempdir())
```
We will use the single template object ('template') to run a detection on the example 'lbh1' data:
```{r}
# get correlations
correlations <-
template_correlator(templates = template,
files = "lbh1.wav",
path = tempdir())
```
The output is an object of class 'template_correlations', with its own printing method:
```{r}
# print
correlations
```
This object can then be used to detect sound events using `template_detector()`:
```{r}
# run detection
detection <-
template_detector(template.correlations = correlations, threshold = 0.4)
detection
```
The output can be explored by plotting the spectrogram along with the detection and correlation scores:
```{r, warning=FALSE}
# plot spectrogram
label_spectro(
wave = lbh1,
detection = detection,
template.correlation = correlations[[1]],
flim = c(0, 10),
threshold = 0.4,
hop.size = 10, ovlp = 50)
```
The performance can be evaluated using `diagnose_detection()`:
```{r}
#diagnose
diagnose_detection(reference = lbh1_reference, detection = detection)
```
### Optimizing template-based detection
The function `optimize_template_detector()` allows evaluating the performance of a detection under different correlation thresholds:
```{r}
# run optimization
optimization <-
optimize_template_detector(
template.correlations = correlations,
reference = lbh1_reference,
threshold = seq(0.1, 0.5, 0.1)
)
# print output
optimization
```
Additional threshold values can be evaluated without having to run everything again. We just need to supply the output of the previous run through the argument `previous.output` (the same trick works when optimizing an energy-based detection):
```{r}
# run optimization
optimize_template_detector(
template.correlations = correlations,
reference = lbh1_reference,
threshold = c(0.6, 0.7),
previous.output = optimization
)
```
In this case several threshold values achieved an optimal detection.
### Detecting several templates
Several templates can be used within the same call. Here we correlate two templates against the two example sound files, taking one template from each sound file:
```{r}
# get correlations
correlations <-
template_correlator(
templates = lbh_reference[c(1, 10),],
files = c("lbh1.wav", "lbh2.wav"),
path = tempdir()
)
# run detection
detection <-
template_detector(template.correlations = correlations, threshold = 0.5)
```
Note that in these cases the same sound event can be detected several times (duplicates), once by each template. We can check whether that is the case just by diagnosing the detection:
```{r}
#diagnose
diagnose_detection(reference = lbh_reference, detection = detection)
```
Duplicates are shown as split positives. Fortunately, we can keep a single detection per sound event by retaining only the one with the highest correlation score. To do this we first need to label each row in the detection using `label_detection()` and then remove duplicates using `filter_detection()`:
```{r}
# labeling detection
labeled <-
label_detection(reference = lbh_reference, detection = detection)
```
This function adds a column ('detection.class') with the class label for each row:
```{r}
table(labeled$detection.class)
```
Now we can filter out duplicates and diagnose the detection again, telling the function to select a single row per duplicate using the correlation score as the criterion (`by = "scores"`; this column is part of the `template_detector()` output):
```{r}
# filter
filtered <- filter_detection(detection = labeled, by = "scores")
# diagnose
diagnose_detection(reference = lbh_reference, detection = filtered)
```
We successfully got rid of duplicates and detected every single target sound event.
------------------------------------------------------------------------
## Improving detection speed
Detection routines can take a long time when working with large amounts of acoustic data (e.g. long sound files and/or many sound files). These are some useful points to keep in mind when trying to make a routine more time-efficient:
- Always test procedures on small data subsets
- `template_detector()` is faster than `energy_detector()`
- Parallelization (see the `parallel` argument in most functions) can significantly speed up routines, but works better on Unix-based operating systems (Linux and macOS)
- Sampling rate matters: detecting sound events in low-sampling-rate files is faster, so avoid Nyquist frequencies (sampling rate / 2) way higher than the highest frequency of the target sound events (sound files can be downsampled using warbleR's [`fix_wavs()`](https://marce10.github.io/warbleR/reference/fix_wavs.html))
- Large sound files can make a routine crash; use `split_acoustic_data()` to split both reference tables and sound files into shorter clips (see the sketch after this list)
- Consider using a computer with plenty of RAM, or a computing cluster, when working on large amounts of data
- The `thinning` argument (which reduces the size of the amplitude envelope) can also speed up `energy_detector()`
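A minimal sketch of splitting recordings into shorter clips (the clip duration is arbitrary, chosen only for illustration):
```{r, eval = FALSE}
# split sound files (and their annotations) into 10-s clips
clip_info <- split_acoustic_data(
  path = tempdir(),
  sgmt.dur = 10,    # clip duration in seconds
  X = lbh_reference # annotation times are adjusted to the new clips
)
```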
------------------------------------------------------------------------
## Additional tips
- Use your knowledge about the sound event structure to determine the initial range for the tuning parameters in a detection optimization routine
- If people have a hard time figuring out where a target sound event occurs in a recording, detection algorithms will also have a hard time
- Several templates representing the range of variation in sound event structure can be used to detect semi-stereotyped sound events
- Make sure reference tables contain all target sound events and only the target sound events. The performance of the detection cannot be better than the reference itself.
- Avoid having overlapping sound events or several sound events as a single one (like a multi-syllable vocalization) in the reference table when running an energy-based detector
- Low precision can be improved by training a classification model (e.g. a random forest) to tell sound events from noise (a sketch follows below)
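The following is only a rough sketch of that last idea, assuming the 'randomForest' package is installed and reusing the labeled detection ('labeled') from the template detection section; it is not part of the ohun workflow itself:
```{r, eval = FALSE}
library(randomForest)

# measure acoustic features for each detected event
features <- warbleR::spectro_analysis(labeled, path = tempdir())

# label each detection as target sound vs noise based on 'detection.class'
features$class <- factor(ifelse(
  grepl("true.positive", labeled$detection.class),
  "sound.event", "noise"
))

# train the classifier on the acoustic features (excluding id columns)
rf_model <- randomForest(
  x = features[, !names(features) %in% c("sound.files", "selec", "class")],
  y = features$class
)

# detections predicted as 'noise' can then be discarded
```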
------------------------------------------------------------------------
::: {.alert .alert-info}
Please cite [ohun](https://github.com/maRce10/ohun) like this:
Araya-Salas, M. (2021), *ohun: diagnosing and optimizing automated sound event detection*. R package version 0.1.0.
:::
## References
1. Araya-Salas, M. (2021), ohun: diagnosing and optimizing automated sound event detection. R package version 0.1.0.
2. Araya-Salas M, Smith-Vidaurre G (2017) warbleR: An R package to streamline analysis of animal acoustic signals. Methods Ecol Evol 8:184-191.
3. Khanna H., Gaunt S.L.L. & McCallum D.A. (1997). Digital spectrographic cross-correlation: tests of sensitivity. Bioacoustics 7(3): 209-234.
4. Knight, E.C., Hannah, K.C., Foley, G.J., Scott, C.D., Brigham, R.M. & Bayne, E. (2017). Recommendations for acoustic recognizer performance assessment with application to five common automated signal recognition programs. Avian Conservation and Ecology 12(2):14.
5. Macmillan, N. A., & Creelman, C.D. (2004). Detection theory: A user's guide. Psychology press.
------------------------------------------------------------------------
<font size="4">Session information</font>
```{r session info, echo=F}
sessionInfo()
```