Abstract

The availability of big data has resulted in significant advancements in deep learning, leading to performance that surpasses human capabilities in tasks such as audio classification. Moreover, there is a growing need for energy-efficient, self-sustained sensors that can be massively deployed to manage sound events intelligently. Acoustic triboelectric nanogenerators (TENGs) show promise in this context, as they can convert the mechanical motion of acoustic waves into electrical signals while remaining cheap to manufacture. However, leveraging TENGs for environmental sound classification requires addressing the challenges associated with data collection and model training. This paper presents the design of an environmental sound classification system that combines acoustic TENGs with transformer-based models. To this end, we design and fabricate a device composed of cascading TENGs, use it to re-record the ESC-50 dataset, and fine-tune transformer-based audio classification models on the resulting recordings. A substantial improvement in model performance (by 44%) is observed compared to a model pre-trained on the original ESC-50 dataset. The results provide valuable insights into the quality of TENG-recorded audio and serve as a benchmark for future research on data-driven environmental sound monitoring systems.
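As a rough illustration of the fine-tuning step summarized above, the sketch below adapts a publicly available Audio Spectrogram Transformer checkpoint to a folder of TENG-recorded ESC-50-style clips. The checkpoint name, folder layout, file-naming scheme, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (not the paper's code): fine-tune a pre-trained Audio Spectrogram
# Transformer on TENG-recorded ESC-50-style clips. Paths, checkpoint, and
# hyperparameters are illustrative assumptions.
from pathlib import Path

import torch
import torchaudio
from transformers import ASTFeatureExtractor, ASTForAudioClassification

CKPT = "MIT/ast-finetuned-audioset-10-10-0.4593"  # assumed public AST checkpoint
DATA_DIR = Path("teng_esc50")                     # hypothetical folder of re-recorded clips

feature_extractor = ASTFeatureExtractor.from_pretrained(CKPT)
model = ASTForAudioClassification.from_pretrained(
    CKPT, num_labels=50, ignore_mismatched_sizes=True  # replace head with a 50-way ESC-50 classifier
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def load_clip(path: Path) -> torch.Tensor:
    """Load a clip as a mono 16 kHz waveform, resampling if the recording rate differs."""
    waveform, sr = torchaudio.load(str(path))
    waveform = waveform.mean(dim=0)  # collapse channels to mono
    if sr != 16000:
        waveform = torchaudio.functional.resample(waveform, sr, 16000)
    return waveform


model.train()
for wav_path in sorted(DATA_DIR.glob("*.wav")):
    label = int(wav_path.stem.split("_")[0])  # assumed "<class_id>_<name>.wav" naming
    inputs = feature_extractor(
        load_clip(wav_path).numpy(), sampling_rate=16000, return_tensors="pt"
    )
    outputs = model(input_values=inputs.input_values, labels=torch.tensor([label]))
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```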