PSM: Learning Probabilistic Embeddings for Multi-scale Zero-Shot Soundscape Mapping

Washington University in St. Louis
ACM Multimedia, 2024

Abstract

A soundscape is defined by the acoustic environment a person perceives at a location. In this work, we propose a framework for mapping soundscapes across the Earth. Since soundscapes involve sound distributions that span varying spatial scales, we represent locations with multi-scale satellite imagery and learn a joint representation among this imagery, audio, and text. To capture the inherent uncertainty in the soundscape of a location, we design the representation space to be probabilistic. We also fuse ubiquitous metadata (including geolocation, time, and data source) to enable learning of spatially and temporally dynamic representations of soundscapes. We demonstrate the utility of our framework by creating large-scale soundscape maps integrating both audio and text with temporal control. To facilitate future research on this task, we also introduce a large-scale dataset, GeoSound, containing over 300k geotagged audio samples paired with both low- and high-resolution satellite imagery. We demonstrate that our method outperforms the existing state of the art on both GeoSound and the SoundingEarth dataset.

Method


Our proposed framework, Probabilistic Soundscape Mapping (PSM), combines image, audio, and text encoders to learn a probabilistic joint representation space. Metadata, including geolocation (l), month (m), hour (h), audio source (a), and caption source (t), is encoded separately and fused with the image embeddings using a transformer-based metadata fusion module. For each encoder, 𝜇 and 𝜎 heads yield probabilistic embeddings, which are used to compute a probabilistic contrastive loss.
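
Below is a minimal PyTorch sketch of these components as we read them: per-modality 𝜇 and 𝜎 heads, a transformer-based metadata fusion module, and a sampled contrastive objective. All names, dimensions, and the specific loss form are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: module names, dimensions, and the loss form
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 512  # assumed joint-embedding dimensionality


class ProbabilisticHead(nn.Module):
    """Maps a deterministic backbone feature to a Gaussian embedding
    parameterized by a mean (mu) and a log-variance (log sigma^2)."""

    def __init__(self, in_dim: int, out_dim: int = EMBED_DIM):
        super().__init__()
        self.mu = nn.Linear(in_dim, out_dim)
        self.log_var = nn.Linear(in_dim, out_dim)

    def forward(self, feats: torch.Tensor):
        mu = F.normalize(self.mu(feats), dim=-1)
        log_var = self.log_var(feats)
        return mu, log_var


class MetadataFusion(nn.Module):
    """Fuses an image token with embedded metadata tokens (geolocation,
    month, hour, audio source, caption source) via a small transformer."""

    def __init__(self, dim: int = EMBED_DIM, num_meta: int = 5):
        super().__init__()
        # Toy scalar metadata codes; a real system would use richer encodings.
        self.meta_embed = nn.ModuleList(
            [nn.Linear(1, dim) for _ in range(num_meta)]
        )
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, img_token: torch.Tensor, meta: torch.Tensor):
        # img_token: (B, dim); meta: (B, num_meta) scalar codes
        meta_tokens = torch.stack(
            [emb(meta[:, i : i + 1]) for i, emb in enumerate(self.meta_embed)],
            dim=1,
        )  # (B, num_meta, dim)
        tokens = torch.cat([img_token.unsqueeze(1), meta_tokens], dim=1)
        fused = self.fusion(tokens)
        return fused[:, 0]  # fused image representation


def probabilistic_contrastive_loss(mu_a, log_var_a, mu_b, log_var_b, temp=0.07):
    """One plausible probabilistic contrastive objective: sample embeddings
    with the reparameterization trick, then apply symmetric InfoNCE."""
    z_a = mu_a + torch.randn_like(mu_a) * (0.5 * log_var_a).exp()
    z_b = mu_b + torch.randn_like(mu_b) * (0.5 * log_var_b).exp()
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temp
    targets = torch.arange(len(z_a), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

In this sketch, uncertainty enters through the sampled embeddings: a location with an ambiguous soundscape can learn a larger 𝜎, softening its contribution to the contrastive objective.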

Soundscape Maps

Figure 1 Description

Two soundscape maps of the continental United States, generated from Bing image embeddings obtained from PSM, accompanied by a land cover map for reference.
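
As a concrete illustration of how such maps can be rendered: given precomputed PSM image embeddings over a grid of satellite tiles, a zero-shot soundscape map is simply the per-tile similarity to a query embedding. The sketch below assumes hypothetical arrays `tile_mus` and `query_mu`; random values stand in for real embeddings.

```python
# Hedged sketch of zero-shot map rendering; `tile_mus` and `query_mu` are
# hypothetical placeholders for real PSM mean embeddings, not a released API.
import numpy as np
import matplotlib.pyplot as plt

def soundscape_map(tile_mus: np.ndarray, query_mu: np.ndarray) -> np.ndarray:
    """tile_mus: (H, W, D) unit-norm mean embeddings of satellite tiles;
    query_mu: (D,) unit-norm mean embedding of an audio or text query.
    Returns an (H, W) similarity heatmap."""
    return np.einsum("hwd,d->hw", tile_mus, query_mu)

# Toy usage with random embeddings standing in for real PSM outputs:
rng = np.random.default_rng(0)
tiles = rng.normal(size=(64, 64, 512))
tiles /= np.linalg.norm(tiles, axis=-1, keepdims=True)
query = rng.normal(size=512)
query /= np.linalg.norm(query)

heat = soundscape_map(tiles, query)
plt.imshow(heat, cmap="viridis")
plt.title('Similarity to query, e.g. "sound of chirping birds"')
plt.colorbar()
plt.savefig("soundscape_map.png")
```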

Figure 2 Description

Soundscape maps over smaller geographic areas using PSM embeddings from Sentinel-2 satellite imagery at two zoom levels.

Figure 3 Description

Temporally dynamic soundscape maps created by querying our model with audio and text queries.
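
Since hour and month are fused into the image representation, temporal control amounts to re-embedding the same tiles under different metadata. A hedged sketch, reusing the illustrative `MetadataFusion` and `ProbabilisticHead` classes from the Method sketch above (again assumptions, not the released code):

```python
# Hedged sketch of temporal control: same tiles, different hour metadata.
# Assumes MetadataFusion and ProbabilisticHead from the earlier sketch
# are in scope; both are illustrative, not the authors' API.
import torch

fusion = MetadataFusion()
head = ProbabilisticHead(512)

tiles = torch.randn(16, 512)   # backbone image features (toy values)
meta = torch.zeros(16, 5)      # [location code, month, hour, audio src, caption src]
meta[:, 1] = 6.0               # June

meta_day, meta_night = meta.clone(), meta.clone()
meta_day[:, 2], meta_night[:, 2] = 8.0, 22.0  # 8 am vs. 10 pm

with torch.no_grad():
    mu_day, _ = head(fusion(tiles, meta_day))
    mu_night, _ = head(fusion(tiles, meta_night))  # same tiles, different hour
```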

Figure 4 Description

A soundscape map of the USA for the text query "sound of insects", compared with a reference map indicating the risk of pest-related hazards.

Satellite Image to Sound Retrieval
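
With a shared embedding space, satellite-image-to-sound retrieval reduces to nearest-neighbor search. A small illustrative sketch (the embeddings and dimensions are stand-ins, not released artifacts):

```python
# Hedged sketch of image-to-sound retrieval: rank a gallery of audio
# embeddings by similarity to one satellite image embedding. `image_mu`
# and `audio_mus` are hypothetical precomputed PSM mean embeddings.
import torch
import torch.nn.functional as F

def retrieve_sounds(image_mu: torch.Tensor, audio_mus: torch.Tensor, k: int = 5):
    """image_mu: (D,) query embedding; audio_mus: (N, D) gallery embeddings.
    Returns indices of the top-k most similar audio samples."""
    sims = F.cosine_similarity(audio_mus, image_mu.unsqueeze(0), dim=-1)
    return sims.topk(k).indices

# Toy usage with random stand-ins for real embeddings:
gallery = F.normalize(torch.randn(1000, 512), dim=-1)
query = F.normalize(torch.randn(512), dim=0)
print(retrieve_sounds(query, gallery))
```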

BibTeX

@inproceedings{khanal2024psm,
  title     = {PSM: Learning Probabilistic Embeddings for Multi-scale Zero-Shot Soundscape Mapping},
  author    = {Khanal, Subash and Xing, Eric and Sastry, Srikumar and Dhakal, Aayush and Xiong, Zhexiao and Ahmad, Adeel and Jacobs, Nathan},
  year      = {2024},
  month     = nov,
  booktitle = {Proceedings of the ACM International Conference on Multimedia (ACM Multimedia)},
}

Explore more work from our lab: The Multimodal Vision Research Laboratory (MVRL)