From Data to Melody: Data Sonification and Its Role in Open Science

IMPACT Unofficial
Apr 28


By Dr. Manil Maskey and Abdelhak Maroune
Interagency Implementation and Advanced Concepts Team (IMPACT)


Data sonification is the transformation of data into sound. This approach allows researchers, scientists, and the broader public to use their sense of hearing to interpret and analyze complex datasets. Unlike traditional data visualization techniques, which rely on graphs, charts, and images to represent data, sonification provides an auditory representation, offering a unique and complementary perspective on the information being studied.

Sound is useful as a sensing mechanism for several reasons. First, it provides a different way of perceiving and understanding the world around us. While vision is our primary sense, hearing can provide information about our environment that is not available through visual cues alone. For example, sound can convey the location and movement of objects, the presence of obstacles, and changes in the environment such as the sound of running water. In addition, sound can be a particularly useful sensing mechanism for people with visual impairments. By using sound to represent data or information, researchers can create ways of perceiving and interacting with the world that are more accessible and inclusive (accessibility and inclusivity being pillars of open science initiatives).

Science impacts can be maximized in various ways through sonification:

Accessibility and Inclusivity: By using sound to represent data, researchers can create more inclusive ways of presenting scientific information, ensuring that everyone can participate in the exploration and understanding of scientific data.

Pattern Recognition: Sound has a temporal dimension, and so does most science data, allowing researchers to listen to changes and patterns in data over time.

Outreach and Education: Data sonification can be used as a powerful tool for outreach and education. Sonification can also inspire curiosity and interest in science, technology, engineering, and mathematics (STEM) fields, particularly among young learners.

Artistic Expression: Data sonification has the potential to inspire new forms of artistic expression. Artists can use sonified data to create unique musical compositions or soundscapes that reflect the rhythms and patterns of the natural world.

These impacts can be particularly useful in fields such as astronomy or Earth science, where changes in data can be difficult to perceive. NASA science has several examples of data sonification that can be found at Explore — From Space to Sound.

Example Earth Science Applications

We explore an Earth science application of data sonification: analysis of the Normalized Difference Vegetation Index (NDVI), a widely used metric for monitoring vegetation health and cover. NDVI is calculated from data collected by remote sensing satellites, such as Landsat and Sentinel-2, which measure the reflectance of different wavelengths of light to detect photosynthetic activity in vegetation.

The Landsat and Sentinel-2 satellite programs are two of the most important sources of remote sensing data for environmental monitoring. Landsat, operated by NASA and the United States Geological Survey (USGS), has been collecting data since the 1970s. Sentinel-2, part of the European Copernicus program, is a newer satellite system that provides high-resolution imagery for a wide range of applications. The Harmonized Landsat Sentinel-2 (HLS) project develops consistent and compatible data products from these two satellite systems, which is essential for ensuring that data from both sources can be seamlessly integrated. NDVI derived from HLS provides valuable insight into vegetation dynamics, including changes in vegetation cover, health, and phenology.
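The NDVI calculation itself is simple: the normalized difference between near-infrared and red reflectance. A minimal sketch, assuming you already hold red and NIR reflectance arrays (the function name and the guard against zero denominators are our own illustration, not from the article):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    denom = nir + red
    # Leave NaN where both bands are zero (e.g., no-data pixels)
    out = np.full(denom.shape, np.nan)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Dense vegetation reflects strongly in NIR and absorbs red,
# so NDVI approaches 1 over healthy canopy.
print(ndvi(np.array([0.5]), np.array([0.05])))  # ~0.818
```

Healthy vegetation typically yields NDVI values well above 0.5, while bare soil and water sit near or below zero, which is what makes the index a good candidate signal for sonification.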

Sonification of HLS NDVI

Sonification of the harmonized Landsat Sentinel-2 dataset NDVI offers a novel way to explore vegetation dynamics. By mapping NDVI values to sound parameters such as pitch, volume, and timbre, we created auditory representations of vegetation changes over time. The tile used for this example is “15TVG for Central Iowa, USA”, and the timeframe captured spans from “2020–01–15 08:01:59” to “2023–03–05 07:58:11” with cloud cover less than 30%. NDVI data values are mapped to MIDI chords and chord velocity levels to create these auditory representations. The video includes the time series of the resulting NDVI and the corresponding sonification. The unique seasonal trend of the NDVI can be discerned through auditory cues.

Sonification of NDVI Derived from HLS video

Sonification of Warming Stripes

“Warming stripes” are graphics that depict temperature trends over time. Each stripe represents a year’s average temperature, with blue for cooler and red for warmer years. Created by UK climate scientist Ed Hawkins, they convey rising temperatures in a visually compelling way. The video includes the stripes and the corresponding sonification. The rise in temperature over recent years can be perceived audibly.

Sonification of Warming Stripes video


To sonify data in these examples, a set of MIDI chord names from “C1” to “A6” was used. Each chord name includes a base note (A–G), an optional sharp or flat, and an octave number indicating its position on a piano keyboard. The chords were ordered from lowest (C1) to highest (A6). Data values were then mapped to MIDI note numbers, with higher data values assigned to lower notes, and to chord velocities (i.e., loudness) ranging from 35 to 127, where higher values corresponded to higher velocities. The sonification source code is available in a GitHub repository.
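The mapping just described can be sketched as follows. The note range (C1–A6), the inverted pitch mapping, and the 35–127 velocity range come from the description above; the helper names and the linear normalization are our own assumptions, not the actual repository code:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name_to_midi(name: str) -> int:
    """Convert a note name like 'C1' or 'F#3' to a MIDI note number (C-1 = 0)."""
    pitch, octave = name[:-1], int(name[-1])
    return NOTE_NAMES.index(pitch) + (octave + 1) * 12

LOW, HIGH = note_name_to_midi("C1"), note_name_to_midi("A6")  # 24 .. 93

def value_to_note(x: float, lo: float, hi: float) -> int:
    """Map a data value in [lo, hi] to a note; higher values -> lower notes."""
    t = (x - lo) / (hi - lo)
    return round(HIGH - t * (HIGH - LOW))

def value_to_velocity(x: float, lo: float, hi: float,
                      vmin: int = 35, vmax: int = 127) -> int:
    """Map a data value in [lo, hi] to a velocity; higher values -> louder."""
    t = (x - lo) / (hi - lo)
    return round(vmin + t * (vmax - vmin))
```

For an NDVI time series, each observation would yield one (note, velocity) pair, which a MIDI library can then render as a chord sequence synchronized with the visual time series.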

Challenges and Potentials

Despite the potential benefits of data sonification, several challenges need to be addressed to fully realize its potential. One of the main challenges is the need for standardization in sonification methods. Researchers and scientists need to agree on standard methods for converting data into sound, ensuring that the resulting auditory representations are accurate and consistently perceived. Crucially, these methods must incorporate best practices and guidelines from prior research on using sonification to increase accessibility and inclusiveness for people with visual impairments.

Another challenge is the need for more accessible tools and software for creating data sonifications. While there are existing tools available for creating sonifications, they can be complex and difficult to use for those without specialized training. The development of user-friendly software and platforms that allow individuals to easily create and customize sonifications will be important for democratizing access to this technology.

Additionally, there is a need for more research on how people perceive and interpret sonified data. Understanding how different sound parameters influence perception and cognition will be essential for designing effective sonifications. Research should explore how data sonification can decrease cognitive load and increase the intake of information by splitting data streams across visual and aural modes of perception. Researchers will also need to consider how to present sonified data in a way that is meaningful and comprehensible to diverse audiences, including those with little or no background in science.

Despite these challenges, the future of data sonification is promising. As technology continues to evolve, we can expect to see new and innovative applications of data sonification in various scientific fields. In particular, the integration of data sonification with other data visualization and analysis techniques, such as virtual reality and augmented reality, could provide immersive and interactive experiences that enhance our understanding of complex data.

We thank Dr. Brian Freitag for providing source code to access HLS data and to compute NDVI.


