Analyzing a Stereo’s Frequency Response and Decay Times

A collage of Kevin Fielding surrounded by various audio setups he helped fine-tune (photo courtesy of Kevin Fielding)

Many of us use our listening rooms as a place to escape the world and its problems, to decompress and recharge our batteries. One thing I, as an audiophile, have discovered is that the better the sound quality in my room, the more I’ve been able to connect emotionally with my music. That shouldn’t be a surprise, since sonic degradation and distortion are obstacles to the musical illusion: if you can hear the disjointed “seams” in the playback, it’s harder to be fooled into believing the music is real. Achieving great sound depends, of course, on the gear itself and its synergy, but room acoustics also play a large role, because indirect reflections interfere with the direct sound.

I suggest that audiophiles considering acoustic treatments for their room use their ears and take acoustic measurements before installing anything. Measurements of the frequency and time domains taken at the listening position can help explain what the ear is perceiving. Is the frequency response smooth or ragged? Is the bass decay time so long that it muddies the midrange? While our ears should always be the final arbiter of sound quality, measurements can guide us most of the way towards the sound we want, with our ears doing the final tweaking.

In this article, I will discuss the basics of taking measurements, tips on measurement settings, and the variables that matter for frequency response and decay time analysis.

Measurement basics

While getting the “right” sound is a subjective exercise, “cohesive” sound is measurable (e.g. a high signal-to-noise ratio, a pleasing frequency response, balanced decay times). But measurements can lead down the wrong path; for example, there are often several ways to achieve a desirable frequency response, and if one of them relies on an overabundance of absorption in the room, then, in my experience, the room will sound odd no matter how good the measurements look on a screen. Achieving the desired goal is just as important as the method used to achieve it. When done correctly, there will be less tweaking for the ears to do.

An audiophile interested in measuring their room will need to invest in an omnidirectional microphone (many USB models plug directly into your computer) and measurement software. Dayton Audio sells a variety of microphones, and its OmniMic software is easy to learn and has a good user interface. Room EQ Wizard (REW) is free software with a rich feature set but a steeper learning curve. Other tools, such as Audiolense, specialize in combining measurements with frequency- and time-domain corrections, but these are for advanced users.

The microphone should be positioned as a proxy for the listener’s head, at ear level, pointed horizontally at the midpoint between the main speakers or vertically towards the ceiling, which captures more of the room’s effects. The software should come with its own test signals; it plays them through the speakers, records them with the microphone, and displays the results in graphs for examination.


  • Play the test tones at the same volume you’d play your music. 
  • A quality measurement requires a quiet background, so you may have to temporarily unplug a fridge, close the windows, and turn off the HVAC. 
  • Measure one channel at a time so that any problems are easier to spot and treat.
  • Use 1/24th-octave smoothing for frequency response measurements to detect problems that are masked when less granular smoothing (e.g. 1/6th or 1/3rd octave) is used.
  • Use a 1/3rd-octave interval for decay time measurements so that the granularity exposes decay issues that a full-octave interval would mask.
  • Use a decay time measurement window of at least 500ms so that the analysis captures enough of the room’s decay. Window length involves a trade-off between time resolution and frequency resolution, so experiment with longer settings (e.g. 750ms) to see which suits your room.
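For those curious what fractional-octave smoothing actually does, the idea can be sketched in a few lines of Python. This is a simplified illustration using plain boxcar averaging over a 1/24th-octave window, not the exact algorithm OmniMic or REW uses; the function name and demo values are my own.

```python
import numpy as np

def fractional_octave_smooth(freqs, mags_db, fraction=24):
    """Smooth a magnitude response by averaging the dB values inside a
    1/N-octave window centred on each frequency. A simple boxcar
    sketch; measurement packages use more sophisticated weighting."""
    freqs = np.asarray(freqs, dtype=float)
    mags_db = np.asarray(mags_db, dtype=float)
    half = 2.0 ** (1.0 / (2 * fraction))  # half-window width as a frequency ratio
    smoothed = np.empty_like(mags_db)
    for i, f in enumerate(freqs):
        window = (freqs >= f / half) & (freqs <= f * half)
        smoothed[i] = mags_db[window].mean()
    return smoothed

# Demo: a perfectly flat 75 dB response should remain flat after smoothing.
freqs = np.logspace(np.log10(20), np.log10(20000), 200)
smoothed = fractional_octave_smooth(freqs, np.full(200, 75.0))
```

The coarser the fraction (1/6th, 1/3rd), the wider the averaging window, which is why narrow peaks and nulls disappear from heavily smoothed charts.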

Frequency response measurements

The standard frequency response chart shows 20Hz to 20kHz on the horizontal axis and loudness in decibels on the vertical axis. Ideally, you want to see a smooth horizontal line indicating that all frequencies are equally loud, but this rarely happens unless you’re measuring outdoors, away from walls! As human hearing is less sensitive at low frequencies, it is common practice to raise the loudness as frequency descends; for example, 20 or 30Hz may be set several decibels higher (e.g. 3-10dB) than the midrange and high frequencies. Bass loudness is a personal preference, so experimentation is key. You should be able to create a target curve within the measurement software (or as a text file outside of it, then imported into the software), which represents the response you are aiming for. The software should display both the target curve and the actual frequency response so deviations are easily spotted. (Harman International has done research on target curves and the sound people find pleasing, a topic I recommend exploring on the Internet.) You will notice that the highest frequencies drop off because air itself naturally absorbs the shortest wavelengths.
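As a numerical illustration of a target curve with a bass boost, here is a short Python sketch. The 6dB boost, 200Hz corner, and 75dB reference level are arbitrary placeholders for the example, not recommendations.

```python
import numpy as np

def house_curve(freqs, bass_boost_db=6.0, corner_hz=200.0, ref_db=75.0):
    """Illustrative target curve: flat at ref_db above corner_hz, rising
    linearly on a log-frequency axis to ref_db + bass_boost_db at 20 Hz."""
    freqs = np.asarray(freqs, dtype=float)
    octaves_below = np.log2(corner_hz / np.clip(freqs, 20.0, corner_hz))
    total_octaves = np.log2(corner_hz / 20.0)
    return ref_db + bass_boost_db * octaves_below / total_octaves

# Demo: full boost at 20 Hz, flat at and above the corner frequency.
target = house_curve(np.array([20.0, 200.0, 1000.0]))
```

A curve like this can be exported as a text file of frequency/dB pairs and imported into the measurement software as the target.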

I like to save my measurement data and import it into a spreadsheet program to allow more in-depth analysis.*

Important frequency response variables

The spreadsheet offers insight into important variables, such as frequency peaks and nulls (excessively loud or quiet notes respectively) caused by the room’s effect on sound; error rates between the actual and target frequency responses, which may or may not be due to peaks and nulls; and the difference in loudness between speakers. I’ve elaborated on some of the variables below.

Analysis Area: Peaks and Nulls

  • Number of actual notes affected: This metric helps determine whether a peak or null is a major or a minor issue. The more notes affected by a peak or null, the worse the sound quality. Ideally, you want as few affected notes as possible, and about the same number in each channel. Tall peaks or deep nulls that are broad and cover many frequencies are the worst type, while narrow ones that affect just one or two notes are the least detrimental.
  • Basic statistics: Includes the quantity, average, and maximum peak or null values by channel. For symmetry, it’s best to have about the same number of peaks as nulls, with similar average values between the two main speakers.
  • Distribution: Shows how the peaks or nulls are spread across the bass, midrange, and high-frequency regions. It’s typical to see more activity in the bass than in the other regions.
  • Error rate: Represents how far the frequency response is from the target curve; the greater the distance, the greater the error. Quantifying the error rate by bass/mid/high region for each speaker allows the two speakers to be compared. Both should display very similar rates.

Analysis Area: Loudness Difference Between Channels

  • Distribution: When a note played by one speaker is louder than the same note played by the other, the difference could be caused by a peak or null, but not always. This metric shows how the inter-channel loudness differences are spread across the bass, midrange, and high regions. If just one note is more than 3dB louder or quieter than in the other channel, the ear will adapt to it; a cluster of such notes will likely be distracting.
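If you export the measurement data, the peak/null counting and distribution described above can be sketched in a few lines of Python. The 3dB threshold and the 300Hz/4kHz region boundaries below are illustrative assumptions; adjust them to your own definitions.

```python
import numpy as np

def classify_deviations(freqs, measured_db, target_db, threshold_db=3.0):
    """Count peaks (above target) and nulls (below target) whose deviation
    exceeds threshold_db, bucketed into bass/mid/high regions."""
    freqs = np.asarray(freqs, dtype=float)
    deviation = np.asarray(measured_db, dtype=float) - np.asarray(target_db, dtype=float)
    peaks = deviation > threshold_db
    nulls = deviation < -threshold_db
    # Region boundaries (300 Hz, 4 kHz) are one common choice, not a standard.
    region = np.where(freqs < 300.0, "bass",
                      np.where(freqs < 4000.0, "mid", "high"))
    return {name: {"peaks": int(np.sum(peaks & (region == name))),
                   "nulls": int(np.sum(nulls & (region == name)))}
            for name in ("bass", "mid", "high")}

# Demo: a +5 dB peak at 50 Hz and a -5 dB null at 100 Hz, flat elsewhere.
report = classify_deviations(np.array([50.0, 100.0, 1000.0, 8000.0]),
                             np.array([80.0, 70.0, 75.0, 75.0]),
                             np.full(4, 75.0))
```

Running the same function on each channel’s export makes the symmetry comparison between speakers straightforward.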

Decay time measurements

The other domain for room acoustic measurements is decay time, i.e. the time a note takes to fade completely into the noise floor. A “T30” or “T40” metric is typically used to denote decay time and represents how long a sound takes to fall 30 or 40 decibels, respectively. If your noise floor averages 30dB and you listen to music at 70dB or higher, then the T40 metric might provide more relevant data (70 - 30 = 40dB, which corresponds to the T40 metric) than the RT60 metric. Most residential rooms aren’t large enough to support the more commonly known RT60 used for concert halls and other public spaces.
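Under the hood, decay times are usually estimated from a measured impulse response via Schroeder backward integration. The Python sketch below follows the simple definition used here (the time for the decay curve to fall a given number of decibels) on a broadband impulse response; real tools filter into 1/3rd-octave bands first and fit a regression line, so treat this as an illustration of the idea only.

```python
import numpy as np

def decay_time(impulse_response, sample_rate, drop_db=30.0):
    """Time (in seconds) for the Schroeder decay curve of an impulse
    response to fall by drop_db decibels."""
    ir = np.asarray(impulse_response, dtype=float)
    # Backward-integrate the squared impulse response: energy remaining
    # at each instant, normalised to 0 dB at the start.
    remaining = np.cumsum(ir[::-1] ** 2)[::-1]
    curve_db = 10.0 * np.log10(remaining / remaining[0])
    crossed = np.nonzero(curve_db <= -drop_db)[0]
    if crossed.size == 0:
        return None  # recording too short (or too noisy) to reach drop_db
    return crossed[0] / sample_rate

# Demo: synthetic exponential decay with a 0.1 s time constant.
fs = 48_000
t = np.arange(fs) / fs
t30 = decay_time(np.exp(-t / 0.1), fs)  # roughly 0.345 s for this signal
```

The same function with drop_db=40.0 gives the T40 figure discussed above.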

Important decay time variables

A sample of important decay time variables is shown below and includes a target curve, general statistics, error rates between the actual and target curves, and inter-channel symmetry. These variables are not normally part of software applications, so importing the data into a spreadsheet allows for further data manipulation and analysis.

Analysis Area: Decay Time Target Curve

  • Target curve: This is the same concept as a frequency response target curve. Typically, a home stereo should have decay times between 200 and 500 milliseconds (ms) to avoid sounding overly dead or too alive. Decay time across the midrange should be consistent between both channels and gradually decrease at higher frequencies. The bass, which naturally has longer decay times, should have its longest decay at about double the average midrange value, but under 500ms.
  • Basic statistics: Consider calculating basic statistical measures (e.g. average, median, mid-point, minimum, maximum) to determine how similar both channels are.
  • Error rates: Like the frequency response error rate, this metric considers how far the actual 1/3rd-octave intervals sit from the ideal target curve. Single-digit or low double-digit error rates are quite acceptable.
  • Symmetry: It’s ideal when both channels have the same 1/3rd-octave decay times; minimizing the variance between them keeps any differences inaudible.
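For anyone building the spreadsheet analysis, one plausible way to quantify the error rate per 1/3rd-octave band is a simple percentage deviation from the target; the definition below is my own assumption, not a standard.

```python
import numpy as np

def decay_error_rate(actual_ms, target_ms):
    """Per-band percentage error of measured decay times against a target
    curve, plus the average absolute error across all bands."""
    actual = np.asarray(actual_ms, dtype=float)
    target = np.asarray(target_ms, dtype=float)
    errors_pct = 100.0 * (actual - target) / target
    return errors_pct, float(np.mean(np.abs(errors_pct)))

# Demo: the middle band decays 10% longer than the target.
errors_pct, mean_abs = decay_error_rate([400.0, 330.0, 300.0],
                                        [400.0, 300.0, 300.0])
```

Comparing the per-band errors between left and right channels doubles as a symmetry check.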

Unlike the frequency response chart, which is standard in its appearance, decay time measurements come in several chart types, including the popular but complex waterfall chart, which displays both frequency response loudness and decay times, and the wavelet spectrogram, a colourful chart that shows frequency and decay intensity as a colour palette. Other charts exist.

In my next installment, I will focus on deciphering the analysis and what corrective actions, if any, may be considered to improve sound quality. Included will be measurement graphs of a real room’s before-and-after acoustics.

* I realize that not everyone will want to pore through a spreadsheet’s computational information to help identify and quantify problem areas, so I can provide this service to interested parties.


Fielding Acoustic Consulting specializes in two-channel stereo setup and optimization. Kevin Fielding is a musician and devoted audiophile with a long work history in data analytics. He can be reached at

2024 PMA Magazine. All rights reserved.

