
Prices listed are in US$.
Many subjective audiophiles loathe blind listening tests. The standard putdown for blind testing is, “That’s not the way I listen.” Yet, in truth, blind comparisons—free from the influence of price, brand, technology, aesthetics, or other personal non-sonic biases—represent the purest form of subjective evaluation. So why aren’t blind tests more popular with audiophiles? The answer is simple—conducting a well-designed, truly unbiased blind test is a pain in the ass. I know, because I just completed one with the help of members of the Colorado Audio Society.
The Signal Chain
The listening test I set up for them compared two digital-to-analogue converters. DACs are among the easiest components to configure for blind testing, since the only special hardware required is a transparent switching device. But there is more to an unbiased test than switching. The most critical factor is ensuring a level playing field: the output levels of both DACs must be matched as closely as possible. I devoted considerable time to getting them as close as I could, and after much back and forth I enlisted a second set of ears to help confirm the match. We determined that one DAC needed a 1 dB boost to bring the two volumes into alignment.
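For readers who want a feel for the numbers, the sketch below (Python, purely illustrative, and not the procedure I actually followed, which was done largely by ear) shows how a level offset can be derived from measurements of the same test tone played through both DACs. A 1 dB raise works out to a voltage ratio of roughly 1.12.

```python
import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def db_offset(reference, candidate):
    """Gain in dB to apply to `candidate` so its level matches `reference`."""
    return 20 * math.log10(rms(reference) / rms(candidate))

def db_to_gain(db):
    """Convert a dB adjustment to a linear voltage multiplier."""
    return 10 ** (db / 20)

# Example: identical 1 kHz test tones captured from two DACs,
# one playing back about 11% quieter than the other.
tone_a = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(4800)]
tone_b = [0.891 * s for s in tone_a]

offset = db_offset(tone_a, tone_b)
print(f"DAC B needs roughly a {offset:+.2f} dB raise")               # about +1.00 dB
print(f"That is a voltage multiplier of {db_to_gain(offset):.3f}")   # about 1.122
```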
The software that makes it possible to A/B two DACs in real time, without pauses, breaks, or imprecise syncing, is Roon. Roon has a feature called “ganging,” which allows the exact same digital file to be sent simultaneously to multiple Roon endpoint DACs. Both DACs stay in sync, so when switching between them there are no interruptions, clicks, pops, or any other audible cues that could give away which unit is playing.
The signal chain for both DACs was identical: each was connected to the primary network switch via its own dedicated CAT 6 cable run to its Ethernet interface. For analogue switching, I used the Schiit Kara F. Like its predecessor, the Freya S, the Kara F makes absolutely no noise when switching between balanced inputs. With the Kara, I could have replicated a test I recently saw on YouTube, which used silent switching between two DACs to determine whether listeners could identify how often the signal was switched between them. I found that approach biased; it seemed designed to support the conclusion that “all DACs sound the same.” That was not my goal. I was far more interested in exploring what sonic differences, if any, listeners could discern between two DACs.
The cabling from both DACs to the Kara F consisted of identical lengths of balanced interconnects to ensure consistency. The Kara’s balanced outputs were connected to the balanced inputs of a Pass 150.8 power amplifier, while its unbalanced outputs fed a pair of JL Audio F112 subwoofers. For speakers, I used my usual reference Spatial Audio X-2s.
The Test Itself
My listening room was created with a one-person listening sweet spot in mind because there is only one of me. For the test, participants took turns inhabiting said sweet spot and listened to one track selected by me (always played first) and one chosen by them. To switch DACs, they only had to raise a hand and hold up one or two fingers to indicate which DAC they wanted to hear. I handled the switching to avoid any movement by the listener that could shift their head position and disrupt their focus.

This test wouldn’t have been possible without Schiit’s new Forkbeard remote control. The standard remote included with the Kara only cycles through inputs in one direction (1 to 2 to 3, etc.), making precise A/B comparisons cumbersome. In contrast, the Forkbeard app, which allows direct switching between inputs, was ideally suited for this type of evaluation.
I held three listening sessions, each with up to five participants. During their turn, each person sat in the sweet spot while the others were seated to the sides or behind. Each listener was allowed about ten minutes in the sweet spot. I controlled the volume because I know the approximate level range where the system remains perceptually linear (at lower and higher playback levels the Fletcher-Munson equal-loudness curves affect the perceived tonal balance), so I selected the level for each participant's track.
After each listening session, participants filled out a brief questionnaire. The questions were simple: What music did you choose? Which DAC did you prefer? And finally, an open-ended section: What differences did you hear?
The Results
There are two types of blind tests: single-blind and double-blind. In a double-blind test, neither the tester nor the participant knows the identities of the two devices. In a single-blind test, only the tester knows, which makes it prone to tester bias: through verbal or non-verbal cues, the tester can telegraph a preference that participants pick up on. I conducted a single-blind test, but it came with a twist: I like both DACs equally. I was also careful to give identical instructions to everyone to minimize bias.
Why did I use two selections for the test? Because I had no idea what the participants would choose, and I wanted to make sure each person heard what I consider a high-quality recording of live, unprocessed acoustic instruments. For that, I used one of my own recordings of the Mr. Sun band, captured at the Salina Schoolhouse. The participants chose tracks from a range of commercial recordings available via Qobuz, including big-band jazz, rock, and pop. No one chose a classical music track.

So—what were the results? Five participants preferred DAC1 while five others preferred DAC2. Three participants had no preference.
While any conclusions from such a small sample must be taken with caution, I gleaned a few observations from the tests. First, the two DACs do not sound identical, but they are extremely close. Three participants reported having no preference, and several described the differences as “minimal” or “negligible” in their questionnaires. One participant disliked both DACs, and another wrote that they preferred their own system playing the same track on vinyl. Obviously, drawing any meaningful generalizations would require a much larger group, so these results should be considered anecdotal rather than definitive.
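To put a rough number on that caveat, here is a quick back-of-the-envelope sign test (a sketch of my own in Python, not something the participants were subjected to). Among the ten listeners who expressed a preference, a 5-5 split is exactly what a coin flip would predict, and even an 8-2 split would not have reached conventional statistical significance with a group this size.

```python
from math import comb

def sign_test_p(n, k):
    """Two-sided exact sign test: probability of a preference split at least
    as lopsided as k out of n under a fair coin (i.e., no real preference)."""
    k = max(k, n - k)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Ten listeners expressed a preference: 5 chose DAC1, 5 chose DAC2.
print(f"5-5 split: p = {sign_test_p(10, 5):.3f}")   # 1.000: indistinguishable from chance
print(f"8-2 split: p = {sign_test_p(10, 8):.3f}")   # 0.109: still not significant at 0.05
print(f"9-1 split: p = {sign_test_p(10, 9):.3f}")   # 0.021: this lopsided would have been
```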
What DACs Were Used
OK, time for the big reveal. The two DACs I used for the test were the Gustard A26 and the Fosi Audio ZD3. The Gustard A26 was paired with the Gustard C16 10 MHz OCXO external clock, while the ZD3 had multiple tweaks and additions, including Muses02 op amps in place of the stock ones, a FiiO/Jade 12-volt linear power supply replacing the stock supply, and a capacitor bank sitting between that supply and the DAC. The Gustard has a built-in Ethernet port, whereas the Fosi relied on a Raspberry Pi 4B running DietPi for its Ethernet connection. The Gustard with the clock costs approximately $2150, while the Fosi configuration cost me about $500, including the Raspberry Pi. In many respects besides price, these two DACs are markedly different. The Gustard A26 employs an AKM DAC chip, while the Fosi uses an ESS DAC chip. The Gustard's power supply is internal, while the Fosi ships with an AC-to-DC wall wart (swapped here for the linear supply). The Gustard is very much a fully formed and finished product, while the Fosi set-up is far more of a tweaky mix-and-match assemblage.
Given the price difference, some people might conclude that the Fosi is a “giant killer” or that the Gustard is merely “meh.” Neither conclusion would be correct. The Gustard offers far more flexibility and adjustable features, including six different digital filters, whereas the Fosi provides only one filter choice and comes in a much smaller, cheaper chassis. Although both DACs are similarly stellar sonically, it's easy to see that the additional cost of the Gustard buys you a more sophisticated device.
My Take-Aways from the Tests
Did these blind listening tests meet my expectations? Yes. And having nothing I desperately wanted to prove helped minimize any potential disappointment. The tests showed me that when you have two DACs of comparable sonic quality, a listener's preference is likely to be personal and subjective, shaped by individual taste and music selection, and may have very little to do with the component's price, appearance, or features.