Home Computer Audio Asylum

Music servers and other computer based digital audio technologies.

RE: Why expect heroic DAC engineering to overcome a flawed interface?

I2S comes in two flavors: clock at the source and clock at the sink. The signals are the same in either case; the difference is where the clock is generated and which way the clock signal travels on the wire. If the clock is at the source then (assuming all the wires are the same length) there will be no skew problems between the clock and the data lines. The problem is that noise on the clock line translates, via the limited risetime, into a jitter source at the far end. If the clock is at the transport, this noise jitters the DAC itself, causing distortion. (However, this jitter is at least uncorrelated with the signal, unlike with SPDIF.) If the clock is at the DAC, noise on the clock line doesn't affect the DAC. The problem might be skew, but this can be minimized by limiting the round-trip time to a fraction of a bit time. Alternatively, and this has been used as far back as the 1960s in "super computers," the cable length can be fixed so that the round-trip time is an integral number of bit times.
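To put a number on that "fraction of a bit time," here is a minimal back-of-the-envelope sketch. The parameter choices are my own illustration, not from the post: a 44.1 kHz frame with 64 bit-clock periods per frame, and roughly 5 ns/m propagation delay in typical cable.

```python
# Sketch: how much of an I2S bit time a cable round trip consumes.
# Assumed (illustrative) figures: 44.1 kHz sample rate, 64 bit-clock
# periods per frame, ~5 ns/m signal propagation in typical cable.

def round_trip_fraction(cable_m, sample_rate_hz=44_100,
                        bits_per_frame=64, ns_per_meter=5.0):
    bit_time_ns = 1e9 / (sample_rate_hz * bits_per_frame)  # one bit-clock period
    round_trip_ns = 2 * cable_m * ns_per_meter             # out and back
    return round_trip_ns / bit_time_ns

# A 1 m cable: ~10 ns round trip against a ~354 ns bit time,
# i.e. only a few percent of a bit, so skew stays negligible.
print(f"{round_trip_fraction(1.0):.3f}")
```

At these rates even a few meters of cable keeps the round trip well under a bit time; only at much higher bit rates or longer runs does the fixed-length trick from the 1960s become necessary.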

I2S was proposed as an on-board interconnect, originally at TTL levels. If it is used across a cable, there are electrical issues that must be addressed, but the signals can perfectly well be sent in balanced fashion, reducing noise coupling.

AES can be used correctly, with the clock at the DAC for playback and separate cabling used to send a clock signal to the transport. In this regard, it is almost as good as I2S, since any jitter on the incoming data lines caused by the Manchester coding is lost when the signals are latched by the local clock. (No need for phase lock loops.) I2S over cables is not standardized, so some implementations probably provide balanced signaling.

There is an operational benefit to placing the clock at the source, even though this is sonically inferior: the clock signal itself identifies the sampling rate. When the clock is placed at the sink, an out-of-band channel is needed to select a sample rate that matches the source, assuming the source is not willing to do a sample rate conversion to a common rate. In some professional systems, separate datacomm links have been used for this out-of-band channel. With USB, not only is there bidirectional data flow, but there is also the ability to send control and status signals as needed to handle such housekeeping.
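The out-of-band rate selection amounts to a very small control protocol. Here is a hypothetical sketch of the sink side, with invented names and rates; it is not modeled on any real USB or datacomm implementation, just the shape of the exchange.

```python
# Hypothetical sketch of sink-side rate selection over an out-of-band
# channel: the source asks the sink (which owns the master clock) to
# retune, and the sink acknowledges or refuses. Names and rates are
# illustrative assumptions, not a real protocol.

SUPPORTED_RATES = {44_100, 48_000, 88_200, 96_000, 176_400, 192_000}

class SinkClock:
    """DAC-side master clock, configured via the control channel."""
    def __init__(self):
        self.rate = None

    def handle_set_rate(self, rate_hz):
        if rate_hz not in SUPPORTED_RATES:
            raise ValueError(f"unsupported rate {rate_hz}")
        self.rate = rate_hz          # retune the local oscillator
        return ("ack", rate_hz)      # status back to the source

dac = SinkClock()
print(dac.handle_set_rate(96_000))   # ('ack', 96000)
```

The point is that the data link itself never has to carry rate information; a request/acknowledge exchange on a side channel does the job, which is exactly what USB's control transfers provide for free.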

Since the relevant signal quality and jitter issues were known at the best communications and computer laboratories as far back as the '50s and '60s, the situation in audio reflects either ignorance on the part of the designers or cost pressures that are appropriate to mass market products but just plain wrong in "cost no object" products, which is what the "high end" purports to deliver. I say "purports" because some vendors sell expensive audio jewelry.



Tony Lauck

"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar



Posted by Tony Lauck, 10/21/14 12:36:14