In Reply to: RE: " USB REGEN " by UpTone Audio ... any opinions ? posted by Thorsten on May 26, 2015 at 01:43:38:
One difference between USB and Ethernet is that USB uses bit stuffing to frame its packets. This means that the time on the media varies according to the data being sent. When USB carries audio data, the power supply load therefore varies with the audio signal being transmitted, adding a unique mechanism for noise modulation.
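To make the data dependence concrete, here is a minimal Python sketch of the USB stuffing rule (a 0 inserted after every run of six consecutive 1 bits). The function name is mine, not from any USB library; NRZI encoding and other details of the real wire protocol are omitted. The point is simply that the number of bits on the wire, and hence the transmit time and supply load, depends on the payload contents:

```python
def stuff_bits(bits):
    """USB-style bit stuffing: insert a 0 after every six consecutive 1s."""
    out = []
    run = 0
    for b in bits:
        out.append(b)
        if b == 1:
            run += 1
            if run == 6:
                out.append(0)  # stuffed bit, consumes extra wire time
                run = 0
        else:
            run = 0
    return out

# A payload of all 1s grows by one stuffed bit per six; all 0s does not grow.
print(len(stuff_bits([1] * 48)))  # 56 bits on the wire
print(len(stuff_bits([0] * 48)))  # 48 bits on the wire
```

So two packets carrying the same number of payload bits can occupy different amounts of bus time, which is exactly the signal-dependent load variation described above.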
A second difference between USB and Ethernet is USB's weaker error detection: a 16-bit CRC vs. Ethernet's 32-bit CRC. Bit stuffing makes the situation worse. With marginal transmitters, cables, or receivers, there will be a radically higher chance of corrupted packets arriving, and in some cases systemic problems as well. Note that a single bit error can misframe all subsequent audio data and may produce loud error bursts. A single bit error over Ethernet will only corrupt a single PCM sample, which at worst will be heard as a click. If USB is used to carry DSD via the DoP kluge, then there is a further failure mode: missynchronization triggered by a single bit error.
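The misframing mechanism can be sketched in a few lines of Python. The destuffer below is a naive illustration (my own, not a real USB receiver, which would also flag a bit-stuff error when the dropped bit is a 1). A single flipped bit that creates a spurious run of six 1s causes the receiver to swallow a genuine payload bit, shifting every subsequent bit by one position:

```python
def unstuff_bits(bits):
    """Naive destuffer: drop the bit that follows six consecutive 1s.
    (A real USB receiver would flag an error if that bit were a 1.)"""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 6:
            skip = True
    return out

# 48-bit payload built from runs of five 1s: never triggers stuffing,
# so it travels on the wire unchanged.
payload = ([1] * 5 + [0]) * 8
wire = list(payload)

wire[5] ^= 1                   # one bit error creates a run of six 1s
recovered = unstuff_bits(wire)
print(len(recovered))          # 47: a payload bit was swallowed, and
                               # every later bit lands in the wrong place
```

With 16-bit audio this shifts every subsequent sample boundary, which is why the result is a loud burst rather than the single-sample click an Ethernet bit error would produce.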
I have hated the USB technology since its very beginning, because of its use of bit stuffing for packet framing, a terrible idea dating back to the mid-1970s when logic circuits were horrendously expensive. There are real problems with this coding (based on IBM's SDLC technology) that make it inappropriate for noisy environments where signal integrity may be questionable. The use of boutique USB cables that do not meet specifications is completely absurd given this weak design.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar