Another delay story?

Garrett Wollman wollman@csail.mit.edu
Tue Oct 10 17:38:46 EDT 2006


<<On Tue, 10 Oct 2006 15:35:35 -0400, John Francini <francini@mac.com> said:

> I had heard somewhere that many of the digital channels are squeezed  
> into a mere fraction of the on-the-wire bandwidth used by the analog  
> signals.

That's sort-of true.  A television "channel", whether analogue or
digital, uses 6 MHz of bandwidth in the Western Hemisphere.  The rate
of digital data (also called "bandwidth") that it's possible to
achieve varies depending on the modulation used.  The 8VSB system used
for broadcast digital TV can do about 19 Mbit/s, or twice the data
rate of a DVD.  The ATSC standard includes a cable-only 16VSB
modulation, although this is rarely if ever used in real cable
systems; as the numbers suggest it can carry about twice the data rate
of 8VSB.  (It does this at the cost of less immunity to noise.)
Instead of VSB, cable TV systems typically use 64QAM or 256QAM, which
can support a comparable or greater data rate.
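Back-of-the-envelope, for the curious (a sketch in Python; the symbol
rates are the published ones, but the "efficiency" factors folding in
FEC and sync overhead are rough approximations of mine, so treat the
output as estimates):

    # Payload rate ~= symbol rate * bits per symbol * FEC/sync efficiency.
    # 8VSB's trellis code leaves 2 information bits of each 3-bit symbol;
    # 16VSB has no trellis, so all 4 bits per symbol carry data.
    systems = {
        # name     (Msym/s, info bits/symbol, approx. efficiency)
        "8VSB":    (10.762, 2, 0.90),
        "16VSB":   (10.762, 4, 0.90),
        "64QAM":   ( 5.057, 6, 0.89),
        "256QAM":  ( 5.361, 8, 0.905),
    }
    for name, (msym, bits, eff) in systems.items():
        print(f"{name:6s} ~ {msym * bits * eff:4.1f} Mbit/s payload")

which lands on the familiar figures: about 19.4 Mbit/s for 8VSB, 38.7
for 16VSB, 27 for 64QAM, and 38.8 for 256QAM.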

However, few if any individual services actually use the full
bandwidth of the channel.  There's a lot of redundancy in video: one
frame usually looks very much like its predecessor, so if you can
arrange to send only the parts that change, rather than 30 entire
frames per second, you can get away with a much lower data rate (a
toy sketch of this appears after the list of trade-offs below).  All
digital TV systems (including DVD) currently in use make use of the
video encoding defined in the MPEG standard.  However, the standard
only specifies how the bits in the data stream are to be interpreted;
how the video is actually encoded is left up to the encoder.  This
means that there is a lot of room for differences in quality between
encoders, and even between different configurations of the same
encoder.  For example, here are some of the trade-offs:

- Constant or variable bit-rate:  An encoder may be configured to
generate video at a constant bit rate, which makes system design and
planning simpler -- all of the data rates can be determined in advance
and the appropriate amount of bandwidth allocated to serve all of
them.  (I believe the networks encode their HD programming in CBR.)
In VBR systems, the user sets a target data rate, but the actual
output of the encoder is allowed to vary by some amount depending on
the difficulty of compression, and during fast motion may greatly
exceed the target rate.  (Cable and satellite systems generally use
VBR, for reasons I'll get to in a moment.)  In either case, if the
source material requires a higher data rate than what is actually
available, some data will get dropped, leading to the "blocky"
appearance of heavily-artifacted MPEG video.[1]

- Real-time versus near-real-time versus off-line:  There are
opportunities to compress video that are only available when using a
high-end compression system which can look at a large amount of the
source material at once, so a pre-recorded source like a film is
nearly always going to have better video quality than a live event for
the same bandwidth.  Real-time encoders tend to be poor compressors;
if a delay of a few seconds is allowed, it's possible to do much
better.  This is also why HD Radio requires such a long delay.

- Multiplexing:  When using VBR encoders exclusively, as on the typical
cable system, there is the additional complication of assigning
services to physical channels.  With CBR, you can simply add up the
target bit rates until you reach one channel's worth.  With VBR,
there is the extra complication that you don't know ahead of time
what rate the encoder will actually put out.  So operators will
typically identify the specific services which require high data
rates -- sports and movie channels -- and make sure not to multiplex
them onto the same physical channel, so that it's unlikely for more
than one service in the multiplex to need all that capacity at the
same time.
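To make the frame-differencing point above concrete, here's a toy
sketch in Python (with NumPy).  The frame size, the amount of motion,
and the send-whole-blocks-or-nothing rule are all invented for
illustration -- real MPEG motion compensation is far cleverer -- but
it shows the idea, and also why fast motion (more changed blocks)
pushes a VBR encoder's output rate up:

    import numpy as np

    rng = np.random.default_rng(0)
    H, W, B = 480, 640, 16              # frame and block sizes (made up)
    frames = [rng.integers(0, 256, (H, W), dtype=np.uint8)]
    for _ in range(29):                 # ~1 second of video at 30 fps
        f = frames[-1].copy()
        y, x = rng.integers(0, H - 64), rng.integers(0, W - 64)
        # A small 64x64 region "moves"; the rest of the frame is static.
        f[y:y+64, x:x+64] = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        frames.append(f)

    raw = sum(f.size for f in frames)   # cost of sending every frame whole
    diff = frames[0].size               # send the first frame whole...
    for prev, cur in zip(frames, frames[1:]):
        for by in range(0, H, B):
            for bx in range(0, W, B):
                if not np.array_equal(prev[by:by+B, bx:bx+B],
                                      cur[by:by+B, bx:bx+B]):
                    diff += B * B       # ...then only the changed blocks
    print(f"whole frames: {raw} bytes; diff-coded: {diff} bytes "
          f"({100 * diff / raw:.0f}% of raw)")

On this synthetic input the diff-coded stream comes to a few percent
of the raw size; double the moving area and the diff cost roughly
doubles, which is exactly the bursty behaviour the VBR discussion
above is about.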
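And a similarly hand-waved sketch of the multiplexing problem.  The
service names, target rates, and the 38.8 Mbit/s channel payload
(roughly 256QAM) are all invented; the point is only the greedy rule
at the end -- spread the bursty services across physical channels so
their peaks rarely coincide:

    # Pack services into two physical channels of ~38.8 Mbit/s payload.
    CHANNEL = 38.8
    services = [("sports",        8.0, True),   # (name, Mbit/s, bursty?)
                ("movies",        7.0, True),
                ("news",          3.5, False),
                ("shopping",      2.5, False),
                ("sitcom reruns", 3.0, False),
                ("weather",       2.0, False)]

    muxes, loads, bursty = [[], []], [0.0, 0.0], [0, 0]
    for name, rate, is_bursty in sorted(services, key=lambda s: -s[1]):
        # Prefer the mux with fewer bursty services (if this one is
        # bursty itself), then the lighter-loaded one.
        for i in sorted(range(2), key=lambda i: ((bursty[i], loads[i])
                                                 if is_bursty
                                                 else (0, loads[i]))):
            if loads[i] + rate <= CHANNEL:
                muxes[i].append(name)
                loads[i] += rate
                bursty[i] += is_bursty
                break
    for i in range(2):
        print(f"mux {i}: {loads[i]:4.1f} Mbit/s target, "
              f"{bursty[i]} bursty service(s): {muxes[i]}")

With CBR you'd stop at the load check; the bursty-count tiebreak is
the VBR-specific part.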

> If this is the case, shouldn't the eventual demise of the  
> analog tier be a good thing, as the cable co could then use more  
> bandwidth to provide better pictures?

Assuming they use it for that purpose rather than offering more
channels of endlessly repeated tripe....

-GAWollman

[1] There are two main ways to reduce the size of an MPEG video
stream.  One way is to reduce the temporal resolution, by dropping
some blocks of motion data (each one containing, IIRC, a 16x16
rectangle of image data).  The other way is to reduce the spatial
resolution, either by not encoding some information -- often done as a
matter of course with chrominance, since the human eye is less
sensitive to variations in color than in luminance -- or by increasing
the lossiness of the image compression.  MPEG "I frames", like JPEG
still images, are compressed using a technique based on the Discrete
Cosine Transform, and like JPEG offer a trade-off between spatial
resolution and compression ratio.  The spatial resolution generally
cannot be changed in real time, so when a stream exceeds its allowed
data rate, the only solution is to drop blocks.
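That DCT trade-off is easy to demonstrate.  A sketch in Python (NumPy
and SciPy; the 8x8 test block and the quantizer steps are invented):
coarser quantization zeroes out more coefficients -- which is where
the compression comes from, since runs of zeros code very compactly
-- at the price of a blockier reconstruction:

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(b):
        return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

    def idct2(b):
        return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

    rng = np.random.default_rng(1)
    # An 8x8 block: a smooth gradient plus a little noise, standing in
    # for typical luminance data.
    block = (np.add.outer(np.arange(8.0), np.arange(8.0)) * 8
             + rng.normal(0, 4, (8, 8)))

    for step in (2, 10, 40):            # bigger step = lossier quantizer
        quantized = np.round(dct2(block) / step)
        kept = np.count_nonzero(quantized)
        err = np.abs(idct2(quantized * step) - block).mean()
        print(f"step {step:2d}: {kept:2d}/64 coefficients survive, "
              f"mean error {err:4.1f}")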


