We have developed a new technique for measuring the velocity of information vi based on information-theoretic ideas. We do this by first creating an alphabet of symbols, each of which is a distinct pulse shape. We then send the symbols through both vacuum and a test medium that has either a very fast or very slow group velocity. By measuring the accuracy with which the symbols can be distinguished as a function of time, we can determine the velocity of the information.
|The two symbols in our alphabet. The leading edges are identical.|
Our technique begins with the creation of an alphabet of symbol shapes. The alphabet that we use is binary, having two symbols, but it could in principle have any number of symbols. We designed our symbols so that they are easily distinguishable from each other, but not immediately: the leading edges of the symbols are identical.
This choice of symbol shapes has a number of appealing features. One is that when the symbols do become distinguishable, the signal-to-noise ratio is large. Another is that we can observe how the symbols behave when they are similar versus when they are different.
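To make the idea concrete, here is a minimal sketch (with purely illustrative pulse shapes and timing, not the ones used in the experiment) of a binary alphabet whose two symbols share an identical leading edge:

```python
import numpy as np

def make_symbols(n=400, split=200):
    """Build a toy binary alphabet: two pulses with identical leading edges.

    The shapes and the split point are illustrative choices, not the
    actual experimental pulse shapes.
    """
    t = np.linspace(-200, 200, n)               # toy time axis, "ns"
    leading = np.exp(-((t + 100) / 50.0) ** 2)  # shared leading edge
    sym0 = leading.copy()
    sym1 = leading.copy()
    # After the decision point (t = 0) the symbols diverge.
    sym1[split:] += 1.0   # the "1" symbol jumps up
    sym0[split:] *= 0.0   # the "0" symbol drops to zero
    return t, sym0, sym1

t, s0, s1 = make_symbols()
# Before time 0 the symbols are indistinguishable; afterwards they are not.
assert np.allclose(s0[:200], s1[:200])
```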
Imagine you're sending your friend information using these symbols. Don't worry about fast light or velocities for the moment. Let's say you're sitting right next to each other. Every minute, you send one of these symbols to tell her whether you're happy (1) or sad (0).
When the time comes, you start sending a symbol. At time -100 ns, your friend clearly sees a symbol coming, but can she tell whether you're happy or sad? No! She doesn't know yet because at that point, the symbols are the same. Therefore, she hasn't received any information.
A more subtle question is: have you actually sent any information? The answer is again no. The reason is that you can still change your mind. At time -100 ns, you still haven't committed, and you don't need to until time 0! Only then must you commit, and only then have you sent the information.
Now, what about your friend? At time 100 ns, you've committed to a symbol, and she can easily tell which mood you're in. In this case, the sending time and arrival time of the information are nearly the same because you and your friend are so close to each other, but if you were a large distance apart, she would receive the information long after you sent it. This time difference along with the distance tells us the information velocity.
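The arithmetic behind that last sentence is simple: divide the distance traveled by the difference between the sending and arrival times. A worked example with hypothetical numbers (a 30 m path and a 500 ns delay, chosen only for illustration):

```python
# Hypothetical numbers: the information is committed (sent) at t = 0
# and detected 500 ns later after traveling 30 m.
distance = 30.0          # meters (illustrative)
t_sent = 0.0             # seconds
t_detected = 500e-9      # seconds

v_info = distance / (t_detected - t_sent)
# About 6e7 m/s, i.e. roughly 0.2 c for these made-up numbers.
print(v_info)
```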
|More realistic symbol shapes. These have non-perfect discontinuities and have noise added.|
But there's a catch! The symbol shapes used above are ideal. In the real world, we can't make such perfect jumps or such smooth shapes. The devices we use both have noise and tend to filter or smooth the jumps. As a result, the symbols shown to the right are a little more realistic.
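A toy model of this degradation, assuming a simple moving-average filter for the smoothing and white additive noise (both illustrative stand-ins for real device behavior):

```python
import numpy as np

def realistic(symbol, rng, bandwidth=10, noise=0.05):
    """Degrade an ideal symbol: low-pass smoothing plus additive noise.

    The filter and noise level are illustrative choices, not measured
    device parameters.
    """
    # Smooth the sharp jumps with a moving-average (boxcar) filter.
    kernel = np.ones(bandwidth) / bandwidth
    smoothed = np.convolve(symbol, kernel, mode="same")
    # Add white detection noise.
    return smoothed + rng.normal(0.0, noise, size=symbol.shape)

rng = np.random.default_rng(0)
ideal = np.r_[np.zeros(100), np.ones(100)]  # an ideal step-like symbol
noisy = realistic(ideal, rng)               # smoothed jump, noise on top
```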
What if these are what your friend sees? Let's say she's right across the table from you again. When does she detect the information? You still made your choice at time 0, but now she can't tell what you sent until some time later. We call this delay the detection latency, and it has nothing to do with the information velocity. It is a fundamental part of sending and detecting information, and it is something we must account for in our measurements.
Detecting the realistic symbols shown above, when can your friend be completely certain which symbol you sent? The real answer is never! If the noise is bad, there's always a chance that what she's seeing just looks like a 0 but was really a 1. However, at some time, she can be 90% sure, or 99% sure, etc.
In the information theory and communications world, we quantify this sort of certainty with the bit error rate, or BER. The BER is just the fraction of symbol identifications that are incorrect. So, a BER of 1/2 is the worst possible; that's what you get if you don't even look at the symbols and just guess. The better your detection, the lower this BER will be. A BER of 0.1 means that you are right 9 out of 10 times.
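In code, the BER definition is just a few lines (a direct sketch of the definition above, nothing experiment-specific):

```python
def bit_error_rate(sent, identified):
    """Fraction of symbol identifications that are incorrect."""
    assert len(sent) == len(identified)
    errors = sum(s != i for s, i in zip(sent, identified))
    return errors / len(sent)

# One mistake in five identifications: BER = 0.2 (right 4 out of 5 times).
print(bit_error_rate([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))  # 0.2
```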
The BER is convenient because it gives us a way to quantify the detected information at a given time. If we tell your friend that she must identify the symbol she's receiving at time -100 ns, and repeat the test over and over again, she'll get a BER of 1/2. She'll be guessing because that's all she can do. Now, if we do the same at time 100 ns, her BER will probably be pretty good because it's easy to tell the symbols apart then. All we have to do is find the time when the BER first departs from 1/2.
Again, the detection latency gets in our way. Noise in the pulses leads to noise in the BER, so it's not immediately clear when the BER leaves 1/2. We must choose some threshold BER at which we believe we are detecting information. For example, your friend might check 1000 symbols and discover that she's right 9 out of 10 times when she identifies the symbol at time 50 ns. With a BER of 0.1, she can be pretty sure that she's detecting some information.
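The whole procedure, repeated trials at each candidate decision time until the estimated BER drops below a threshold, can be simulated. This sketch uses toy symbols (identical until t = 0, then diverging gradually), a simple nearest-value detector, and made-up noise levels; none of these are the experimental values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
sym0 = np.zeros(n)                                   # "sad" symbol
sym1 = np.zeros(n)                                   # "happy" symbol
sym1[200:] = np.minimum(np.arange(200) / 100, 1.0)   # diverges after t = 0
noise = 0.3                                          # illustrative noise level

def ber_at(k, trials=1000):
    """Estimate the BER when forced to identify the symbol at sample k."""
    bits = rng.integers(0, 2, trials)
    seen = np.where(bits == 1, sym1[k], sym0[k]) + rng.normal(0, noise, trials)
    # Guess whichever ideal symbol value is closer at this sample.
    guesses = (np.abs(seen - sym1[k]) < np.abs(seen - sym0[k])).astype(int)
    return np.mean(guesses != bits)

# The BER sits near 1/2 while the symbols are identical, then falls. The
# first sample where it drops below the threshold marks "information detected";
# the gap between that sample and the commit time t = 0 is the latency.
threshold = 0.1
detect_sample = next(k for k in range(n) if ber_at(k) < threshold)
latency_ns = detect_sample - 200      # samples after the commit time, toy "ns"
```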
According to this system, we would then say that you sent the information at time 0, your friend detected it at time 50 ns, and (because you're right next to each other and there was no propagation time) the detection latency is 50 ns.
The problem is only a little more complicated when you're sending the symbols through some medium. It is often hard to know exactly when the information left the sending device because the devices can be rather complex. However, if we send the symbols through the same length of vacuum (where the information velocity is c) we can figure out when the information was sent. It's also difficult to know what the detection latency is. However, we can estimate it by theoretically modeling our experiment. The benefit of a theoretical model is that you can know exactly when the information was encoded.
By combining all of these tools, we can measure the velocity of information through any medium!
So, what do we see? When we measure the information velocity in our system, which has a very fast group velocity, we find that the information velocity vi ≈ 0.4 (+0.7/−0.2) c. Why so slow? Well, our uncertainty range includes c, but we think the reason we got 0.4 c comes down to noise, an imperfect theoretical model, and different noise in the vacuum vs. fast-light cases. We hope that future experiments with better equipment can get closer to the real answer!
It may seem pretty easy to identify the symbols shown above, but in order to do this experiment well, it's important to use the best technique available. We use an identification scheme called a matched filter.
The basic idea is that each incoming symbol is compared to two reference symbols: the ideal symbol shapes for 0 and 1. The comparison is done by integrating the product of each reference symbol with the test symbol (including some normalization) up to the identification time. If the test symbol is similar to a reference, the integral will be large. Otherwise, the integral will be small. If one of these integrals is subtracted from the other, the resulting number tells us which symbol we think it is.
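The scheme just described can be sketched in a few lines. This is an illustration of the idea, with toy reference shapes and a simple norm-based normalization, not the exact normalization used in the experiment:

```python
import numpy as np

def matched_filter_decide(test, ref0, ref1, k):
    """Identify a noisy symbol by matched filtering up to sample k.

    Integrate (sum) the product of the test symbol with each normalized
    reference up to the identification time; the sign of the difference
    of the two overlaps tells us which symbol we think it is.
    """
    r0, r1 = ref0[:k], ref1[:k]
    score0 = np.dot(test[:k], r0) / (np.linalg.norm(r0) + 1e-12)
    score1 = np.dot(test[:k], r1) / (np.linalg.norm(r1) + 1e-12)
    return 1 if score1 - score0 > 0 else 0

# Toy references: identical leading edges, divergence after sample 200.
ref0 = np.r_[np.linspace(0, 1, 200), np.zeros(200)]
ref1 = np.r_[np.linspace(0, 1, 200), np.ones(200)]
rng = np.random.default_rng(2)
noisy1 = ref1 + rng.normal(0, 0.2, 400)   # a noisy "1" symbol
print(matched_filter_decide(noisy1, ref0, ref1, k=300))  # identifies a 1
```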
Please see our recent letter for a more complete description.