Clock synchronization across network nodes is the problem at hand. Rather than relying on each network element keeping its own sense of time, this article discusses a "distribution-of-time" approach. Two promising solutions, Synchronous Ethernet (Sync-E) and IEEE 1588, are presented in terms of their approach, requirements and design; they are then compared, and a "combination" of the two is examined as a way to meet the challenge. Mandatory Sync-E hardware expectations and advanced 1588 concepts are covered, and ideas on modularization, extensions and software algorithmic requirements are defined individually. A new way of looking at IEEE 1588 software partitioning, as a multi-threaded application or across a multi-core system, is also presented.
In addition, the synchronization achieved can be measured both qualitatively and quantitatively, using various tools and methods that are enumerated.
The changing scenario
Service providers have huge investments in legacy TDM, SONET/SDH and ATM networks and equipment; at the same time, they are cashing in on disruptive Ethernet technology. As traffic grows exponentially, and as customers demand more (speed, bandwidth, choice, service and reliability) alongside newer applications, service providers are looking to newer and better technology to retain their profit margins (despite competition and lower end-user prices) by moving to IP/MPLS, Ethernet, TDM circuit emulation and mobile backhaul.
This move, from a time-synchronized to an asynchronous medium, although financially beneficial, is detrimental to the applications and services that rely fundamentally on accurate time. These applications are designed for a network with a small, precisely measurable transmission delay and a significantly lower delay variation, both of which are absent in Ethernet. A typical service level would be:
| Metric | Service level |
| --- | --- |
| Frame delay | < 10 ms |
| Frame delay variation | 2 ms |
| Frame error rate | 0.0001% |
| Mean time to repair | 2 h |
Table: Service levels for mobile backhaul services
(Source: ADVA optical networking)
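As an illustration only, the service levels in the table above could be checked against measured KPIs with a few lines of code. The dictionary keys, field names and sample values below are all hypothetical, not part of any standard API; the sketch simply treats each SLA figure as an upper bound.

```python
# Hypothetical sketch: checking measured KPIs against the mobile-backhaul
# service levels quoted in the table above. All names and values are
# illustrative assumptions, not a real monitoring API.
SLA = {
    "frame_delay_ms": 10.0,       # frame delay < 10 ms
    "frame_delay_var_ms": 2.0,    # frame delay variation: 2 ms
    "frame_error_rate": 1e-6,     # frame error rate: 0.0001% = 1e-6
}

def meets_sla(measured: dict) -> bool:
    """Return True if every measured KPI is within its SLA bound."""
    return all(measured[k] <= SLA[k] for k in SLA)

# A sample measurement that sits comfortably inside the bounds:
sample = {"frame_delay_ms": 7.3, "frame_delay_var_ms": 1.1,
          "frame_error_rate": 2e-7}
print(meets_sla(sample))  # -> True
```

(Mean time to repair is an operational metric rather than a per-frame measurement, so it is left out of the check.)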
Ironically, this changing scenario is the reason that we are discussing synchronization in an asynchronous network.
Timing is Fundamental
Time, and its perceived accuracy, depend on the use we put it to. Flight delays are not measured in seconds, getting your car repaired may take an extra ten minutes, and a delay of a day in construction may not be a matter for escalation or complaint: we hold an implicit margin and a different expectation for each. Reducing these delays, through efficient processes and technology, is sometimes taken as a measure of progress, and it is worth noting that we have come a long way from pendulums to atomic clocks. The smallest time interval measured to date is about 20 attoseconds (1 as = 10⁻¹⁸ s), and the theoretically derived lower limit of time measurement is about 10⁻⁴⁴ seconds, known as the Planck time (tₚ). As we keep overcoming various physical limitations, this gap will keep decreasing.
For computing machines, needless to say, one second is a long interval. Unlike us humans, they can talk to each other in nanoseconds and feel annoyed by microsecond delays. To put this into perspective: a nanosecond (10⁻⁹ s), compared with one second, is analogous (in terms of length) to a measurement accuracy the size of a virus or a DNA helix while measuring the height of an average human being.
Other analogies (in order of magnitude) for comparing one nanosecond to one second are:
- 1 paisa in 1 crore rupees
(or 1 cent in 10 million dollars)
- Speed of an electron versus a snail.
- Nerve-cell potential versus a lightning strike.
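The height analogy above can be checked with one line of arithmetic. The 1.7 m figure for an average human height is an assumption made here for illustration:

```python
# Quick arithmetic behind the nanosecond-to-second analogy: scale an
# average human height (~1.7 m, an assumed figure) by the same 1e-9 ratio.
ratio = 1e-9 / 1.0       # one nanosecond as a fraction of one second
height_m = 1.7           # assumed average human height in metres
scaled = height_m * ratio
print(scaled)            # 1.7e-09 m, i.e. 1.7 nm -- roughly the width of a DNA helix
```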
Time is one of the most accurately measured physical quantities and, considering this, it is a real achievement that a Cesium clock has a fractional uncertainty of 5.10 × 10⁻¹⁶ (an error of one second in about 60 million years). We refer to these clocks as "Stratum-0" clock sources.
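The "one second in 60 million years" figure follows directly from the fractional uncertainty; a back-of-envelope check, assuming a Julian year of 365.25 days:

```python
# Back-of-envelope check of the Cesium-clock figure quoted above:
# a fractional uncertainty of 5.10e-16 means one second of error
# accumulates over roughly 60 million years.
uncertainty = 5.10e-16
seconds_per_year = 365.25 * 24 * 3600          # ~3.156e7 s (Julian year)
years_per_second_of_error = 1 / (uncertainty * seconds_per_year)
print(round(years_per_second_of_error / 1e6, 1), "million years")
```

The result comes out at roughly 62 million years, consistent with the rounded figure in the text.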
...that's not all folks!
(c) AVChrono 2021, All Rights Reserved