If you’ve ever written anything that tries to keep perfect synchronization across systems, odds are pretty good you’ve run into desyncs — some of them sensible, some of them leading to weeks of hair-pulling.
And of course, the most opaque and unexpected culprit of all is floating point, where a single off-by-0.000000001 error will snowball into something much larger in a hurry. Traditional wisdom says to simply avoid floating point math for anything related to game logic, but that’s not practical in every situation.
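To make the snowball concrete: floating point addition is not even associative, so summing the same values in a different order (say, because two builds optimized a loop differently) already yields different bits. A minimal sketch, with illustrative values of my own choosing:

```python
# IEEE 754 doubles: the same three numbers summed in two different
# orders round differently, producing a tiny discrepancy -- exactly
# the kind of seed that grows into a full desync over many frames.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # summed left-to-right
right = a + (b + c)  # summed right-to-left

print(left == right)          # False
print(abs(left - right))      # a difference on the order of 1e-16
```

Each simulation step feeds results like these back into the next one, so a 1e-16 disagreement between two machines compounds until their game states visibly diverge.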
It turns out that conventional wisdom isn’t entirely true: perfect synchronization is attainable, if you understand the pitfalls involved. Warning: this gets fairly technical.