I also thought I could trust the megacorp. That's why I put all my code on their platform, code.google.com, and not on that obscure platform without any business model, GitHub.
Well, that sucked. And why should I use protobuf when I just need to share structs and arrays in memory (i.e. zero copy) with a version field, like everyone has done for decades?
> Causal dependencies can be modeled as commit parent-child relationships.
And there we have the problem. Git does not guarantee these things. Git is no CRDT. A proper replication protocol would guarantee them, but Git does not. Git requires manual intervention to resolve conflicts. You end up with hourly conflicts that get resolved manually, or not at all, leading to inconsistencies all over when two people merge and resolve the same conflict differently. Don't let people merge; the system must handle this automatically, as every online collaboration tool does (Google Wave, for example). Either CRDTs, or, as with databases, Paxos or a single owner.
For the source code in the repository, conflicts must be merged by users (or their tools, like `mergiraf`), just like with any other Git repo containing source code.
What might confuse you is the mention that a collaborative object may opt in to ask the user to resolve a conflict. Well, in this case, strictly speaking, it's not a CRDT anymore of course. But none of the collaborative objects commonly used in Radicle use this escape hatch.
It is clear that Git itself does not give you CRDTs, but Radicle implemented CRDTs on top of Git, which is entirely possible. This is also what's explained in the Protocol Guide. I don't understand what the misunderstanding is here, sorry.
It's not just the source that is laden with conflicts; so is all the other data. One person rejects a PR, another merges it, a third is still waiting on a disapproval. This cannot work without CRDTs. It's worse than the source itself.
According to it, it seems that if someone registers autodiscover.com, then for an example.com domain lacking autodiscover.example.com, Outlook will fall back to checking whether autodiscover.com has an entry.
> I am working on a high-performance game that runs over ssh. The TUI for the game is created in bubbletea 1 and sent over ssh via wish.
> The game is played in an 80x60 window that I update 10 times a second. I’m targeting at least 2,000 concurrent players, which means updating ~100 million cells a second. I care about performance.
High performance with ssh and wish? Certainly not. Rather use UDP over secure sockets, or just plain sockets. Even Claude would come up with much faster code than the ssh/wish nonsense. Or mosh, but that's also too complicated.
I didn't think of such a throwback to the '80s. Could be, yes. But then he cannot control that ssh option, and of 2,000 users maybe 10 would set it. I don't think so.
The real Intel mistake was that they segregated the desktop/laptop CPUs from the server CPUs by ISA, removing AVX-512 from the former soon after providing decent AVX-512 implementations. This doomed AVX-512 until AMD provided it again in Zen 4, which has forced Intel to eventually reintroduce it in Nova Lake, expected by the end of this year.
Even the problems of Skylake Server and its derivatives were not really caused by their AVX-512 implementation, which still had much better energy efficiency than their AVX2 implementation, but by their obsolete mechanism for varying the CPU's supply voltage and clock frequency. That mechanism was far too slow, so it had to use an overly conservative algorithm to guarantee that the CPUs were not damaged.
That bad algorithm for frequency/voltage control was what caused the performance problems of AVX-512: just a few AVX-512 instructions could preventively lower the clock frequency for times on the order of a second, because the CPU had to assume that if more AVX-512 instructions arrived, it would be unable to lower the voltage and frequency fast enough to prevent overheating.
The contemporaneous Zen 1 had a much more agile mechanism for varying supply voltage and clock frequency, which was matched by Intel only recently, many years later.
So why are google.com, youtube.com, and their various local aliases not suspended then, when they publish much more unlicensed music and text? On Spotify you find only a tenth of what you would find on YouTube. Likewise for books and texts.