>The most confusing thing to me is a discussion of "offline first" applications which starts with, and maintains, the assumption that your only option is a web app.
I didn't downvote your comment, but in this author's article it's deliberate that the starting context for discussion is a networked collaborative app.
Yes, internet-connected apps are a subset of all possible software, but that's not the point.
As an analogy, imagine someone submitted an article about C language memory techniques on an embedded chip. E.g.: https://www.embedded.com/memory-allocation-in-c/
And then a commenter misunderstands that article, complaining, "it's confusing to me because this article maintains that the only option is C in an embedded app, but over here I'm using Python with Cloudflare Workers."
In other words, it doesn't seem like you're interested in collaborative apps that require distributed data consistency, so this article looks "wrong" to you.
I’ve been working in this problem space for a while now, and I sort of agree with the GP poster. I really like native software, and I want native software that works simply in a distributed, collaborative context. As an example, I have a note-taking app on my laptop. I want to be able to read and edit all my notes on all my other devices, and I want that to work in a way that doesn’t depend on some random startup keeping their servers running on the other side of the planet. Right now, every software company that wants to build something like this needs to invent its own data stack, network protocols and storage systems. And the prize at the end is software that can only talk to itself via closed protocols. It’s infeasible and inefficient.
We have an opportunity right now to do an awful lot better. When we do, I want to serve both native and web apps. If we do it right, from the network's point of view the distinction should just boil away anyway.
I'm in this boat with the product I'm currently hacking on, but I just can't seem to commit to an architecture I'm happy with, mostly due to what you mentioned here.
Essentially I want to build an open-source/hackable notes/tasks/calendar system with a central datastore and message broker for coordination between various systems/scripts/components/clients.
The thing I'm struggling with is defining which features are supported in online and offline mode. Every feature added to offline mode brings tons of duplication and complexity. I'm almost at the point of saying I don't need offline mode except for viewing already-cached data and maybe very basic creates/updates, with all the processing happening on the backend once the client comes back online.
edit for more context: The old-style native apps (OmniFocus, for example) usually keep all the logic in the client and use "dumb" cloud storage to synchronize between clients. The difference with what I'm trying to build is that I want that central hub to be "smart" and always online, so it becomes easy to hack on and interact with the system from cron jobs/scripts/external services.
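That "smart always-online hub, thin offline-capable clients" split can be sketched minimally. This is a hypothetical toy, assuming nothing about your actual stack: the Hub/Client names and the operation tuples are invented. Clients apply writes straight to the hub while online, queue basic creates/updates while offline, and flush the queue on reconnect, with all real processing living in the hub:

```python
import queue


class Hub:
    """Always-online coordinator: applies operations to the canonical
    store and is the natural place for the 'smart' server-side processing."""

    def __init__(self):
        self.store = {}  # note_id -> text
        self.log = []    # applied operations, for audit/replay/hooks

    def submit(self, op):
        kind, note_id, payload = op
        if kind in ("create", "update"):
            self.store[note_id] = payload
        elif kind == "delete":
            self.store.pop(note_id, None)
        self.log.append(op)


class Client:
    """Thin client: queues writes while offline, flushes on reconnect."""

    def __init__(self, hub):
        self.hub = hub
        self.online = True
        self.outbox = queue.Queue()

    def write(self, op):
        if self.online:
            self.hub.submit(op)
        else:
            self.outbox.put(op)  # offline: remember the edit for later

    def reconnect(self):
        self.online = True
        while not self.outbox.empty():
            self.hub.submit(self.outbox.get())
```

The trade-off this buys you is exactly the one you describe: offline mode only needs the queue and the local cache, while every feature that actually computes something lives once, on the hub.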
The architecture I have in mind is to use a CRDT of some sort as the data store (or OT, if you have a centralised server and want to keep complexity down). Then make the client smart, like old-school apps such as OmniFocus. Do concurrent editing via the data layer, so the application only really deals with its local data and hears about updates via the underlying data itself changing.
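As a concrete sketch of the CRDT-as-data-store idea, here's a minimal last-writer-wins map, a toy rather than any real library like Automerge or Yjs: each key keeps the value with the highest (counter, actor) stamp, and merge() is commutative and idempotent, so two replicas that edited offline converge no matter which direction they sync first.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True, order=True)
class Stamp:
    """Lamport-style timestamp; the actor id breaks ties deterministically."""
    counter: int
    actor: str


@dataclass
class LWWMap:
    """Last-writer-wins map CRDT: each key keeps the value with the
    highest stamp. merge() is commutative, associative and idempotent,
    so replicas converge regardless of sync order."""
    actor: str
    counter: int = 0
    entries: dict = field(default_factory=dict)  # key -> (Stamp, value)

    def set(self, key, value):
        self.counter += 1
        self.entries[key] = (Stamp(self.counter, self.actor), value)

    def get(self, key):
        stamped = self.entries.get(key)
        return stamped[1] if stamped else None

    def merge(self, other):
        for key, (stamp, value) in other.entries.items():
            mine = self.entries.get(key)
            if mine is None or stamp > mine[0]:
                self.entries[key] = (stamp, value)
        # Advance our clock past everything we have seen.
        self.counter = max([self.counter] +
                           [s.counter for s, _ in self.entries.values()])
```

The application never "resolves a conflict" itself: it just writes locally and re-reads after a merge, which is the "hears about updates via the data itself changing" part.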
The data model can be reused across many applications, since there's nothing application specific about it. So we can make standard, interoperable debugging tools, backup tools, viewers, etc.
If you want to interact with the same data from scripts, cron jobs and external services, just have another peer on the network with access to the same set of API methods the application can access. You can already read and write data via that API, and any applications with the data open should see any changes instantly.
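A hypothetical sketch of that "scripts are just another peer" idea follows; the DataAPI name and its methods are invented for illustration, not a real interface. The interactive app subscribes to change notifications, and a cron-style script writes through exactly the same API, so the app sees the change without polling:

```python
class DataAPI:
    """Single API surface shared by interactive apps and headless
    scripts. Writers publish through set(); any open application
    learns about changes via subscribe() callbacks."""

    def __init__(self):
        self.data = {}
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def set(self, key, value):
        self.data[key] = value
        for cb in self.subscribers:
            cb(key, value)  # push the change to every open application

    def get(self, key):
        return self.data.get(key)
```

Because the script and the app are peers of the same data layer, there's no separate "integration API" to design or keep in sync.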
Basically, what I'm imagining is pretty similar to a self-hosted Firebase, except that, ideally, I want a CRDT under the hood so we don't need to send all edits via someone else's computer.
A networked collaborative app doesn't need to mean one that runs in a browser. A git repo fits that description: a truly distributed VCS with no server, where every editor has their own copy and no single copy is authoritative. Each user chooses which changes to merge into their personal copy. The native versions of Microsoft Office, when backed by SharePoint, also operate that way, allowing users to check out individual copies and edit them in a native editor, although in that case Microsoft is clearly trying to push people into editing directly in the browser.
A lot of these problems go away if you don't run in a browser, because users inherently trust software more when they're running a copy that can't change underneath them on a second-by-second basis. I'm a lot more willing to give filesystem access to an application I have to explicitly install, and that remains what I installed until I knowingly and intentionally upgrade it, than to code pulled continuously from the network as I am working.
It's confusing because "offline-first" doesn't even seem to make sense in the context of web apps, which I thought meant "fancy (functional, able to do stuff) web sites" or similar.