Please don't tell me what I'm thinking or what I want to do ;-)
The use case I had in mind is closer to a data.world or DBHub: collaborative data cleaning (which often includes bulk operations) rather than merging OLTP databases after days or weeks of changes.
What I would use now is a script or Jupyter notebook checked into Git, where cells are strongly ordered, and if someone sends me a pull request (changing the code) I have to re-run the notebook to obtain the "merged dataset". I can't say that I have a use for Dolt, but it is definitely cool tech.
If you're a data cleaning guy, you might be interested in our data bounties program. It's collaborative data importing / cleaning. The current one is building a catalog of museum collections. We launch a new one every month.