Used Dgraph for a large knowledge graph and didn't really feel like managing the cluster of servers, so we made the switch to Arango Oasis.
But that's a very tempting offering from Manish Jain. Their graph database is awesome, and now that it's packaged as a managed service, it's a great option.
I'd say running and maintaining any distributed system is hard, which is why the cloud makes for an attractive offering for these services: it lets the provider (Dgraph Labs in this case) ensure that users have a great product experience. Users can focus on building and scaling their app, not worrying about how to run the infrastructure.
Hasura is a great product, but it is a GraphQL layer on top of Postgres. So, to use Hasura you first lay out all the tables in Postgres, and then map those table schemas to a GraphQL schema. Any changes to the schema then need to be applied first to the Postgres instance, and then brought back to Hasura. Of course, Hasura has the benefit that if you're already on Postgres and don't want to move to another DB, it makes for easy adoption.
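As a rough sketch of that workflow (table and field names made up for illustration), you first create the table in Postgres:

```sql
-- step 1: define the schema in Postgres first
CREATE TABLE author (
  id   SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);
```

and once Hasura tracks it, it exposes roughly this GraphQL type:

```graphql
# step 2: the type Hasura derives from the tracked table
type author {
  id: Int!
  name: String!
}
```

Any later change to the `author` table has to happen in Postgres, with Hasura picking it up afterwards.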
Dgraph works very differently. It's a native GraphQL database: you set the GraphQL schema directly, any changes are made within the GraphQL schema itself, and the database figures out how best to serve it. Not to mention, the database is optimized for running GraphQL queries, so fast joins, traversals, and so on come directly from the storage layer. And it scales as your data size grows without needing data denormalization.
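For contrast, a minimal sketch of the Dgraph side (type and field names invented; `@search` and `@hasInverse` are Dgraph GraphQL directives, to the best of my knowledge): you hand the database a GraphQL schema and it generates the storage layout, queries, and mutations from it:

```graphql
type Author {
  id: ID!
  name: String! @search(by: [term])   # ask Dgraph to index this field
  posts: [Post] @hasInverse(field: author)
}

type Post {
  id: ID!
  title: String!
  author: Author
}
```

There's no separate table layout to keep in sync; the schema above is the source of truth.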
Overall, app iteration can be much faster with Dgraph.
"So, what you need to do to use Hasura is to first lay out all the tables in Postgres, and then map those table schemas to GraphQL schema. Any changes to the schema would then need to be applied first to the Postgres instance, and then brought back to Hasura"
This is partially true (you need to define DB tables), but it gives the sense that if you make DB schema changes, you need to run a second step for the GQL schema to reflect them.
If you change existing tables, nothing has to be done. If you add a new table, you have to press the "track/track all" button to expose it via GQL. And if you made join-tables or new FK relationships, you need to press "track relationships".
For the most part, Hasura is meant to just stay synced to your DB schema (database-first development).
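The console buttons are just metadata calls under the hood. If I remember the API correctly, tracking a table amounts to something like this posted to Hasura's `/v1/query` endpoint (table name made up):

```json
{
  "type": "track_table",
  "args": {
    "schema": "public",
    "name": "author"
  }
}
```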
"You set the GraphQL schema directly. Any changes are done within the GraphQL schema itself, and the database figures out how to serve that best."
This is where I'd say the approaches differ, Dgraph being GQL-Schema first and Hasura being Database-Schema first.
Not to say that one is inherently better than the other, they're both entirely valid approaches and Dgraph is a solid product.
Isn't GraphQL by Dgraph just another GraphQL layer on top of the underlying graph database?
You could have created a separate project for a GraphQL layer compatible with Dgraph and other databases. But no, you wanted to enforce your own graph database, which is not proven for production load.
Hence, it is wrong to claim it is different from Hasura just because you are embedding the GraphQL layer and the database into a single binary.
> But no, you wanted to enforce your graph database which is not proven for production load.
One of our customers, a big brand e-commerce site, is running 20 TBs of data on Dgraph. I wonder what makes you claim "not proven for production load."
> Hence, it is wrong to claim it being different from Hasura just because you are embedding the GraphQL layer and database into a single binary.
Perhaps you could embed a GraphQL layer into Postgres, then change the way Postgres stores data to make it do joins better, make Postgres not require strict schemas and allow major schema changes without any downtime, then add distributed transactions, consistent replication, and fault tolerance to it; and then, yes, it might be close.
In my experience at least, building with Hasura can be very cumbersome, e.g. you need to edit multiple files or resort to the web UI. Importing or exporting the schema is also very unintuitive.
In contrast to that, I love the idea of the database schema being just a GraphQL type definition file with annotations.
The one thing I have struggled with is figuring out whether Dgraph is strictly a triple/RDF graph DB or whether it can act as a labeled property graph. For my use case, I think I would prefer being able to add properties to vertices and edges, instead of those properties becoming new vertices and edges. Is that possible in Dgraph?
Dgraph doesn't really fit into those descriptions of "RDF graph DB" or "property graph". It's quite a different design. You can read about it here [1].
I think you're talking about facets: [2]. They can be attached to edges, and Dgraph supports retrieving them and filtering on them.
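A quick sketch of facets in DQL (names and facet keys invented for illustration, and assuming `name` has an index so the `eq` lookup works): facets are key/value pairs attached to an edge in the mutation, and pulled back with `@facets` in the query:

```
# mutation: attach "since" and "close" facets to the friend edge
_:alice <name> "Alice" .
_:bob   <name> "Bob" .
_:alice <friend> _:bob (since=2019-03-01, close=true) .
```

```
# query: retrieve the facets along with the edge
{
  q(func: eq(name, "Alice")) {
    name
    friend @facets(since, close) {
      name
    }
  }
}
```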
Dgraph supports flexible schema. So, you can make a lot of changes to the schema without any "table migration" or downtime. For example, switching from int to datetime to string, adding fields, removing fields, adding relationships between types, all of this stuff needs no "DB work."
So, schema migrations are really easy with Dgraph, because the graph storage model allows sparse properties of any data type (as opposed to the rigid table model enforced by SQL DBs).
The existing stored data would stay as it is. It would be automatically converted to datetime on query. New data coming in would be stored as datetime.
From the user's perspective, the change would be instant.
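A sketch of what that looks like in DQL schema terms (predicate name invented): you just re-declare the predicate with the new type, and per the behavior described above, old data is converted at query time:

```
# before
created_at: string .

# after: re-declare the predicate with the new type;
# no table migration step is needed
created_at: datetime .
```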
CPO here: if you want to use any service in an unsafe third country (the US), you have to follow these recommendations [0].
They are up for consultation until the end of December.
If you want to play it safe: only transfer personal information out of the EU into an unsafe country if it is fully encrypted and only your organisation holds the keys.
More on that in the link, but that's the gist of it.
Hi, thanks for asking. We actually have a few ways to achieve this:
1) As mentioned, Slash supports GraphQL subscriptions.
2) You could write a custom mutation with our new @lambda directive: do your changes, then call out to your webhook.
3) We have been discussing post-mutation hooks internally, but I don't have a timeline for this yet.
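If I remember the directive right, option 2 looks roughly like this in the GraphQL schema (mutation and type names invented); the lambda body itself is a JavaScript function you register separately, which can apply the change and then call your webhook:

```graphql
type Mutation {
  # resolved by your custom JS lambda rather than by Dgraph itself
  recordPayment(amount: Float!): Payment @lambda
}
```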