
I’ve recently started deploying on Cloudflare workers.

They’re cheap and “infinitely scalable.” I originally picked them for my CRUD API because I didn’t want to have to worry about scaling. I’ve built/managed an internal serverless platform at FAANG and, after seeing inside the sausage factory, I just wanted to focus on product this time around.

But I’ve noticed something interesting/awesome about how my searches have changed while working on product. I no longer search for things like “securely configuring ssh,” “setting up a bastion,” “securing a Postgres deployment,” or “2022 nginx SSL configuration” - an entire class of sysadmin and security problems just goes away when you pick Workers with D1. I sleep better knowing my security and operations footprint is reduced and straightforward to reason about. I can use all those extra cycles to focus on building.

I can’t see the ROI of managing a full Linux stack on an R620 plugged into a server rack vs. Workers when you factor in the cost of engineering time to maintain the former.

I do think this is a new world though. AWS doesn’t compare. I’d pick my R620s plugged into a server rack over giving AWS my credit card any day. AWS architectures are easy to bloat and get expensive fast - both in engineering cost and bills.



I'm scared to use Cloudflare products because, yes, they are cheap and good, but the company is burning money, isn't profitable, and carries a large amount of debt. They will have to raise those prices at some point, and then they won't be cheap. Can you predict when, and by how much, they will increase them?

If you depend on them for everything and they then decide to make a big price increase to become profitable, will you be able to handle it? You're pretty much stuck paying whatever they charge.

Yeah, other companies can increase their prices too, but profitable cloud infrastructure companies will mostly only raise prices when their expenses increase, and that's fairly predictable if you pay attention to costs. Last year, for example, it was easy to see a price increase coming because of inflation and supply chain issues.


Imo, even if they tripled their pricing, they'd still be cheaper than any serverless product other cloud providers have to offer. Looking at their performance over the past 2 years, their losses are small in proportion to their revenue growth [0].

I'm nervous about them changing their pricing too, but just the fact that they're so much more transparent than AWS or GCE is a net plus for me, even with an increase in price.

[0] https://simplywall.st/stocks/us/software/nyse-net/cloudflare...

(ignore the contents and forecast of the article and just look at the graph)


I’ll second this.

It’s also worth noting that the development patterns Workers and Pages force on you send you down the path of a fairly portable architecture.

I can’t say anything I’m doing would be difficult to port to another provider on short notice.


I completely agree. Most of my personal projects are unlikely to ever go above 50 concurrent users, so I don't really benefit from the scaling part of Cloudflare, but I recently switched to using Cloudflare Pages for all new personal projects and it's fantastic. The ease of use really makes my life that much better.

Just buy a domain name and start deploying. Unlike other cloud providers (looking at you, Azure/AWS), the time from push to finished deployment is under a minute. Azure could take 15-20 minutes, and AWS still relied on zip file uploads for functions last I checked.


I was recently asked to port an app to Cloudflare Pages but found it can basically only handle statically rendered content, or server-side rendered content if the toolset is compatible with their Workers thing. Is that the case for you, or do I have more to learn about Cloudflare? Like, I can't just drop Django onto it, I assume?


CF Workers look so promising, but their pricing makes websocket / persistent connections untenable. I know they're possible with Durable Objects, but I wish they had a full product story around actually building apps with live requirements, with pricing that makes sense.


What are you using to manage your schema? Do you use an ORM? Maybe something like PocketBase[0]?

[0] https://pocketbase.io/


Nope. Just have a directory:

    ./sql/0001_create_users.sql
    ./sql/0002_create_sessions.sql
    ...
Each schema-modifying query is idempotent and safe to rerun.

Then I do:

    ls ./sql/*.sql | xargs -I{} wrangler d1 execute <db> --file {}
Can put that in a script to make things easy. You use the same script to modify the db as you do to bootstrap a db from scratch.
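For anyone following along: the idempotency bit usually just means guards like CREATE TABLE IF NOT EXISTS in each file. A minimal sketch of the whole flow, with made-up table names and echo standing in for the wrangler call so it runs anywhere:

```shell
# Sketch of the migration scheme above (illustrative names only).
# Zero-padded prefixes sort lexicographically, so a plain glob
# applies the files in order.
set -eu
mkdir -p ./sql
cat > ./sql/0001_create_users.sql <<'EOF'
CREATE TABLE IF NOT EXISTS users (id TEXT PRIMARY KEY);
EOF
cat > ./sql/0002_create_sessions.sql <<'EOF'
CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, user_id TEXT);
EOF
for f in ./sql/*.sql; do
  echo "applying $f"   # real version: wrangler d1 execute <db> --file "$f"
done
```

Because every statement is guarded, rerunning the loop against an existing database is a no-op, which is what lets the same script both bootstrap and update.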


I looked at pocketbase and other tools, but decided to keep it simple.

Like GP, I'm also using D1 (https://developers.cloudflare.com/d1), which is based on SQLite and still in early alpha. In combination with KV (https://developers.cloudflare.com/workers/learning/how-kv-wo...) it's trivial to build a great database layer with caching. Using kysely (https://github.com/aidenwallis/kysely-d1) and trpc (https://trpc.io), you can have typing from the DB to the front end.


How's that 100MB limit on D1 going, though? I realise support for 'larger' databases is coming, but it gives me the impression they don't intend it to ever be a main application database for anything that isn't small and with a fairly constant data requirement (not scaling with users adding content, say).


How's the developer experience of writing code for workers?


Pretty fantastic (not an employee of Cloudflare or anything, just a happy customer). They have this tool called Miniflare that lets you host a tiny Cloudflare on your dev box/CI pipeline, so it's easy to run unit tests and the like.


Is there any way to self host it? I'm concerned about vendor lock-in


I don’t have the hardware to self host it. The value of this is that it’s an operating system for planet scale computers. It deploys your process across a global super computer with 100+ data centers.

Most code I’ve rolled on Workers is simple because it’s just business logic (the OS takes care of most of the heavy lifting that adds bloat to other code bases). Migrating a few thousand lines of JS isn’t a big deal. I just don’t have very many places I could move it to.

Lambda is region bound. I could do Lambda@Edge, which is closer. But self hosting it on an R620? I could probably roll an API-compatible wrapper that loads my worker into a Node process on a standard server and plug it into a colo rack; it would take me a few hours, but that’s not comparable.

This idea of self hosting and building portable abstractions I feel is jumping the gun. We are just starting to figure out how to build operating systems for datacenter scale computers, but standardization is still a ways out there. Planet scale is a whole ‘nother level, and we’ve got a long ways to go to figure out what the right abstractions are for computers of this scale.



