
Am I the only one who still hosts my own static sites on a plain old virtual machine?

It's pretty simple to configure nginx for static sites, and by doing it yourself you reduce vendor lock-in to just about nil.

Even if S3 is massively cheaper, $5/month for a tiny VM seems like a small price to pay for being vendor-abstract.

I suppose S3 is way less likely to suffer a meaningful outage than my little VM, but how many 9s do my personal websites actually need?



Maintenance is my primary concern. I deal with software for a living. I want my blog to just work without me having to worry about maintaining the VM. Netlify makes this dead simple.

I used to host Wordpress sites for myself and family members. I've now moved nearly all of those sites to Netlify (for hosting) and Forestry (for editing/CMS). I no longer have to worry about malicious hacking attempts, Wordpress updates, or anything else outside of the site content.

Here is my post on this transition for those interested: https://dev.clintonblackburn.com/2019/03/31/wordpress-to-jek....


apt-get install nginx goaccess

cd website

cp * /var/www/html

Yearly maintenance required: apt-get update, apt-get upgrade

View traffic stats: goaccess -f /var/log/nginx/access.log

I'd say it's just as easy and seamless to do it yourself on a cheap VPS for a static website. HTTPS isn't that much extra work either.
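For reference, the nginx side of this can be sketched as a single server block. This is a minimal sketch assuming a Debian-style layout; the domain and web root are placeholders, and the heredoc just writes the file locally so you can inspect it (on a real box it would live under /etc/nginx/):

```shell
# Minimal static-site server block; domain and root are placeholders.
cat > example.com.conf <<'EOF'
server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    root /var/www/html;
    index index.html;

    # serve files directly, 404 for anything else -- no PHP, no proxying
    location / {
        try_files $uri $uri/ =404;
    }
}
EOF
# on the real box: move into /etc/nginx/sites-available/, symlink into
# sites-enabled/, then: nginx -t && systemctl reload nginx
# for HTTPS, `certbot --nginx -d example.com` can add the TLS listener
```

The certbot step is the "not much extra work" part: it edits the same server block to add the 443 listener and certificate paths.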


Maybe I'm just a security nut, but I would probably also relegate SSH to a non-default port, allow key-only authentication, narrow the cipher list, and close all other ports (except 80, 443, and 53). Also fail2ban, sysctl tweaks (networking, disabling coredumps), and a whole bunch of other things I have in a script.

I've seen way too many people get their boxes trashed to leave an internet-accessible one exposed and unsecured.
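A minimal sketch of the sshd side of the hardening described above, written as a drop-in fragment rather than touching /etc directly. The port number and settings here are illustrative placeholders, not the contents of the parent's actual script:

```shell
# Hardening fragment for sshd; values are examples, adjust to taste.
cat > 99-hardening.conf <<'EOF'
Port 2222
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
X11Forwarding no
MaxAuthTries 3
EOF
# on a real box: copy into /etc/ssh/sshd_config.d/, then validate and
# reload:  sshd -t && systemctl reload sshd
# pair with a firewall that only opens what you need, e.g.:
#   ufw default deny incoming
#   ufw allow 80,443/tcp && ufw allow 2222/tcp
```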


What are your thoughts on sharing your script? I have a few VPS and would love some new tools / proper setup. I have been learning as I go, learned a few day 1 things not to do, but would like to learn more about networking/coredumps. Cheers!


I'd have to clean it up first. I wrote it for a competition, and it does its job well; I may clean it up and improve it soon. Right now, it's a mess of a monolithic script.


Excellent; well, if you get around to it, I would love to scope it out. I'm an autodidact: after getting fed up with shared hosts like GoDaddy/HostGator/InMotion (they were easy to use since I had no idea what I was doing), I moved to DigitalOcean and it's been a fun learning experience. I love using the command line and solving problems. Would love to be as tight on security as you are! Cheers


That's great that you have enough time and experience to consider all of this easy. As someone who works a bit higher up the stack, I rarely go as deep as configuring Nginx. This setup may take you a few minutes, but I usually end up spending an entire Saturday on stuff like this. Having done this for a few years, I would rather spend my free time on other things.


> Yearly maintenance

I'd say it's continuous maintenance in response to specific issues. Debian updates also don't restart services that rely on updated shared libraries, which means you need to restart your nginx after OpenSSL updates. Plus reboots when the kernel is updated. Also...

There's really more to it than just an annual upgrade. You're likely not going to be affected if you ignore this, but why risk it?
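One way to see this concretely: after a library upgrade, a running process keeps the old (now deleted) file mapped until it's restarted. A rough, Linux-only sketch of the check that tools like `needrestart` or debian-goodies' `checkrestart` do properly:

```shell
# List processes still mapping deleted files (i.e. likely need a restart).
# Linux-only: walks /proc; on other systems it simply prints nothing.
stale_libs() {
    for pid in /proc/[0-9]*; do
        if grep -q 'deleted' "$pid/maps" 2>/dev/null; then
            printf '%s %s\n' "${pid#/proc/}" "$(cat "$pid/comm" 2>/dev/null)"
        fi
    done
    return 0
}
stale_libs
```

If nginx shows up in that list after an OpenSSL update, `systemctl restart nginx` is the fix.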


Ok, I forgot to add 'reboot' to the yearly maintenance :). And change the SSH port or switch to key-only auth. But if it's just a personal static website, I wouldn't get overly concerned about being hacked. Assuming you have backed up your pages, it's another handful of simple commands to rebuild the whole thing anyway. These boxes are also quite fun for other uses: setting up a Squid proxy, messing with an email server or IRC server, just having a personal mini-cloud you can easily access from anywhere.


It's not about rebuilding if your website is defaced. It's the possibility of someone (for example) adding a client side exploit / throttled miner to your existing website. Without more monitoring, you won't know it happened, and neither will most of your visitors.


Has this sort of thing ever happened to you?


Yes. I can't remember the details of the entry vector since it was decades ago, but the end result was JavaScript snippets targeting browsers appended to the end of the index page.

Adding extra servers like your own cloud storage, email, IRC, etc. just spreads your risk across more services (unless you internally separate them into namespaces/VMs, but then we're really far from "simple static hosting" territory).


Lucky for me, I don't use JavaScript. But that was decades ago, right? Well... relax! I think you are letting these fears get in the way of actually enjoying something quite fun. Perhaps the NSA has some lovely nginx exploits, but the script kiddies that trawl the web these days are laughable. (Knock on wood.)


It was decades ago because that was before I started working in IT security, stopped using a single VM for mixed purposes, and began treating patching seriously. It's literally part of my job not to relax about these things, to keep bringing them up, and to remind people that they're not easy, annual apt-get updates.

You're right that there are fewer wormable issues these days. But the question is: does your usual approach to security allow you to stay safe when (not if) the next one happens? Feel free to continue in a not-super-secure way for personal, fun things. Just keep in mind that there's more to the story, and the more moving parts you have, the more work it takes to keep things reasonably secure.


Your story is almost identical to mine: years of hosting a bunch of small family/project, mostly-WordPress sites on a VPS. I simply exported them and uploaded to Netlify+GitHub. I haven't really bothered keeping the back-end connection for dynamic exports, but I've kept those pieces in place for another wet weekend.


You make a good point clearly. Thanks for taking the time to do it.

I guess I feel like the maintenance cost is worth the knowledge I gain from automating my own infrastructure, but I realize not everyone is interested in devops. I'll also note it costs me very little time - I don't remember the last time I had to do anything actively with it.

Elsewhere in the thread I mentioned vendor lockin, which does concern me. I also worry about vendor monoculture - if everyone just uses AWS, they gain undue influence over the market, so in some ways I guess my stubborn self-hosting is a small gesture against that.

I see a lot of people complain about how the internet has become a drab, uniform machine that treats people as eyeballs or wallets to be sacrificed to Moloch [1], little like the wild, free-spirited collection of small sites it was back in the late 90s.

I think a lot of that is the price paid for centralization and funding, so again, self-hosting is a small way to fight back just a bit against that.

1: Moloch in this sense: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/


Did you consider Netlify CMS vs Forestry?


I did not. I've used Forestry for over a year. I was not aware of Netlify CMS until shortly after writing my post.


Ah. I was not aware of Forestry until I came across your post as well. Now I’m not sure which one I should go with.


I use Netlify and can vouch for its simplicity. I have a few sites on it, some are deployed via bitbucket and some are simply drag-and-drop.

I never used Forestry, but by the looks of it, it's more of an actual CMS and far more sophisticated than Netlify. That said, it looks over-engineered to me for hosting static websites. But if I wanted a CMS for client websites where I have to hand over control, I would definitely give Forestry a try.


Thanks, this is helpful!


I disagree with encouraging people to do this. You are not accounting for a CDN here, as the post does. A website on the HN front page went down yesterday on a $5 VM.

And S3 just holds your HTML files, for super cheap. There’s no lock-in concern there. You can easily migrate to nginx in the future if you really want, but start with S3


HN won't take a static website on a $5 VM down if it's set up even remotely correctly. Traffic to a popular link on HN is likely to get on the order of ~100rps max (more likely 1-10rps). Nginx will handle that with no problem.

CDNs may make a site a bit faster, but for a static site it's unlikely to make much difference if you're on a good host in US/EU or central Asia. If you're hosting in Australia or Japan, maybe it might be a little slower than expected, but still totally usable.


Completely agree. I think many people here regularly work on larger web applications in dynamic languages with heavy JS front-ends piling on dependencies.

Nginx is unbelievably fast by itself, not to mention the optimizations that are completely unnecessary for a static blog. It's not going to be your blocker.

If you're serving up 20MB of JS and inlined images on each page load, yeah, you may want to rethink that. But we don't need to get wild. My homepage is 9.2KB. Longer blog posts (e.g. [1]) can clock in at 20KB. HN won't take that down.

[1] https://maddo.xxx/thoughts/what-the-hell-are-you-doing.html


Out of curiosity, what made you decide to purchase a ".xxx" domain for your blog? And do you regularly get comments?


Looks like his last name ends with an `x`.


Not to mention that most VPS providers worth speaking of nowadays use SSDs.

For a personal site, who the heck even needs a CDN? The only reason I might use one is if I put up a photography website with huge shots, or if there's a bunch of videos as well.


When I tested, my $5 nginx VPS could handle 16,000 requests per second over localhost. Maybe at worst 10,000 per second over the network.


Yep, this doesn't surprise me at all. A stock install of nginx with no tuning at all reached 26k rps on my 2013 MacBook Pro when I tested it years ago.


I have front-paged on HN and Reddit several times, often 'only' using $5 VMs. However, I was using Cloudflare, or at least nginx with proper caching settings.

I run several hundred dollars monthly of infrastructure but my websites are nearly all on a simple VM for about 20€/month on Vultr right now.

Web hosting is only expensive when people run badly optimized infrastructure.


Putting Cloudflare in front of a VPS is pretty simple and gives you the same result so long as you are sending the correct cache headers.
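A hedged sketch of what "correct cache headers" could look like as an nginx fragment (the max-ages are illustrative, and the long TTL assumes fingerprinted asset filenames); written locally via heredoc for inspection:

```shell
# Example cache-header rules for a static site behind a CDN.
cat > cache-headers.conf <<'EOF'
# HTML: short TTL so edits show up quickly through the CDN
location ~* \.html$ {
    add_header Cache-Control "public, max-age=300";
}
# fingerprinted assets: safe to cache (nearly) forever
location ~* \.(css|js|png|jpg|svg|woff2)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
EOF
# include this inside the site's server block and reload nginx
```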


There are some low-end VPS providers that go as low as $1/mo. I usually stick to $2/mo or higher just for stability and reliability. You've even got hosts like Hetzner Cloud and Scaleway (see European hosts) that provide great service, bandwidth, and VPS offerings. I don't know why people use Amazon... I don't find their value proposition very good unless you need dynamic scaling for unpredictable demand.


That last "unless" is exactly what the value prop is. A personal site is indeed just fine on a cheap VPS (and I can also put up the occasional file), but AWS has better reliability and much better scaling. When you consider the opportunity cost of your time, AWS can come out cheaper.


I host my site on Apache running on an ARM board in my garage.

I'll consider moving to a VM if/when the ARM board eventually fails, but it's been running for 6 years so far. I have 6TB of storage, which mostly serves as a NAS but includes about 200GB of photos for the website.

There is no deployment process; the web root is mounted by NFS on my desktop. I can share large files with people just with "mv" or "ln -s".

> how many 9s do my personal websites actually need?

My router seems to crash every 3-4 months, and I need to reset it. There's around 15-30 minutes of power failure every year. I don't worry about this.


Sorry for the ignorance, but how do you run it from your garage? What about bandwidth? Could you share the URL? Also, would you recommend any guide to get started?


The usual roadblock in this process is getting ports exposed to the internet. In the best case, this can be done right in your router configuration. In the unfortunately common case, the ISP blocks you from doing this and the only solution is to change ISPs.


I've heard of ISPs blocking ports, but not in Europe. I just forward ports 80 and 443 to the server (and pinhole the IPv6 ports) and it's done.

The upstream bandwidth is about 60Mb/s, which is fine for almost everything.


How can I do such a thing? I'm in Europe as well. Do you have any guide to get me started?


The only thing you have to Google for is "port forwarding"; it's usually a few clicks in your router interface. Then you just run the service you want on your computer / NAS / Raspberry Pi and tell your router to forward the port to your service's IP and port. If you have a dynamic IP at home, you'll probably also need a script or something to update your domain records if you want to point a domain at your home service.
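The record-update part can be sketched roughly like this, assuming a Cloudflare-managed domain (other DNS hosts have similar APIs). The IP, hostname, zone/record IDs, and token are all placeholders, and the actual API call is left as a comment since it needs real credentials:

```shell
# in real use, discover the current public IP first, e.g.:
#   IP=$(curl -fsS https://api.ipify.org)
IP=203.0.113.7
# JSON body for updating an A record (hostname is a placeholder)
payload="{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300}"
echo "$payload"
# then push it to the DNS provider, e.g. Cloudflare's API:
#   curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
#     -H "Authorization: Bearer $CF_API_TOKEN" \
#     -H "Content-Type: application/json" --data "$payload"
```

Run something like this from cron every few minutes and the record follows your home IP.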


Yes. I host my personal website (maddo.xxx) on a single EC2 instance with just nginx. It's easy. It's fast. When I want to over-engineer the shit out of it for fun, it's ready.

Deploys? One line of `scp`

scp -r -i ./certs/maddoxxxnginx.pem ./app/* ubuntu@13.52.101.21:/var/www/maddo.xxx/

(that deploy script just bulk uploads everything, but that's fine for now. The whole site is measured in KB.)


I served a static website off nginx from a docker container for a while. At some point there was a breaking change and it would have taken 3 minutes to fix, but I didn't bother. Static hosting is a solved problem and there's not really a reason to do it yourself unless you just want to learn.


How is there vendor lock-in for static websites, though? Can't you just take your files and go somewhere else?


Yep, you certainly can, which is part of the beauty.

Last I looked, though, you couldn't deploy to S3 without using tools that work specifically with it.

I guess it's really not that big a deal, but I prefer the genericness of "I'm configuring a webserver and pushing my files to it."

That process can be just about fully automated, even including HTTPS setup if you want that, and then you can use with whatever server provider you like.


Depends on the tools! If you're manually copying files, there are clients (e.g. Transmit, which is what I use) that just treat it like any (S)FTP server. If you're using the command line, yeah, you need to use Amazon's CLI, although it's still basically a one-liner to sync the directory you want to publish.
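For reference, the one-liner in question looks roughly like this (the bucket name is a placeholder, and it needs the AWS CLI installed with credentials configured, so it's shown as illustration rather than something runnable here):

```shell
# mirror the local publish directory to the bucket; --delete removes
# remote objects that no longer exist locally
aws s3 sync ./public s3://your-bucket-name --delete
```

Adding `--dryrun` previews the changes before anything is uploaded.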


Ah, yes, that does make sense. Thanks for explaining.

I'm a fairly aggressive automator, so I forget that doing it by hand is actually an option.


I’m old enough to serve my static site from nginx running on my server :) No virtual machines.


If it's a static site and you own the domain, then vendor lock-in isn't really a problem regardless of whether you use cloud services, because you can just dump those files on a different provider and change your DNS entry. It's not even remotely the same level of complexity as the services people normally have in mind when they talk about vendor lock-in.


I do. I'm surprised more people on HN don't; perhaps it speaks to something bigger.


I switched from S3 to a $5/month VM a while back and it is massively better.


> you reduce vendor lock-in

I don't know why anyone cares about vendor lock-in. Either it's trivial to move an AWS Lambda to a Google Cloud Function because you don't have a lot going on, or it's not trivial to move stuff even between your own servers because it's under huge load and you have a considerable amount of data to migrate under complex conditions.

Moving around is either hard or easy based on things that don't really have anything to do with vendor lock in.


No, vendor lock-in can mean a lot, starting with even a simple, plain API, where one vendor might implement something (storage, for example) in a way that's not possible with the other vendor.

I recently moved one of my k8s clusters from GCP to AWS; even the change in terminology can introduce a lot of awkwardness.



