Hacker News | lisperforlife's comments

Recently the v8 Rust library changed from mutable handle scopes to pinned scopes. A fairly simple change that I even put in my CLAUDE.md file. But it still generates methods with HandleScopes, then says... oh, I have a different scope, and goes on a random walk refactoring completely unrelated parts of the code. All the while Opus 4.5 burns through tokens. Things work great as long as you are testing on the training set. That said, it is absolutely brilliant with React and TypeScript.


Well, it's not like it never happened to me to "burn tokens" on some lifetime issue. :D But yeah, if you're working in Rust on something with sharp edges, the LLM will get hurt. I just don't tend to have those in my projects.

Even more basic failure mode: I told it to convert/copy a bit (1k LOC) of blocking code into a new module and convert it to async. It just couldn't do a proper 1:1 logical _copy_. But when I manually `cp <src> <dst>`'d the file and then told it to convert that to async and fix issues, it did it 100% correctly. Because fundamentally it's just a non-deterministic pattern generator.
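For anyone unfamiliar with what that conversion actually involves, here's a minimal Python sketch of the kind of mechanical blocking-to-async rewrite being described (function names are made up for illustration):

```python
import asyncio
import time

# Blocking original: each call waits out its full duration.
def fetch_blocking(delay: float) -> str:
    time.sleep(delay)
    return f"done after {delay}s"

# The 1:1 async conversion: identical logic, with the blocking sleep
# swapped for its awaitable counterpart so calls can overlap.
async def fetch_async(delay: float) -> str:
    await asyncio.sleep(delay)
    return f"done after {delay}s"

async def main() -> list:
    # Three overlapping calls finish in roughly 0.05s, not 0.15s.
    return await asyncio.gather(*(fetch_async(0.05) for _ in range(3)))
```

The point of a 1:1 copy is that only the waiting primitives change; every branch and return value stays identical, which is exactly what's easy to verify when you diff against a literal `cp` of the source file.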


The 5ms write latency and 1ms read latency sound like they are using S3 to store and retrieve data with some local cache. My guess is an S3-based block store exposed as a network block device. S3 supports compare-and-swap operations (Put with If-Match), so you can do a copy-on-write scheme quite easily. Maybe somebody from TigerData can give a little more insight into this. I know SlateDB supports S3 as a backend for its key-value store; you could build a block device abstraction on top of that.
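To illustrate the compare-and-swap semantics I mean, here's a toy in-memory stand-in (not real S3, class and method names are mine) for the If-Match / If-None-Match precondition behavior:

```python
import hashlib

class PreconditionFailed(Exception):
    """Stand-in for S3's HTTP 412 on a failed precondition."""

class ToyBucket:
    """In-memory mock of a bucket with conditional writes."""
    def __init__(self):
        self._objects = {}  # key -> (etag, body)

    def put(self, key, body, if_match=None, if_none_match=None):
        current = self._objects.get(key)
        if if_none_match == "*" and current is not None:
            raise PreconditionFailed(key)  # create-only put lost the race
        if if_match is not None and (current is None or current[0] != if_match):
            raise PreconditionFailed(key)  # someone else wrote in between
        etag = hashlib.md5(body).hexdigest()
        self._objects[key] = (etag, body)
        return etag

    def get(self, key):
        etag, body = self._objects[key]
        return etag, body
```

A copy-on-write block layer can then read a chunk, get its etag, write the modified copy under a new key, and commit the metadata pointer with `if_match` so concurrent writers can't clobber each other. Recent boto3 versions expose the real thing as `IfMatch`/`IfNoneMatch` on `put_object`, if I remember correctly.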


None of this. It's in the blog post in a lot of detail =)

The 5ms write latency is because the backend distributed block storage layer is doing synchronous replication to multiple servers for high availability and durability before ack'ing a write. (And this path has not yet been super-performance-optimized for latency, to be honest.)


I think you can get much farther with dedicated servers. I run a couple of nodes on Hetzner. The performance you get from a dedicated machine, even a 3-year-old machine from the server auction, is absolutely bonkers and cannot be compared to VMs. The thing is that most server hardware is focused on high-core-count, low-clock-speed processors that optimize for I/O rather than compute, and it is overprovisioned by all cloud providers. Even the disk I/O is crazy: all sorts of shenanigans go into making a drive sitting on a NAS emulate a local disk. Most startups do not need a hyper-virtualized, NAS-based drive. You can go much farther and much more cost-effectively with dedicated server rentals from Hetzner. I would love to know if there are any North American (particularly Canadian) companies that can compete with Hetzner on price and quality of service. I know of OVH, but I would love to know of others in the same space.


As mentioned multiple times in other comments and places, people think that what Google or FB is doing should be what everyone else is doing.

We run a modest operation on a European VPS provider where I work, and whenever we get a new hire (business or technical, does not matter) it is like Groundhog Day - I have to explain — WE ARE ALREADY IN THE CLOUD, NO, YOU WILL NOT START A "MIGRATING TO CLOUD" PROJECT ON MY WATCH SO YOU CAN PAD YOUR CV AND MOVE TO ANOTHER COMPANY TO RUIN THEIR INFRA — or something along those lines, after asking ChatGPT to make the tone friendlier.


The number of times I have seen fresh "architects" come in with an architectural proposal for a 10-user internal LoB app that they got from a Meta or Microsoft world-scale B2C service blueprint...


> doing what Google or FB is doing

Google doesn't even deploy most of its own code to run on VMs. Containers yes but not VMs.


Well, I think that's the point: people think that if we run VPSes rather than containers, some app fabric, or serverless PaaS, we are "not using real cloud". But we use IaaS, and that is proper cloud too.


Yeah, the irony being Google runs VMs in Containers but not the other way around.


I actually benchmarked this and wrote an article several years back, still very much applicable: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...


Did you "preheat" during those tests? It is very common for cloud instances to have "burstable" vCPUs. That is, after boot (or a long idle), you get decent performance for the first few minutes, then performance gradually tanks to a mere fraction of the initial burst.
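A minimal sketch of that discard-the-warm-up discipline (helper names are hypothetical; on a real burstable instance you'd stretch this over many minutes and compare early vs. late timings to watch the burst credit run out):

```python
import time

def cpu_work(n: int = 50_000) -> int:
    # A fixed chunk of pure-Python arithmetic as a stand-in workload.
    return sum(i * i for i in range(n))

def benchmark(rounds: int = 10, discard: int = 3) -> list:
    """Time `rounds` runs, discarding the first `discard` as warm-up."""
    timings = []
    for _ in range(rounds):
        start = time.perf_counter()
        cpu_work()
        timings.append(time.perf_counter() - start)
    return timings[discard:]
```

If the kept timings trend noticeably upward over a long run, you're likely watching burst credits drain rather than measuring steady-state performance.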


> The total wall clock time for the build was measured. The smaller the better. I always did one build to prime the caches and discarded the first result.

The article is worth the read.


I also did a benchmark between cloud providers recently and compared performance per price:

https://dillonshook.com/postgres-cloud-benchmarks-for-indie-...


That isn't the same as the parent though; you are comparing VMs instead of dedicated servers.


I recently rediscovered this website that might help: https://vpspricetracker.com

Too cool to not share, most of the providers listed there have dedicated servers too.


Great website, but what a blunder to display the results as "cards" rather than a good old table, so you could scan the results rather than having to actually read them. Makes it really hard to quickly find what you're looking for...

Edit: Ironically, that website doesn't have Hetzner in their index.


That is weird indeed. But I bet you are getting Hetzner results indirectly through resellers :) (Yeah I checked one Frankfurt based datacenter named FS1 - probably for Falkenstein. They might be colo or another datacenter there of course)


Amazing website, glad to know that I already have a super great offer! But will definitely share this


Nice! Bookmarked. I know this one, it's been useful to me before: https://serverhunter.com/


What a great site. Thanks for sharing!


This is an amazing site


++1

excellent website, thanks.


> I would love to know if there are any North American (particularly Canadian) companies that can compete with the price and quality of service of Hetzner

FWIW, Hetzner has two data centers in the US, in case you're just looking for "Hetzner quality but in the US", not for "American/Canadian companies similar to Hetzner".


IIRC, Hetzner's dedicated instances are only available in their German and Finnish data centers, not anywhere else, sadly :/


This is correct, they only offer VPS in the US.


But are the VPSs also similarly much better performing than AWS?


I don't know the answer to that, I believe they are a good bit cheaper (it's been a while since I compared apples-to-apples). My understanding is that they are a good deal but I can't say that with 100% certainty.


latitude.sh do bare metal in the US well


Yes, but they are vastly more expensive than Hetzner (looks like pricing starts at just under $200/mo for 6 cores).


Similarly OVH is French and has bare metal in their US and Canadian data centers.


Yeah, no dedicated servers in the US, sadly. I'm not aware of anyone who can quite match Hetzner's pricing in the US (but if someone does, I'd love to know!). https://www.serversearcher.com throws up Clouvider and Latitude at good pricing, but... not Hetzner levels by any means.


I haven't checked Hetzner's prices in a while, but OVHcloud has dedicated servers, including in the US and in Canada (I've been using their dedicated servers for years already and they are pretty dang good).


Seems to be broadly the same, sadly, but thanks - it's interesting to see they're all hovering quite close to each other.


I have been considering colocating at endoffice (I saw the suggestion once at codinghorror.com)



Thanks for this.


For self hosted / cohosting my own kit, I buy refurbed servers from https://www.etb-tech.com/ because I can spec exactly what I want and see how the cost varies, what the delivery time is, etc.

Years ago Broadberry had a similar thing with Supermicro, but not any more. Now you have to talk to a salesperson about how they can rip you off. Then they don't give you what you specced anyway -- I spec 8x8G sticks of RAM, they provide 2x32G, etc.


Be warned though that, when renting dedicated servers, there are certain issues you might have to deal with that usually aren't a factor when renting a VPS.

For example, I got a dedicated server from Hetzner earlier this year with a consumer Ryzen CPU that had unstable SIMD (ZFS checksums would randomly fail, and mprime also reported errors). Opened a ticket about it and they basically told me it wasn't an issue because their diagnostics couldn't detect it.


Yeah, their support, for better or worse, is really technical and you need to send all the evidence of any faults to convince them. But when I've had random issues happening, I've sent them all the troubleshooting and evidence I came across, and a couple of hours later they had provisioned a new host for me with the same specs.

And based on our different experiences, the quality of care you receive could differ too :)


> and a couple of hours later they had provisioned a new host for me with the same specs.

To be fair, they probably would've done the same for me if I'd pushed the issue further, but after over a week of trying to diagnose the issue and convince them that it wasn't a problem with the hard drives (they said one of the drives was likely faulty and insisted on replacing it and having me resilver the zpool to see if it fixed the issue; spoiler: it didn't), I just gave up, disabled SIMD in ZFS, and moved on.


> but after over a week of trying to diagnose the issue and convince them that it wasn't a problem

That sucks big time :( In the most recent case I can recall, I successfully got access, noticed weirdness, gathered data and sent an email, and had a new instance within 2-3 hours.

Overall, based on comments here on HN and elsewhere, the quality and speed of support is really uneven.


> based on comments here on HN and elsewhere, the quality and speed of support is really uneven.

Can you name one tech company that has scaled past the point where the founders are closely involved with support and still has consistently good tech support? I think this is just really hard to get right, as many customers are not as knowledgeable as they think they are.


"Consistently" is hard; people's experiences tend to differ with every company out there, even by what country you're currently in. For example, I've always had quick and reasonable replies from Coinbase support, but I know friends who've had the complete opposite experience with Coinbase, so I won't claim they're consistent. Their replies to me have been consistent, at least.

Probably the company most people have had any sort of consistency from would be Stripe I think. Of course, there are cases where they haven't been great, but if you ask me for a company with the best tech support, Stripe comes to mind first.

I'm not sure it's active anymore, but there used to be a somewhat hidden support channel in #stripe on Freenode back in the day, where a bunch of Stripe developers hung out and helped users in an unofficial capacity. That channel was a godsend more than once.


It's not hard to get right. It's expensive to get right. And that affects pricing and profitability. You have to have a threshold.


> I would love to know if there are any North American (particularly Canadian) companies that can compete with the price and quality of service of Hetzner.

In a thread two days ago, https://ioflood.com/ was recommended as a US-based alternative.


But I'm looking more for "compute flood" ...


On a similar note, I'm looking for a "Hetzner, but in APAC, particularly East Asia". I've struggled to find good options for any of JP, TW or KR.


LayerStack is very fast in APAC:

    https://www.layerstack.com/en/dedicated-cloud


Going to try this out, looks very much like what I was looking for.


VMs are a middle ground between AWS and dedicated hardware. With hardware you need to monitor it, report problems/failures to the provider, and make the necessary configuration changes (add/remove a node to/from a cluster, etc.). If a team is coming from AWS, it may have no experience with monitoring and troubleshooting problems caused by imperfect hardware.


One thing that frustrates me when estimating performance on AWS is that I have to dramatically estimate down from the performance of my dev laptop (M2 MBP). I've noticed performance around 15x slower when deployed to AWS. I realize that's an anecdotal number, but it's a fairly consistent trend I've seen at different companies running on cloud hosting services. One of the biggest performance hits is latency between servers. If your server is making hundreds or thousands of DB queries per second, you're going to feel that pain more on the cloud, even if your cloud DB server's CPU is relatively bored. It is network latency. I look at the costs of AWS and it is easy to spend >$100,000/month.

I have run services on bare metal and on VPSes, and I always got far better performance than I can get from AWS or GCP, for a small fraction of the cost. To me, "cloud" means vendor lock-in, terrible performance, and wild costs.
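To put rough numbers on the round-trip point above (all figures illustrative, the helper is hypothetical):

```python
# Back-of-envelope model of how per-query network round trips dominate
# a request that issues many small sequential DB queries.
def request_time_ms(queries: int, rtt_ms: float, db_cpu_ms: float = 0.1) -> float:
    # Each sequential query pays a full round trip plus a sliver of DB work.
    return queries * (rtt_ms + db_cpu_ms)

# Same-host/same-rack DB vs. a cloud DB a network hop away:
local = request_time_ms(200, rtt_ms=0.05)  # roughly 30 ms total
cloud = request_time_ms(200, rtt_ms=1.0)   # roughly 220 ms total
```

The DB CPU term barely moves; the round trips are the whole story, which is why the cloud DB can look "bored" while requests crawl.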


> It is network latency.

People do not realize that the fancy infinite storage scaling means AWS et al. run network-based storage. And that, for something like a DB, can be a 10x performance hit.


I have found Hetzner's cloud block storage to be quite slow. It soon became a bottleneck on our Timescale databases. Now we're testing netcup.com's "root servers", which are VPSes with dedicated CPU cores and lots of very fast storage.


They limit them to 7,500 IOPS, as stated in their docs. It also doesn't scale with size; the limit is the same for every volume of any size.


Quite a few options on https://serversearcher.com that sell in US/CA.

Clouvider is available in a lot of US DCs: 4GB RAM/2 vCPU/80GB NVMe and a 10Gb port for like $6 a month.


I've been using dedicated servers for 20 years. Here's my top list:

Hetzner, OVH, Leaseweb, and Scaleway (EU locations only).

I've used other providers as well, but I won't mention them because they were either too small or had issues.


> I know of OVH but I would love to know others in the same space.

When I've needed dedicated servers in the US I've used Vultr in the past; relatively nice pricing, only missing unmetered bandwidth for it to be my go-to. But in all those US-specific cases someone else was paying, so it hasn't bothered me, compared to the personal/community stuff I host at Hetzner and pay for myself.


I've been eyeing Vultr for dedicated metal in Canada (Toronto datacenter). How do they measure up to Hetzner? I'm not looking for the best possible deal, just better value than EC2 (which costs me a fair amount in egress).


If you are a Canadian entity, I would go OVH rather than Vultr. OVH US is a completely distinct legal entity from their Canada and EU offerings, specifically so that the rest of OVH is immune to the CLOUD Act. Vultr is an American company, so if Uncle Sam asks for your data, even at a Canadian location, there's nothing Vultr nor you can do to stop it.

This wasn't a consideration a few years ago, but with how quickly things are devolving south of the border it's now much more of a risk. If I were operating a company in Canada, I would want to be able to assure my customers that their data won't get expropriated to the US without first going through Canadian courts.

OVH Canada now has two Canadian locations, by the way - the original location in Beauharnois and a new location in Cambridge, so you even can have two zones for redundancy.


Yes, I was also looking at OVH. I heard some horror stories about a fire several years ago and a lack of backups, though...


I'd go for Hetzner any day of the week, but if a client really screams "Servers MUST be in North America" I'd use Vultr before anything else, unless the client is bandwidth-sensitive.


I have used wholesaleinternet.net and they are centrally located in the USA.


Ugh, $235 a month for a 4TB SSD?! You can buy one for that price and have money left over.


Try www.wowrack.com or www.serverstadium.com. (I work for them).


Virtualization has crazy overhead - when we moved to metal instances in AWS, we gained something like 20-25% performance. I thought that since AWS has the smartest folks in the business, and Intel & co. have been at this for decades, it'd be a couple percent overhead at most, but no.


It can affect system design. Just chuck it all on one box! And it will be crazy fast.


Interserver. But I don’t have personal experience (yet)


I used GTHost in the US. Performance is not bad but you do end up paying more if you need 1gbit/s link.


Yeah, this is what they do in their "high performance" servers: they just use gaming CPUs.


I don't think models are fundamentally getting better. What is happening is that we are increasing the training set, so when users use it, they are essentially testing on the training set and find that it fits their data and expectations really well. However, the moat is primarily the training data, and that is very hard to protect as the same data can be synthesized with these models. There is more innovation surrounding serving strategies and infrastructure than in the fundamental model architectures.


You can use libkrun to pretty much do the same thing.


Why is this not the top comment? FAIR published their CM3Leon paper about decoder-only autoregressive models that work with both text and image tokens. I believe GPT-4o's vocabulary has room for both image and audio tokens. For audio tokens, they probably trained an RVQ-VAE model like EnCodec or SoundStream.


I am curious about models like EnCodec or SoundStream. They are essentially codecs informed by the audio they are meant to compress, which is how they achieve insane compression ratios. The decompression process is indeed generative, since part of the information that is meant to be decoded lives in the decoder weights. Does that pass the smell test from a copyright-law perspective? I believe such a decoder model is powering GPT-4o's audio decoding.
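For anyone curious what the "RVQ" part means, here's a toy residual vector quantizer in plain Python (random codebooks, made-up function names; real codecs like EnCodec learn the codebooks so each stage meaningfully shrinks the residual):

```python
import math
import random

random.seed(0)

def nearest(codebook, vec):
    # Index of the codeword closest to vec (Euclidean distance).
    return min(range(len(codebook)), key=lambda i: math.dist(codebook[i], vec))

def rvq_encode(x, codebooks):
    """Quantize x stage by stage: each stage encodes the residual
    left over by the previous one, so a few small codebooks compound
    into a fine-grained code."""
    residual = list(x)
    codes = []
    for cb in codebooks:
        i = nearest(cb, residual)
        codes.append(i)
        residual = [r - c for r, c in zip(residual, cb[i])]
    return codes, residual

dim = 8
codebooks = [[[random.gauss(0, 1) for _ in range(dim)] for _ in range(64)]
             for _ in range(4)]
x = [random.gauss(0, 1) for _ in range(dim)]
codes, residual = rvq_encode(x, codebooks)
# By construction: x == sum of the chosen codewords + the final residual.
```

The copyright question above is exactly about the codebooks and decoder weights: they carry part of the signal, which is what makes decompression "generative".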


I had written about it a few months back. https://blog.tarkalabs.com/what-does-a-cto-do-67c26d34ae7a

As I see it, CTOs are responsible for:

- Making Build/Buy decisions

- Hiring

- Setting up a culture of learning

- Balancing tech and product priorities

- Setting up delivery processes that work for the team

- And finally for architecture and system design

I feel that is roughly the order of priority, too.


I tend to use foreign keys everywhere. The only time I would skip them is when I do not need to cascade deletes. Those are mostly metadata tables that get archived on a periodic basis.
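A minimal sketch of the cascading-delete behavior in question, using SQLite from the Python stdlib (note SQLite ships with FK enforcement off, hence the pragma; table names are just for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL
            REFERENCES users(id) ON DELETE CASCADE
    );
    INSERT INTO users VALUES (1);
    INSERT INTO orders VALUES (10, 1), (11, 1);
""")

# Deleting the parent row removes its child rows automatically.
conn.execute("DELETE FROM users WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

With `ON DELETE CASCADE` declared, `remaining` ends up 0; without the FK, the orphaned `orders` rows would silently stick around.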


This is awesome. Brings back so many memories from when I was a kid. Thank you. This is built with Phaser. I am not very good at this game, but if you need to recover a life mid-game, you can use this:

Phaser.GAMES[0].state.getCurrentState().ui.player.addLife()

Is this open source?



Love that the cheats are just program statements that can be understood, instead of some obscure ROM or memory hacking. This is how software always should have been.


One of the most valuable lessons I learned early on from hacking the binaries of savegames is that if you make it too easy, it stops being fun.

And yet, I still sometimes go to places like flingtrainer.com when a game "feels too hard" for me (and therefore is also not fun... the recently-released 2021 "Dark Alliance" is one example)

I think editing game state should be "difficult, but not impossible". The thing is, if it's "difficult", then someone somewhere's just going to make a free save editor...

"Fun" is a delicate balance in these things and probably varies by person. For example, I CANNOT STAND inventory management in games (it's not immersive, it's WORK!) so Skyrim became 100% more fun once I figured out how to console up my carryweight (and installed a mod to make the inventory UI searchable/sortable by type). To me, all this did is prevent me having to spend time traveling in-game back to a home stash to retrieve some odd thing and then traveling all the way back... that's literally "work, inside a game" to me, and takes away time better spent exploring, questing, fighting etc. Similarly, I have no idea why Blizzard limits Diablo 2 stash size (I mean... the extra database cost has to be practically... negligible for them?) because all it does is cause lots of login/logout churn for them as people switch off to mule characters to hold all their set items/uniques/runes/etc.


This really depends on the developer, though...


It’s probably this: https://phaser.io/

Which is open source.

