
Async is necessary sometimes... but I'm really hating the trend of designing APIs that make it unavoidable, on the assumption that it's always necessary. It just makes code illegible and thus buggy.


Threads were invented decades ago to solve this problem. Or actors / goroutines / CSP, if you want to call them that.


And strangely enough, people still reinvent the wheel, reproducing a scheduler in userland, which is insane both in terms of code complexity and debuggability, and in terms of performance.


Yes, although to be fair they tend to do that because kernel threading never quite does what it needs to do (e.g. it doesn't scale to the number of threads applications need these days).

Your comment is spot on though: I'm only aware of two modern implementations that really work: Erlang and Go.


Kernel threading does scale to the number of threads applications need these days.

What can be slow are context switches, but they aren't slow in absolute terms. The vast majority of applications, including Web servers, are perfectly fine with 1:1 threading.


It doesn't really scale. Kernel-level threads are quite expensive in terms of memory. Erlang's processes take up a few hundred bytes each. It's rare to have tens of thousands of kernel-level threads running on Linux, while it's quite common for Erlang servers to have that many processes.


Anyone know if this is better in Alpine Linux with musl libc and smaller stacks?


It's not really up to a distribution to lessen this cost. A large part of it is what the task descriptor in the kernel takes.

Also, if nothing else, you'll run out of PID numbers, as they're usually still 16 bits, even today, though there was a kernel compilation option to change that, from what I remember.


It can be changed at runtime. From proc(5), system wide limits:

/proc/sys/kernel/pid_max

> PID_MAX_LIMIT, approximately 4 million

/proc/sys/kernel/threads-max

> FUTEX_TID_MASK (0x3fffffff), [approximately 1 billion]

For the per-process limit, increase RLIMIT_NPROC.
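If you want to check what your machine is actually configured for, a minimal Go sketch can read those sysctl files directly (Linux only; the paths are the ones proc(5) documents above):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readSysctl reads a /proc/sys value and parses it as an integer.
func readSysctl(path string) (int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}

func main() {
	for _, p := range []string{
		"/proc/sys/kernel/pid_max",
		"/proc/sys/kernel/threads-max",
	} {
		if v, err := readSysctl(p); err == nil {
			fmt.Printf("%s = %d\n", p, v)
		}
	}
}
```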


> A large part of it is what the task descriptor in the kernel takes.

4K for kernel stacks, only 2K with future work. That's really not much space at all if you're doing anything interesting with those threads.

> Also, if nothing else, you'll run out of PID numbers, as they're usually still 16 bits, even today

Not for a very long time.


I wrote a server application that runs with about a million goroutines and performs quite well. It's not a webserver, but it responds to "requests" within hundreds of microseconds. Surely this would not be possible with OS threads.


And the go implementation is also significantly broken, as evidenced by the namespace issue a while ago. https://news.ycombinator.com/item?id=14470231


Then look at what Clojure is capable of.


Threads do scale. On Linux, O(1) scheduler solved this non-issue a long time ago.


> Threads do scale. On Linux, O(1) scheduler solved this non-issue a long time ago.

Yup, they do.

Until you start having to make sure that your code doesn't end up with deadlocks, data corruption, races, performance issues due to lock contention, and so on. Designing efficient locking schemes is notoriously hard, alternating between:

- too coarse-grained: resulting in serializing activities which could have (should have) proceeded in parallel, thereby sacrificing performance and scalability.

or

- too fine-grained: with the space and time cost of lock operations sapping performance and complicating error recovery, not to mention understanding.

In the former we have the dragons of deadlock and livelock roaming freely, and in the latter we have race conditions. Somewhere in between is a razor's edge which is both efficient and correct.

Almost always, things start with ‘one big lock around everything’ and the vague hope that performance might not be abysmal. When that hope is dashed, the big lock gets broken up, and the prayer is repeated. Each iteration increases complexity and decreases lock contention, hopefully, with some luck, yielding a modest performance gain as well.
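The coarse-vs-fine trade-off can be sketched in Go with a toy counter map. The coarse version serializes every operation behind one mutex; the sharded version is a common middle ground that reduces contention without a lock per key. (Types, shard count, and the hash are all illustrative choices, not a prescription.)

```go
package main

import (
	"fmt"
	"sync"
)

// Coarse-grained: one big lock serializes every operation.
type CoarseCounter struct {
	mu     sync.Mutex
	counts map[string]int
}

func (c *CoarseCounter) Inc(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.counts[key]++
}

// Finer-grained: shard the map so unrelated keys don't contend.
const shards = 16

type ShardedCounter struct {
	shards [shards]struct {
		mu     sync.Mutex
		counts map[string]int
	}
}

func NewShardedCounter() *ShardedCounter {
	s := &ShardedCounter{}
	for i := range s.shards {
		s.shards[i].counts = make(map[string]int)
	}
	return s
}

func (s *ShardedCounter) Inc(key string) {
	sh := &s.shards[fnv(key)%shards]
	sh.mu.Lock()
	defer sh.mu.Unlock()
	sh.counts[key]++
}

// fnv is a tiny FNV-1a hash used only to pick a shard.
func fnv(key string) uint32 {
	h := uint32(2166136261)
	for i := 0; i < len(key); i++ {
		h ^= uint32(key[i])
		h *= 16777619
	}
	return h
}

func main() {
	c := NewShardedCounter()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.Inc(fmt.Sprintf("key%d", i%4))
		}(i)
	}
	wg.Wait()
	total := 0
	for i := range c.shards {
		for _, n := range c.shards[i].counts {
			total += n
		}
	}
	fmt.Println(total) // prints 100
}
```

Note the razor's edge mentioned above: the sharded version is already harder to reason about (e.g. totalling requires visiting every shard, and a consistent snapshot would need all the locks at once).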

remember this:

What do we want ?

Now !

When do we want it ?

Fewer race conditions !

have fun :)


I see your design is lacking and your fud quotient is high. Good on you!


Yep. This was my reaction as well. Threads are fine.


Having high numbers of threads and switching between threads are different things. There is still a huge constant factor in front of that O(1) scheduler that makes it unattractive.


Spin up 100k threads on Linux vs 100k actors in Erlang.


Kernel threading doesn't scale well beyond hundreds or maybe thousands of threads. A server can have a million concurrent requests in progress.

Of course a better solution would have been to fix the kernel rather than go back to 1980s cooperative multitasking in userland.


Full interfaces for managing threads are often unnecessarily flexible and complicated for the relatively simple use cases where JS programmers typically need some sort of promise, though. Even if it's still OS-level threads implementing the behaviour behind the scenes: if you're only spawning a new thread to determine a value in some potentially time-consuming way and then return that value when it's available, the concept of a function that runs asynchronously and returns a future value is quite neat and intuitive.
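That future-returning shape isn't tied to any one runtime; in Go, for instance, it's a goroutine plus a channel (the function and delay here are invented for illustration):

```go
package main

import (
	"fmt"
	"time"
)

// computeAsync starts a potentially slow computation in the background
// and immediately returns a "future": a channel that will eventually
// yield the result.
func computeAsync(n int) <-chan int {
	result := make(chan int, 1)
	go func() {
		time.Sleep(10 * time.Millisecond) // stand-in for slow work
		result <- n * n
	}()
	return result
}

func main() {
	future := computeAsync(7)
	// ... do other work here while the computation runs ...
	fmt.Println(<-future) // blocks until ready, prints 49
}
```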


Async is always necessary in JavaScript, and always has been. (And yes, I know it's possible to write a fully synchronous JS program. No front ends do that, because obviously the site would be unusable, and no backends do it because it's a bad idea: inefficient, wasteful, more code than needed, and so on.)

Which APIs are you seeing that are async and seem like they shouldn't be?

Promises make code more legible than callback chains, and async/await make code more legible than promise chains. Do you have any examples of buggy/illegible code and the more legible less buggy alternative?


The async/await paradigm of C#/Typescript fixes this very nicely IMO.


I quite like async/await except that it's annoyingly easy to produce a deadlock, and given a snippet, it's not obvious that such a deadlock should occur.


Ironically it is essentially the event loop which causes the deadlock in C#.

Depending on what framework/runtime you're in .NET will schedule the await continuation on something called a "SynchronizationContext" which has ~3 different forms but it's basically an event loop/message loop which queues up each continuation on the original thread.

The problem occurs when you use .Wait() or .Result instead of 'await'. This blocks the calling thread waiting for the Task to finish, which of course it never will if it has a continuation trying to dispatch onto that same event loop.

This problem doesn't really happen at all under some circumstances, such as if the async chain starts on a background thread, or in .NET Core, where they've removed the SynchronizationContext, hence no event loop, hence no problem.


Do you have an example? I've never seen a deadlock in JS. I'm trying to think of how it's even possible? Async/await is just sugar around promises anyway.


Oops, should've specified C# specifically, in which deadlock has become notorious. I don't think there's anything wrong with async/await per se. It looks to me like the linearity of js engine event loops wouldn't produce the same problem.


Ohh gotcha, shoulda picked up on the context :)



