Show HN: TaskBotJS – TypeScript and JavaScript background job processing (github.com/eropple)
137 points by eropple on July 6, 2018 | 76 comments


Whee! My first Show HN. I've only been here, like, seven years and all.

But, yeah, I've been tooling away on TaskBotJS in my metaphorical garage for a while. I've found Kue and Bull really frustrating to use (no shade intended; they feel like they made a lot more sense in a pre-ES6 universe?) and `node-resque`, while scratching a little more of that itch, just made me miss the comfortable guardrails of Ruby offline job queues. So I made one.

As my day job is devops and devops accessories, making it easy to run and administer TaskBotJS was a big priority for me and I've tried to get the best of both worlds: mostly turn-key to run in simpler cases but less prescriptive and more pluggable at scale. TaskBotJS is pretty extensible and supports job middleware and event listeners; I have a Sentry plugin coming in the near future as an example (and also 'cause I like Sentry).

I also like to write documentation[0]. While it's not yet complete, what's there is pretty exhaustive and I try to explain my thinking as well as just "what it does"; capturing the whys is as important as the whats, to me, to better foster conversation around the project and to make for more constructive issues when systemic, design-focused concerns are raised.

It's running in at least two production environments (only one of which is mine; I dogfood but I wouldn't have released it as a "you should check this out" version if somebody else wasn't checking me) and is being rolled out to at least one more in the near-ish future.

Thanks for taking a peek, and I'd love to hear any actionable feedback that comes to mind.

[0] - https://github.com/eropple/taskbotjs/wiki


Congrats on shipping! As someone who's written similar software[1] (for Python), I can appreciate the hard work that went into this.

One piece of feedback regarding the docs: the documentation section in your README has two links, one to the wiki and the other to the website. I had initially scanned the section and followed the link to the website (I don't automatically associate "wiki" with "documentation") and was then unable to find your documentation for a good couple of minutes. You should make the wiki link more prominent.

[1]: https://dramatiq.io


Nice! We need more work in this space. I'm one of the node-resque authors. I'd love to hear more about what you felt was missing from the ruby version... clearly we copied a lot :D


I'm using Kue heavily and so far I've had no problems using it. Can you elaborate on 'pre-ES6 universe' or any other issues you had with it?


TaskBotJS looks great. Does it support recurring scheduled tasks?

For example:

  run task foo at 13:00 every day for 14 days.
  run task bar at 06:30 every day forever (until cancelled).


I love Bull and wrote a front end for it. I've had Bull running for years with no problems and minimal management. Can you give some specific details on the problems you found with Bull? Writing a queue that atomically processes items is non-trivial, so when I wanted a feature in Bull I just improved it; the devs are very receptive.

Otherwise it seems like TaskBotJS is just NIH so that it can be LGPL and then commercially licensed for features that Bull has for free? This licensing setup means I will probably never touch TaskBotJS.

EDIT: I see there is a test script line in package.json but I don't see a CI badge and can't find the tests in the source? The first thing I did with Bull is verify it with the test suite and then add more tests to ensure consistency across various job types and errors. This is critical for a background job package which is meant to be left alone.


If I had to choose between Kue and Bull, I'd choose Bull. But I found it frustrating to work with.

- I want to support multiple backends. Bull is a Redis queue. Which is fine, as far as it goes, but I have some early designs on an AWS-backed SQS/Dynamo backend that obviates the need for Redis entirely, among other things. ElastiCache is not the worst thing in the world to have to run but not having to directly run anything is, in many environments, preferable, so that's a route I want to go down.

- Separate queues for each type of job (per-key) isn't sufficient control. I think it's better to categorize jobs by priority. And while one could multiplex on top of a single queue, doing so, tbh, makes it pretty rough to handle stuff like "these jobs are all of similar priority but have different retry counts and strategies". That multiplexer then isn't integrated with metrics, frontends, etc., and it becomes an "I guess we just reimplement everything except the basic transport on top of Bull" situation.

- Also, the basic transport itself is arguable. TaskBotJS uses BRPOP in Redis, and I think it's better in the general case to go "if the process hard crashes and doesn't have time to respond to a stop signal, I may lose a job". When necessary, an administrator can explicitly opt in to more complicated and more resource-intensive completely reliable queueing. IMO, RPOPLPUSH plus a heartbeat is a better way to do this than expiring locks, too; expecting a developer to renew the job lock or tune its timing is a bigger ask than "the worker's still working on it--whoops, the worker went away, requeue the job". (There's a rough sketch of that approach after this list.)

- Bull uses functions over classes for jobs. In many ways that's defensible; I don't think it scales (people-scales, not computer-scales) across a large dev team without doing a lot of work on top of it. TaskBotJS expects you to provide dependencies to your jobs that it can inject into the worker state--every Kue/Bull-based processor I've seen tends to use globals wrapped in the job closure for stuff like database connections, and I do not subscribe to this point of view and I think you shouldn't either.
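
To make the transport point above concrete, here's roughly the shape of RPOPLPUSH plus a heartbeat, sketched with ioredis. Key names, intervals, and the recovery policy here are illustrative assumptions, not the exact code TaskBotJS ships:

  // Rough sketch of RPOPLPUSH-based reliable queueing with a worker heartbeat.
  // Key names, intervals, and the recovery policy are illustrative only.
  import Redis from "ioredis";

  const redis = new Redis();
  const QUEUE = "jobs:default";
  const IN_FLIGHT = `jobs:in-flight:${process.pid}`;
  const HEARTBEAT = `workers:heartbeat:${process.pid}`;

  async function workOnce(): Promise<void> {
    // Atomically move a job from the queue into this worker's in-flight list.
    const payload = await redis.rpoplpush(QUEUE, IN_FLIGHT);
    if (!payload) return;

    try {
      await handle(JSON.parse(payload)); // hypothetical job handler
      await redis.lrem(IN_FLIGHT, 1, payload); // done: drop the in-flight copy
    } catch (err) {
      // On failure, requeue for retry (a real system would track retry counts).
      await redis.multi().lrem(IN_FLIGHT, 1, payload).lpush(QUEUE, payload).exec();
    }
  }

  // The heartbeat replaces per-job expiring locks: a supervisor that notices
  // this key has expired can requeue whatever is left in the dead worker's
  // in-flight list.
  setInterval(() => redis.set(HEARTBEAT, Date.now(), "EX", 30), 10_000);

  async function handle(job: unknown): Promise<void> {
    /* ... */
  }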

Any one thing (save pluggable backends) I could have implemented on top of Bull. IMO it'd have been worse to do so, but I could have. But the sum total of things I think could be done better made it easier to write my own, and then to refine it over time into TaskBotJS.

Don't get me wrong: I think Bull is fine. The compromises it forces upon you are ones I don't want to make and I don't think developers should be forced to, but of course you can get by with it. But I wouldn't have built TaskBotJS if I didn't think it was better. Throw shade if you'd like; I made the tool before I made the product and I really don't expect it to make more money than enough to keep me focused on its maintenance.


My priorities as a business needing a job queue are:

- Does it run my jobs reliably?

- Does it run my jobs with good performance?

- Are failed jobs obvious and simple to report/retry/log?

Bull's tests and history show that these needs are addressed.

For a backend, the Redis strategy employed by Bull is battle-tested and used by many other related projects. I can't say that about any other backend. I haven't found the lack of other backends to be a common, real issue in practice.

For multiple queues, I can easily run multiple namespaced Bull instances, so that is not an issue.
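
For illustration, that's just a matter of instantiating queues with different names and prefixes; a rough sketch using Bull 3-era options (the names and settings here are made up):

  // Rough sketch: multiple namespaced Bull queues, each with its own
  // name, key prefix, and per-job options. Names and settings are made up.
  import Queue from "bull";

  const redisUrl = "redis://127.0.0.1:6379";

  const emails = new Queue("emails", redisUrl, { prefix: "myapp" });
  const reports = new Queue("reports", redisUrl, { prefix: "myapp" });

  emails.process(async (job) => {
    console.log("sending", job.data); // stand-in for the real work
  });

  async function enqueue(): Promise<void> {
    // Per-job options handle retries and priority.
    await emails.add({ to: "user@example.com" }, { attempts: 5, backoff: 1000 });
    await reports.add({ month: "2018-07" }, { priority: 10 });
  }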

Bull uses subscriptions and RPOPLPUSH, so I'm not sure where the polling problem is that you mention. Bull's performance testing is very competitive.

Functions vs classes is an interesting dev choice... In JavaScript there is really not much of a difference. I would argue that functions are more lightweight, which is good for a job queue, but I don't actually think a job queue needs an opinion on client JS architecture.

If you have performance numbers and a test suite for TaskBotJS, that would go a long way to support the claims.


My priorities are similar, but #1 for me is how easy it is to maintain. It sounds like most of the benefits over Bull are focused on that area.


That's a fair read, IMO. My contention is that by the time you've built something on top of Kue or Bull that feels good to a developer...you're probably going to have built most of TaskBotJS.


I'm planning to use a similar OSS/commercial model as well. Mike and Sidekiq are my inspiration for this. Good luck going further with the project, and congrats on the launch!


Sorry, I don't get it. Why would somebody use this over RabbitMQ?


I think it is a legit question... I poked around a little and did not find an answer.


Read and find out


After a few minutes I gave up on evaluating this tool. It looks interesting, but there's no "hello world" or even a simple use-case explanation in the first few pages of docs. Couple that with the unannounced switching between JS & TS in the code examples, and it is too tedious to try to figure out.

I have no objections to TS, but if this is a "JS" library for consumption in JS, please write your example jobs in JS! Don't make me try to translate TS to JS in my head--I don't care how easy you might consider it to be, I don't want to do it.

I like the idea and will revisit when the docs improve. I recommend user testing your docs: put an unfamiliar JS dev in front of your README, set a timer for ~7 minutes, and when the timer dings ask if they think they'll use your library. If they're still confused after that time, your intro docs are not clear enough. That may seem like a small amount of time, but there are 10,000 JS libs out there; 7 minutes is far more time than I can devote to developing an initial impression of each one.


Thanks for checking it out and for the feedback! Agreed on intro docs--that's a big reason it's "0.9.0" instead of "1.0". ;) "1.0", to me, is "the API is stable" (though the current API should be) and "it's documented to the best of my ability for novice users." The current userbase is a little more willing to dig, as it's mostly friends and colleagues, and that'll change as it moves towards 1.0.

I'll look into a JS-based example job; I have never found the translation particularly onerous, but horses for courses, I guess. I'm not switching between JS and TS in the documentation, though, save for the definition of a configuration file--it's pretty silly, IMO, to transpile a config file, especially when you get 99% of the way there through something like VSCode's IntelliSense. Apparently I didn't make that distinction clear enough, but I didn't expect it to be a stumbling block. Lessons learned.


How about a language agnostic background processing framework with clients in multiple languages, including Javascript?

https://github.com/contribsys/faktory


I like Mike a lot and I think Sidekiq is awesome--not having it in Node was largely why I wrote TaskBotJS. And I evaluated Faktory before writing it, because hey, this has been a lot of work, buy instead of build when you can!

As it stands--and things could change in the future--I think a dedicated, ecosystem-specific system is a better bet for the 90th, 95th, and probably 98th percentile of products and projects. I don't feel that centralizing background processing makes a lot of sense in a service-oriented architecture, and that takes away a lot of Faktory's benefit of language agnosticism. The embedded use of RocksDB also makes me itchy; Redis--while by no means perfect--is, IMO, better understood as a backing store, and it's separate from the runner itself. (TaskBotJS is also likely to eventually grow a Postgres backend. I've also got a good chunk of one written using SQS/DynamoDB, but that's further out for sure.)

All that said: Mike is crazy smart and I wouldn't begrudge anybody who wanted to use Faktory. When I write Ruby, I will continue to use Sidekiq. ;)


I was very enthusiastic about Faktory when it was first announced, and still am, but when we tried to use it for a project at work it wasn't really suited to PaaS based infrastructure because of the dependency on a local database. There are a few Faktory as a Service providers in development at the moment, but none of them are in production in the EU yet.


Very nice job! I'll definitely be trying this out in a personal project.

How's your experience been with TypeScript and oao? I've got a very large monorepo (using yarn workspaces) of TypeScript modules and it's been sort of a PITA. For instance, a typical yarn install will unnecessarily duplicate some dependencies across packages, causing tsc to complain about duplicate identifiers. Running yarn --check-files fixes this, but it's still annoying. Also, following symbols with yarn workspaces is sometimes annoying, since linked packages use the "main" and "types" fields in package.json, so following a symbol takes you to the generated type definition. I have a generated tsconfig.json that sets paths to their appropriate package paths to fix that. Again, annoying.


Thanks for the kind words. =) And, good question! I...probably use `oao` wrong, to be honest, or at least not as fully as I could; my main use of it is to power a workspace-wide `yarn watch`. I have had a lot of problems trying to use its more advanced feature set and, tbh, if you look in `packaging` I am certain there's functionality in there that `oao` already provides, especially around version bumping.

100% agreed with regards to yarn workspaces and linked packages. Symbol navigation is Not Great, and I also have to be careful using VSCode's quick import feature because sometimes it will just go wild and import "../../../../dist/client/foo" or whatever instead of using the package import. Once done correctly in a given file it's fine, though.

But converting TaskBotJS to TypeScript was really easy, and it catches a lot of issues, so I think the future's bright on the tooling front. I found TypeScript unusable, like, six months ago. I wrote a couple React projects and a fairly large React Native app[0] in ES6 (which I Do Not Recommend Doing...ow) because getting TypeScript working was just way too much work. Still kind of is with regards to React, to be honest; the web panel for TaskBotJS is just a create-react-app app because of it. But--progress.

[0]: https://bit.ly/buymyapp


Have you tried using Lerna for monorepo management? I see some projects using it, I haven't yet (no monorepos) but it claims to solve some of those problems. https://github.com/lerna/lerna


I don't get it. I'm primarily a Python developer, so this looks like Celery to me. I've never understood why I'd want to tie myself to a particular library, rather than encode messages to an agreed encoding, then write workers in the technology most appropriate for the job?


Does it run the tasks in the same or a different process? Why is Redis necessary? Can it pick up where it left off if the server restarts? Why so much boilerplate (still) for simple use cases?


I love seeing projects with a docker-compose.yml, so I can check it out quickly. However, yours only spins up Redis--wouldn't it make sense to spin up TaskBotJS in a separate container? Yes, I know the proper answer is "PRs accepted", but I'm just curious about the rationale.


Hey, awesome feedback! The root docker-compose file is for development; it stands up Redis, and the example config files, unless overridden, look for a Redis on the port specified in that file.

TaskBotJS does have an integration environment, though. Take a look in `packaging`. When I do a release, I use the tooling in there to spin up `@taskbotjs/example` and the associated services. It runs some very basic smoke tests to make sure nothing's obviously broken, and it spins up a set of producers, consumers, and the webapi/web panel.

If you run `NO_CLEANUP=1 ./packaging/build-and-integrate.bash`, it'll stand up the integration environment and you can play around with an active producer/consumer/web panel system.

I should definitely break that out and make it easier to play with right off the bat. Thanks for the idea.


I'd definitely try to go with docker-compose - it's nice to be able to try something out without having to install it in your dev environment. Perhaps example/docker-compose.yml.


That's a really good point, and I can still use it during packaging. Will see what I can do. Thanks.


Not OP, but I'm guessing it's because Redis supports pub/sub patterns, while with SQLite you'd need to do polling, which is slower.


I wasn't saying anything about Redis vs another data store. I meant that the docker-compose should stand up node with TaskBotJS running in addition to Redis.


Something like this has always puzzled me: how do you make job publishing transactional? There's no simple way to do a transaction across Redis and, say, Postgres. If you want to make things reliable, you have to write the job to SQL first and then pull (or subscribe) from the database into a worker queue.

Am I right, or is there another option?


Yes, this is a major problem most people simply ignore and deal with in a one-off manner when it fails. The only practical way to do this safely is to create a jobs table which is written to in a transaction with the other writes. Another process reads the table and writes the job data into the queueing system.

For example, the process starts up, reads the highest ID it has written from Redis, then starts reading the table for jobs with IDs higher than that. The job itself is written to Redis in a transaction along with the ID from the database. Ignoring the fact that Redis can lose data during failover, you’re pretty safe and duplicates will be minimal. You can add some idempotency on top if desired.
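
A minimal sketch of that forwarder, assuming a jobs_outbox table, node-postgres, and ioredis (the table, key names, and interval are all placeholders):

  // Sketch of an outbox forwarder: jobs are written to a SQL table in the
  // same transaction as the application's other writes; this loop moves
  // them into Redis. Table/key names and the interval are placeholders.
  import { Client } from "pg";
  import Redis from "ioredis";

  const db = new Client();
  const redis = new Redis();

  async function forwardOnce(): Promise<void> {
    // Resume from the highest outbox ID already pushed to Redis.
    const lastId = Number((await redis.get("outbox:last_forwarded_id")) || 0);

    const { rows } = await db.query(
      "SELECT id, payload FROM jobs_outbox WHERE id > $1 ORDER BY id LIMIT 100",
      [lastId]
    );

    for (const row of rows) {
      // Push the job and advance the cursor together, so a crash between the
      // two can at worst re-send a job (duplicates, never losses).
      await redis
        .multi()
        .lpush("jobs:default", JSON.stringify(row.payload))
        .set("outbox:last_forwarded_id", row.id)
        .exec();
    }
  }

  async function main(): Promise<void> {
    await db.connect();
    setInterval(() => forwardOnce().catch(console.error), 1000);
  }

  main().catch(console.error);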

Attempting to write to both a queue system and a relational database at the same time will fail at some point and you will be left to pick up the pieces of inconsistent state.

If the write to the queue is to send an email, maybe you don't care. If it is to calculate a very important value that happens to be expensive to compute and needs to be done outside the request... good luck.


I think calling it a "major problem" is overblown for the 90/95/98th percentile use cases. It certainly exists, and given a long enough time horizon it will happen, but I have never worked on a system where datastore non-transactionality has caused issues that need to be resolved with more than, as you say, a one-off fix.

I tend to implement what you suggest with absolutely critical jobs, but the level of juggling it requires is a little much for, again, the 90/95/98th percentile jobs. Few things are both so critical that you can only do them once and so opaque that retry logic can't bail you out--though, I hasten to note, I do explicitly advise the use of idempotent job logic with this or any other task-running solution.


99.99% of the time at small scale it will work fine. That is completely accurate. But random network hiccups are so common in today’s cloud environments that, even at medium scale, a few minutes of partition between your app server and Redis, while still allowing access to the database, will result in a huge number of dropped jobs.

The fix is so tiny I don't see why it would be an issue for a team to implement. It uses one new database table and a long-lived process for writing to Redis, which could just be a script running on the same node as Redis, polling the database. The rest of the code can be the same outside of that.


Yeah, that's a fair criticism. For me, the reason I'm not comfortable with that approach for everything is that--if I'm understanding you correctly--it means storing a lot of state with the job (state which can then go stale).


That would depend on the application’s needs, but isn’t a part of the approach itself. It is only guaranteeing delivery of the job to a worker at some point in the future, on average half the duration of the polling interval reading from the database. You can still use the strategy of only encoding object IDs in the job to avoid stale data.

If the worker can’t reach the database, the job will just fail as it would at any other time and retry later.

This is changing at-most-once delivery of the job into the queue into at-least-once. Combined with idempotency, you can get "exactly once" semantics.
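
For instance, a dedup guard along these lines is usually enough to layer that idempotency on (the key name and TTL here are arbitrary illustrative choices):

  // Sketch of an idempotency guard on top of at-least-once delivery.
  // The dedup key name and TTL are arbitrary choices for illustration.
  import Redis from "ioredis";

  const redis = new Redis();

  async function runIdempotently(jobId: string, work: () => Promise<void>): Promise<void> {
    // SET ... NX succeeds only for the first worker to claim this job ID.
    const claimed = await redis.set(`jobs:seen:${jobId}`, "1", "EX", 86400, "NX");
    if (claimed !== "OK") return; // duplicate delivery: already handled or in progress

    try {
      await work();
    } catch (err) {
      // Release the claim so a later retry can run the job again.
      await redis.del(`jobs:seen:${jobId}`);
      throw err;
    }
  }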


Sure, but then our database becomes a bottleneck, and at that point we could just use SQL for everything rather than forwarding information to another store (like Redis), since there's no benefit.

If you deploy the live system several times a day, eventually you will lose some data. Also, implementing monitoring for this is not an easy task, and you will probably never know if a job wasn't scheduled.


Not criticizing, just curious: what was the reasoning behind picking Redis rather than something like SQLite, which would keep the application a little more self-contained, so to speak?


Hey, great question! SQLite is great for locally-stored databases with relatively light relational requirements, but I chose Redis as a first go-around in part because it's kind of "the standard" for this sort of job queue (Kue, Bull, Resque, Sidekiq, etc.), and also because it was the simplest network-friendly datastore that included the primitives I wanted to use. Its simple nature--when not in clustered mode, which is currently in "you're on your own" territory for TaskBotJS, though I expect that to improve--also makes reasoning through operations simpler for me.

As mentioned elsewhere, I've started work on a PostgreSQL backend (which is actually easier than it sounds to get right; modern Postgres includes functionality to work with advisory-locked rows) and have done some noodling on a fully AWS-backed DynamoDB/SQS backend, too. The FoundationDB open-source announcement has me doing some reading as well, but I don't want to say much there just because I don't know enough about it yet to know whether it makes sense.
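
To give a flavor of what a SQL-backed dequeue tends to look like in general--this is the common FOR UPDATE SKIP LOCKED pattern rather than TaskBotJS's actual backend, and the table and column names are made up:

  // General-purpose sketch of a Postgres dequeue using FOR UPDATE SKIP LOCKED,
  // one common way to let multiple workers claim rows without colliding.
  // Table and column names are made up; this is not TaskBotJS's backend.
  import { Client } from "pg";

  const db = new Client();

  async function claimJob(): Promise<{ id: number; payload: unknown } | null> {
    await db.query("BEGIN");
    try {
      const { rows } = await db.query(
        `SELECT id, payload
           FROM jobs
          WHERE run_at <= now() AND started_at IS NULL
          ORDER BY priority, run_at
          LIMIT 1
          FOR UPDATE SKIP LOCKED`
      );
      if (rows.length === 0) {
        await db.query("COMMIT");
        return null;
      }
      // Mark the row claimed before the row lock is released at COMMIT.
      await db.query("UPDATE jobs SET started_at = now() WHERE id = $1", [rows[0].id]);
      await db.query("COMMIT");
      return rows[0];
    } catch (err) {
      await db.query("ROLLBACK");
      throw err;
    }
  }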


I am in no way affiliated with this project, but my assumption is that since this uses workers (separate servers) to process the background jobs, they could be on completely separate machines. If you use Redis for job storage, they can all communicate without having to talk back to the master server.


You can also do that with SQLite: have workers query for data and lock records while they are working on them (people have been implementing job queues in SQL-based environments for decades). The point of using SQLite is that it's not something you need to install and manage separately, or something that acts as a broker. (SQLite would be very slow here, though, as the workers need to poll for jobs.)


Polling is not the worst thing in the world; when using reliable queueing and BRPOPLPUSH, TaskBotJS does poll, and Redis handles it okay. SQLite, though, is just emphatically Not A Good Network Database--the developers themselves say so.


Not a fan of this sort of "open core" offering, and it's heavy on the marketing. Recurring jobs are an "Enterprise" feature? Hard pass. That's a core feature of Celery and readily available for Resque, the latter of which seems to be your project's inspiration. Pro/Enterprise should be offerings built around hosting and support. Arbitrary feature gating isn't encouraging for a project like this.


Celery requires the system administrator to ensure that there's only a single scheduler in the cluster at a time; the documentation specifically calls that out. This additional administrative load is not a requirement of TaskBotJS. Because of that, recurring jobs require underlying, distributed-environment infrastructural features I'm not prepared to open source right now. Past that? TaskBotJS is LGPL. If you'd like to write and maintain your own cron-type scheduler, I would be happy to link to it as a plugin, and you can own the difficulty of maintaining it when users come yelling. ;)

That last bit is important, IMO. The best way to keep software maintained is to create a financial incentive for its maintenance. The pro/enterprise feature set is borne out of actual use, and it signals a very strong intent to maintain this as unsurprising software that will be there and be supported for the long haul.


> Because of that, recurring jobs require underlying, distributed-environment infrastructural features I'm not prepared to open source right now.

I don't see how that follows. Dedicating a single node is trivial. If you're talking about redundancy, nobody's going to steal your leader election code. That's why the feature gating feels arbitrary.


Sorry, maybe I was unclear. I don't want to have to support it in the open-source version ("I tried to run it on three DigitalOcean droplets and I didn't read the instructions and it's all sad" is just a time sink) and I do want feature incentives, in addition to support guarantees, to encourage corporate users to support the project.

I want to align the financial incentives to maintain this software; features are part of how that can be done. At commercial scales, companies do need support. But support is hard to quantify by itself, and feature gates provide additional arguments for buying the software, helping to ensure that the project survives and is maintained.


> Celery requires the system administrator to ensure that there's only a single scheduler in the cluster at a time; the documentation specifically calls that out.

Really? I thought all the schedulers did explicit locking. The ones I wrote for Redis and Mongo certainly did that. I'm pretty sure the other schedulers I saw in the wild did as well.


`celery beat` suggests otherwise.

http://docs.celeryproject.org/en/latest/userguide/periodic-t...

You have to ensure only a single scheduler is running for a schedule at a time; otherwise you'd end up with duplicate tasks. Using a centralized approach means the schedule doesn't have to be synchronized, and the service can operate without using locks.


Off topic, is it normal now to include a "Code of Conduct" in new, single-contributor projects? I thought it was more of a reactive thing.


See https://opensource.guide/starting-a-project/ for GitHub's latest advice on starting an open source project, which includes adding a Code of Conduct. I think single-person projects just starting out can get away with just a LICENSE and README, but it's best to have CONTRIBUTING and CODE_OF_CONDUCT if you expect any contributions.

Hopefully it's rare that it's needed, but it's really easy to add and a nice thing to have proactively instead of reactively, and plenty of people feel more welcome when things like that are explicit.


I don't know what's "normal", but I believe very strongly in specifying acceptable behaviors from the jump when trying to build a community.

Thanks for taking a look.


@eropple Sort of a nitpick, but you might want to replace "[INSERT EMAIL ADDRESS]" with your actual email address. :-)

https://github.com/eropple/taskbotjs/blob/master/CODE-OF-CON...


I just hit my forehead hard enough to wake the dog.

Thanks. :)


"YAGNI" is the first thing that pops into my head. d:o)


The Rust programming language had a Code of Conduct from the get-go. In general, having one gives a reasonable idea of the type of community you'd like to build around the project, and avoids the mess of trying to add one once you have a pile of vehemently anti-CoC community members.


Simple copy-paste job; it avoids "changing the contract" later. I agree it doesn't make sense as a standalone thing, but it takes 2 seconds to copy.


It's virtue signalling.


Yup. And apparently that really does generate valuable social currency.


Calling out virtue signalling is arguably itself a flavor of virtue signalling, just for (some) other virtues ;)


Agreed. Let's just skip the whole thing next time. :)


[flagged]


I, uh, appreciate your "benefit of the doubt", but I should set the record straight: I very consciously put that file in there. GitHub did not suggest it, I chose to do it (and you can even see--brace yourself--that the build scripts copy it into each package when they build them!).

While, if we're being honest, the impact of a code of conduct on even a large project is fairly small--it can be ignored just as easily as anything else--it's really good at outing people who think complaints of "virtue signaling" matter as much as a fart in a thunderstorm. I don't want to work with people who act that way and if having a code of conduct in there is a sufficient "you are unwelcome if you cannot act decently" sign, then by god I'll conduct those codes.


Please don't try to stir up pointless drama here.


Great logo!


This is great!


How can "The best TypeScript and JavaScript background job processing on the planet" only have 8 stars on GH ? :o


Because I released it today.

And I'm very, very confident.


First, congrats on this work! It's certainly a lot of work to write this code, the documentation, think about enterprise offerings, etc.

That being said, I think you're inviting a lot of ill will by calling your project "the best", "in the world", etc. It's fine for a proprietary product but it feels awkward for open source projects (if that's your goal).

I think you should spend more time detailing how TaskBotJS is better than the competition in a formal document to justify your pricing. Again, the open source vs. product thing is confusing right now, which is fine for the initial release. If following an open source business model is your primary goal, I would spend more time on open source and community building and then the paying customers will follow. As of now, there isn't a lot of incentive to pay.

Finally, this work is dual-licensed; how are you dealing with external contributions?


Thanks for the awesome feedback!

> That being said, I think you're inviting a lot of ill will by calling your project "the best", "in the world", etc.

I understand that viewpoint, and I'm definitely sympathetic to it. But I've spent a lot of time digging into this (as mentioned elsewhere--don't build when you can buy) and from my perspective I feel it's true. I'll change it when facts on the ground change. =)

You are correct in that right now the incentives to purchase are lopsidedly presented. Working on that this weekend, as it happens. Thing is, though, TaskBotJS is super useful, right now, as its open source release, and because it's super useful (I'm using it now on a project to make sure I keep tabs on how the developer experience of the OSS release feels) I wanted to get it out there for people to play with.

Which is why I am comfortable dealing with the lopsided nature of its marketing for a bit. I'm offering a pro version with feature gates largely because 1) I want to use this for the long haul, 2) financial incentives need to align for me to be able to spend time on it, 3) I'm that convinced it's that good, and 4) selling pure support without a "you also get feature X, Y, and Z" is a conceptually more difficult road.

As a consultant I pretty regularly find myself going "you might want to consider Sidekiq Pro, because of support and also because batches will save your bacon"; clients look at it, try it, and are comfortable paying for it (and also getting the support that, IMO, they need). I am doing likewise.

> Finally, this work is dual licensed, how are you dealing with external contributions?

I have a CLA and a rights grant I need to wire up to GitHub. That totally does exclude some people from contributing, and I realize that. I built this with the expectation that I would be overwhelmingly the primary committer, and I expect most open-source contributions to be plugins, etc.--people scratching their own itch, as I have with this.


Thank you. Your experience as a consultant on the ground using this daily is very important. Good luck!


hyperbole-driven documentation


HDD is a cornerstone of modern technology[1]

[1]: apple.com


I think it's cute (no snark meant) how committed many HN readers are to their footnotes.


How can you measure a project by the number of GitHub stars?


I think, given the size of the potential user base for such a thing, that a low number like 8 would be a contrary indicator of the claims being made.

But in general I'd agree that stars can be a deceptive metric.


I mean...there are 102 as of this writing.

The project has not, I stress, changed in the interim. ;)


It's a weak signal for the maturity of the ecosystem around it, but it's a signal.

It's hard to define what's "the best in the world", so users are totally justified in looking at different metrics to figure out what that actually means, if there are no clearly defined metrics to compare against.



