jomohke's comments | Hacker News

This is standard practice. They need to use current lossless formats to display examples to people who don't have the format yet. They are still showing accurate examples of compression artifacts. I'm not sure what else you'd expect them to do.

Strange, as Cloudinary's test had the opposite conclusion -- JPEG XL was significantly faster to decode than AVIF. Did the decoders change rapidly in a year, or was it a switch to new ones (the Rust reimplementation)?

https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front

If decode speed is an issue, it's notable that AVIF varied a lot depending on encode settings in their test:

> Interestingly, the decode speed of AVIF depends on how the image was encoded: it is faster when using the faster-but-slightly-worse multi-tile encoding, slower when using the default single-tile encoding.
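If anyone wants to check that on their own images, here's a rough sketch of how I'd measure it, assuming libavif's avifenc/avifdec are installed and the tiling flag names haven't changed; the source filename is just a placeholder:

    import subprocess
    import time

    SRC = "test_photo.png"  # any large-ish source image (placeholder filename)

    def encode_avif(out_path, extra_args):
        # avifenc ships with libavif; --tilerowslog2/--tilecolslog2 split the image
        # into 2^n tile rows/columns, i.e. the "multi-tile" mode mentioned above.
        subprocess.run(["avifenc", *extra_args, SRC, out_path], check=True)

    def avg_decode_seconds(path, runs=20):
        # Crude timing: includes process startup, so use a large image and many runs.
        start = time.perf_counter()
        for _ in range(runs):
            subprocess.run(["avifdec", path, "decoded.png"],
                           check=True, stdout=subprocess.DEVNULL)
        return (time.perf_counter() - start) / runs

    encode_avif("single_tile.avif", [])  # default: one tile
    encode_avif("multi_tile.avif", ["--tilerowslog2", "2", "--tilecolslog2", "2"])

    print(f"single tile: {avg_decode_seconds('single_tile.avif'):.3f} s per decode")
    print(f"multi tile:  {avg_decode_seconds('multi_tile.avif'):.3f} s per decode")

It's not a rigorous benchmark like Cloudinary's (process startup dominates for small files), but it's enough to see whether the single-tile vs multi-tile gap shows up on your own hardware.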


It looks like folding@home is still going https://foldingathome.org/

I'm quite surprised these are still around as I hadn't seen them mentioned in so long.

I always assumed the phase-out of screensavers (and the introduction of CPU low-power modes) was terminal for them.


They saw a huge uptick in users during the COVID pandemic. Since the coronavirus has a protein shell, and their software folds protein molecules, they were able to apply it to look for targets where other molecules could attach to the virus at the sites it would normally use to latch onto a cell; this could then lead to treatments.

They'd found some promising results, and were working with a pharmaceutical company to manufacture the first compounds that could then be tested. Unfortunately that company's facility was located in eastern Ukraine. =(

But that aside, they've still been going strong.


I could not find mentions of any Ukraine-based company working with them. Do you have more info?

Folding@home got a boost recently from PewDiePie deploying his 12-stack 4090 build against it and then getting a bunch of his fanbase to also participate in his folding@home squad.

Are they doing anything not covered by AlphaFold? I thought that approach basically crushed all previous efforts.


World Community Grid at https://www.worldcommunitygrid.org/ is also running, though it has had struggles since moving datacenters, and it seems their external stats are still out of commission.

I've recently decided to end my own participation, mainly because I've run three systems into the ground, and we're now in the "save what you can" era. There's one motherboard I want to get refurbished, since it became unstable when idle but loved 24x7 crunching. It would make a great NAS if I could find some DDR4 at a price I could stomach, or I could lay it in as a spare if the new motherboard goes south in the future.


How many papers have been published as a result of this, and more pertinently, how many "real" things are now being made or used based on that? I'm hoping it's not all just perpetual "regrowing teeth" territory where nothing ever comes from it.

This is extremely far from any of my areas of expertise, but I'll offer an answer since no one else has (please correct me!). Basically, all the medicines (i.e. drugs) we have are proteins or other compounds that fit into some of our cells' (or viruses') molecules and do funny stuff to them, like disabling certain parts, acting as a signal to regulate behavior, and so on. Doing funny stuff is basically about fitting into another molecule. So research into how proteins (most of the molecules in our body after water, I guess) interact is incredibly important to basically all medicine, especially the discovery of new drugs (like suggesting compounds that could fit in certain receptors or perform a certain function) and the understanding of diseases/pathologies (which gives ideas on how to prevent and treat them).

If Folding@home helps to understand and model this behavior of molecules (which I guess tends to be difficult and unreliable to do without the aid of computers), it is extremely helpful. Now, I don't know the other details: perhaps molecular biology is the bottleneck and there are scant available molecules to analyze (reducing its impact/marginal sensitivity), or perhaps compute really is the bottleneck in this particular problem. But nonetheless it seems like a great project for which contributions do make a difference.

(Note: that said, if you were expecting something like 'compute -> miracle drug comes out', I believe that's not quite how it works; research in general rarely works that way, I think because the constraint space and problem space are too large and complicated; in fact I believe many if not most significant discoveries have resulted from playing around and investigating random molecules, often from (nonhuman) animals, plants and bacteria[1], although molecular sciences (molecular biology) seem to enable a slightly more methodical approach.)

[1] The GLP-1-based weight-loss drugs, for example, came from investigating Gila monster lizard venom: https://en.wikipedia.org/wiki/GLP-1_receptor_agonist#History


Wow, 333 words without even attempting to address the question. Have you considered a career in PR?

Do you think they used an AI or something? It seems to be answering a question I didn't even ask. The strange performative replies I've had to my question make me more suspicious about folding@home.

Honestly, I doubt it was an LLM, because an LLM would have stuck closer to answering the question (avoiding non-sequiturs is the only thing they do, after all).

I'm not quite sure what the point of the response was.


[flagged]


I wasn't aware asking a question was FUD. That's also a list of achievements with no links and no information regarding how much, if any, volunteer-contributed computing has contributed to them.

> please have a look around before spreading FUD

Please don't turn HN into reddit.


> That's also a list of achievements with no links and no information regarding how much, if any, volunteer-contributed computing has contributed to them.

Those are papers citing them. The reason for no links is explained on the page:

> The distribution rules for published papers vary by the publication in which the paper appears. Due to these rules, a public web-source of each paper may not be immediately available. If full version is not linked below or available elsewhere on the Internet (Google Scholar can be helpful for this), most, if not all of these publications are freely available at a local municipal or collegial library. These articles are written for scientists, so the contents are fairly technical.


I would've thought that with the advent of general-purpose GPUs, cloud computing, etc., they would've run out of work by now.

I think you’re missing the main limiting resource: money.

Some of these projects could occupy entire regions of cloud compute for a while, some even longer depending on the problem. But running that for even a short time, let alone the decades needed, would cost more money than anyone has to spend.

Academic HPC clusters existed long before cloud compute options, and for certain problem spaces (even non-distributed-memory cases) could also be used to handle this stuff. But you still needed allocation time, and sometimes funding, to use them, competing against other use cases like drug design, cancer research, nuclear testing… whatever. So searching for ET could be crowdsourced and the cost distributed, which is part of what made it alluring and tractable.

I used to run a small academic cluster that was underutilized but essentially fully paid for. I'd often run some of these projects as background throttled processes outside scheduler space, so that during the 90% of the time no one was using the machines, the hardware would at least be doing some useful scientific research, since it was after all funded largely from federal scientific research money. There was of course some bias introduced by which projects I chose to support, whereas someone else may have made a more equitable choice. (A minimal sketch of what I mean by "background throttled" is below.)
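Just to illustrate the idea (this is a sketch, not my actual setup; the client path is hypothetical): launch the donated-compute worker at the lowest CPU priority and the idle I/O class, so any real scheduler job or interactive user immediately wins the contest for resources.

    import subprocess

    # nice -n 19 gives the process the lowest CPU scheduling priority;
    # ionice -c 3 puts its disk I/O in the "idle" class.
    cmd = ["nice", "-n", "19", "ionice", "-c", "3", "/opt/fah/FAHClient"]
    subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)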


I forgot all about this project - thanks for the reminder!

You can run it on a spare Raspberry Pi. I remember doing that. Performance isn't great, but every little bit helps.

https://downey.io/blog/folding-at-home-raspberry-pi-arm/


Does it? I stopped doing @home projects because I was never finishing a unit in time for credit under BOINC.

I have Pi4B 2GB, Pi4B 4GB, and Pi400 folding full time with reasonable success. You need to use the 64-bit OS.

I have several machines contributing to it all the time, and every now and then I run it on my 5090 at home to heat up my room a bit in winter :D It does an incredible 1M points per day; it's a monster of a GPU.

Some sites are using *@home as a CDN.

In theory yes, but in practice they usually have the speaker turned up far louder than they are speaking themselves, so we still effectively only hear one side clearly.

I think the high distractibility is a trifecta of volume, the unnaturalness of the sound (compression etc.: it feels out of place in the space), and this point.


Which models did you try?


Interesting. Even when nothing bad happens? It has always worked for me.


They likely have other things to do.


In this quote I don't think he means it from the business side. He's claiming more data allows a better product:

> ... the answers are a statistical synthesis of all of the knowledge the model makers can get their hands on, and are completely unique to every individual; at the same time, every individual user’s usage should, at least in theory, make the model better over time.

> It follows, then, that ChatGPT should obviously have an advertising model. This isn’t just a function of needing to make money: advertising would make ChatGPT a better product. It would have more users using it more, providing more feedback; capturing purchase signals — not from affiliate links, but from personalized ads — would create a richer understanding of individual users, enabling better responses.

But there is a more trivial way that it could be "better" with ads: they could give free users more quota (and/or better models), since there's some income from them.

The idea of ChatGPT's own output being modified to sell products sounds awful to me, but placing ads alongside it that are not relevant to the current chat sounds like an OK compromise for free users. That's what Gmail does, and most people here on HN seem to use it.


Is this why everyone only seems to know the first half of Dario's quote? The guy in that video is commenting on a 40-second clip from Twitter, not the original interview.

I posted a link and transcription of the rest of his "three to six months" quote here: https://news.ycombinator.com/item?id=46126784


Thank you.


Why do people always stop this quote at the breath? The rest of it says that he still thinks they need tech employees.

> .... and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced

(He then said it would continue improving, but this was not in the 12 month prediction.)

Source interview: https://www.youtube.com/live/esCSpbDPJik?si=kYt9oSD5bZxNE-Mn

