Hacker News

I am a bit confused by the benchmark comparison they are doing. Comparing a domain-specific "LeJEPA" trained on astronomy images against general models that are not explicitly fine-tuned on astronomy images seems misleading to me.

Does anybody understand why that benchmark might still be reasonable?





The comparison is against general models that *are* explicitly fine-tuned. Specifically, they pre-train their models on unlabeled in-domain images, take DINO models pre-trained on internet-scale general images, and then fine-tune both on a small number of labeled in-domain images.

The idea is to show that unsupervised pre-training on your target data, even if you don't have a lot of it, can beat transfer learning from a larger but less focused dataset.
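The protocol described above can be sketched as follows. This is a minimal illustration with stub functions; the names (`pretrain`, `finetune_and_eval`) are hypothetical placeholders, not the paper's actual code or API. The point is that only the pre-training data differs between the two pipelines, while the labeled fine-tuning set is identical:

```python
# Hypothetical sketch of the benchmark protocol (stub functions, not real training code).

def pretrain(images, method):
    """Stub: self-supervised pre-training; records which data/method was used."""
    return {"method": method, "pretrain_data": images}

def finetune_and_eval(model, labeled_images):
    """Stub: fine-tune on the small labeled in-domain set."""
    model["finetune_data"] = labeled_images
    return model

unlabeled_astro = "unlabeled in-domain astronomy images"
labeled_astro = "small labeled in-domain set"
internet = "internet-scale general images"

# Pipeline A: LeJEPA-style pre-training on unlabeled in-domain data.
model_a = finetune_and_eval(pretrain(unlabeled_astro, "lejepa"), labeled_astro)

# Pipeline B: DINO pre-trained on general internet-scale data.
model_b = finetune_and_eval(pretrain(internet, "dino"), labeled_astro)

# The fine-tuning data is the same; only the pre-training differs,
# so the comparison isolates the effect of pre-training data choice.
assert model_a["finetune_data"] == model_b["finetune_data"]
```

Under this reading, the benchmark is not "fine-tuned vs. not fine-tuned" but "in-domain pre-training vs. general pre-training", with fine-tuning held constant.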



