
You are probably triggering their knowledge distillation checks.




This was my first thought as well

What would a knowledge distillation prompt even look like, and how could I make sure I don't accidentally fall into this trap?

My guess is it's something that looks like the classic "teacher and student" setup. There have been methods that use the teacher's token distribution to "retrain" another model, a kind of automatic fine-tuning (roughly like the sketch below), but AFAIK those only work for offline models, since you need access to the full token distribution. There do appear to be similar methods for online-only models, though?
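
For reference, the offline soft-target version usually boils down to a KL loss between the two softened distributions. A minimal PyTorch sketch, assuming you can get raw logits from both models (all names and the temperature value are illustrative, not from any particular provider's API):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions with a temperature, then push the
        # student toward the teacher's distribution via KL divergence.
        t = temperature
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        # Scale by t^2 so gradients stay comparable across temperatures.
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * (t * t)

With an online-only API you typically only see sampled tokens (or at best top-k logprobs), so you can't compute this directly, which is presumably why providers instead look for query patterns that resemble large-scale output harvesting.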


