Hacker News

Thanks - taking a look. May have hit rate limits...


Ok, should work now. I am using OpenAI temporarily to avoid hitting the Groq limits until I can get those increased.


The website states it's using the "langcss.fast" model, which made me think it's a custom-trained model. But if you can just easily switch between Groq and OpenAI, that makes it seem like there's no fine-tuning happening. Could you shed some light on this?

I like the website and my first prompt to create a modern table worked reasonably well, but if it's just a re-skinned ChatGPT, I probably wouldn't wanna spend extra money on it.


shouldn't your willingness to pay/not pay be a function of your experience and the value the product delivers you, rather than an implementation detail such as fine tuning vs prompt engineering?

I think this distinction is comparable to asking something like:

This website uses UsersDb, which makes me think you have built a custom database, but if you can easily switch between sqlite and postgres, that makes it seem like there's no real work happening. Could you shed some light on this?

I feel this comparison is almost entirely valid since fine tuning can be as simple as:

- Create request/response tabular data

- Click "upload" on leading LLM provider

- Click "fine tune"

- Change your code to reference llm.com/finetuned instead of llm.com/base-model
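For concreteness, the steps above can be sketched roughly like this, assuming the OpenAI fine-tuning API (the system prompt and sample data are made up for illustration; the actual API calls are left commented out since they need an API key):

```python
import json

def to_jsonl(pairs, system_prompt="You generate Tailwind CSS components."):
    """Convert request/response tabular data into the chat-format JSONL
    that a fine-tuning endpoint expects: one JSON object per line."""
    lines = []
    for request, response in pairs:
        lines.append(json.dumps({
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": request},
                {"role": "assistant", "content": response},
            ]
        }))
    return "\n".join(lines)

# Step 1: request/response pairs (hypothetical sample data)
pairs = [
    ("a modern pricing table", '<table class="min-w-full divide-y">...</table>'),
]
jsonl = to_jsonl(pairs)
print(jsonl)

# Steps 2-3 map onto two API calls (sketch, not run here):
#
#   from openai import OpenAI
#   client = OpenAI()
#   f = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(
#       training_file=f.id, model="gpt-4o-mini-2024-07-18")
#
# Step 4: once the job finishes, swap the base model name in your code
# for the fine-tuned model name the job returns.
```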


> shouldn't your willingness to pay/not pay be a function of your experience and the value the product delivers you, rather than an implementation detail such as fine tuning vs prompt engineering?

Bullshitting/lying isn't an implementation detail, it's a mindset, and I understand people who actively try to avoid it. Prompts and RAG don't make it a new, powerful model that is superior to, say, ChatGPT.


Yeah, that's basically what I was thinking. I didn't wanna use the word "lying" so as not to appear super confrontational, but when someone claims it's using a model called "langcss.fast", it should be safe to assume that they're actually using a custom model. It's disingenuous at best.

I'm not even saying there is any need for a custom model. Obviously, RAG etc. can work just as well, or even just a well-crafted prompt on a foundation model. But selling it as a custom model is misleading on purpose, and deters me from using the product.


No fine tuning yet. I was going to look into that, but I think other avenues are more fruitful: RAG, GRAG, and function calling, for example.


Of course, a well-crafted prompt and RAG or other means of retrieving relevant information can work very well.

The reason I was asking is that as a potential customer, I kinda feel cheated reading about a langcss.fast model that doesn't actually exist. I don't think there's any need to put this misleading (well, actually not even misleading; just plain wrong) information on the website to make potential customers think you've trained a custom model on Tailwind etc.

Honesty is a very attractive selling point, at least to me, and I'd bet it is to others too.


Ah, I see. The idea behind the model name is to abstract away the underlying provider so the user needs to think less ("why was it Llama last week and now it is Mixtral?").

I have updated the UI to make it more obvious what the model is.

I am happy for people to know how the sausage is made and go direct to that provider if they don't see value in the UI. I might write an architecture post one day too.


It works now, and I gave it some additional challenges too. Looks good!



