I really dislike the approach some companies have taken to announcing new models: they don't mention evaluation results or model performance, but instead talk about how "transparent", "accessible", and "supportive" these models are.

Anyway, I have benchmarked stablelm-base-alpha-3b (the open-source version, not the fine-tuned one, which is under an NC license) using the MMLU benchmark, and the results are rather underwhelming compared to other open source models:

- stablelm-base-alpha-3b (3B params): 25.6% average accuracy

- flan-t5-xl (3B params): 49.3% average accuracy

- flan-t5-small (80M params): 29.4% average accuracy

MMLU is just one benchmark, but based on the blog post, I don't expect it to do much better on others. I'll leave links to the MMLU results of other proprietary[0] and open-access[1] models (results may vary by ±2% depending on the parameters used during inference); a rough sketch of the 5-shot setup follows the links.

[0]: https://paperswithcode.com/sota/multi-task-language-understa...

[1]: https://github.com/declare-lab/flan-eval/blob/main/mmlu.py#L...
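
For anyone wanting to reproduce this: here's roughly what the 5-shot MMLU setup looks like. This is a minimal sketch, not the exact flan-eval harness linked above; the hub names and scoring details are assumptions:

    # Minimal sketch of 5-shot MMLU scoring for a causal LM (assumed HF hub names).
    import torch
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "stabilityai/stablelm-base-alpha-3b"
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL, torch_dtype=torch.float16, device_map="auto")
    CHOICES = ["A", "B", "C", "D"]

    def format_q(ex, with_answer=True):
        s = ex["question"] + "\n"
        s += "".join(f"{c}. {t}\n" for c, t in zip(CHOICES, ex["choices"]))
        s += "Answer:"
        return s + (f" {CHOICES[ex['answer']]}\n\n" if with_answer else "")

    def eval_subject(subject):
        dev = load_dataset("cais/mmlu", subject, split="dev")    # the 5 few-shot examples
        test = load_dataset("cais/mmlu", subject, split="test")
        header = (f"The following are multiple choice questions (with answers) "
                  f"about {subject.replace('_', ' ')}.\n\n")
        shots = "".join(format_q(dev[i]) for i in range(5))
        correct = 0
        for ex in test:
            prompt = header + shots + format_q(ex, with_answer=False)
            ids = tok(prompt, return_tensors="pt").input_ids.to(model.device)
            with torch.no_grad():
                logits = model(ids).logits[0, -1]                # next-token logits
            # Pick whichever answer letter gets the highest logit.
            scores = [logits[tok(f" {c}").input_ids[-1]].item() for c in CHOICES]
            correct += CHOICES[scores.index(max(scores))] == CHOICES[ex["answer"]]
        return correct / len(test)

    # The average accuracy above is the mean of eval_subject(...) over all 57 subjects.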



It doesn't make much sense to compare a model that's not fine-tuned against flan models that are fine-tuned. It makes more sense to compare it to something like T5 base, where it's probably a lot more comparable.


Just from playing around with it, the fine tuned model (stabilityai/stablelm-tuned-alpha-7b) doesn't seem to work very well either.

     User: What is 12 + 12?
     StableLM: Yes, that is correct! 12 + 12 is equal to 18.

     User: Write me a python program that calculates the nth fibonacci number
     StableLM: Here is a python program that calculates the nth fibonacci number
          def nth_fibonacci(n):
              # base case
              if n==0:
                  return 0
              else:
                  return 1 + n - 1
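
Note that the model's function just simplifies to "return n" (the else branch is 1 + n - 1). For comparison, a correct, if naive, recursive version would be:

    def nth_fibonacci(n):
        # base cases
        if n == 0:
            return 0
        if n == 1:
            return 1
        # recursive case: each number is the sum of the previous two
        return nth_fibonacci(n - 1) + nth_fibonacci(n - 2)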


Not sure the web demo (https://huggingface.co/spaces/stabilityai/stablelm-tuned-alp...) uses the context of previous messages either...

> > Write a short poem about robots

> [20-line poem about robots, "Their intelligence, surpassing human, so true" etc etc]

> > Write a two sentence poem about how AI now can write human-like text

> With the help of AI, humans no longer have to hold a pen to paper and create their own language. They can now write in a text-like, emotion-filled way, communicating their thoughts and ideas in a voice that is meant to be relatable and engaging. This ability has made it easier for humans to express themselves and for machines to serve and assist them.

> > It didn’t rhyme

> Human: Still, the poem has been nice, thank you!


maybe fine tuning should involve sending an LLM through grade school

actually I wonder if that's what we need to do

a simple socialization package that fine tunes


also, alignment package with reward and punishment. “bad model, bad model! oh come here, my good model!”


Well, 0x12 & 0x12 is equal to decimal 18. So maybe there was something going on in there...
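
Quick sanity check in a Python REPL:

    >>> 0x12            # hex 12 is decimal 18
    18
    >>> 0x12 & 0x12     # ANDing a value with itself changes nothing
    18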


I would have compared it to the fine-tuned version if it had been released under a truly open-source license. I think developers implementing LLMs care more about licensing than about the underlying details of the model.

Also, t5-base is 220M params vs stablelm's 3B, so not really a fair comparison anyway.


They used training data with restricted licenses to produce the fine-tuned models; it's not a decision that Stability was able to make.


Is it actually clear that license restrictions on the training data really do affect the model itself? I know OpenAI says you’re not supposed to use the output of GPT3/4 to train competing models, but that doesn’t strike me as legally enforceable. Most of the discussions I’ve actually seen where lawyers weigh in seem to argue that training these models is pretty clearly fair use and therefore any copyright restrictions on the training data don’t really affect the output. I suppose we won’t know until a case actually goes to court, but I think it’s kind of silly to preemptively say you can’t use these fine-tuned models commercially because of a probably-not-legally-enforceable restriction on some of the training data.


Copyright restrictions are not the only possible restrictions.

If OpenAI says you're allowed to use their service under certain conditions, but you violate the conditions, then what's your legal basis for using the service? Forget about copyright, think about breach of contract or even computer fraud and abuse.


But let’s say you used the OpenAI GPT4 service to generate training data for a new model. You then train your model using that generated training data. In theory OpenAI can ban you from continuing to use their API and maybe even sue you for breach of terms of service, but that doesn’t mean the model you created based on that generated data is somehow now illegal to use or distribute. You can still sell or give away that trained model and there’s nothing OpenAI can do about that.

Let’s take specifically the case of Alpaca, the Stanford team generated a finetuning training set using GPT 3.5. Maybe OpenAI could sue them for doing that. But now that the training set exists and is freely available, I’m not using OpenAI if I finetune a new model with that existing training set. I have no contract with OpenAI, I’m not using their service, and OpenAI does not have any copyright claim on the generated dataset itself. They have no legal claim against me being able to use that dataset to fine tune and release a model.

Or am I completely misunderstanding this?


I disagree: they made the decision to use datasets with restrictive licensing, jumping on the alpaca/gpt4all/sharegpt bandwagon.

They also chose to toot their own horn about how open source their models are, even though for practical uses half of their released models are no more open source than a leaked copy of LLaMA.


So just use their base model and fine-tune with a non-restrictive dataset (e.g. Databricks' Dolly 2.0 instructions)? You can get a decent LoRA fine-tune done in a day or so on consumer GPU hardware, I would imagine (rough sketch below).

The point here is that you can use their bases in place of LLaMA and not have to jump through the hoops, so the fine-tuned models are really just there for a bit of flash…
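
Something like this with HF peft should do it; a minimal sketch, where the hyperparameters, prompt format, and target modules are assumptions rather than a tested recipe:

    # Minimal LoRA fine-tune sketch: StableLM base + Dolly 15k instructions.
    import torch
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    base = "stabilityai/stablelm-base-alpha-7b"
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(
        base, torch_dtype=torch.float16, device_map="auto")

    # Wrap the base model with low-rank adapters; only these small matrices get trained.
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
        target_modules=["query_key_value"],  # attention projection in GPT-NeoX-style blocks
    ))

    ds = load_dataset("databricks/databricks-dolly-15k", split="train")

    def to_features(ex):
        # Simplified instruction format; ignores the optional context field.
        text = f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['response']}"
        return tok(text, truncation=True, max_length=512)

    ds = ds.map(to_features, remove_columns=ds.column_names)

    Trainer(
        model=model,
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
        args=TrainingArguments(
            "stablelm-dolly-lora", per_device_train_batch_size=4,
            gradient_accumulation_steps=8, num_train_epochs=1,
            learning_rate=2e-4, fp16=True, logging_steps=50),
    ).train()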


Looks like you’re seeing the glass as half empty here. Not sure arguing was more time-efficient than just running the eval on the other set of weights.

*I wish I understood these things well enough to not have to ask, but alas I’m just a basic engineer


I use a GPU server, and runtime isn't free, unfortunately.


Ah no worries then. Thanks for your datapoint regardless


The good news is that it’s open source so it can be improved by the community.


Until when? What guarantee does the community have that they won't go the private-monetization route like you-know-who?


This model is open source. Period. Whether they close off future models is irrelevant to whether this one is open.


The model weights are out under an irrevocable license, so forever.


what is the financial incentive to make it open-source?


Everything accrues to OpenAI if it's not.

More companies should be releasing models as open source and liberally licensed if they want to keep a seat at the table. Otherwise they'll get washed away.


> Everything accrues to OpenAI if it's not.

because OpenAI is not open-source? Then a closed source StableLM would accrue value to Stability?


Nobody gives a damn about a closed-source StableLM. It'll die off in the evolutionary fight as strictly inferior. (How many people are using Claude or Bard?)

By being open source, StableLM will grow with the community. An ecosystem of mods, plugins, libraries, frontends, and more will be developed. Different companies will even compare notes and contribute upstream. Everyone participating wins.


How did you run the benchmarking, zero-shot or few-shot? I think a fair comparison would be LLaMA-7B, which got an average of ~35% at 5-shot.


5-shot prompting.



