Stability AI announces new open-source large language model
Stability AI, the company behind the AI-powered Stable Diffusion image generator, has released a suite of open-source large language models (LLMs) collectively called StableLM. In a post shared on Wednesday, the company announced that its models are now available for developers to use and adapt on GitHub.
Like its rival ChatGPT, StableLM is designed to efficiently generate text and code. It’s trained on a larger version of the open-source dataset known as the Pile, which encompasses information from a range of sources, including Wikipedia, Stack Exchange, and PubMed. Stability AI says StableLM models are currently available in 3 billion and 7 billion parameter versions, with 15 billion to 65 billion parameter models arriving later.
While StableLM expands on the open-source language models that Stability AI has already worked on in collaboration with the nonprofit EleutherAI, it also builds on the company’s mission to make AI tools more accessible, as it has done with Stable Diffusion. The company made its text-to-image AI available in several ways, including a public demo, a software beta, and a full download of the model, allowing developers to toy with the tool and come up with various integrations.
We might even see the same happen with StableLM, along with Meta’s open-source LLaMA language model that leaked online last month. As pointed out by my colleague James Vincent, the release of Stable Diffusion has led “to both more good stuff and more bad stuff happening,” and “we’ll likely see a similar dynamic play out once more with AI text generation: more stuff, more of the time.”
You can try out a demo of StableLM’s fine-tuned chat model hosted on Hugging Face, which gave me a very complex and somewhat nonsensical recipe when I tried asking it how to make a peanut butter sandwich. It also suggested that I add a “funny drawing” to a sympathy card. Stability AI warns that while the datasets it uses should help “steer the base language models into ‘safer’ distributions of text,” not all biases and toxicity “can be mitigated through fine-tuning.”
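For developers who want to go beyond the hosted demo, the fine-tuned chat checkpoints expect user turns to be wrapped in special system/user/assistant tokens, per the model cards Stability AI published on Hugging Face. Below is a minimal sketch of that prompt format; the `build_prompt` helper is our own illustration, not part of any official API, and the model-loading snippet in the comments assumes the `stabilityai/stablelm-tuned-alpha-7b` checkpoint name from the release.

```python
def build_prompt(user_message: str, system: str = "") -> str:
    """Wrap a single user turn in the <|SYSTEM|>/<|USER|>/<|ASSISTANT|>
    special tokens the tuned StableLM chat checkpoints were trained with.
    (Helper name and signature are illustrative, not an official API.)"""
    return f"<|SYSTEM|>{system}<|USER|>{user_message}<|ASSISTANT|>"


# Typical use with the Hugging Face transformers library — shown as
# comments because the weights are a multi-gigabyte download:
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
# model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
# inputs = tok(build_prompt("How do I make a peanut butter sandwich?"),
#              return_tensors="pt")
# out = model.generate(**inputs, max_new_tokens=64)
# print(tok.decode(out[0], skip_special_tokens=True))

print(build_prompt("How do I make a peanut butter sandwich?"))
```

Because the base models were not instruction-tuned, skipping this wrapper and prompting them with raw text tends to produce the kind of rambling output described above.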