Awesome-SLM: a curated list of Small Language Models
🔥 Small Language Models (SLMs) are streamlined versions of large language models, designed to retain much of the original capability while being more efficient and manageable. Here is a curated list of papers about small language models.
If you're interested in the field of SLMs, you may find the milestone papers listed below helpful for exploring its history and state of the art. However, each direction of SLM research offers a unique set of insights and contributions, which are essential to understanding the field as a whole. For a detailed list of papers in the various subfields, please refer to the following link:
Date | keywords | Institute | Paper | Publication |
---|---|---|---|---|
2024-02 | Ensemble SLMs | Nanyang Technological University | Purifying Large Language Models by Ensembling a Small Language Model | |
2024-01 | Vary-toy 1.8B | MEGVII Technology | Small Language Model Meets with Reinforced Vision Vocabulary | |
- Chatbot Arena Leaderboard - a benchmark platform for large language models (LLMs) that features anonymous, randomized battles in a crowdsourced manner.
- AlpacaEval Leaderboard - an automatic evaluator for instruction-following language models using the Nous benchmark suite.
- Open LLM Leaderboard - aims to track, rank and evaluate LLMs and chatbots as they are released.
- OpenCompass 2.0 LLM Leaderboard - OpenCompass is an LLM evaluation platform, supporting a wide range of models (InternLM2, GPT-4, LLaMA 2, Qwen, GLM, Claude, etc.) over 100+ datasets.
- Meta
- Mistral AI
- Apple
- Microsoft
- AllenAI
- xAI
- DeepSeek
- Alibaba
- 01-ai
- Baichuan
- BLOOM
- Zhipu AI
- OpenBMB
- RWKV Foundation
- EleutherAI
- Stability AI
- BigCode
- Databricks
- Shanghai AI Laboratory
- LLMDatahub - a curated collection of datasets specifically designed for chatbot training, including links, size, language, usage, and a brief description of each dataset
- Zyda_processing - a dataset under a permissive license comprising 1.3 trillion tokens, assembled by integrating several major respected open-source datasets into a single, high-quality corpus
- lm-evaluation-harness - A framework for few-shot evaluation of language models.
- lighteval - a lightweight LLM evaluation suite that Hugging Face has been using internally.
- OLMO-eval - a repository for evaluating open language models.
- instruct-eval - This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
- simple-evals - Eval tools by OpenAI.
- Giskard - testing & evaluation library for LLM applications, in particular RAG pipelines.
- LangSmith - a unified platform from the LangChain framework for evaluation, collaboration, HITL (Human-in-the-Loop), logging, and monitoring of LLM applications.
- Ragas - a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines.
This repo contains an awesome LLM paper list, frameworks for LLM training, tools to deploy LLMs, courses and tutorials about LLMs, and all publicly available LLM checkpoints and APIs. Since SLMs share many of the same issues as LLMs, I recommend that you also look at the LLM-related content.
This is an active repository and your contributions are always welcome!
I will keep some pull requests open if I'm not sure whether they belong here; you can vote for them by adding 👍 to them.
If you have any questions about this opinionated list, do not hesitate to contact me at [email protected].