Awesome-LLM

🔥 Large Language Models (LLMs) have taken the NLP community, the AI community, and the whole world by storm. Here is a curated list of papers about large language models, especially those relating to ChatGPT. It also contains frameworks for LLM training, tools to deploy LLMs, courses and tutorials about LLMs, and all publicly available LLM checkpoints and APIs.

Milestone Papers

| Date | Keywords | Institute | Paper | Publication |
| --- | --- | --- | --- | --- |
| 2017-06 | Transformers | Google | Attention Is All You Need | NeurIPS |
| 2018-06 | GPT 1.0 | OpenAI | Improving Language Understanding by Generative Pre-Training | - |
| 2018-10 | BERT | Google | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | NAACL |
| 2019-02 | GPT 2.0 | OpenAI | Language Models are Unsupervised Multitask Learners | - |
| 2019-09 | Megatron-LM | NVIDIA | Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | - |
| 2019-10 | T5 | Google | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | JMLR |
| 2019-10 | ZeRO | Microsoft | ZeRO: Memory Optimizations Toward Training Trillion Parameter Models | SC |
| 2020-01 | Scaling Law | OpenAI | Scaling Laws for Neural Language Models | - |
| 2020-05 | GPT 3.0 | OpenAI | Language Models are Few-Shot Learners | NeurIPS |
| 2021-01 | Switch Transformers | Google | Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | JMLR |
| 2021-08 | Codex | OpenAI | Evaluating Large Language Models Trained on Code | - |
| 2021-08 | Foundation Models | Stanford | On the Opportunities and Risks of Foundation Models | - |
| 2021-09 | FLAN | Google | Finetuned Language Models are Zero-Shot Learners | ICLR |
| 2021-10 | T0 | HuggingFace et al. | Multitask Prompted Training Enables Zero-Shot Task Generalization | ICLR |
| 2021-12 | GLaM | Google | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | ICML |
| 2021-12 | WebGPT | OpenAI | WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing | - |
| 2021-12 | Retro | DeepMind | Improving language models by retrieving from trillions of tokens | ICML |
| 2021-12 | Gopher | DeepMind | Scaling Language Models: Methods, Analysis & Insights from Training Gopher | - |
| 2022-01 | COT | Google | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | NeurIPS |
| 2022-01 | LaMDA | Google | LaMDA: Language Models for Dialog Applications | - |
| 2022-01 | Minerva | Google | Solving Quantitative Reasoning Problems with Language Models | NeurIPS |
| 2022-01 | Megatron-Turing NLG | Microsoft & NVIDIA | Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model | - |
| 2022-03 | InstructGPT | OpenAI | Training language models to follow instructions with human feedback | - |
| 2022-04 | PaLM | Google | PaLM: Scaling Language Modeling with Pathways | - |
| 2022-04 | Chinchilla | DeepMind | An empirical analysis of compute-optimal large language model training | NeurIPS |
| 2022-05 | OPT | Meta | OPT: Open Pre-trained Transformer Language Models | - |
| 2022-05 | UL2 | Google | Unifying Language Learning Paradigms | - |
| 2022-06 | Emergent Abilities | Google | Emergent Abilities of Large Language Models | TMLR |
| 2022-06 | BIG-bench | Google | Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models | - |
| 2022-06 | METALM | Microsoft | Language Models are General-Purpose Interfaces | - |
| 2022-09 | Sparrow | DeepMind | Improving alignment of dialogue agents via targeted human judgements | - |
| 2022-10 | Flan-T5/PaLM | Google | Scaling Instruction-Finetuned Language Models | - |
| 2022-10 | GLM-130B | Tsinghua | GLM-130B: An Open Bilingual Pre-trained Model | ICLR |
| 2022-11 | HELM | Stanford | Holistic Evaluation of Language Models | - |
| 2022-11 | BLOOM | BigScience | BLOOM: A 176B-Parameter Open-Access Multilingual Language Model | - |
| 2022-11 | Galactica | Meta | Galactica: A Large Language Model for Science | - |
| 2022-12 | OPT-IML | Meta | OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization | - |
| 2023-01 | Flan 2022 Collection | Google | The Flan Collection: Designing Data and Methods for Effective Instruction Tuning | - |
| 2023-02 | LLaMA | Meta | LLaMA: Open and Efficient Foundation Language Models | - |
| 2023-02 | Kosmos-1 | Microsoft | Language Is Not All You Need: Aligning Perception with Language Models | - |
| 2023-03 | PaLM-E | Google | PaLM-E: An Embodied Multimodal Language Model | - |
| 2023-03 | GPT 4 | OpenAI | GPT-4 Technical Report | - |
| 2023-04 | Pythia | EleutherAI et al. | Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling | ICML |
| 2023-05 | Dromedary | CMU et al. | Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision | - |
| 2023-05 | PaLM 2 | Google | PaLM 2 Technical Report | - |
| 2023-05 | RWKV | Bo Peng | RWKV: Reinventing RNNs for the Transformer Era | - |

Other Papers

If you're interested in the field of LLMs, you may find the above list of milestone papers helpful for exploring its history and state of the art. However, each direction of LLM research offers its own insights and contributions, which are essential to understanding the field as a whole. For detailed lists of papers in various subfields, please refer to the following links (there may be overlaps between subfields):

(❗ We would greatly appreciate and welcome your contributions to the following lists.)

  • LLM-Analysis

    Analyse different LLMs in different fields with respect to different abilities

  • LLM-Acceleration

    Hardware and software acceleration for LLM training and inference

  • LLM-Application

    Use LLM to do some really cool stuff

  • LLM-Augmentation

    Augment LLM in different aspects including faithfulness, expressiveness, domain-specific knowledge etc.

  • LLM-Detection

    Detect LLM-generated text from texts written by humans

  • LLM-Alignment

    Align LLM with Human Preference

  • Chain-of-Thought

    Chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning.

  • In-Context-Learning

    Large language models (LLMs) demonstrate an in-context learning (ICL) ability, that is, learning from a few examples in the context.

  • Prompt-Learning

    A Good Prompt is Worth 1,000 Words

  • Instruction-Tuning

    Finetune a language model on a collection of tasks described via instructions

LLM Leaderboard

There are three important steps for a ChatGPT-like LLM:

  1. Pre-training
  2. Instruction Tuning
  3. Alignment

The following lists group models by these stages so that all LLMs are compared apples to apples.

Pre-trained LLM

| Model | Size | Architecture | Access | Date | Origin | Model License¹ |
| --- | --- | --- | --- | --- | --- | --- |
| Switch Transformer | 1.6T | Decoder (MoE) | - | 2021-01 | Paper | - |
| GLaM | 1.2T | Decoder (MoE) | - | 2021-12 | Paper | - |
| PaLM | 540B | Decoder | - | 2022-04 | Paper | - |
| MT-NLG | 530B | Decoder | - | 2022-01 | Paper | - |
| J1-Jumbo | 178B | Decoder | api | 2021-08 | Paper | - |
| OPT | 175B | Decoder | api / ckpt | 2022-05 | Paper | OPT-175B License Agreement |
| BLOOM | 176B | Decoder | api / ckpt | 2022-11 | Paper | BigScience RAIL License v1.0 |
| GPT 3.0 | 175B | Decoder | api | 2020-05 | Paper | - |
| LaMDA | 137B | Decoder | - | 2022-01 | Paper | - |
| GLM | 130B | Decoder | ckpt | 2022-10 | Paper | The GLM-130B License |
| YaLM | 100B | Decoder | ckpt | 2022-06 | Blog | Apache 2.0 |
| LLaMA | 65B | Decoder | ckpt | 2023-02 | Paper | Non-commercial bespoke license |
| GPT-NeoX | 20B | Decoder | ckpt | 2022-04 | Paper | Apache 2.0 |
| UL2 | 20B | agnostic | ckpt | 2022-05 | Paper | Apache 2.0 |
| 鹏程.盘古α | 13B | Decoder | ckpt | 2021-04 | Paper | Apache 2.0 |
| T5 | 11B | Encoder-Decoder | ckpt | 2019-10 | Paper | Apache 2.0 |
| CPM-Bee | 10B | Decoder | api | 2022-10 | Paper | - |
| rwkv-4 | 7B | RWKV | ckpt | 2022-09 | Github | Apache 2.0 |
| GPT-J | 6B | Decoder | ckpt | 2022-09 | Github | Apache 2.0 |
| GPT-Neo | 2.7B | Decoder | ckpt | 2021-03 | Github | MIT |
| GPT-Neo | 1.3B | Decoder | ckpt | 2021-03 | Github | MIT |

Instruction finetuned LLM

| Model | Size | Architecture | Access | Date | Origin | Model License¹ |
| --- | --- | --- | --- | --- | --- | --- |
| Flan-PaLM | 540B | Decoder | - | 2022-10 | Paper | - |
| BLOOMZ | 176B | Decoder | ckpt | 2022-11 | Paper | BigScience RAIL License v1.0 |
| InstructGPT | 175B | Decoder | api | 2022-03 | Paper | - |
| Galactica | 120B | Decoder | ckpt | 2022-11 | Paper | CC-BY-NC-4.0 |
| OpenChatKit | 20B | - | ckpt | 2023-03 | - | Apache 2.0 |
| Flan-UL2 | 20B | Decoder | ckpt | 2023-03 | Blog | Apache 2.0 |
| Gopher | - | - | - | - | - | - |
| Chinchilla | - | - | - | - | - | - |
| Flan-T5 | 11B | Encoder-Decoder | ckpt | 2022-10 | Paper | Apache 2.0 |
| T0 | 11B | Encoder-Decoder | ckpt | 2021-10 | Paper | Apache 2.0 |
| Alpaca | 7B | Decoder | demo | 2023-03 | Github | CC BY NC 4.0 |

Aligned LLM

| Model | Size | Architecture | Access | Date | Origin |
| --- | --- | --- | --- | --- | --- |
| GPT 4 | - | - | - | 2023-03 | Blog |
| ChatGPT | - | Decoder | demo / api | 2022-11 | Blog |
| Sparrow | 70B | - | - | 2022-09 | Paper |
| Claude | - | - | demo / api | 2023-03 | Blog |

The above tables could be better summarized by this wonderful visualization from this survey paper:


Open LLM

  • LLaMA - A foundational, 65-billion-parameter large language model. LLaMA.cpp Lit-LLaMA

    • Alpaca - A model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. Alpaca.cpp Alpaca-LoRA
    • Flan-Alpaca - Instruction Tuning from Humans and Machines.
    • Baize - Baize is an open-source chat model trained with LoRA. It uses 100k dialogs generated by letting ChatGPT chat with itself.
    • Cabrita - A Portuguese instruction-finetuned LLaMA.
    • Vicuna - An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality.
    • Llama-X - Open Academic Research on Improving LLaMA to SOTA LLM.
    • Chinese-Vicuna - A Chinese Instruction-following LLaMA-based Model.
    • GPTQ-for-LLaMA - 4 bits quantization of LLaMA using GPTQ.
    • GPT4All - Demo, data, and code to train an open-source assistant-style large language model based on GPT-J and LLaMA.
    • Koala - A Dialogue Model for Academic Research
    • BELLE - Be Everyone's Large Language model Engine
    • StackLLaMA - A hands-on guide to train LLaMA with RLHF.
    • RedPajama - An Open Source Recipe to Reproduce LLaMA training dataset.
    • Chimera - Latin Phoenix.
    • CaMA - a Chinese-English Bilingual LLaMA Model.
  • BLOOM - BigScience Large Open-science Open-access Multilingual Language Model BLOOM-LoRA

    • BLOOMZ&mT0 - a family of models capable of following human instructions in dozens of languages zero-shot.
    • Phoenix
  • T5 - Text-to-Text Transfer Transformer

    • T0 - Multitask Prompted Training Enables Zero-Shot Task Generalization
  • OPT - Open Pre-trained Transformer Language Models.

  • UL2 - a unified framework for pretraining models that are universally effective across datasets and setups.

  • GLM - GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks.

  • RWKV - Parallelizable RNN with Transformer-level LLM Performance.

    • ChatRWKV - ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model.
  • StableLM - Stability AI Language Models.

  • YaLM - a GPT-like neural network for generating and processing text. It can be used freely by developers and researchers from all over the world.

  • GPT-Neo - An implementation of model & data parallel GPT3-like models using the mesh-tensorflow library.

  • GPT-J - A 6 billion parameter, autoregressive text generation model trained on The Pile.

    • Dolly - a cheap-to-build LLM that exhibits a surprising degree of the instruction-following capabilities of ChatGPT.
  • Pythia - Interpreting Autoregressive Transformers Across Time and Scale

    • Dolly 2.0 - the first open source, instruction-following LLM, fine-tuned on a human-generated instruction dataset licensed for research and commercial use.
  • OpenFlamingo - an open-source reproduction of DeepMind's Flamingo model.

  • Cerebras-GPT - A Family of Open, Compute-efficient, Large Language Models.

  • GALACTICA - The GALACTICA models are trained on a large-scale scientific corpus.

    • GALPACA - GALACTICA 30B fine-tuned on the Alpaca dataset.
  • Palmyra - Palmyra Base was primarily pre-trained with English text.

  • Camel - a state-of-the-art instruction-following large language model designed to deliver exceptional performance and versatility.

  • h2oGPT

  • PanGu-α - PanGu-α is a 200B-parameter autoregressive pretrained Chinese language model developed by Huawei Noah's Ark Lab, MindSpore Team and Peng Cheng Laboratory.

  • MOSS - An open-source conversational language model that supports Chinese-English bilingual dialogue and a variety of plugins.

  • Open-Assistant - a project meant to give everyone access to a great chat based large language model.

    • HuggingChat - Powered by Open Assistant's latest model – the best open source chat model right now and @huggingface Inference API.
  • StarCoder - Hugging Face LLM for Code

  • MPT-7B - Open LLM for commercial use by MosaicML

LLM Training Frameworks

Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks requires complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code. Example: Serving OPT-175B, BLOOM-176B and CodeGen-16B using Alpa.
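As a rough, hedged sketch of that decorator-based API (the toy linear model, data, and learning rate below are illustrative assumptions; Alpa must be installed with a matching jaxlib and accelerators available):

```python
# A minimal sketch: Alpa auto-parallelizes a JAX train step via @alpa.parallelize.
# The model, data, and learning rate are illustrative; multi-node runs typically
# also call alpa.init(cluster="ray") before the first parallelized call.
import alpa
import jax
import jax.numpy as jnp

@alpa.parallelize
def train_step(params, batch):
    def loss_fn(p):
        preds = batch["x"] @ p["w"]                 # toy linear model
        return jnp.mean((preds - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    # Plain SGD update; Alpa decides how to shard params, grads, and compute.
    return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

params = {"w": jnp.zeros((128, 1))}
batch = {"x": jnp.ones((32, 128)), "y": jnp.zeros((32, 1))}
params = train_step(params, batch)                  # Alpa picks a parallelization plan
```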

DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for DL training and inference. Visit deepspeed.ai or the GitHub repo. Tutorials: Megatron-LM GPT2 tutorial, DeepSpeed Chat.
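For a flavour of the API, here is a minimal, hedged sketch of wrapping a PyTorch model with a ZeRO stage-2 config (the model, data, and config values are illustrative assumptions, not a recommended recipe):

```python
# A minimal sketch of DeepSpeed's training engine. Real runs are usually launched
# with the deepspeed launcher (e.g. `deepspeed train.py`) so the distributed
# environment is set up; the toy model and config below are illustrative only.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},  # ZeRO stage 2: partition optimizer state + gradients
}

# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler).
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 1024).to(engine.device)
loss = engine(x).pow(2).mean()   # forward through the wrapped model
engine.backward(loss)            # DeepSpeed handles gradient partitioning / allreduce
engine.step()
```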

Megatron-LM can be found here. Megatron (1, 2, and 3) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. The repository hosts ongoing research on training large transformer language models at scale, with efficient model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Example script: pretrain_gpt3_175B.sh.

Colossal-AI provides a collection of parallel components. It aims to let you write distributed deep learning models the same way you write models on your laptop, and offers user-friendly tools to kickstart distributed training and inference in a few lines. Its open-source solution replicates the ChatGPT training process, ready to go with only 1.6GB of GPU memory and up to 7.73x faster training.

BMTrain is an efficient large model training toolkit that can be used to train large models with tens of billions of parameters. It can train models in a distributed manner while keeping the code as simple as stand-alone training.

Mesh TensorFlow (mtf) is a language for distributed deep learning, capable of specifying a broad class of distributed tensor computations. The purpose of Mesh TensorFlow is to formalize and implement distribution strategies for your computation graph over your hardware/processors. For example: "Split the batch over rows of processors and split the units in the hidden layer across columns of processors." Mesh TensorFlow is implemented as a layer over TensorFlow.

JAX: this tutorial discusses parallelism via jax.Array.
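As a small, hedged sketch of the idea (it assumes a host with eight accelerator devices, e.g. a TPU slice; the array shapes and mesh axis names are illustrative):

```python
# A minimal sketch of jax.Array sharding with a NamedSharding over a device mesh.
# Assumes 8 local devices; shapes and axis names below are illustrative assumptions.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = np.array(jax.devices()).reshape(2, 4)      # 2 x 4 device mesh
mesh = Mesh(devices, axis_names=("data", "model"))

x = jnp.ones((512, 1024))
# Shard rows over the "data" axis and columns over the "model" axis.
x_sharded = jax.device_put(x, NamedSharding(mesh, P("data", "model")))

# jit-compiled computations run distributed and propagate shardings automatically.
y = jax.jit(lambda a: a @ a.T)(x_sharded)
print(y.sharding)
```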

Tools for deploying LLM

💙 Haystack

Haystack is an open-source NLP framework that allows you to use LLMs and transformer-based models from Hugging Face, OpenAI and Cohere to interact with your own data. It supports 🔍 Semantic Search, 🤖 Agents, ❓ Question Answering, 📝 Summarization and a range of other applications.
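To make this concrete, a minimal extractive QA pipeline might look like the sketch below (a recent Haystack v1.x API is assumed; the reader model and the toy document are illustrative):

```python
# A minimal sketch of an extractive QA pipeline with Haystack (v1.x API assumed).
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([
    {"content": "LLaMA is a family of foundation language models released by Meta in 2023."}
])

retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)

result = pipeline.run(query="Who released LLaMA?", params={"Retriever": {"top_k": 3}})
print(result["answers"][0].answer)
```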

💬 Sidekick

Sidekick is an open source ETL platform for building LLM apps. It lets you sync data from SaaS tools like Notion, Google Drive, and Confluence to a vector database through an easy-to-use dashboard, and gives you an API endpoint you can use to query data across all your data sources. It cuts the time needed to build customer support bots, workplace search tools, and conversational interfaces using LLMs from days and weeks to hours.

🦜️🔗 LangChain

Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge. This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include ❓ Question Answering over specific documents, 💬 Chatbots and 🤖 Agents.
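As a hedged illustration, a minimal chain that combines a prompt template with an LLM might look like this (the classic LangChain API is assumed; an OPENAI_API_KEY must be set, and the prompt text is illustrative):

```python
# A minimal sketch of chaining a prompt template with an LLM in LangChain.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Give a one-sentence summary of recent research on {topic}.",
)
llm = OpenAI(temperature=0)                 # wraps the OpenAI completion API
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="instruction tuning"))
```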

👋 wechat-chatgpt

Use ChatGPT on WeChat via Wechaty

Tutorials about LLM

  • [Andrej Karpathy] State of GPT video
  • [Hyung Won Chung] Instruction finetuning and RLHF lecture Youtube
  • [Jason Wei] Scaling, emergence, and reasoning in large language models Slides
  • [Susan Zhang] Open Pretrained Transformers Youtube
  • [Ameet Deshpande] How Does ChatGPT Work? Slides
  • [Yao Fu] Pre-training, Instruction Tuning, Alignment, and Specialization: On the Sources of LLM Capabilities (预训练,指令微调,对齐,专业化:论大语言模型能力的来源) Bilibili
  • [Hung-yi Lee] An Analysis of How ChatGPT Works (ChatGPT 原理剖析) Youtube
  • [Jay Mody] GPT in 60 Lines of NumPy Link
  • [ICML 2022] Welcome to the "Big Model" Era: Techniques and Systems to Train and Serve Bigger Models Link
  • [NeurIPS 2022] Foundational Robustness of Foundation Models Link
  • [Andrej Karpathy] Let's build GPT: from scratch, in code, spelled out. Video|Code
  • [DAIR.AI] Prompt Engineering Guide Link
  • [邱锡鹏] Capability Analysis and Applications of Large Language Models (大型语言模型的能力分析与应用) Slides | Video
  • [Philipp Schmid] Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers Link
  • [HuggingFace] Illustrating Reinforcement Learning from Human Feedback (RLHF) Link
  • [HuggingFace] What Makes a Dialog Agent Useful? Link
  • [张俊林] The Road to AGI: Technical Essentials of Large Language Models (通向AGI之路:大型语言模型(LLM)技术精要) Link
  • [大师兄] ChatGPT/InstructGPT Explained (ChatGPT/InstructGPT详解) Link
  • [HeptaAI] The Core of ChatGPT: InstructGPT and PPO Reinforcement Learning from Feedback Instructions (ChatGPT内核:InstructGPT,基于反馈指令的PPO强化学习) Link
  • [Yao Fu] How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources Link
  • [Stephen Wolfram] What Is ChatGPT Doing … and Why Does It Work? Link
  • [Jingfeng Yang] Why did all of the public reproduction of GPT-3 fail? Link
  • [Hung-yi Lee] How ChatGPT Was (Possibly) Made: The Socialization Process of GPT (ChatGPT (可能)是怎麼煉成的 - GPT 社會化的過程) Video

Courses about LLM

  • [DeepLearning.AI] ChatGPT Prompt Engineering for Developers Homepage
  • [Princeton] Understanding Large Language Models Homepage
  • [OpenBMB] Open Course on Large Models (大模型公开课) Homepage (主页)
  • [Stanford] CS224N-Lecture 11: Prompting, Instruction Finetuning, and RLHF Slides
  • [Stanford] CS324-Large Language Models Homepage
  • [Stanford] CS25-Transformers United V2 Homepage
  • [Stanford Webinar] GPT-3 & Beyond Video
  • [李沐] InstructGPT Paper Walkthrough (InstructGPT论文精读) Bilibili Youtube
  • [陳縕儂] OpenAI InstructGPT: Learning from Human Feedback, the Predecessor of ChatGPT (OpenAI InstructGPT 從人類回饋中學習 ChatGPT 的前身) Youtube
  • [李沐] HELM: Holistic Evaluation of Language Models (HELM全面语言模型评测) Bilibili
  • [李沐] GPT, GPT-2, GPT-3 Paper Walkthrough (GPT,GPT-2,GPT-3 论文精读) Bilibili Youtube
  • [Aston Zhang] Chain-of-Thought Paper Walkthrough (Chain of Thought论文) Bilibili Youtube
  • [MIT] Introduction to Data-Centric AI Homepage

Opinions about LLM

Other Awesome Lists

Other Useful Resources

  • Arize-Phoenix - Open-source tool for ML observability that runs in your notebook environment. Monitor and fine tune LLM, CV and Tabular Models.
  • Emergent Mind - The latest AI news, curated & explained by GPT-4.
  • ShareGPT - Share your wildest ChatGPT conversations with one click.
  • Major LLMs + Data Availability
  • 500+ Best AI Tools
  • Cohere Summarize Beta - Introducing Cohere Summarize Beta: A New Endpoint for Text Summarization
  • chatgpt-wrapper - ChatGPT Wrapper is an open-source unofficial Python API and CLI that lets you interact with ChatGPT.
  • Open-evals - A framework extending OpenAI's Evals to different language models.
  • Cursor - Write, edit, and chat about your code with a powerful AI.
  • AutoGPT - an experimental open-source application showcasing the capabilities of the GPT-4 language model.
  • OpenAGI - When LLM Meets Domain Experts.
  • HuggingGPT - Solving AI Tasks with ChatGPT and its Friends in HuggingFace.

Contributing

This is an active repository and your contributions are always welcome!

I will keep some pull requests open if I'm not sure whether they are awesome for LLM; you can vote for them by adding 👍 to them.


If you have any questions about this opinionated list, do not hesitate to contact me at [email protected].

Footnotes

  1. This is not legal advice. Please contact the original authors of the models for more information.
