
Releases: MaartenGr/BERTopic

v0.16.4

09 Oct 10:57
9518035

Fixes

v0.16.3

22 Jul 08:25
2353f4c

Highlights

  • Simplify zero-shot topic modeling by @ianrandman in #2060
  • Option to choose between c-TF-IDF and Topic Embeddings in many functions by @azikoss in #1894
    • Use the use_ctfidf parameter in the following functions to choose between c-TF-IDF and topic embeddings (see the sketch below):
      • hierarchical_topics, reduce_topics, visualize_hierarchy, visualize_heatmap, visualize_topics
  • Linting with Ruff by @afuetterer in #2033
  • Switch from setup.py to pyproject.toml by @afuetterer in #1978
  • In multi-aspect context, allow Main model to be chained by @ddicato in #2002
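
For the use_ctfidf option above, a minimal sketch (assuming a topic_model that was already fitted on docs):

# use_ctfidf=True bases the computation on the c-TF-IDF representation,
# use_ctfidf=False uses the topic embeddings instead
hierarchical_topics = topic_model.hierarchical_topics(docs, use_ctfidf=False)
fig = topic_model.visualize_heatmap(use_ctfidf=False)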

Fixes

v0.16.2

12 May 09:32
ccc9ebd

Fixes:

v0.16.1

21 Apr 14:42
e7369d0

Highlights:

Fixes:

  • Fixed issue with .merge_models seemingly skipping topic #1898
  • Fixed Cohere client.embed TypeError #1904
  • Fixed AttributeError: 'TextGeneration' object has no attribute 'random_state' #1870
  • Fixed topic embeddings not properly updated if all outliers were removed #1838
  • Fixed issue with representation models not properly merging #1762
  • Fixed Embeddings not ordered correctly when using .merge_models #1804
  • Fixed Outlier topic not in the 0th position when using zero-shot topic modeling causing prediction issues (amongst others) #1804
  • Fixed Incorrect label in ZeroShot doc SVG #1732
  • Fixed MultiModalBackend throws error with clip-ViT-B-32-multilingual-v1 #1670
  • Fixed AuthenticationError while using OpenAI() #1678
  • Update FAQ on Apple Silicon by @benz0li in #1901
  • Add documentation DataMapPlot + FAQ for running on Apple Silicon by @dkapitan in #1854
  • Remove commas from pip install reference in readme by @luisoala in #1850
  • Spelling corrections by @joouha in #1801
  • Replacing the deprecated text-ada-001 model with the latest text-embedding-3-small from OpenAI by @atmb4u in #1800
  • Prevent invalid empty input error when retrieving embeddings with openai backend by @liaoelton in #1827
  • Remove spurious warning about missing embedding model by @sliedes in #1774
  • Fix type hint in ClassTfidfTransformer constructor by @snape in #1803
  • Fix typo and simplify wording in OnlineCountVectorizer docstring by @chrisji in #1802
  • Fixed warning when saving a topic model without an embedding model by @zilch42 in #1740
  • Fix bug in TextGeneration by @manveersadhal in #1726
  • Fix an incorrect link to usecases.md by @nicholsonjf in #1731
  • Prevent model argument being passed twice when using generator_kwargs in OpenAI by @ninavandiermen in #1733
  • Several fixes to the docstrings by @arpadikuma in #1719
  • Remove unused cluster_df variable in hierarchical_topics by @shadiakiki1986 in #1701
  • Removed redundant quotation mark by @LawrenceFulton in #1695
  • Fix typo in merge models docs by @zilch42 in #1660

v0.16

27 Nov 08:06
61a2cd2

Highlights:

  • Merge pre-trained BERTopic models with .merge_models
    • Combine models with different representations together!
    • Use this for incremental/online topic modeling to detect new incoming topics
    • First step towards federated learning with BERTopic
  • Zero-shot Topic Modeling
    • Use a predefined list of topics to assign documents
    • If needed, allows for further exploration of undefined topics
  • Seed (domain-specific) words with ClassTfidfTransformer (see the sketch below)
    • Make sure selected words are more likely to end up in the representation without influencing the clustering process
  • Added params to truncate documents to length when using LLMs
  • Added LlamaCPP as a representation model
  • LangChain: Support for LCEL Runnables by @joshuasundance-swca in #1586
  • Added topics parameter to .topics_over_time to select a subset of documents and topics
  • Documentation:
  • Added support for Cohere's Embed v3:
import cohere
from bertopic.backend import CohereBackend

client = cohere.Client("MY_API_KEY")
cohere_model = CohereBackend(
    client,
    embedding_model="embed-english-v3.0",
    embed_kwargs={"input_type": "clustering"}
)
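
For the seed-words highlight above, a minimal sketch (the example words are illustrative, and the seed_words/seed_multiplier parameter names are assumptions based on this release's ClassTfidfTransformer):

from bertopic import BERTopic
from bertopic.vectorizers import ClassTfidfTransformer

# Seeded words get their c-TF-IDF weight boosted, making them more likely to
# end up in the topic representation without affecting the clustering itself
ctfidf_model = ClassTfidfTransformer(
    seed_words=["agent", "robot", "behavior", "policies", "environment"],
    seed_multiplier=2
)
topic_model = BERTopic(ctfidf_model=ctfidf_model)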

Fixes:

Merge Pre-trained BERTopic Models

The new .merge_models feature allows for any number of fitted BERTopic models to be merged. Doing so allows for a number of use cases:

  • Incremental topic modeling -- Continuously merge models together to detect whether new topics have appeared
  • Federated Learning - Train BERTopic models on different clients and combine them on a central server
  • Minimal compute - We can essentially batch the training process into multiple instances to reduce compute
  • Different datasets - When you have different datasets that you want to train separately on, for example with different languages, you can train each model separately and join them after training

To demonstrate merging different topic models with BERTopic, we use the ArXiv paper abstracts to see which topics they generally contain.

First, we train three separate models on different parts of the data:

from umap import UMAP
from bertopic import BERTopic
from datasets import load_dataset

dataset = load_dataset("CShorten/ML-ArXiv-Papers")["train"]

# Extract abstracts to train on and corresponding titles
abstracts_1 = dataset["abstract"][:5_000]
abstracts_2 = dataset["abstract"][5_000:10_000]
abstracts_3 = dataset["abstract"][10_000:15_000]

# Create topic models
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0, metric='cosine', random_state=42)
topic_model_1 = BERTopic(umap_model=umap_model, min_topic_size=20).fit(abstracts_1)
topic_model_2 = BERTopic(umap_model=umap_model, min_topic_size=20).fit(abstracts_2)
topic_model_3 = BERTopic(umap_model=umap_model, min_topic_size=20).fit(abstracts_3)

Then, we can combine all three models into one with .merge_models:

# Combine all models into one
merged_model = BERTopic.merge_models([topic_model_1, topic_model_2, topic_model_3])

Zero-shot Topic Modeling

Zero-shot topic modeling is a technique that allows you to find predefined topics in large amounts of documents. This method not only finds those specific topics but also creates new topics for documents that do not fit the predefined ones. This allows for extensive flexibility as there are three scenarios to explore:
  • No zero-shot topics were detected. This means that none of the documents fit the predefined topics, and a regular BERTopic model is run.
  • Only zero-shot topics were detected. Here, we do not need to find additional topics since all original documents were assigned to one of the predefined topics.
  • Both zero-shot topics and clustered topics were detected. This means that some documents fit the predefined topics while others do not. For the latter, new topics are found.

zeroshot

To use zero-shot BERTopic, we create a list of topics that we want to assign to our documents. However,
the documents may also contain topics that are not in that list. The dataset that we use is a small subset of ArXiv papers.
We know the data and believe it contains at least the following topics: clustering, topic modeling, and large language models.
However, we are not sure whether other topics exist and want to explore those.

Using this feature is straightforward:

from datasets import load_dataset

from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired

# We select a subsample of 5000 abstracts from ArXiv
dataset = load_dataset("CShorten/ML-ArXiv-Papers")["train"]
docs = dataset["abstract"][:5_000]

# We define a number of topics that we know are in the documents
zeroshot_topic_list = ["Clustering", "Topic Modeling", "Large Language Models"]

# We fit our model using the zero-shot topics
# and we define a minimum similarity. For each document,
# if the similarity does not exceed that value, it will be used
# for clustering instead.
topic_model = BERTopic(
    embedding_model="thenlper/gte-small", 
    min_topic_size=15,
    zeroshot_topic_list=zeroshot_topic_list,
    zeroshot_min_similarity=.85,
    representation_model=KeyBERTInspired()
)
topics, _ = topic_m...
Read more

v0.15

30 May 16:49
609d49c

Highlights:

  • Multimodal Topic Modeling
    • Train your topic model on text, images, or images and text!
    • Use the bertopic.backend.MultiModalBackend to embed images, text, both or even caption images!
  • Multi-Aspect Topic Modeling
    • Create multiple topic representations simultaneously
  • Improved Serialization options
    • Push your model to the HuggingFace Hub with .push_to_hf_hub
    • Safer, smaller and more flexible serialization options with safetensors
    • Thanks to a great collaboration with HuggingFace and the authors of BERTransfer!
  • Added new embedding models
    • OpenAI: bertopic.backend.OpenAIBackend
    • Cohere: bertopic.backend.CohereBackend
  • Added example of summarizing topics with OpenAI's GPT-models
  • Added nr_docs and diversity parameters to OpenAI and Cohere representation models
  • Use custom_labels="Aspect1" to use the aspect labels for visualizations instead (see the sketch below)
  • Added cuML support for probability calculation in .transform
  • Updated topic embeddings
    • Centroids by default and c-TF-IDF weighted embeddings for partial_fit and .update_topics
  • Added exponential_backoff parameter to OpenAI model
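
For the custom_labels highlight above, a minimal sketch (assuming a model fitted with an "Aspect1" representation model, as in the multi-aspect example further down):

# Show the labels generated by the "Aspect1" representation model
# instead of the default topic labels
fig = topic_model.visualize_barchart(custom_labels="Aspect1")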

Fixes:

  • Fixed custom prompt not working in TextGeneration
  • Fixed #1142
  • Add additional logic to handle cupy arrays by @metasyn in #1179
  • Fix hierarchy viz and handle any form of distance matrix by @elashrry in #1173
  • Updated languages list by @sam9111 in #1099
  • Added level_scale argument to visualize_hierarchical_documents by @zilch42 in #1106
  • Fix inconsistent naming by @rolanderdei in #1073

Multimodal Topic Modeling

With v0.15, we can now perform multimodal topic modeling in BERTopic! The most basic example of multimodal topic modeling in BERTopic is when you have images that accompany your documents. This means that it is expected that each document has an image and vice versa. Instagram pictures, for example, almost always have some descriptions to them.

In this example, we are going to use images from flickr that each have a caption associated with them:

# NOTE: This requires the `datasets` package which you can 
# install with `pip install datasets`
from datasets import load_dataset

ds = load_dataset("maderix/flickr_bw_rgb")
images = ds["train"]["image"]
docs = ds["train"]["caption"]

The docs variable contains the captions for each image in images. We can now use these variables to run our multimodal example:

from bertopic import BERTopic
from bertopic.representation import VisualRepresentation

# Additional ways of representing a topic
visual_model = VisualRepresentation()

# Make sure to add the `visual_model` to a dictionary
representation_model = {
   "Visual_Aspect":  visual_model,
}
topic_model = BERTopic(representation_model=representation_model, verbose=True)

# Train the model on both the captions and their images
topics, probs = topic_model.fit_transform(docs, images=images)

We can now access our image representations for each topic with topic_model.topic_aspects_["Visual_Aspect"].
If you want an overview of the topic images together with their textual representations in jupyter, you can run the following:

import base64
from io import BytesIO
from IPython.display import HTML

def image_base64(im):
    # `get_thumbnail` is an assumed helper for loading images given as file
    # paths; with the PIL images used in this example, this branch is skipped
    if isinstance(im, str):
        im = get_thumbnail(im)
    with BytesIO() as buffer:
        im.save(buffer, 'jpeg')
        return base64.b64encode(buffer.getvalue()).decode()


def image_formatter(im):
    return f'<img src="data:image/jpeg;base64,{image_base64(im)}">'

# Extract dataframe
df = topic_model.get_topic_info().drop(columns=["Representative_Docs", "Name"])

# Visualize the images
HTML(df.to_html(formatters={'Visual_Aspect': image_formatter}, escape=False))

images_and_text

Multi-aspect Topic Modeling

In this new release, we introduce multi-aspect topic modeling! During the .fit or .fit_transform stages, you can now get multiple representations of a single topic. In practice, it works by generating and storing all kinds of different topic representations (see image below).


The approach is rather straightforward. We might want to represent our topics using a PartOfSpeech representation model but we might also want to try out KeyBERTInspired and compare those representation models. We can do this as follows:

from bertopic.representation import KeyBERTInspired
from bertopic.representation import PartOfSpeech
from bertopic.representation import MaximalMarginalRelevance
from sklearn.datasets import fetch_20newsgroups

# Documents to train on
docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']

# The main representation of a topic
main_representation = KeyBERTInspired()

# Additional ways of representing a topic
aspect_model1 = PartOfSpeech("en_core_web_sm")
aspect_model2 = [KeyBERTInspired(top_n_words=30), MaximalMarginalRelevance(diversity=.5)]

# Add all models together to be run in a single `fit`
representation_model = {
   "Main": main_representation,
   "Aspect1":  aspect_model1,
   "Aspect2":  aspect_model2 
}
topic_model = BERTopic(representation_model=representation_model).fit(docs)

As shown above, to perform multi-aspect topic modeling, we make sure that representation_model is a dictionary where each representation model pipeline is defined.
The main pipeline, that is used in most visualization options, is defined with the "Main" key. All other aspects can be defined however you want. In the example above, the two additional aspects that we are interested in are defined as "Aspect1" and "Aspect2".

After we have fitted our model, we can access all representations with topic_model.get_topic_info():

table

As you can see, there are a number of different representations for our topics that we can inspect. All aspects are found in topic_model.topic_aspects_.
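
A minimal sketch of inspecting one aspect for a single topic (topic 0 is just an example):

# Keywords and weights for topic 0 according to the "Aspect1" representation
print(topic_model.topic_aspects_["Aspect1"][0])

# The "Main" representation remains available through the usual accessor
print(topic_model.get_topic(0))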

Serialization

Saving, loading, and sharing a BERTopic model can be done in several ways. With this new release, it is now advised to go with .safetensors as that allows for a small, safe, and fast method for saving your BERTopic model. However, other formats, such as .pickle and pytorch .bin are also possible.

The methods are used as follows:

topic_model = BERTopic().fit(my_docs)

# Method 1 - safetensors
embedding_model = "sentence-transformers/all-MiniLM-L6-v2"
topic_model.save("path/to/my/model_dir", serialization="safetensors", save_ctfidf=True, save_embedding_model=embedding_model)

# Method 2 - pytorch
embedding_model = "sentence-transformers/all-MiniLM-L6-v2"
topic_model.save("path/to/my/model_dir", serialization="pytorch", save_ctfidf=True, save_embedding_model=embedding_model)

# Method 3 - pickle
topic_model.save("my_model", serialization="pickle")

Saving the topic model with .safetensors or pytorch has a number of advantages:

  • .safetensors is a relatively safe format
  • The resulting model can be very small (often < 20MB) since no sub-models need to be saved
  • Although version control is important, there is a bit more flexibility with respect to specific versions of packages
  • More easily used in production
  • Share models with the HuggingFace Hub

serialization

The above image, showing a model trained on 100,000 documents, demonstrates the difference in size between safetensors, pytorch, and pickle. The difference can mostly be explained by the efficient saving procedure and by the fact that the clustering and dimensionality reduction models are not saved in safetensors/pytorch, since inference can be done based on the topic embeddings.
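
Loading a model saved this way works the same regardless of the serialization method; a minimal sketch (the path and embedding model name match the saving example above):

from bertopic import BERTopic

# When only the name of the embedding model was saved (safetensors/pytorch),
# pass it again so that new documents can be embedded at inference time
loaded_model = BERTopic.load(
    "path/to/my/model_dir",
    embedding_model="sentence-transformers/all-MiniLM-L6-v2"
)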

HuggingFace Hub

When you have created a BERTopic model, you can easily share it with others through the HuggingFace Hub. First, you need to log in to your HuggingFace account:

from huggingface_hub import login
login()

When you have logged in to your HuggingFace account, you can save and upload the model as follows:

from bertopic import BERTopic

# Train model
topic_model = BERTopic().fit(my_docs)

# Push to HuggingFace Hub
topic_model.push_to_hf_hub(
    re...
Read more

v0.14.1

02 Mar 13:19
d665d3f

Features/Fixes

  • Use ChatGPT to create topic representations!
  • Added delay_in_seconds parameter to OpenAI and Cohere representation models for throttling the API
    • Setting this between 5 and 10 seconds makes it easier for trial users to avoid hitting RateLimitErrors
  • Fixed missing title param to visualization methods
  • Fixed probabilities not correctly aligning (#1024)
  • Fix typo in textgenerator @dkopljar27 in #1002

ChatGPT

Within OpenAI's API, the ChatGPT models use a different API structure compared to the GPT-3 models.
In order to use ChatGPT with BERTopic, we need to define the model and make sure to set chat=True:

import openai
from bertopic import BERTopic
from bertopic.representation import OpenAI

# Create your representation model
openai.api_key = MY_API_KEY
representation_model = OpenAI(model="gpt-3.5-turbo", delay_in_seconds=10, chat=True)

# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)

Prompting with ChatGPT is very satisfying and can be customized in BERTopic by using certain tags.
There are currently two tags, namely "[KEYWORDS]" and "[DOCUMENTS]".
These tags indicate where in the prompt they are to be replaced with a topic's keywords and its top 4 most representative documents, respectively.
For example, if we have the following prompt:

prompt = """
I have a topic that contains the following documents: \n[DOCUMENTS]
The topic is described by the following keywords: [KEYWORDS]

Based on the information above, extract a short topic label in the following format:
topic: <topic label>
"""

then that will be rendered as follows and passed to OpenAI's API:

"""
I have a topic that contains the following documents: 
- Our videos are also made possible by your support on patreon.co.
- If you want to help us make more videos, you can do so on patreon.com or get one of our posters from our shop.
- If you want to help us make more videos, you can do so there.
- And if you want to support us in our endeavor to survive in the world of online video, and make more videos, you can do so on patreon.com.

The topic is described by the following keywords: videos video you our support want this us channel patreon make on we if facebook to patreoncom can for and more watch 

Based on the information above, extract a short topic label in the following format:
topic: <topic label>
"""

Note
Whenever you create a custom prompt, it is important to add

Based on the information above, extract a short topic label in the following format:
topic: <topic label>

at the end of your prompt as BERTopic extracts everything that comes after topic: . Having
said that, if topic: is not in the output, then it will simply extract the entire response, so
feel free to experiment with the prompts.
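
Continuing the example above, such a custom prompt can be passed to the representation model through its prompt parameter; a minimal sketch:

# Use the custom prompt defined above when generating topic labels
representation_model = OpenAI(model="gpt-3.5-turbo", delay_in_seconds=10, chat=True, prompt=prompt)
topic_model = BERTopic(representation_model=representation_model)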

v0.14.0

14 Feb 13:48
7142ce7

Highlights

  • Fine-tune topic representations with bertopic.representation
    • Diverse range of models, including KeyBERT, MMR, POS, Transformers, OpenAI, and more!
    • Create your own prompts for text generation models, like GPT3:
      • Use "[KEYWORDS]" and "[DOCUMENTS]" in the prompt to decide where the keywords and the set of representative documents need to be inserted.
    • Chain models to perform fine-grained fine-tuning (see the sketch below)
    • Create and customize your representation model
  • Improved the topic reduction technique when using nr_topics=int
  • Added title parameters for all graphs (#800)
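
For the chaining highlight above, a minimal sketch: passing a list of representation models runs them in sequence, with each model fine-tuning the output of the previous one.

from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired, MaximalMarginalRelevance

# First extract keywords with a KeyBERT-inspired model,
# then diversify them with Maximal Marginal Relevance
representation_model = [
    KeyBERTInspired(top_n_words=30),
    MaximalMarginalRelevance(diversity=0.3),
]
topic_model = BERTopic(representation_model=representation_model)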

Fixes

  • Improve documentation (#837, #769, #954, #912, #911)
  • Bump pyyaml (#903)
  • Fix large number of representative docs (#965)
  • Prevent stochastic behavior in .visualize_topics (#952)
  • Add custom labels parameter to .visualize_topics (#976)
  • Fix cuML HDBSCAN type checks by @FelSiq in #981

API Changes

  • The diversity parameter was removed in favor of bertopic.representation.MaximalMarginalRelevance
  • The representation_model parameter was added to bertopic.BERTopic

Representation Models

Fine-tune the c-TF-IDF representation with a variety of models. Whether that is through a KeyBERT-Inspired model or GPT-3, the choice is up to you!

Fourteen.mp4

KeyBERTInspired

The algorithm follows some principles of KeyBERT but does some optimization in order to speed up inference. Usage is straightforward:

keybertinspired

from bertopic.representation import KeyBERTInspired
from bertopic import BERTopic
# Create your representation model
representation_model = KeyBERTInspired()
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)

keybert

PartOfSpeech

Our candidate topics, as extracted with c-TF-IDF, do not take into account a keyword's part of speech, as extracting noun phrases from all documents can be computationally quite expensive. Instead, we can leverage c-TF-IDF to perform part-of-speech filtering on a subset of keywords and documents that best represent a topic.

partofspeech

from bertopic.representation import PartOfSpeech
from bertopic import BERTopic
# Create your representation model
representation_model = PartOfSpeech("en_core_web_sm")
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)

pos

MaximalMarginalRelevance

When we calculate the weights of keywords, we typically do not consider whether we already have similar keywords in our topic. Words like "car" and "cars"
essentially represent the same information and are often redundant. We can use MaximalMarginalRelevance to improve the diversity of our candidate topics:

mmr

from bertopic.representation import MaximalMarginalRelevance
from bertopic import BERTopic
# Create your representation model
representation_model = MaximalMarginalRelevance(diversity=0.3)
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)

mmr (1)

Zero-Shot Classification

To perform zero-shot classification, we feed the model with the keywords as generated through c-TF-IDF and a set of candidate labels. If, for a certain topic, we find a similar enough label, then it is assigned. If not, then we keep the original c-TF-IDF keywords.

We use it in BERTopic as follows:

from bertopic.representation import ZeroShotClassification
from bertopic import BERTopic
# Create your representation model
candidate_topics = ["space and nasa", "bicycles", "sports"]
representation_model = ZeroShotClassification(candidate_topics, model="facebook/bart-large-mnli")
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)

zero

Text Generation: 🤗 Transformers

Nearly every week, there are new and improved models released on the 🤗 Model Hub that, with some creativity, allow for
further fine-tuning of our c-TF-IDF based topics. These models range from text generation to zero-shot classification. In BERTopic, wrappers around these
methods are created as a way to support whatever might be released in the future.

Using a GPT-like model from the 🤗 Model Hub is rather straightforward:

from bertopic.representation import TextGeneration
from bertopic import BERTopic
# Create your representation model
representation_model = TextGeneration('gpt2')
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)

hf

Text Generation: Cohere

Instead of using a language model from 🤗 transformers, we can use external APIs that do the work for you.
Here, we can use Cohere to extract our topic labels from the candidate documents and keywords.
To use this, you will need to install cohere first:

pip install cohere

Then, get yourself an API key and use Cohere's API as follows:

import cohere
from bertopic.representation import Cohere
from bertopic import BERTopic
# Create your representation model
co = cohere.Client(my_api_key)
representation_model = Cohere(co)
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)

cohere

Text Generation: OpenAI

Instead of using a language model from 🤗 transformers, we can use external APIs that do the work for you.
Here, we can use OpenAI to extract our topic labels from the candidate documents and keywords.
To use this, you will need to install openai first:

pip install openai

Then, get yourself an API key and use OpenAI's API as follows:

import openai
from bertopic.representation import OpenAI
from bertopic import BERTopic
# Create your representation model
openai.api_key = MY_API_KEY
representation_model = OpenAI()
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)

openai

Text Generation: LangChain

Langchain is a package that helps users...

Read more

v0.13.0

04 Jan 11:27
06dcd47

Highlights

  • Calculate topic distributions with .approximate_distribution regardless of the cluster model used
    • Generates topic distributions at the document and token level
    • Can be used for any document regardless of its size!
  • Fully supervised BERTopic
    • You can now use a classification model in place of the clustering step to create a fully supervised topic model
  • Manual topic modeling
    • Generate topic representations from labels directly
    • Allows for skipping the embedding and clustering steps in order to go directly to the topic representation step
  • Reduce outliers with 4 different strategies using .reduce_outliers
  • Install BERTopic without SentenceTransformers for a lightweight package:
    • pip install --no-deps bertopic
    • pip install --upgrade numpy hdbscan umap-learn pandas scikit-learn tqdm plotly pyyaml
  • Get metadata of trained documents, such as topics and probabilities, using .get_document_info(docs) (see the sketch below)
  • Added more support for cuML's HDBSCAN
    • Calculate and predict probabilities during fit_transform and transform respectively
    • This should give a major speed-up when setting calculate_probabilities=True
  • More images to the documentation and a lot of changes/updates/clarifications
  • Get representative documents for non-HDBSCAN models by comparing document and topic c-TF-IDF representations
  • Sklearn Pipeline Embedder by @koaning in #791
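
For the document metadata highlight above, a minimal sketch (assuming a fitted model and the documents it was trained on):

# One row per document: its assigned topic, probability, top-n words,
# and whether it is a representative document for that topic
doc_info = topic_model.get_document_info(docs)
doc_info.head()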

Fixes

Documentation

Personally, I believe that documentation can be seen as a feature and is an often underestimated aspect of open-source. So I went a bit overboard😅... and created an animation about the three pillars of BERTopic using Manim. There are many other visualizations added, one of each variation of BERTopic, and many smaller changes.

BERTopicOverview.mp4

Topic Distributions

The difficulty with a cluster-based topic modeling technique is that it does not directly consider that documents may contain multiple topics. With the new release, we can now model the distributions of topics! We even consider that a single word might be related to multiple topics. If a document is a mixture of topics, what is preventing a single word from being the same?

approximate_distribution (1)

To do so, we approximate the distribution of topics in a document by calculating and summing the similarities of tokensets (achieved by applying a sliding window) with the topics:

# After fitting your model run the following for either your trained documents or even unseen documents
topic_distr, _ = topic_model.approximate_distribution(docs)

To calculate and visualize the topic distributions in a document on a token-level, we can run the following:

# We need to calculate the topic distributions on a token level
topic_distr, topic_token_distr = topic_model.approximate_distribution(docs, calculate_tokens=True)

# Create a visualization using a styled dataframe if Jinja2 is installed
df = topic_model.visualize_approximate_distribution(docs[0], topic_token_distr[0]); df

image

Supervised Topic Modeling

BERTopic now supports fully-supervised classification! Instead of using a clustering algorithm, like HDBSCAN, we can replace it with a classifier, like Logistic Regression.

prediction (2)

from bertopic import BERTopic
from bertopic.dimensionality import BaseDimensionalityReduction
from sklearn.datasets import fetch_20newsgroups
from sklearn.linear_model import LogisticRegression

# Get labeled data
data = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))
docs = data['data']
y = data['target']

# Allows us to skip over the dimensionality reduction step
empty_dimensionality_model = BaseDimensionalityReduction()

# Create a classifier to be used instead of the cluster model
clf = LogisticRegression()

# Create a fully supervised BERTopic instance
topic_model = BERTopic(
    umap_model=empty_dimensionality_model,
    hdbscan_model=clf
)
topics, probs = topic_model.fit_transform(docs, y=y)

Manual Topic Modeling

When you already have a bunch of labels and simply want to extract topic representations from them, you might not need to actually learn how those can be predicted. We can bypass the embeddings -> dimensionality reduction -> clustering steps and go straight to the c-TF-IDF representation of our labels.

from bertopic import BERTopic
from bertopic.backend import BaseEmbedder
from bertopic.cluster import BaseCluster
from bertopic.dimensionality import BaseDimensionalityReduction

# Prepare our empty sub-models
empty_embedding_model = BaseEmbedder()
empty_dimensionality_model = BaseDimensionalityReduction()
empty_cluster_model = BaseCluster()

# Fit BERTopic without actually performing any clustering
# (`docs` and `y` are the labeled documents and targets from the example above)
topic_model = BERTopic(
    embedding_model=empty_embedding_model,
    umap_model=empty_dimensionality_model,
    hdbscan_model=empty_cluster_model,
)
topics, probs = topic_model.fit_transform(docs, y=y)

Outlier Reduction

Outlier reduction is a frequently-discussed topic in BERTopic as its default cluster model, HDBSCAN, has a tendency to generate many outliers. This often helps in the topic representation steps, as we do not consider documents that are less relevant, but you might still want to assign those outliers to actual topics. In the modular philosophy of BERTopic, keeping training times in mind, it is now possible to perform outlier reduction after having trained your topic model. This allows for ease of iteration and prevents having to train BERTopic many times to find the parameters you are searching for. There are 4 different strategies that you can use, so make sure to check out the documentation!

Using it is rather straightforward:

new_topics = topic_model.reduce_outliers(docs, topics)
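
The strategy can be chosen with the strategy parameter, and the topic representations can afterwards be refreshed with the new assignments; a minimal sketch:

# Re-assign outlier documents based on c-TF-IDF similarity to the topics
new_topics = topic_model.reduce_outliers(docs, topics, strategy="c-tf-idf")

# Update the topic representations using the new, outlier-free assignments
topic_model.update_topics(docs, topics=new_topics)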

Lightweight BERTopic

The default embedding model in BERTopic is one of the amazing sentence-transformers models, namely "all-MiniLM-L6-v2". Although this model performs well out of the box, it typically needs a GPU to transform the documents into embeddings in a reasonable time. Moreover, the installation requires pytorch which often results in a rather large environment, memory-wise.

Fortunately, it is possible to install BERTopic without sentence-transformers and use it as a lightweight solution instead. The installation can be done as follows:

pip i...
Read more

v0.12.0

11 Sep 10:36
09c1732

Highlights

  • Perform online/incremental topic modeling with .partial_fit
  • Expose c-TF-IDF model for customization with bertopic.vectorizers.ClassTfidfTransformer
    • The parameters bm25_weighting and reduce_frequent_words were added to potentially improve representations:
  • Expose attributes for easier access to internal data
  • Added many tests with the intention of making development a bit more stable

Documentation

Fixes

  • Fixed iteratively merging topics (#632 and #648)
  • Fixed 0th topic not showing up in visualizations (#667)
  • Fixed lowercasing not being optional (#682)
  • Fixed spelling (#664 and #673)
  • Fixed 0th topic not shown in .get_topic_info by @oxymor0n in #660
  • Fixed spelling by @domenicrosati in #674
  • Add custom labels and title options to barchart by @leloykun in #694

Online/incremental topic modeling

Online topic modeling (sometimes called "incremental topic modeling") is the ability to learn incrementally from a mini-batch of instances. Essentially, it is a way to update your topic model with data on which it was not trained before. In Scikit-Learn, this technique is often modeled through a .partial_fit function, which is also used in BERTopic.

At a minimum, the cluster model needs to support a .partial_fit function in order to use this feature. The default HDBSCAN model will not work as it does not support online updating.

from sklearn.datasets import fetch_20newsgroups
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import IncrementalPCA
from bertopic.vectorizers import OnlineCountVectorizer
from bertopic import BERTopic

# Prepare documents
all_docs = fetch_20newsgroups(subset="all",  remove=('headers', 'footers', 'quotes'))["data"]
doc_chunks = [all_docs[i:i+1000] for i in range(0, len(all_docs), 1000)]

# Prepare sub-models that support online learning
umap_model = IncrementalPCA(n_components=5)
cluster_model = MiniBatchKMeans(n_clusters=50, random_state=0)
vectorizer_model = OnlineCountVectorizer(stop_words="english", decay=.01)

topic_model = BERTopic(umap_model=umap_model,
                       hdbscan_model=cluster_model,
                       vectorizer_model=vectorizer_model)

# Incrementally fit the topic model by training on 1000 documents at a time
for docs in doc_chunks:
    topic_model.partial_fit(docs)

Only the topics for the most recent batch of documents are tracked. If you want to use online topic modeling not for a streaming setting but merely for low-memory use cases, it is advised to also update the .topics_ attribute, as variations such as hierarchical topic modeling will otherwise not work afterward:

# Incrementally fit the topic model by training on 1000 documents at a time and tracking the topics in each iteration
topics = []
for docs in doc_chunks:
    topic_model.partial_fit(docs)
    topics.extend(topic_model.topics_)

topic_model.topics_ = topics

c-TF-IDF

Explicitly define, use, and adjust the ClassTfidfTransformer with new parameters, bm25_weighting and reduce_frequent_words, to potentially improve the topic representation:

from bertopic import BERTopic
from bertopic.vectorizers import ClassTfidfTransformer

ctfidf_model = ClassTfidfTransformer(bm25_weighting=True)
topic_model = BERTopic(ctfidf_model=ctfidf_model)

Attributes

After having fitted your BERTopic instance, you can use the following attributes to have quick access to certain information, such as the topic assignment for each document in topic_model.topics_.

| Attribute | Type | Description |
|---|---|---|
| topics_ | List[int] | The topics that are generated for each document after training or updating the topic model. The most recent topics are tracked. |
| probabilities_ | List[float] | The probability of the assigned topic per document. These are only calculated if an HDBSCAN model is used for the clustering step. When calculate_probabilities=True, then it is the probabilities of all topics per document. |
| topic_sizes_ | Mapping[int, int] | The size of each topic. |
| topic_mapper_ | TopicMapper | A class for tracking topics and their mappings anytime they are merged, reduced, added, or removed. |
| topic_representations_ | Mapping[int, Tuple[int, float]] | The top n terms per topic and their respective c-TF-IDF values. |
| c_tf_idf_ | csr_matrix | The topic-term matrix as calculated through c-TF-IDF. To access its respective words, run .vectorizer_model.get_feature_names() or .vectorizer_model.get_feature_names_out() |
| topic_labels_ | Mapping[int, str] | The default labels for each topic. |
| custom_labels_ | List[str] | Custom labels for each topic as generated through .set_topic_labels. |
| topic_embeddings_ | np.ndarray | The embeddings for each topic. It is calculated by taking the weighted average of word embeddings in a topic based on their c-TF-IDF values. |
| representative_docs_ | Mapping[int, str] | The representative documents for each topic if HDBSCAN is used. |
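
A minimal sketch of accessing a few of these attributes after fitting:

topic_model = BERTopic().fit(docs)

# Topic assignment per document and the size of each topic
print(topic_model.topics_[:10])
print(topic_model.topic_sizes_)

# Top terms and their c-TF-IDF values for topic 0
print(topic_model.topic_representations_[0])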