Model Overview
In any use case it is important that practitioners understand the implications of their choices. This page gives an overview of the models in the package, so that you can find the right one for your particular application.
What is a topic?
Models in Turftopic provide answers to this question that broadly fall into two categories:
- A topic is a dimension/factor of semantics. These models try to find the axes along which most of the variance in semantics can be explained. These include S³, KeyNMF and Autoencoding Models. A clear advantage of using these models is that they can capture multiple topics in a document and usually capture nuances in semantics better.
- A topic is a cluster of documents. These models conceptualize a topic as a group of documents that are closely related to each other. The advantage of using these models is that they are perhaps more aligned with human intuition about what a "topic" is. On the other hand, they can only capture nuances in topical content in documents to a limited extent.
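To make the distinction concrete, here is a minimal sketch of fitting one model from each family. It assumes the package exposes `KeyNMF` and a `ClusteringTopicModel` class for its clustering models, and that the number of topics can be passed as the first constructor argument; the corpus is a placeholder for your own documents.

```python
from turftopic import KeyNMF, ClusteringTopicModel

corpus = [...]  # placeholder: your documents as a list of strings

# Factor model: a topic is a dimension of semantics,
# so the number of topics is specified up front.
factor_model = KeyNMF(10)
factor_model.fit(corpus)

# Cluster model: a topic is a group of documents,
# and the number of topics is typically discovered automatically.
cluster_model = ClusteringTopicModel()
cluster_model.fit(corpus)
```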
Document Representations
All models in Turftopic use contextualized representations from transformers at some point in the process to learn topics. Documents, however, are represented differently internally, and this has an effect on how the models behave:
- In most models the documents are directly represented by the embeddings (S³, Clustering, GMM). The advantage of this is that at no point in the process do we lose contextual information.
- In KeyNMF documents are represented with keyword importances. This means that some of the contextual nuances get lost in the process before topic discovery. As a result of this, KeyNMF models dimensions of semantics in word content, not the continuous semantic space. In practice this rarely presents a challenge, but topics in KeyNMF might be less interesting or novel than in other models, and might resemble classical topic models more.
- In Autoencoding Models embeddings are only used in the encoder network, but the models describe the generative process of Bag-of-Words representations. This is not ideal, as all too often contextual nuances get lost in the modeling process.
Theoretical Comparison
Model | Conceptualization | #N Topics | Term Importance | Document Representation | Inference | Multilingual |
---|---|---|---|---|---|---|
S³ | Dimension/Factor | Manual | Decomposition | Embedding | Inductive | |
KeyNMF | Dimension/Factor | Manual | Parameters | Keywords | Inductive | |
GMM | Cluster/Mixture Component | Manual | c-TF-IDF | Embedding | Inductive | |
Clustering Models | Cluster/Mixture Component | Automatic | c-TF-IDF / Centroid Proximity | Embedding | Transductive | |
Autoencoding Models | Dimension/Factor | Manual | Parameters | Embedding + BoW | Inductive | |
Inference
Models in Turftopic use two different types of inference, and this has a number of implications.
- Most models are inductive, meaning that they aim to recover some underlying structure that results in the observed data. Inductive models can be used for inference over novel data at any time.
- Clustering models that use HDBSCAN, DBSCAN or OPTICS are transductive. This means that the models have no theory of underlying semantic structures, but simply describe the dataset at hand. As a consequence, direct inference on unseen documents is not possible.
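The sketch below shows what this means in practice. Note that calling `transform()` on an inductive model is assumed here based on the models' sklearn-style interface; only `fit()` and `fit_transform()` are documented in the base API on this page.

```python
from turftopic import KeyNMF, ClusteringTopicModel

corpus = [...]     # placeholder: documents available at fitting time
new_docs = [...]   # placeholder: documents that arrive later

# Inductive model: recovers an underlying structure,
# so topics can be inferred for unseen documents.
inductive = KeyNMF(10)
inductive.fit(corpus)
new_doc_topics = inductive.transform(new_docs)  # assumed sklearn-style transform

# Transductive model (e.g. HDBSCAN-based clustering):
# it only describes the dataset it was fitted on.
transductive = ClusteringTopicModel()
doc_topics = transductive.fit_transform(corpus)
# transductive.transform(new_docs)  # not possible: no underlying model to infer from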
Term Importance
Term importances in different models are calculated differently.
- Some models (KeyNMF, Autoencoding) have built-in term importance estimation, as term importances are literally part of the models' parameters. This means that term importances are inferential: they make a claim about underlying semantic structures. A potential drawback is that, if the vocabulary is very large, the models can be impacted by the curse of dimensionality, resulting in poor convergence or slow inference.
- Other models (GMM, Clustering) use post-hoc measures to determine term importance. In other words, term importances are descriptive. Inferring term importance is much more efficient for these methods, but they make no claims about the underlying semantics that give rise to these importances.
- S³ decomposes the vocabulary with an already fitted model. As a result, the model can generalize over all sorts of corpora and can be described with different vocabularies. This places it somewhere in between inferential and descriptive methods.
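Regardless of how they are computed, term importances can be inspected through the same base API (`get_topics()` and `print_topics()` are documented below). Accessing the raw importance matrix through a `components_` attribute is an assumption based on the models' sklearn-style parameterization, so it is left commented out in this sketch.

```python
from turftopic import KeyNMF

corpus = [...]  # placeholder: your documents

model = KeyNMF(10)
model.fit(corpus)

# Human-readable table of the most important terms per topic
model.print_topics(top_k=10, show_scores=True)

# Programmatic access: list of (topic_id, [(word, importance), ...]) tuples
topics = model.get_topics(top_k=10)

# Raw topic-term importance matrix (assumed sklearn-style attribute)
# importances = model.components_
```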
Which model should I choose?
The model you should use for any particular application will, of course, be influenced by a number of factors that you should consider. The tables on this page give you a general overview of a handful of practical aspects of the models.
Practical Comparison
Model | Scalability | Ideal Document Length | Speed | Stability | Robustness to Noise | Embedding Size |
---|---|---|---|---|---|---|
S³ | Moderate | Short, Medium, Long | Fast | Moderate | Good | Any |
KeyNMF | Very High | Medium, Long | Moderate | Stable | Very Good | Any |
GMM | Moderate | Short, Medium | Moderate | Moderate | Good | Limited |
Clustering Models | Low | Short, Medium | Moderate | Volatile | Very Good (centroid), Moderate (c-TF-IDF) | Any |
Autoencoding Models | Low | Hard to Tell | Slow | Volatile | Poor | Limited |
Here is an opinionated guide for common use cases:
1. When in doubt, use KeyNMF.
When you can't make an informed decision about which model is optimal for your use case, or you just want to get your hands dirty with topic modeling, KeyNMF is the best option. It is very stable, gives high quality topics, and is incredibly robust to noise. It is also the closest to classical topic models and thus conforms to your intuition about topic modeling.
Another advantage is that KeyNMF is the most scalable and fail-safe option, meaning that you can use it on enormous corpora.
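A minimal quick start might look like the sketch below, using 20 Newsgroups as an example corpus; the choice of 20 topics is arbitrary, and passing the number of topics as the first constructor argument is assumed.

```python
from sklearn.datasets import fetch_20newsgroups
from turftopic import KeyNMF

corpus = fetch_20newsgroups(subset="all").data

model = KeyNMF(20)                       # number of topics chosen manually
doc_topic = model.fit_transform(corpus)  # document-topic matrix
model.print_topics()                     # inspect the discovered topics
```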
2. Short Texts - use Clustering or GMM
On tweets and short texts in general, making the assumption that a document only contains one topic is very reasonable. Clustering models and GMM are very good in this context and should be preferred over other options.
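A sketch, under the assumption that the package exposes `GMM` and `ClusteringTopicModel` classes and a placeholder list of short documents:

```python
from turftopic import GMM, ClusteringTopicModel

tweets = [...]  # placeholder: short documents such as tweets

# GMM: the number of topics is set manually and inference stays inductive.
gmm = GMM(15)
gmm.fit(tweets)
gmm.print_topics()

# Clustering model: the number of topics is discovered automatically,
# but inference on unseen documents is not possible (transductive).
clustering = ClusteringTopicModel()
clustering.fit(tweets)
clustering.print_topics()
```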
3. Want to understand variation? Use S³
S³ is by far the best model to explain variations in semantics. If you are looking for a model that can help you establish a theory of semantics in a corpus, S³ is an excellent choice.
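A sketch, assuming S³ is exposed as `SemanticSignalSeparation` and the corpus is a placeholder:

```python
from turftopic import SemanticSignalSeparation

corpus = [...]  # placeholder: your documents

# Each topic is an axis of semantic variation in the corpus.
model = SemanticSignalSeparation(5)
doc_axes = model.fit_transform(corpus)   # documents' coordinates along each axis
model.print_topics(show_negative=True)   # both ends of an axis can be informative
```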
4. Avoid using Autoencoding Models.
In my anecdotal experience and in all experiments I've done with topic models, Autoencoding Models were consistently outclassed by every other option, and their behaviour is also incredibly opaque. Convergence issues and overlapping topics are a common occurrence. As such, unless you have a specific reason to use them, I would recommend that your first choice be another model on the list.
Base API Reference
turftopic.base.ContextualModel
Bases: ABC, TransformerMixin, BaseEstimator
Base class for contextual topic models in Turftopic.
topic_names: list[str]
property
Names of the topics based on the highest scoring 4 terms.
encode_documents(raw_documents)
Encodes documents with the sentence encoder of the topic model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
raw_documents | Iterable[str] | Textual documents to encode. | required |
Returns:
Type | Description |
---|---|
ndarray of shape (n_documents, n_dimensions) | Matrix of document embeddings. |
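For example (a sketch with a placeholder corpus; `KeyNMF` stands in for any model), the returned embeddings can be reused when fitting, via the documented embeddings parameter of fit():

```python
from turftopic import KeyNMF

corpus = [...]  # placeholder: your documents

model = KeyNMF(10)
embeddings = model.encode_documents(corpus)   # (n_documents, n_dimensions)
model.fit(corpus, embeddings=embeddings)      # reuse the precomputed encodings
```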
export_representative_documents(topic_id, raw_documents, document_topic_matrix=None, top_k=5, show_negative=False, format='csv')
Exports the highest ranking documents in a topic as a text table.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
topic_id | | ID of the topic to display. | required |
raw_documents | | List of documents to consider. | required |
document_topic_matrix | | Document topic matrix to use. This is useful for transductive methods, as they cannot infer topics from text. | None |
top_k | | Top K documents to show. | 5 |
show_negative | bool | Indicates whether lowest ranking documents should also be shown. | False |
format | str | Specifies which format should be used. 'csv', 'latex' and 'markdown' are supported. | 'csv' |
export_topic_distribution(text=None, topic_dist=None, top_k=10, format='csv')
Exports topic distribution as a text table.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
text | | Text to infer topic distribution for. | None |
topic_dist | | Already inferred topic distribution for the text. This is useful for transductive methods, as they cannot infer topics from text. | None |
top_k | int | Top K topics to show. | 10 |
format | | Specifies which format should be used. 'csv', 'latex' and 'markdown' are supported. | 'csv' |
export_topics(top_k=10, show_scores=False, show_negative=False, format='csv')
Exports top K words from topics in a table in a given format. Returns table as a pure string.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
top_k | int | Number of top words to return for each topic. | 10 |
show_scores | bool | Indicates whether to show importance scores for each word. | False |
show_negative | bool | Indicates whether the most negative terms should also be displayed. | False |
format | str | Specifies which format should be used. 'csv', 'latex' and 'markdown' are supported. | 'csv' |
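For example (sketch with a placeholder corpus):

```python
from turftopic import KeyNMF

corpus = [...]  # placeholder: your documents

model = KeyNMF(10)
model.fit(corpus)

# Top 5 words per topic with their scores, as a markdown table string
table = model.export_topics(top_k=5, show_scores=True, format="markdown")
print(table)
```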
fit(raw_documents, y=None, embeddings=None)
Fits model on the given corpus.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
raw_documents | | Documents to fit the model on. | required |
y | | Ignored, exists for sklearn compatibility. | None |
embeddings | Optional[ndarray] | Precomputed document encodings. | None |
fit_transform(raw_documents, y=None, embeddings=None)
abstractmethod
Fits model and infers topic importances for each document.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
raw_documents | | Documents to fit the model on. | required |
y | | Ignored, exists for sklearn compatibility. | None |
embeddings | Optional[ndarray] | Precomputed document encodings. | None |
Returns:
Type | Description |
---|---|
ndarray of shape (n_documents, n_topics) | Document-topic matrix. |
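For example (sketch with a placeholder corpus):

```python
from turftopic import KeyNMF

corpus = [...]  # placeholder: your documents

model = KeyNMF(10)
doc_topic_matrix = model.fit_transform(corpus)
print(doc_topic_matrix.shape)  # (n_documents, n_topics)
```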
get_feature_names_out()
Get topic ids.
Returns:
Type | Description |
---|---|
ndarray of shape (n_topics) | IDs for each output feature of the model. This is useful, since some models have outlier detection, and this gets -1 as ID, instead of its index. |
get_topics(top_k=10)
Returns high-level topic representations in form of the top K words in each topic.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
top_k | int | Number of top words to return for each topic. | 10 |
Returns:
Type | Description |
---|---|
list[tuple] | List of topics. Each topic is a tuple of topic ID and the top k words. Top k words are a list of (word, word_importance) pairs. |
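For example (sketch with a placeholder corpus):

```python
from turftopic import KeyNMF

corpus = [...]  # placeholder: your documents

model = KeyNMF(10)
model.fit(corpus)

# Iterate over (topic_id, [(word, importance), ...]) tuples
for topic_id, words in model.get_topics(top_k=5):
    top_words = ", ".join(word for word, importance in words)
    print(topic_id, top_words)
```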
get_vocab()
Get vocabulary of the model.
Returns:
Type | Description |
---|---|
ndarray of shape (n_vocab) | All terms in the vocabulary. |
prepare_topic_data(corpus, embeddings=None)
Produces topic inference data for a given corpus that can then be used and reused. Exists to allow out-of-the-box visualizations with topicwizard.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
corpus | List[str] | Documents to infer topical content for. | required |
embeddings | Optional[ndarray] | Embeddings of documents. | None |
Returns:
Type | Description |
---|---|
TopicData | Information about topical inference in a dictionary. |
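For example (sketch with a placeholder corpus; the topicwizard call is commented out because its exact signature depends on the topicwizard version and is not documented here):

```python
from turftopic import KeyNMF

corpus = [...]  # placeholder: your documents

model = KeyNMF(10)
topic_data = model.prepare_topic_data(corpus)

# topic_data is a reusable dictionary of inference results,
# intended for out-of-the-box visualization with topicwizard, e.g.:
# import topicwizard
# topicwizard.visualize(topic_data=topic_data)
```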
print_representative_documents(topic_id, raw_documents, document_topic_matrix=None, top_k=5, show_negative=False)
Pretty prints the highest ranking documents in a topic.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
topic_id | | ID of the topic to display. | required |
raw_documents | | List of documents to consider. | required |
document_topic_matrix | | Document topic matrix to use. This is useful for transductive methods, as they cannot infer topics from text. | None |
top_k | | Top K documents to show. | 5 |
show_negative | bool | Indicates whether lowest ranking documents should also be shown. | False |
print_topic_distribution(text=None, topic_dist=None, top_k=10)
Pretty prints topic distribution in a document.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
text | | Text to infer topic distribution for. | None |
topic_dist | | Already inferred topic distribution for the text. This is useful for transductive methods, as they cannot infer topics from text. | None |
top_k | int | Top K topics to show. | 10 |
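For example (sketch with a placeholder corpus; the second call illustrates the topic_dist route that transductive models need):

```python
from turftopic import KeyNMF

corpus = [...]  # placeholder: your documents

model = KeyNMF(10)
doc_topic_matrix = model.fit_transform(corpus)

# Inductive models can infer the distribution directly from text
model.print_topic_distribution("An unseen piece of text.", top_k=5)

# Transductive models cannot, so pass an already inferred distribution instead
model.print_topic_distribution(topic_dist=doc_topic_matrix[0], top_k=5)
```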
print_topics(top_k=10, show_scores=False, show_negative=False)
Pretty prints topics in the model in a table.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
top_k | int | Number of top words to return for each topic. | 10 |
show_scores | bool | Indicates whether to show importance scores for each word. | False |
show_negative | bool | Indicates whether the most negative terms should also be displayed. | False |