An overview of dimensionality reduction via topics.
Topic modeling is a dimensionality reduction method that is especially useful for high-dimensional count matrices. For example, it can be applied to text data, where each row of the matrix is a document and each column counts how often a particular word appears, or to biological count data, where rows are samples and columns are counts of features like species or sequences.
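To make “count matrix” concrete, here is a minimal sketch of building a document-by-word count matrix from raw text. The use of scikit-learn here is an assumption of the sketch, not a library the post prescribes:

```python
# A minimal sketch: construct a document-term count matrix.
# scikit-learn is an assumed choice; any count-matrix pipeline works.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "congress passed the law",
    "stocks rallied as trade resumed",
    "the federal reserve set monetary policy",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)       # rows: documents, columns: words
print(X.shape)                           # (n_documents, n_words)
print(vectorizer.get_feature_names_out())
```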
For clarity, we will refer to samples as documents and features as words. However, keep in mind that these methods can be used more generally – we will see a biological application three lectures from now.
These models are useful to know about because they provide a compromise between clustering, where each sample belongs to exactly one group, and PCA, where each sample is described by a continuous combination of underlying directions.
Without going into mathematical detail, topic models perform dimensionality reduction by supposing,

1. Each document is a mixture of several topics.
2. Each topic is a distribution over the words in the vocabulary.
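For readers who want the symbolic version of these two assumptions, they combine into a mixture decomposition; the notation below (\(\gamma\) for memberships, \(\beta\) for topics) is one common convention, not something fixed by this post:

\[
p(\text{word } v \mid \text{document } d) = \sum_{k=1}^{K} \gamma_{dk} \beta_{kv},
\]

where \(\gamma_{dk}\) is the degree to which document \(d\) belongs to topic \(k\), and \(\beta_{kv}\) is the probability of word \(v\) within topic \(k\).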
To illustrate the first point, consider modeling a collection of newspaper articles. Some articles might belong primarily to the “politics” topic and others to the “business” topic, while articles describing monetary policy at the Federal Reserve might belong partially to both.
For the second point, consider the difference in words that would appear in politics and business articles. Articles about politics might frequently include words like “congress” and “law,” but only rarely words like “stock” and “trade.”
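The following sketch ties the two points together by fitting a small topic model. scikit-learn's LatentDirichletAllocation and the toy articles are assumptions of the sketch, and with data this small the fitted values will vary:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy "articles": two about politics, two about business, one mixed.
docs = [
    "congress debates the proposed law",
    "senators in congress vote on the law",
    "stock prices rise as trade volumes grow",
    "investors track stock and trade reports",
    "the federal reserve weighs policy for congress and stock markets",
]

X = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)

# First point: each document is a mixture of topics.
memberships = lda.fit_transform(X)          # shape: (n_documents, n_topics)

# Second point: each topic is a distribution over words.
topic_word = lda.components_                # shape: (n_topics, n_words)
topic_word = topic_word / topic_word.sum(axis=1, keepdims=True)

print(np.round(memberships, 2))             # the mixed article should spread across topics
```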
Geometrically, latent Dirichlet allocation (LDA), the most common topic model, can be represented by the following picture. The corners of the simplex represent different words (in reality, there would be \(V\) different corners to this simplex, one for each word). A topic is a point on this simplex; the closer the topic is to one of the corners, the more frequently that word appears in the topic.
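Concretely, the simplex in this picture is the set of all probability vectors over the vocabulary, and each topic is one point in it. Writing the topics as \(\beta_{1}, \dots, \beta_{K}\) follows the notation assumed above:

\[
\Delta^{V - 1} = \left\{ \beta \in \mathbb{R}^{V} \,:\, \beta_{v} \geq 0 \text{ for all } v, \ \sum_{v = 1}^{V} \beta_{v} = 1 \right\}, \qquad \beta_{1}, \dots, \beta_{K} \in \Delta^{V - 1}.
\]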