Now, after implementing Gibbs sampling for LDA learning, I can appreciate the usefulness of such a generative process. The generative process in the LDA model defines a joint distribution over the observed words and the hidden variables:
For a document $\mathbf{w} = (w_1, \dots, w_N)$ with topic assignments $\mathbf{z}$, topic proportions $\theta \sim \mathrm{Dir}(\alpha)$, and topic-word distributions $\phi_k \sim \mathrm{Dir}(\beta)$,

$$p(\mathbf{w}, \mathbf{z}, \theta, \phi \mid \alpha, \beta) = \prod_{k=1}^{K} p(\phi_k \mid \beta)\; p(\theta \mid \alpha) \prod_{n=1}^{N} p(z_n \mid \theta)\, p(w_n \mid z_n, \phi_{z_n}).$$

Marginalizing out the hidden variables gives the distribution of the observed words:

$$p(\mathbf{w} \mid \alpha, \beta) = \int\!\!\int p(\phi \mid \beta)\, p(\theta \mid \alpha) \prod_{n=1}^{N} \sum_{z_n} p(z_n \mid \theta)\, p(w_n \mid z_n, \phi_{z_n})\, d\theta\, d\phi.$$
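To make the generative process concrete, here is a minimal forward-simulation sketch in NumPy. The function name `generate_corpus` and all parameter values are my own for illustration, not from any actual implementation discussed here:

```python
import numpy as np

def generate_corpus(n_docs, doc_len, K, V, alpha, beta, seed=0):
    """Forward-simulate LDA: draw topics first, then documents."""
    rng = np.random.default_rng(seed)
    # Topic-word distributions: phi_k ~ Dirichlet(beta), one per topic.
    phi = rng.dirichlet(np.full(V, beta), size=K)   # shape (K, V)
    docs, assignments = [], []
    for _ in range(n_docs):
        # Document-topic proportions: theta ~ Dirichlet(alpha).
        theta = rng.dirichlet(np.full(K, alpha))
        # Each token: draw a topic z_n ~ theta, then a word w_n ~ phi_{z_n}.
        z = rng.choice(K, size=doc_len, p=theta)
        w = np.array([rng.choice(V, p=phi[k]) for k in z])
        docs.append(w)
        assignments.append(z)
    return docs, assignments

docs, z_true = generate_corpus(n_docs=100, doc_len=50, K=5, V=1000,
                               alpha=0.1, beta=0.01)
```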
Generally, for a model with observed variables $\mathbf{x}$ and hidden variables $\mathbf{z}$, the idea of Gibbs sampling is to sample one hidden variable at a time, conditioned on all the other hidden variables and the observed ones, i.e., to draw samples from
$$p(z_i \mid \mathbf{z}_{-i}, \mathbf{x}).$$

For LDA, with $\theta$ and $\phi$ integrated out, this conditional has the well-known closed form

$$p(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \propto \frac{n_{k,-i}^{(w_i)} + \beta}{n_{k,-i}^{(\cdot)} + V\beta} \cdot \frac{n_{m,-i}^{(k)} + \alpha}{n_{m,-i}^{(\cdot)} + K\alpha},$$

where $n_{k,-i}^{(w_i)}$ counts how often word $w_i$ is assigned to topic $k$, and $n_{m,-i}^{(k)}$ counts how many tokens in document $m$ are assigned to topic $k$, both excluding position $i$; $V$ is the vocabulary size and $K$ the number of topics.
The generative process of a topic model thus proves its usefulness in the Gibbs sampling procedure: it supplies the joint distribution from which these conditional updates are derived.
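Putting the update formula into code, here is a minimal collapsed Gibbs sampler in the same NumPy style. It reuses `docs` from the sketch above; `init_counts`, `gibbs_sweep`, and the hyperparameter values are illustrative names of mine, not the actual implementation behind this post:

```python
import numpy as np

def init_counts(docs, K, V, rng):
    """Random initial assignments plus the three count tables."""
    M = len(docs)
    n_kw = np.zeros((K, V))   # topic-word counts
    n_k = np.zeros(K)         # per-topic totals
    n_mk = np.zeros((M, K))   # document-topic counts
    z = []
    for m, doc in enumerate(docs):
        zm = rng.choice(K, size=len(doc))
        for w, k in zip(doc, zm):
            n_kw[k, w] += 1; n_k[k] += 1; n_mk[m, k] += 1
        z.append(zm)
    return z, n_kw, n_k, n_mk

def gibbs_sweep(docs, z, n_kw, n_k, n_mk, V, alpha, beta, rng):
    """One pass of collapsed Gibbs sampling over every token."""
    K = len(n_k)
    for m, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[m][i]
            # Remove the current assignment (the "-i" in the formula).
            n_kw[k, w] -= 1; n_k[k] -= 1; n_mk[m, k] -= 1
            # p(z_i = k | z_{-i}, w) up to a constant; the document-length
            # denominator (n_m + K*alpha) is the same for all k, so it drops.
            p = (n_kw[:, w] + beta) / (n_k + V * beta) * (n_mk[m] + alpha)
            k = rng.choice(K, p=p / p.sum())
            # Restore the counts with the freshly sampled topic.
            n_kw[k, w] += 1; n_k[k] += 1; n_mk[m, k] += 1
            z[m][i] = k

rng = np.random.default_rng(0)
z, n_kw, n_k, n_mk = init_counts(docs, K=5, V=1000, rng=rng)
for _ in range(200):  # burn-in sweeps
    gibbs_sweep(docs, z, n_kw, n_k, n_mk, V=1000, alpha=0.1, beta=0.01, rng=rng)
```

Note the design choice that makes the collapsed sampler cheap: because $\theta$ and $\phi$ are integrated out analytically, each update only touches a handful of count-table entries rather than resampling any continuous parameters.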