
2018-10-30 · 886 views
LDA Topic Model Analysis: Learning Notes

I have recently been studying the LDA topic model for some fine-grained opinion mining, and along the way I discovered that R has a dedicated LDA package.

I built a topic model with LDA on two text documents, A and B. Document A is highly related to computer science, and document B is highly related to earth science. I then trained the LDA model with the following commands.

library(tm)           # text preprocessing
library(topicmodels)  # LDA

text <- c(A, B)                  # the two documents introduced above
r <- Corpus(VectorSource(text))  # create corpus object
r <- tm_map(r, content_transformer(tolower))  # convert all text to lower case
r <- tm_map(r, removePunctuation)
r <- tm_map(r, removeNumbers)
r <- tm_map(r, removeWords, stopwords("english"))
# LDA() expects documents in rows, so build a document-term matrix
r.dtm <- DocumentTermMatrix(r, control = list(wordLengths = c(3, Inf)))
my_lda <- LDA(r.dtm, 2)          # fit a two-topic model

Now I want to use my_lda to predict the topic of a new document, say C, and see whether it is more related to computer science or to earth science. I know I can make the prediction with the following code.

x <- C                            # a new document (a long string) introduced above for prediction
rp <- Corpus(VectorSource(x))     # create corpus object
rp <- tm_map(rp, content_transformer(tolower))  # convert all text to lower case
rp <- tm_map(rp, removePunctuation)
rp <- tm_map(rp, removeNumbers)
rp <- tm_map(rp, removeWords, stopwords("english"))
rp.dtm <- DocumentTermMatrix(rp, control = list(wordLengths = c(3, Inf)))
test.topics <- posterior(my_lda, rp.dtm)  # infer topic distribution for C
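For reference, `posterior()` returns a list whose `$topics` element is a documents × topics matrix of posterior probabilities (each row sums to 1). A minimal sketch of reading off the most likely topic for C, assuming the objects above are in scope:

```r
# $topics is a documents-x-topics probability matrix
topic.probs <- test.topics$topics
# index of the most probable topic for each document
predicted <- apply(topic.probs, 1, which.max)
```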

You can extract the most likely terms for each topic from the fitted LDA topic model and use them to replace the black-box numeric topic names with whatever labels you like. The example below uses the AssociatedPress dataset that ships with the topicmodels package.

> library(topicmodels)
> data(AssociatedPress)
>
> train <- AssociatedPress[1:100]
> test <- AssociatedPress[101:150]
>
> train.lda <- LDA(train,2)
>
> #returns those black box names
> test.topics <- posterior(train.lda,test)$topics
> head(test.topics)
              1           2
[1,] 0.57245696 0.427543038
[2,] 0.56281568 0.437184320
[3,] 0.99486888 0.005131122
[4,] 0.45298547 0.547014530
[5,] 0.72006712 0.279932882
[6,] 0.03164725 0.968352746
> #extract top 5 terms for each topic and assign as variable names
> colnames(test.topics) <- apply(terms(train.lda,5),2,paste,collapse=",")
> head(test.topics)
percent,year,i,new,last new,people,i,soviet,states
[1,] 0.57245696 0.427543038
[2,] 0.56281568 0.437184320
[3,] 0.99486888 0.005131122
[4,] 0.45298547 0.547014530
[5,] 0.72006712 0.279932882
[6,] 0.03164725 0.968352746
> #round to one topic if you'd prefer
> test.topics <- apply(test.topics,1,function(x) colnames(test.topics)[which.max(x)])
> head(test.topics)
[1] "percent,year,i,new,last" "percent,year,i,new,last" "percent,year,i,new,last"
[4] "new,people,i,soviet,states" "percent,year,i,new,last" "new,people,i,soviet,states"
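If you already know what each topic represents (say, from inspecting the `terms(train.lda, 5)` output), you can substitute your own human-readable labels for the concatenated term strings. A minimal sketch; the labels and their ordering here are hypothetical and would need to be verified against the terms() output:

```r
# hypothetical labels -- check the topic order against terms() first
topic.labels <- c("economy", "foreign affairs")
probs <- posterior(train.lda, test)$topics          # documents x topics matrix
predicted <- topic.labels[apply(probs, 1, which.max)]
```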