Naive Bayes classifier and training data

Asked: 2019-05-19 07:41:00

Tags: python naivebayes

I am performing sentiment analysis on some tweets using nltk's Naive Bayes classifier. I am training on the corpus file from https://towardsdatascience.com/creating-the-twitter-sentiment-analysis-program-in-python-with-naive-bayes-classification-672e5589a7ed, following the method described there.

While building the training set, I have trained with all ~4000 tweets in the dataset, but I also thought I would test with a much smaller set of just 30 tweets.

When trained on the full set, the classifier only ever returns "neutral" as the label on a new set of tweets, but when trained on just 30 it only returns positive. Does this mean my training data is incomplete, or is it too heavily "weighted" toward neutral entries? And why does my classifier return only neutral when I train on all ~4000 tweets?

I have included the full code below.

import twitter
import csv
import time

twitter_api = twitter.Api(consumer_key = consumer_key,
                          consumer_secret = consumer_secret,
                          access_token_key = access_token,
                          access_token_secret = access_token_secret)
# Test set builder

def buildtestset(keyword):
    try: 
        min_id = None
        tweets = []
        ids = []
        for i in range(0,50):
            tweetsdata = twitter_api.GetSearch(keyword, count = 100, max_id = min_id )
            for t in tweetsdata:
                tweets.append(t)
                ids.append(t.id)  
            min_id = min(ids) - 1 # page backwards without refetching the oldest tweet

        print(str(len(tweets))+ ' tweets found for keyword: '+keyword)
        return[{"text":status.text, "label":None} for status in tweets]

    except Exception as e:
        print('Error while fetching tweets: ' + str(e))
        return None
# Quick test

keyword = 'bicycle'

testdataset = buildtestset(keyword)

# Training set builder

def buildtrainingset(corpusfile,tweetdata): 
    # corpusfile = path to the corpus data
    # tweetdata = path to the file we will save all the tweets to
    corpus = []

    with open(corpusfile,'r') as csvfile:
        linereader = csv.reader(csvfile, delimiter = ',', quotechar = "\"")
        for row in linereader:
            corpus.append({'tweet_id':row[2],'label':row[1],'topic':row[0]})

    # Append every tweet from corpusfile to our corpus list

    rate_limit = 180
    sleep_time = 900/180
    # these are set up so we call enough times to be within twitters guidelines

    # the rest is calling the api of every tweet to get the status object, text associated with it and then put it in our
    # data set - trainingdata
    trainingdata = []
    count = 0
    for tweet in corpus:
        if count < 30:
            try:
                status = twitter_api.GetStatus(tweet['tweet_id'])
                print ('Tweet fetched '+status.text)
                tweet['text'] = status.text
                trainingdata.append(tweet)
                time.sleep(sleep_time)
                count += 1
            except Exception:
                # skip tweets that can no longer be fetched
                count += 1
                continue
        #write tweets to empty csv

    with open(tweetdata,'w',encoding='utf-8',newline='') as csvfile: # newline='' avoids blank rows on Windows
        linewriter = csv.writer(csvfile, delimiter=',',quotechar = "\"")
        for tweet in trainingdata:
            try: 
                linewriter.writerow([tweet['tweet_id'],tweet['text'],tweet['label'],tweet['topic']])

            except Exception as e:
                print(e)
    return trainingdata

corpusfile = (r'C:\Users\zacda\OneDrive\Desktop\DATA2901\Assignment\corpusmaster.csv')
tweetdata = (r'C:\Users\zacda\OneDrive\Desktop\DATA2901\Assignment\tweetdata.csv')

TrainingData = buildtrainingset(corpusfile,tweetdata)

import re # regular expression library 
from nltk.tokenize import word_tokenize
from string import punctuation 
from nltk.corpus import stopwords 

class preprocesstweets:
    def __init__(self):
        self._stopwords = set(stopwords.words('english') + list(punctuation) + ['AT_USER','URL'])

    def processtweets(self, list_of_tweets):
        processedtweets=[]
        for tweet in list_of_tweets:  
            processedtweets.append((self._processtweet(tweet["text"]),tweet["label"]))
        return processedtweets

    def _processtweet(self, tweet):
        tweet = tweet.lower() # convert text to lower-case
        tweet = re.sub(r'((www\.[^\s]+)|(https?://[^\s]+))', 'URL', tweet) # replace URLs with a placeholder
        tweet = re.sub(r'@[^\s]+', 'AT_USER', tweet) # replace usernames with a placeholder
        tweet = re.sub(r'#([^\s]+)', r'\1', tweet) # strip the # from #hashtag
        tweet = word_tokenize(tweet) # split the tweet into tokens
        return [word for word in tweet if word not in self._stopwords]

tweetprocessor = preprocesstweets()
processedtrainingdata = tweetprocessor.processtweets(TrainingData)
processedtestdata = tweetprocessor.processtweets(testdataset)

# all_words collects every word in the training set; word_features is the list of distinct words (via a frequency distribution)
import nltk

def buildvocab(processedtrainingdata):
    all_words = []

    for (words, sentiment) in processedtrainingdata:
        all_words.extend(words)

    wordlist = nltk.FreqDist(all_words)
    word_features = list(wordlist.keys()) # all distinct words in the training set

    return word_features

def extract_features(tweet):
    tweet_words = set(tweet)
    features = {}
    for word in word_features:
        features['contains(%s)' % word] = (word in tweet_words)
        # one boolean feature per vocabulary word: True if it appears in the tweet, False if not
    return features 
# Building the feature vector

word_features = buildvocab(processedtrainingdata)
training_features = nltk.classify.apply_features(extract_features, processedtrainingdata)
# apply features does the actual extraction
# Naive Bayes Classifier 
Nbayes = nltk.NaiveBayesClassifier.train(training_features)

Nbayes_result_labels = [Nbayes.classify(extract_features(tweet[0])) for tweet in processedtestdata]

# Get the majority vote over the predicted labels
if Nbayes_result_labels.count('positive') > Nbayes_result_labels.count('negative'):
    print('Positive')
    print(str(100*Nbayes_result_labels.count('positive')/len(Nbayes_result_labels)))
elif Nbayes_result_labels.count('negative') > Nbayes_result_labels.count('positive'):
    print('Negative')
    print(str(100*Nbayes_result_labels.count('negative')/len(Nbayes_result_labels)))
else:
    print('Neutral')

1 Answer:

Answer 0 (score: 0)

When doing machine learning, we want to learn an algorithm that performs well on new (unseen) data. This is called generalization.

Among other things, the purpose of the test set is to verify the generalization behavior of the classifier. If your model predicts the same label for every test instance, we cannot confirm that hypothesis. The test set should be representative of the conditions you will apply the model in later.

As a rule of thumb, I like to hold out 25% to 50% of the data as a test set. This of course depends on the situation. 30/4000 is less than one percent.
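As a sketch of such a held-out split (assuming the labelled data is a list of (tokens, label) pairs, as in your `processedtrainingdata`; the helper name and sample data are made up for illustration):

```python
import random

def train_test_split(data, test_fraction=0.25, seed=42):
    """Shuffle the data and hold out a fraction of it as a test set."""
    data = list(data)
    random.Random(seed).shuffle(data)
    cut = int(len(data) * (1 - test_fraction))
    return data[:cut], data[cut:]

labelled = [(['great', 'ride'], 'positive'),
            (['flat', 'tyre'], 'negative'),
            (['new', 'bicycle'], 'positive'),
            (['meh'], 'neutral')]
train, test = train_test_split(labelled, test_fraction=0.25)
print(len(train), len(test))  # 3 1
```

Evaluating on a split like this also gives you test tweets with known labels, unlike a fresh keyword search where the labels are unknown.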

The second point that comes to mind when a classifier is biased toward one class: make sure each class is represented roughly equally in the training and validation sets. This prevents the classifier from "just" learning the label distribution of the whole set instead of learning which features are relevant.
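A quick way to inspect that balance (a minimal sketch; the sample list stands in for your (tokens, label) training pairs):

```python
from collections import Counter

def label_distribution(data):
    """Count how often each sentiment label occurs in (tokens, label) pairs."""
    return Counter(label for _, label in data)

sample = [(['a'], 'neutral'), (['b'], 'neutral'),
          (['c'], 'positive'), (['d'], 'negative')]
print(label_distribution(sample))
# If one class dominates (e.g. mostly 'neutral'), downsample it or
# collect more examples of the minority classes before training.
```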

As a final point, we usually report metrics such as precision, recall, and F<sub>β=1</sub> to evaluate a classifier. The code in your sample seems to report something based on the overall sentiment across all tweets; are you sure that is what you want? Are the tweets representative?
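Per-class precision, recall, and F1 can be computed by hand once you have gold labels for the test tweets (a sketch; `gold` and `pred` here are hypothetical label lists, not output from your classifier):

```python
def precision_recall_f1(gold, predicted, target):
    """Precision, recall and F1 for one target label."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == target and p == target)
    fp = sum(1 for g, p in zip(gold, predicted) if g != target and p == target)
    fn = sum(1 for g, p in zip(gold, predicted) if g == target and p != target)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ['positive', 'negative', 'positive', 'neutral']
pred = ['positive', 'positive', 'positive', 'neutral']
print(precision_recall_f1(gold, pred, 'positive'))
```

Reporting these per class would show immediately whether the model ever predicts anything other than "neutral".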