Python: extracting keywords from a csv

Date: 2018-05-25 18:12:14

Tags: python nlp nltk

I'm trying to extract keywords row by row from a csv file and create a keywords field. Right now I'm only able to extract over the whole file at once. How do I get the keywords for each row/field?

Data:

id,some_text
1,"What is the meaning of the word Himalaya?"
2,"Palindrome is a word, phrase, or sequence that reads the same backward as forward"

Code: this searches the entire text at once, not row by row. Do I need to add something besides replace(r'\|', ' ')?

import pandas as pd
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

df = pd.read_csv('test-data.csv')
# print(df.head(5))

text_context = df['some_text'].str.lower().str.replace(r'\|', ' ').str.cat(sep=' ') # not put lower case?
print(text_context)
print('')
tokens=nltk.tokenize.word_tokenize(text_context)
word_dist = nltk.FreqDist(tokens)
stop_words = stopwords.words('english')
punctuations = ['(',')',';',':','[',']',',','!','?']
keywords = [word for word in tokens if not word in stop_words and not word in punctuations]
print(keywords)

Desired final output:

id,some_text,new_keyword_field
1,What is the meaning of the word Himalaya?,"meaning,word,himalaya"
2,"Palindrome is a word, phrase, or sequence that reads the same backward as forward","palindrome,word,phrase,sequence,reads,backward,forward"

1 answer:

Answer 0 (score: 3)

Here is a concise way to add a new keyword column to your dataframe using pandas apply. With apply, you first define a function (in our case get_keywords) that can then be applied to every row or every column.
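As a minimal illustration of how df.apply(..., axis=1) hands each row to the function (toy data, not the asker's csv):

```python
import pandas as pd

# Toy dataframe just to show row-wise apply
toy = pd.DataFrame({'a': [1, 2], 'b': [10, 20]})

# With axis=1 the function receives one row (a Series) at a time
row_sum = toy.apply(lambda row: row['a'] + row['b'], axis=1)
print(row_sum.tolist())  # [11, 22]
```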

import pandas as pd
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

# I define the stop_words here so I don't do it every time in the function below
stop_words = stopwords.words('english')
# I've added the index_col='id' here to set your 'id' column as the index. This assumes that the 'id' is unique.
df = pd.read_csv('test-data.csv', index_col='id')  

Here we define our function, which will be applied to each row via df.apply in the next cell. You can see that this function, get_keywords, takes row as a parameter and returns a string of comma-separated keywords, just like in your desired output above ("meaning,word,himalaya"). Within this function we lowercase, tokenize, filter out punctuation using isalpha(), filter out our stop_words, and join the keywords together to form the desired output.

# This function will be applied to each row in our Pandas Dataframe
# See the docs for df.apply at: 
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html
def get_keywords(row):
    some_text = row['some_text']
    lowered = some_text.lower()
    tokens = nltk.tokenize.word_tokenize(lowered)
    keywords = [keyword for keyword in tokens if keyword.isalpha() and not keyword in stop_words]
    keywords_string = ','.join(keywords)
    return keywords_string

Now that we have defined the function to apply, we call df.apply(get_keywords, axis=1). This returns a Pandas Series (similar to a list). Since we want this Series to become part of the dataframe, we add it as a new column with df['keywords'] = df.apply(get_keywords, axis=1):
# applying the get_keywords function to our dataframe and saving the results
# as a new column in our dataframe called 'keywords'
# axis=1 means that we will apply get_keywords to each row and not each column
df['keywords'] = df.apply(get_keywords, axis=1)
  

Output: Dataframe after adding 'keywords' column
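For reference, the whole pipeline can be sketched end to end without needing test-data.csv on disk, by reading the sample rows from an in-memory string. To keep the sketch self-contained (no NLTK corpus download), a small hard-coded stop-word set and a regex tokenizer stand in for stopwords.words('english') and word_tokenize — both are assumptions, not the answer's exact tooling:

```python
import io
import re
import pandas as pd

# The question's sample data, inlined instead of read from test-data.csv
csv_text = """id,some_text
1,"What is the meaning of the word Himalaya?"
2,"Palindrome is a word, phrase, or sequence that reads the same backward as forward"
"""

# Stand-in for stopwords.words('english'), so the sketch runs without NLTK data
stop_words = {'what', 'is', 'the', 'of', 'a', 'or', 'that', 'as', 'same'}

def get_keywords(row):
    # Regex stand-in for nltk.word_tokenize: alphabetic tokens only,
    # which also drops punctuation the way isalpha() does in the answer
    tokens = re.findall(r'[a-z]+', row['some_text'].lower())
    return ','.join(t for t in tokens if t not in stop_words)

df = pd.read_csv(io.StringIO(csv_text), index_col='id')
df['keywords'] = df.apply(get_keywords, axis=1)
print(df['keywords'].tolist())
# ['meaning,word,himalaya', 'palindrome,word,phrase,sequence,reads,backward,forward']
```

The output matches the desired new_keyword_field in the question; swapping the stand-ins back for NLTK's tokenizer and stop-word list gives the answer's version.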