Most frequently occurring words in each row

Time: 2018-11-30 11:56:01

Tags: python pandas nlp

I am trying to find the most frequently used words in each row of a tokenized dataframe, which looks like this:

print(df.tokenized_sents)

['apple', 'inc.', 'aapl', 'reported', 'fourth', 'consecutive', 'quarter', 'record', 'revenue', 'profit', 'combination', 'higher', 'iphone', 'prices', 'strong', 'app-store', 'sales', 'propelled', 'technology', 'giant', 'best', 'year', 'ever', 'revenue', 'three', 'months', 'ended', 'sept.']

['brussels', 'apple', 'inc.', 'aapl', '-.', 'chief', 'executive', 'tim', 'cook', 'issued', 'tech', 'giants', 'strongest', 'call', 'yet', 'u.s.-wide', 'data-protection', 'regulation', 'saying', 'individuals', 'personal', 'information', 'been', 'weaponized', 'mr.', 'cooks', 'call', 'came', 'sharply', 'worded', 'speech', 'before', 'p…']

...

from collections import Counter

wrds = []
for i in range(len(df)):
    # top 5 most common words per row, as (word, count) tuples
    wrds.append(Counter(df["tokenized_sents"][i]).most_common(5))

But the list it returns looks like this:

print(wrds)

[('revenue', 2), ('apple', 1), ('inc.', 1), ('aapl', 1), ('reported', 1)]
...

Instead, I would like to create the following dataframe:

print(final_df)

KeyWords                                                                         
revenue, apple, inc., aapl, reported
...

The rows of the final dataframe should not be lists but single text values, e.g. revenue, apple, inc., aapl, reported rather than [revenue, apple, inc., aapl, reported].

3 Answers:

Answer 0 (score: 0)

Not sure whether you can change the return format, but you can reformat the column with apply and a lambda. E.g.:

df = pd.DataFrame({'wrds': [[('revenue', 2), ('apple', 1), ('inc.', 1), ('aapl', 1), ('reported', 1)]]})

df.wrds.apply(lambda x: [item[0] for item in x])

This returns just a list of the words: [revenue, apple, inc., aapl, reported]
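Since the question asks for a single text value per row rather than a list, the same apply pattern can also join the words into one string. A sketch, not part of the original answer, assuming the same wrds column of (word, count) tuples:

# hypothetical extension: join the top words into a single comma-separated string per row
df["KeyWords"] = df.wrds.apply(lambda x: ', '.join(item[0] for item in x))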

Answer 1 (score: 0)

Something like this? Using .apply():



import pandas as pd
from collections import Counter

# creating the dataframe
df = pd.DataFrame({"token": [['apple', 'inc.', 'aapl', 'reported', 'fourth', 'consecutive', 'quarter', 'record', 'revenue', 'profit', 'combination', 'higher', 'iphone', 'prices', 'strong', 'app-store', 'sales', 'propelled', 'technology', 'giant', 'best', 'year', 'ever', 'revenue', 'three', 'months', 'ended', 'sept.'],
                             ['brussels', 'apple', 'inc.', 'aapl', '-.', 'chief', 'executive', 'tim', 'cook', 'issued', 'tech', 'giants', 'strongest', 'call', 'yet', 'u.s.-wide', 'data-protection', 'regulation', 'saying', 'individuals', 'personal', 'information', 'been', 'weaponized', 'mr.', 'cooks', 'call', 'came', 'sharply', 'worded', 'speech', 'before', 'p…']]})
# fetching the 5 most common words per row with .apply and assigning them to a keywords column
df["keywords"] = df.token.apply(lambda x: ', '.join(i[0] for i in Counter(x).most_common(5)))
df

Output:

                                               token                              keywords
0  [apple, inc., aapl, reported, fourth, consecut...  revenue, apple, inc., aapl, reported
1  [brussels, apple, inc., aapl, -., chief, execu...     call, brussels, apple, inc., aapl

Using a for loop with .itertuples() and .loc():

# creating the dataframe (same data as above)
df = pd.DataFrame({"token": [['apple', 'inc.', 'aapl', 'reported', 'fourth', 'consecutive', 'quarter', 'record', 'revenue', 'profit', 'combination', 'higher', 'iphone', 'prices', 'strong', 'app-store', 'sales', 'propelled', 'technology', 'giant', 'best', 'year', 'ever', 'revenue', 'three', 'months', 'ended', 'sept.'],
                             ['brussels', 'apple', 'inc.', 'aapl', '-.', 'chief', 'executive', 'tim', 'cook', 'issued', 'tech', 'giants', 'strongest', 'call', 'yet', 'u.s.-wide', 'data-protection', 'regulation', 'saying', 'individuals', 'personal', 'information', 'been', 'weaponized', 'mr.', 'cooks', 'call', 'came', 'sharply', 'worded', 'speech', 'before', 'p…']]})
df["Keyword"] = ""
# iterate over the rows, compute the top 5 words, and write them back with .loc
for row in df.itertuples():
    xount = [i[0] for i in Counter(row.token).most_common(5)]
    df.loc[row.Index, "Keyword"] = ', '.join(xount)
df

Output:

                                               token                               Keyword
0  [apple, inc., aapl, reported, fourth, consecut...  revenue, apple, inc., aapl, reported
1  [brussels, apple, inc., aapl, -., chief, execu...     call, brussels, apple, inc., aapl
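Both snippets above produce the same comma-separated keyword strings; the .apply version simply avoids the explicit row loop. As a purely illustrative refactor (not part of the original answer), the top-5 extraction could also live in a small named helper instead of an inline lambda:

from collections import Counter

def top_keywords(tokens, n=5):
    # return the n most frequent tokens as a comma-separated string
    return ', '.join(word for word, _ in Counter(tokens).most_common(n))

# assuming the df with a 'token' column built in the snippets above
df["keywords"] = df["token"].apply(top_keywords)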

Answer 2 (score: 0)

Use df.apply. For example:

import pandas as pd
from collections import Counter
tokenized_sents = [['apple', 'inc.', 'aapl', 'reported', 'fourth', 'consecutive', 'quarter', 'record', 'revenue', 'profit', 'combination', 'higher', 'iphone', 'prices', 'strong', 'app-store', 'sales', 'propelled', 'technology', 'giant', 'best', 'year', 'ever', 'revenue', 'three', 'months', 'ended', 'sept.'], 
                   ['brussels', 'apple', 'inc.', 'aapl', '-.', 'chief', 'executive', 'tim', 'cook', 'issued', 'tech', 'giants', 'strongest', 'call', 'yet', 'u.s.-wide', 'data-protection', 'regulation', 'saying', 'individuals', 'personal', 'information', 'been', 'weaponized', 'mr.', 'cooks', 'call', 'came', 'sharply', 'worded', 'speech', 'before', 'p…']

]

df = pd.DataFrame({"tokenized_sents": tokenized_sents})
# keywords as a list per row
final_df = pd.DataFrame({"KeyWords" : df["tokenized_sents"].apply(lambda x: [k for k, v in Counter(x).most_common(5)])})
# or, as a single comma-separated string per row:
#final_df = pd.DataFrame({"KeyWords" : df["tokenized_sents"].apply(lambda x: ", ".join(k for k, v in Counter(x).most_common(5)))})
print(final_df)
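If the keywords should end up on the original frame rather than in a separate final_df, the same apply result can be assigned as a new column. A sketch under the assumption that df is the frame built in this answer:

# hypothetical follow-up: attach the joined keywords directly to df
df["KeyWords"] = df["tokenized_sents"].apply(lambda x: ", ".join(k for k, v in Counter(x).most_common(5)))
# per the outputs shown earlier, row 0 gives 'revenue, apple, inc., aapl, reported'
print(df[["KeyWords"]])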