Count word occurrences in a cell array in MATLAB

Asked: 2012-07-19 07:03:14

Tags: matlab

I have a 500x1 cell array with one word in each row. How can I count how many times each word occurs, display the counts, and also display each word's percentage of the total?

For example:

The occurrences of these words are:

Ans =

     200 Green
     200 Red
     100 Blue

The percentages of these words are:

Ans = 

     40% Green
     40% Red
     20% Blue

4 answers:

Answer 0 (score: 5)

The idea is that strcmpi compares cell arrays element-wise. This can be used to compare the input names against the unique names found in the input. Try the code below.

% generate some input
input={'green','red','green','red','blue'}';

% find the unique elements in the input
uniqueNames=unique(input)';

% use string comparison ignoring the case
occurrences=strcmpi(input(:,ones(1,length(uniqueNames))),uniqueNames(ones(length(input),1),:));

% count the occurrences
counts=sum(occurrences,1);

% pretty printing
for i=1:length(counts)
    disp([uniqueNames{i} ': ' num2str(counts(i))])
end

I'll leave the percentage calculation to you.
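The percentage step is just each count divided by the total. As a cross-check, here is a minimal Python sketch of the same case-insensitive count-then-percentage logic (the word list mirrors the answer's example input):

```python
from collections import Counter

words = ['green', 'red', 'green', 'red', 'blue']  # stand-in for the cell array
counts = Counter(w.lower() for w in words)        # case-insensitive, like strcmpi
total = sum(counts.values())

for word in sorted(counts):
    pct = 100.0 * counts[word] / total
    print(f'{word}: {counts[word]} ({pct:.0f}%)')
```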

Answer 1 (score: 1)

First, find the unique words in the data:

% set up sample data:
data = {'red'; 'green'; 'blue'; 'blue'; 'blue'; 'red'; 'red'; 'green'; 'red'; 'blue'; 'red'; 'green'; 'green'}
uniqwords = unique(data);

Then find the occurrences of each unique word in the data:

[~,uniq_id]=ismember(data,uniqwords);

Then simply count how many times each unique word was found:

uniq_word_num = arrayfun(@(x) sum(uniq_id==x),1:numel(uniqwords));

To get percentages, divide by the total number of data samples:

uniq_word_perc = uniq_word_num/numel(data)
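For comparison, the unique-then-count pattern above maps directly onto NumPy's `unique` with `return_counts=True`; a small Python sketch using the same sample data:

```python
import numpy as np

data = ['red', 'green', 'blue', 'blue', 'blue', 'red', 'red',
        'green', 'red', 'blue', 'red', 'green', 'green']

# unique sorts the words; return_counts gives the occurrence count per word
uniqwords, uniq_word_num = np.unique(data, return_counts=True)
uniq_word_perc = uniq_word_num / len(data)

for w, n, p in zip(uniqwords, uniq_word_num, uniq_word_perc):
    print(f'{w}: {n} ({100 * p:.1f}%)')
```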

Answer 2 (score: 0)

Here is my solution; it should be quite fast.

% example input
example = 'This is an example corpus. Is is a verb?';
words = regexp(example, ' ', 'split');

% your program: results go in vocabulary and counts (input is a cell array called words)
vocabulary = unique(words);
n = length(vocabulary);
counts = zeros(n, 1);
for i=1:n
    counts(i) = sum(strcmpi(words, vocabulary{i}));
end

% process results
[val, idx]=max(counts);
most_frequent_word = vocabulary{idx};

% percentages:
percentages=counts/sum(counts);

Answer 3 (score: 0)

A tricky approach that avoids explicit for loops:

clc
close all
clear all

Paragraph=lower(fileread('Temp1.txt'));

AlphabetFlag=Paragraph>=97 & Paragraph<=122;  % flag lowercase letters (ASCII 97-122)

DelimFlag=find(AlphabetFlag==0); % treat every non-letter as a delimiter
WordLength=[DelimFlag(1), diff(DelimFlag)];
Paragraph(DelimFlag)=[]; % delete the delimiters
Words=mat2cell(Paragraph, 1, WordLength-1); % cut the paragraph into words

[SortWords, Ia, Ic]=unique(Words);  % find the unique words and their indices

Bincounts = histc(Ic,1:size(Ia, 1)); % count each word's occurrences
[SortBincounts, IndBincounts]=sort(Bincounts, 'descend'); % sort by frequency

FreqWords=SortWords(IndBincounts); % order the words by their frequency
FreqWords(1)=[];SortBincounts(1)=[]; % drop the empty "word" left by consecutive delimiters

Freq=SortBincounts/sum(SortBincounts)*100; % frequency percentage

%% plot
NMostCommon=20;
disp(Freq(1:NMostCommon))
pie([Freq(1:NMostCommon); 100-sum(Freq(1:NMostCommon))], [FreqWords(1:NMostCommon), {'other words'}]);
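For reference, the whole pipeline above (lowercase, split on non-letters, count, sort descending, report percentages) can be sketched in a few lines of Python; `re.findall` plays the role of the delimiter trick, so no empty-string cleanup is needed (the sample sentence is borrowed from answer 2):

```python
import re
from collections import Counter

text = 'This is an example corpus. Is is a verb?'  # stand-in for fileread('Temp1.txt')
words = re.findall(r'[a-z]+', text.lower())        # keep letter runs only
counts = Counter(words)

total = sum(counts.values())
for word, n in counts.most_common():               # already sorted descending
    print(f'{word}: {100.0 * n / total:.1f}%')
```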