Speeding up pairwise fuzzy string matching in Python

Asked: 2018-12-22 22:19:47

Tags: python string vectorization fuzzywuzzy pairwise

I have a collection of 40,000 strings and want to compare them pairwise for similarity using fuzz.token_set_ratio(), but even after reading up on vectorization, I can't get my head around the right way to wire this up.

Here is an example:

from fuzzywuzzy import fuzz

s = ["fuzzy was a strong bear", 
 "fuzzy was a large bear", 
 "fuzzy was the strongest bear you could ever imagine"]

similarities = []
l = len(s)

# naive approach: two nested Python-level loops scoring all l * l ordered pairs
for i in range(l):
    similarities.append([])
    for j in range(l):
        similarities[i].append(fuzz.token_set_ratio(s[i], s[j]))
similarities

Now, this code has at least two shortcomings. First, it relies on inefficient for loops. Second, although the resulting similarities matrix is symmetric (not always strictly true, but let's ignore that for now), so I would only need to compute one of its triangles, it computes every element. The latter is probably something I could code around myself, but I'm looking for the fastest way to arrive at similarities in Python.
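For reference, here is a minimal sketch of the triangle-only variant I mean, assuming the symmetry really holds (pwd_triangle is just an illustrative name; itertools.combinations visits each unordered pair exactly once):

from itertools import combinations
from fuzzywuzzy import fuzz
import numpy as np

def pwd_triangle(strings):
    # assumes token_set_ratio(a, b) == token_set_ratio(b, a)
    n = len(strings)
    sim = np.full((n, n), 100, dtype=np.uint8)  # a string always matches itself
    for i, j in combinations(range(n), 2):
        score = fuzz.token_set_ratio(strings[i], strings[j])
        sim[i, j] = sim[j, i] = score  # fill both triangles from one call
    return sim

This halves the number of scorer calls, but it is still quadratic, so it only buys a constant factor.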

Edit: Here is another piece of information that may be useful. I tried using pdist to speed this up, since it seems to perform well on some similar tasks. In this case, however, for some reason it turns out to be even slower than my inefficient for loops.

Here is the code:

from fuzzywuzzy import fuzz
from scipy.spatial.distance import pdist, squareform
import numpy as np

def pwd(string1, string2):
    # wrapper so pdist can call token_set_ratio as a custom metric
    return fuzz.token_set_ratio(string1, string2)

# build a test set of 300 strings (100 copies of each sentence)
s = []
for i in range(100):
    s.append("fuzzy was a strong bear")
    s.append("fuzzy was a large bear")
    s.append("fuzzy was the strongest bear you could ever imagine")

def pwd_loops():
    # baseline: the nested-loop version from above
    similarities = []
    l = len(s)
    for i in range(l):
        similarities.append([])
        for j in range(l):
            similarities[i].append(fuzz.token_set_ratio(s[i], s[j]))

# pdist expects a 2-D array of observations, so make each string its own row
a = np.array(s).reshape(-1, 1)
def pwd_pdist():
    # pdist returns the condensed upper triangle; squareform expands it
    dm = squareform(pdist(a, pwd))

%time pwd_loops()
#Wall time: 2.39 s

%time pwd_pdist()
#Wall time: 3.73 s
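One plausible reason pdist loses here: with a Python callable as the metric, it still invokes pwd once per pair, so nothing is vectorized, and wrapping each string in a length-1 NumPy row plus the final squareform copy add overhead on top of the same quadratic work. A hedged sketch of one alternative, for comparison: the rapidfuzz library reimplements fuzzywuzzy's scorers in C++ and provides process.cdist, which fills the whole score matrix in native code and can spread the work over CPU cores. A minimal sketch, assuming rapidfuzz is installed:

from rapidfuzz import fuzz as rfuzz
from rapidfuzz.process import cdist

s = ["fuzzy was a strong bear",
     "fuzzy was a large bear",
     "fuzzy was the strongest bear you could ever imagine"]

# full n x n matrix of token_set_ratio scores, computed in native code;
# workers=-1 uses all available CPU cores
similarities = cdist(s, s, scorer=rfuzz.token_set_ratio, workers=-1)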

0 Answers

No answers yet.