Speeding up pandas DataFrame lookups

Asked: 2016-07-09 17:47:32

Tags: python pandas dataframe

I have a pandas DataFrame of roughly 600,000 locations with zip code, city, state, and country columns. Let's call it my_df.

For each location I want to look up the corresponding longitude and latitude. Thankfully, this has a database for that. Let's call this DataFrame zipdb.

zipdb has columns for zip code, city, state, and country. So I want to look up every location (zip code, city, state, and country) in zipdb.

import numpy as np

def zipdb_lookup(zipcode, city, state, country):

    countries_mapping = {"UNITED STATES": "US",
                         "CANADA": "CA",
                         "KOREA REP OF": "KR",
                         "ITALY": "IT",
                         "AUSTRALIA": "AU",
                         "CHILE": "CL",
                         "UNITED KINGDOM": "GB",
                         "BERMUDA": "BM"}

    try:
        slc = zipdb[(zipdb.Zipcode == str(zipcode)) &
                    (zipdb.City == str(city).upper()) &
                    (zipdb.State == str(state).upper()) &
                    (zipdb.Country == countries_mapping[country].upper())]

        if slc.shape[0] == 1:
            return np.array(slc["Lat"])[0], np.array(slc["Long"])[0]
        else:
            return None
    except KeyError:  # country not in the mapping
        return None

I have tried pandas' .apply as well as a for loop to do this. Both are very slow. I know there are a lot of rows, but I can't help thinking something faster must be possible.
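One reason per-row calls are slow is that each boolean filter in zipdb_lookup scans all four columns for every row of my_df. A sketch of an alternative, using a sorted MultiIndex so each lookup becomes an index probe instead of four full-column scans (the miniature zipdb below is hypothetical illustration data, not the real database):

```python
import pandas as pd

# Hypothetical miniature stand-in for the real zipdb
zipdb = pd.DataFrame({
    "Zipcode": ["60208", "77555"],
    "City": ["EVANSTON", "GALVESTON"],
    "State": ["IL", "TX"],
    "Country": ["US", "US"],
    "Lat": [42.05, 29.31],
    "Long": [-87.68, -94.78],
})

# One-time cost: build a sorted index over the four lookup columns
indexed = zipdb.set_index(["Zipcode", "City", "State", "Country"]).sort_index()

# Each lookup is now a single index probe
lat, lng = indexed.loc[("60208", "EVANSTON", "IL", "US"), ["Lat", "Long"]]
```

This still does one lookup per row, so a single vectorized merge (as in the answer below) is usually faster still, but the indexed form keeps the per-row API if you need it.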

zipdb = pandas.read_csv("free-zipcode-database.csv") #linked to above

Note: I also performed this conversion on zipdb:

zipdb["Zipcode"] = zipdb["Zipcode"].astype(str)
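One caveat with this conversion, assuming US-style 5-digit zip codes: if the CSV reader parsed the zip column as integers, astype(str) will not restore leading zeros, and those rows can never match. A hedged sketch of a fix:

```python
import pandas as pd

zips = pd.Series([60208, 2139])  # 02139 lost its leading zero when read as int
# astype(str) alone yields "2139"; zfill pads back to the 5-digit form
as_str = zips.astype(str).str.zfill(5)
```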

The function call:

#Defined a wrapper function:
def lookup(row):
    """Look up (lat, long) for row index `row` of my_df.

    :param row: integer row index into my_df
    :return: (lat, long) tuple, or None if no unique match
    """

    lnglat = zipdb_lookup(
                  zipcode = my_df["organization_zip"][row]
                , city    = my_df["organization_city"][row]
                , state   = my_df["organization_state"][row]
                , country = my_df["organization_country"][row]
    )

    return lnglat

lnglat = list()
for l in range(0, my_df.shape[0]):
    # if l % 5000 == 0: print(round((float(l) / my_df.shape[0])*100, 2), "%")
    lnglat.append(lookup(row = l))

Sample data from my_df:

       organization_zip organization_city organization_state  organization_country
0                 60208          EVANSTON                 IL   United Sates
1                 77555         GALVESTON                 TX   United Sates
2                 23284          RICHMOND                 VA   United Sates
3                 53233         MILWAUKEE                 WI   United Sates
4                 10036          NEW YORK                 NY   United Sates
5                 33620             TAMPA                 FL   United Sates
6                 10029          NEW YORK                 NY   United Sates
7                 97201          PORTLAND                 OR   United Sates
8                 97201          PORTLAND                 OR   United Sates
9                 53715           MADISON                 WI   United Sates

1 Answer:

Answer 0 (score: 5):

Using merge() is much faster than calling a function on every row. Make sure the field types match and strip the strings:

# prepare your dataframe
data['organization_zip'] = data.organization_zip.astype(str)
data['organization_city'] = data.organization_city.apply(lambda v: v.strip())
# get the zips database
zips = pd.read_csv('/path/to/free-zipcode-database.csv')
zips['Zipcode'] = zips.Zipcode.astype(str)
# left join
# -- prepare common join columns
zips.rename(columns=dict(Zipcode='organization_zip',
                         City='organization_city'), 
            inplace=True)  
# specify join columns along with zips' columns to copy
cols = ['organization_zip', 'organization_city', 'Lat', 'Long']
data.merge(zips[cols], how='left')
This returns the result: my_df with the Lat and Long columns attached.

Note that you may need to extend the merge columns and/or add more columns to copy from the zips dataframe.
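For example, extending the join to all four keys would also require mapping the country names first, as in the question's countries_mapping. A sketch with hypothetical miniature frames standing in for my_df and the zip database:

```python
import pandas as pd

data = pd.DataFrame({
    "organization_zip": ["60208"],
    "organization_city": ["EVANSTON"],
    "organization_state": ["IL"],
    "organization_country": ["UNITED STATES"],
})
zips = pd.DataFrame({
    "organization_zip": ["60208"],
    "organization_city": ["EVANSTON"],
    "organization_state": ["IL"],
    "organization_country": ["US"],
    "Lat": [42.05],
    "Long": [-87.68],
})

# Normalize country names to the codes used by the zip database
countries_mapping = {"UNITED STATES": "US", "CANADA": "CA"}
data["organization_country"] = data["organization_country"].map(countries_mapping)

# One vectorized left join on all four keys
merged = data.merge(zips, how="left",
                    on=["organization_zip", "organization_city",
                        "organization_state", "organization_country"])
```

Rows whose country (or any other key) fails to match simply get NaN in Lat/Long, which is easy to inspect afterwards.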