Joining RDDs using a Python condition

Time: 2017-03-13 09:37:34

Tags: python-3.x pyspark

I have two RDDs. The first one contains information associated with IP addresses (see the c_ip column):

[Row(unic_key=1608422, idx=18, s_date='2016-12-31', s_time='15:00:07', c_ip='119.228.181.78', c_session='3hyj0tb434o23uxegpnmvzr0', origine_file='inFile', process_date='2017-03-13'),
 Row(unic_key=1608423, idx=19, s_date='2016-12-31', s_time='15:00:08', c_ip='119.228.181.78', c_session='3hyj0tb434o23uxegpnmvzr0', origine_file='inFile', process_date='2017-03-13'),
]

The other RDD contains IP geolocation data:

network,geoname_id,registered_country_geoname_id,represented_country_geoname_id,is_anonymous_proxy,is_satellite_provider,postal_code,latitude,longitude,accuracy_radius
1.0.0.0/24,2077456,2077456,,0,0,,-33.4940,143.2104,1000
1.0.1.0/24,1810821,1814991,,0,0,,26.0614,119.3061,50
1.0.2.0/23,1810821,1814991,,0,0,,26.0614,119.3061,50
1.0.4.0/22,2077456,2077456,,0,0,,-33.4940,143.2104,1000

I would like to match the two, but the problem is that there is no strict equality between the columns of my RDDs.

I would like to use the Python 3 package ipaddress and do a check like this:

>>> import ipaddress
>>> ipaddress.IPv4Address('1.0.0.5') in ipaddress.ip_network('1.0.0.0/24')
True

Is it possible to perform the join using a Python function (a left outer join, so that no rows from the first RDD are excluded)? How can I do this?

1 answer:

Answer 0 (score: 1)

With Apache Spark 1.6, you can still use a UDF as a predicate in a join. After generating some test data:

import ipaddress
from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

sessions = sc.parallelize([(1608422,'119.228.181.78'),(1608423, '119.228.181.78')]).toDF(['unic_key','c_ip'])

geo_ip = sc.parallelize([('1.0.0.0/24',2077456,2077456),
                        ('1.0.1.0/24',1810821,1814991),
                        ('1.0.2.0/23',1810821,1814991),
                        ('1.0.4.0/22',2077456,2077456)]).toDF(['network','geoname_id','registered_country_geoname_id'])

You can create a UDF predicate as follows:

def ip_range(ip, network_range):
    # In Python 3, str is already unicode, so the values can be passed to
    # ipaddress directly (the original unicode() calls are Python 2 only).
    return ipaddress.IPv4Address(ip) in ipaddress.ip_network(network_range)

pred = udf(ip_range, BooleanType())

You can then use the UDF in the following join:

sessions.join(geo_ip).where(pred(sessions.c_ip, geo_ip.network))
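
Note that sessions.join(geo_ip).where(...) is a cross join followed by a filter, so unmatched session rows are dropped. Since the question asks for a left outer join, one option (a sketch, assuming Spark 1.6 also accepts the UDF directly as an outer join condition) is to pass the predicate as the join condition:

sessions.join(geo_ip, pred(sessions.c_ip, geo_ip.network), 'left_outer')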

Unfortunately, this does not currently work in Spark 2.x; see https://issues.apache.org/jira/browse/SPARK-19728
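
A possible workaround for Spark 2.x (a minimal sketch, not part of the original answer; the names ip_to_long, net_lo, net_hi and the derived column names are illustrative): UDFs are still allowed in projections, so you can precompute each network's numeric bounds as ordinary columns and then join on a plain range comparison, which keeps the Python UDF out of the join condition itself.

import ipaddress
from pyspark.sql.functions import udf
from pyspark.sql.types import LongType

# Convert an IPv4 address string to its integer value.
ip_to_long = udf(lambda ip: int(ipaddress.IPv4Address(ip)), LongType())
# First and last addresses of a CIDR block, as integers.
net_lo = udf(lambda net: int(ipaddress.ip_network(net).network_address), LongType())
net_hi = udf(lambda net: int(ipaddress.ip_network(net).broadcast_address), LongType())

geo_ip_bounds = geo_ip.withColumn('net_lo', net_lo(geo_ip.network)) \
                      .withColumn('net_hi', net_hi(geo_ip.network))
sessions_long = sessions.withColumn('ip_long', ip_to_long(sessions.c_ip))

# The join condition now uses only built-in column comparisons, and
# 'left_outer' keeps every row of the sessions DataFrame.
joined = sessions_long.join(
    geo_ip_bounds,
    (sessions_long.ip_long >= geo_ip_bounds.net_lo) &
    (sessions_long.ip_long <= geo_ip_bounds.net_hi),
    'left_outer')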