How to implement iteration in a flatMap function

Asked: 2016-06-30 15:02:13

Tags: python python-2.7 hadoop iteration pyspark

I am reading a text file with multi-line records into an RDD. The underlying data looks like this:

Time    MHist::852-YF-007   
2016-05-10 00:00:00 0
2016-05-09 23:59:00 0
2016-05-09 23:58:00 0
Time    MHist::852-YF-008   
2016-05-10 00:00:00 0
2016-05-09 23:59:00 0
2016-05-09 23:58:00 0

Now I want to transform the RDD so that I get a mapping from the key to its (timestamp, value) pairs. This could be done in several steps, but I would like to extract that information in a single call (in Python 2.7, though, not Python 3).

The RDD looks like this:

[(0, u''),
 (12,
  u'852-YF-007\t\r\n2016-05-10 00:00:00\t0\r\n2016-05-09 23:59:00\t0\r\n2016-05-09 23:58:00\t0\r\n2016-05-09 23:57:00\t0\r\n2016-05-09 23:56:00\t0\r\n2016-05-09 23:55:00\t0\r\n2016-05-09 23:54:00\t0\r\n2016-05-09 23:53:00\t0\r\n2016-05-09 23:52:00\t0\r\n2016-05-09 23:51:00\t0\r\n2016-05-09 23:50:00\t0\r\n2016-05-09 23:49:00\t0\r\n2016-05-09 23:48:00\t0\r\n2016-05-09 23:47:00\t0\r\n2016-05-09 23:46:00\t0\r\n2016-05-09 23:45:00\t0\r\n2016-05-09 23:44:00\t0\r\n2016-05-09 23:43:00\t0\r\n2016-05-09 23:42:00\t0\n'),
 (473,
  u'852-YF-008\t\r\n2016-05-10 00:00:00\t0\r\n2016-05-09 23:59:00\t0\r\n2016-05-09 23:58:00\t0\r\n2016-05-09 23:57:00\t0\r\n2016-05-09 23:56:00\t0\r\n2016-05-09 23:55:00\t0\r\n2016-05-09 23:54:00\t0\r\n2016-05-09 23:53:00\t0\r\n2016-05-09 23:52:00\t0\r\n2016-05-09 23:51:00\t0\r\n2016-05-09 23:50:00\t0\r\n2016-05-09 23:49:00\t0\r\n2016-05-09 23:48:00\t0\r\n2016-05-09 23:47:00\t0\r\n2016-05-09 23:46:00\t0\r\n2016-05-09 23:45:00\t0\r\n2016-05-09 23:44:00\t0\r\n2016-05-09 23:43:00\t0\r\n2016-05-09 23:42:00\t0')]

For each pair, the interesting part is the value (the content). Within that value, the first item is the key/name and the rest are the values with their timestamps. So I tried to use this:

sheet = sc.newAPIHadoopFile(
    'sample.txt',
    'org.apache.hadoop.mapreduce.lib.input.TextInputFormat',
    'org.apache.hadoop.io.LongWritable',
    'org.apache.hadoop.io.Text',
    conf={'textinputformat.record.delimiter': 'Time\tMHist::'}
)

import dateutil.parser
from operator import itemgetter

def process(pair):
    _, content = pair
    if not content:
        return

    lines = content.splitlines()
    #k = lines[0].strip()
    #vs = lines[1:]
    k, vs = itemgetter(0, slice(1, None), lines)  # <-- this line raises the TypeError below
    #k, *vs = [x.strip() for x in content.splitlines()]  # Python 3 syntax

    for v in vs:
        try:
            ds, x = v.split("\t")
            yield k, (dateutil.parser.parse(ds), float(x))  # or int(x)
            return
        except ValueError:
            pass

sheet.flatMap(process).take(5)

But I get this error:


TypeError: 'operator.itemgetter' object is not iterable

The pair that goes into the function carries the char position (which I can ignore) and the content. The content should be split at \r\n; the first item of the resulting line array is the key, and the remaining items become the key/timestamp/value output of the flatMap.
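For reference, the output I am after would look roughly like this for the first record (illustrative, assuming the values are parsed as floats):

(u'852-YF-007', (datetime.datetime(2016, 5, 10, 0, 0), 0.0))
(u'852-YF-007', (datetime.datetime(2016, 5, 9, 23, 59), 0.0))
(u'852-YF-007', (datetime.datetime(2016, 5, 9, 23, 58), 0.0))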

So, what am I doing wrong in the process method?

In the meantime, with the help of Stack Overflow and everyone else, I came up with the following solution. This one works very well:

# reads a text file in TSV notation where the key/variable name is not a first column but
# a randomly occurring line followed by its values. Remark: a variable might occur in several files

#Time    MHist::852-YF-007   
#2016-05-10 00:00:00 0
#2016-05-09 23:59:00 0
#2016-05-09 23:58:00 0
#Time    MHist::852-YF-008   
#2016-05-10 00:00:00 0
#2016-05-09 23:59:00 0
#2016-05-09 23:58:00 0

#imports
from operator import itemgetter
from datetime import datetime

#read the text file with special record-delimiter --> all lines after Time\tMHist:: are the values for that variable
sheet = sc.newAPIHadoopFile(
    'sample.txt',
    'org.apache.hadoop.mapreduce.lib.input.TextInputFormat',
    'org.apache.hadoop.io.LongWritable',
    'org.apache.hadoop.io.Text',
    conf={'textinputformat.record.delimiter': 'Time\tMHist::'}
)

#this avoids multiple map/flatMap/mapValues/flatMapValues calls by extracting the values at once
def process_and_extract(pair):
    # the first part is the char position within the file, which we can ignore
    # the second is the real content as one string, not yet split
    _, content = pair
    if not content:
        return

    try:
        # once the content is split into lines:
        # 1. the first line will have the bare variable name since we removed the preceding
        #    part when opening the file (see the delimiter above)
        # 2. the second line until the end will include the values for the current variable

        # Python 2.7 syntax
        #clean = itemgetter(0, slice(1, None))(lines)
        clean = [x.strip() for x in content.splitlines()]
        k, vs = clean[0], clean[1:]

        # Python 3 syntax
        #k, *vs = [x.strip() for x in content.splitlines()]

        for v in vs:
            try:
                # split timestamp and value and convert (cast) them from string to correct data type
                ds, x = v.split("\t")
                yield k, (datetime.strptime(ds, "%Y-%m-%d %H:%M:%S"), float(x))
            except ValueError:
                # might occur if a line format is corrupt
                pass
    except IndexError:
        # might occur if content is empty or irregular
        pass

# read, flatten, extract and reduce the file at once        
sheet.flatMap(process_and_extract) \
    .reduceByKey(lambda x, y: x + y) \
    .take(5)
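As a quick sanity check, the generator can also be exercised locally without Spark. This is just an illustrative call with a hand-built record; the offset 12 and the content string are shortened from the RDD dump above:

record = (12, u'852-YF-007\t\r\n2016-05-10 00:00:00\t0\r\n2016-05-09 23:59:00\t0')
list(process_and_extract(record))
# [(u'852-YF-007', (datetime.datetime(2016, 5, 10, 0, 0), 0.0)),
#  (u'852-YF-007', (datetime.datetime(2016, 5, 9, 23, 59), 0.0))]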

The second version avoids the for-each loop inside the extraction function and ended up being about 20% faster:

import time

start_time = time.time()

#read the text file with special record-delimiter --> all lines after Time\tMHist:: are the values for that variable
sheet = sc.newAPIHadoopFile(
    'sample.txt',
    'org.apache.hadoop.mapreduce.lib.input.TextInputFormat',
    'org.apache.hadoop.io.LongWritable',
    'org.apache.hadoop.io.Text',
    conf={'textinputformat.record.delimiter': 'Time\tMHist::'}
)

def extract_blob(pair):
    if not pair:
        return None

    try:
        offset, content = pair
        if not content:
            return None

        clean = [x.strip() for x in content.splitlines()]
        if len(clean) < 2:
            return None

        k, vs = clean[0], clean[1:]
        if not k:
            return None

        return k, vs
    except IndexError:
        # might occur if content is empty or malformed
        return None

def extract_line(pair):
    if not pair:
        return None

    key, line = pair
    if not key or not line:
        return None

    # split timestamp and value and convert (cast) them from string to the correct data types
    content = line.split("\t")
    if len(content) < 2:
        return None

    try:
        ds, x = content
        if not ds or not x:
            return None

        return (key, datetime.strptime(ds, "%Y-%m-%d %H:%M:%S"), float(x))
    except ValueError:
        # might occur if a line format is corrupt
        return None

def check_empty(x):
    return x is not None

#drop the file offsets and filter out records with empty content
non_empty = sheet.filter(lambda (k, v): v)

#split each record into the variable name and its value lines, dropping malformed records
grouped_lines = non_empty.map(extract_blob).filter(check_empty)

#emit one (variable, line) pair per value line
flat_lines = grouped_lines.flatMapValues(lambda x: x)

#extract timestamp and value from each line and drop corrupt ones
flat_triples = flat_lines.map(extract_line).filter(check_empty)

#convert to dataframe
df = flat_triples.toDF(["Variable", "Time", "Value"])

df.write \
    .partitionBy("Variable") \
    .saveAsTable('Observations', format='parquet', mode='overwrite', path=output_hdfs_filepath)

print("loading and saving done in {} seconds".format(time.time() - start_time));

1 Answer:

Answer 0 (score: 2)

itemgetter returns a function that takes an object and calls that object's __getitem__ for each argument that was passed to itemgetter. So you have to call it on lines:

itemgetter(0, slice(1, None))(lines)

which is roughly equivalent to

[lines[i] for i in (0, slice(1, None))]

where lines[slice(1, None)] is basically lines[1:].
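A quick illustration of that behaviour (a minimal sketch with made-up sample lines):

from operator import itemgetter

lines = ['852-YF-007', '2016-05-10 00:00:00\t0', '2016-05-09 23:59:00\t0']
k, vs = itemgetter(0, slice(1, None))(lines)
# k  -> '852-YF-007'
# vs -> ['2016-05-10 00:00:00\t0', '2016-05-09 23:59:00\t0']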

That means you have to make sure lines is not empty first, otherwise lines[0] will fail:

if lines:  # bool(empty_sequence) is False
    k, vs = itemgetter(0, slice(1, None))(lines)
    for v in vs:
        ...

Putting it all together, including doctests:

from datetime import datetime
from operator import itemgetter

def process(pair):
    r"""
    >>> list(process((0, u'')))
    []
    >>> kvs = list(process((
    ... 12,
    ... u'852-YF-007\t\r\n2016-05-10 00:00:00\t0\r\n2016-05-09 23:59:00\t0')))
    >>> kvs[0] 
    (u'852-YF-007', (datetime.datetime(2016, 5, 10, 0, 0), 0.0))
    >>> kvs[1]
    (u'852-YF-007', (datetime.datetime(2016, 5, 9, 23, 59), 0.0))
    >>> list(process((
    ... 10,
    ... u'852-YF-007\t\r\n2ad-05-10 00')))
    []
    """ 
    _, content = pair
    clean = [x.strip() for x in content.strip().splitlines()]

    if clean:
        k, vs = itemgetter(0, slice(1, None))(clean)
        for v in vs:
            try:
                ds, x = v.split("\t")
                yield k, (datetime.strptime(ds, "%Y-%m-%d %H:%M:%S"), float(x))
            except ValueError:
                pass 
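
The doctests can be run locally with the standard doctest module, for example:

if __name__ == '__main__':
    import doctest
    doctest.testmod()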