Parsing vertical text in a file with repeating blocks

Posted: 2019-04-11 20:57:47

Tags: python

What is the best way to parse the file below? The blocks repeat many times.

The expected result is output to a CSV file in the following format:

{Place: REGION-1, Host: ABCD, Area: 44...}

I tried the code below, but it only processes the first block instead of the whole file.

import re

with open('/tmp/t2.txt', 'r') as input_data:
    for line in input_data:
        if re.findall(r'(.*_RV)\n', line):
            myDict = {}
            myDict['HOST'] = line[6:]
            continue
        elif re.findall(r'Interface(.*)\n', line):
            myDict['INTF'] = line[6:]
        elif len(line.strip()) == 0:
            print(myDict)

The text file is below.

Instance REGION-1:
  ABCD_RV
    Interface: fastethernet01/01
    Last state change: 0h54m44s ago
    Sysid: 01441
    Speaks: IPv4
    Topologies:
      ipv4-unicast     
    SAPA: point-to-point
    Area Address(es):
      441
    IPv4 Address(es):
      1.1.1.1    

  EFGH_RV
    Interface: fastethernet01/01
    Last state change: 0h54m44s ago
    Sysid: 01442
    Speaks: IPv4
    Topologies:
      ipv4-unicast     
    SAPA: point-to-point
    Area Address(es):
      442
    IPv4 Address(es):
      1.1.1.2   

Instance REGION-2:
  IJKL_RV
    Interface: fastethernet01/01
    Last state change: 0h54m44s ago
    Sysid: 01443
    Speaks: IPv4
    Topologies:
      ipv4-unicast     
    SAPA: point-to-point
    Area Address(es):
      443
    IPv4 Address(es):
      1.1.1.3   

3 Answers:

Answer 0 (score: 1)

This works for me, but it isn't pretty:

import pandas as pd

with open('/tmp/t2.txt', 'r') as input_data:
    text = input_data.read()
text=text.rstrip(' ').rstrip('\n').strip('\n')
#first I get ready to create a csv by replacing the headers for the data
text=text.replace('Instance REGION-1:',',')
text=text.replace('Instance REGION-2:',',')
text=text.replace('Interface:',',')
text=text.replace('Last state change:',',')
text=text.replace('Sysid:',',')
text=text.replace('Speaks:',',')
text=text.replace('Topologies:',',')
text=text.replace('SAPA:',',')
text=text.replace('Area Address(es):',',')
text=text.replace('IPv4 Address(es):',',')

#now I strip out the leading whitespace, cuz it messes up the split on '\n\n'
lines=[x.lstrip(' ') for x in text.split('\n')]


clean_text=''

#now that the leading whitespace is gone I recreate the text file
for line in lines:
    clean_text+=line+'\n'

#Now split the data into groups based on single entries
entries=clean_text.split('\n\n')
#create one liners out of the entries so they can be split like csv
entry_lines=[x.replace('\n',' ') for x in entries]

#create a dataframe to hold the data for each line
df=pd.DataFrame(columns=['Instance REGION','Interface',
                         'Last state change','Sysid','Speaks',
                         'Topologies','SAPA','Area Address(es)',
                         'IPv4 Address(es)']).T

#now the meat and potatoes
count=0
for line in entry_lines:   
    data=line[1:].split(',')        #split like a csv on commas
    data=[x.lstrip(' ').rstrip(' ') for x in data]     #get rid of extra leading/trailing whitespace
    df[count]=data    #create an entry for each split
    count+=1          #increment the count

df=df.T               #transpose back to normal so it doesn't look weird

The output for me is a DataFrame with one row per parsed entry.


Edit: Also, since there are several answers here, I tested the performance of mine. It grows exponentially, following the fitted formula y = 100.97e^(0.0003x).

Here are my timing results.

Entries Milliseconds
18      49
270     106
1620    394
178420  28400
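The label-replacement trick above can be shown in isolation: replace every known label with a comma, then split and strip. This is a minimal sketch on a shortened, hypothetical sample rather than the full input file:

```python
# Replace each known label with a comma, then split on commas and
# strip whitespace; what remains are the bare field values in order.
sample = """Instance REGION-1:
  ABCD_RV
    Interface: fastethernet01/01
    Sysid: 01441
"""
for label in ('Instance REGION-1:', 'Interface:', 'Sysid:'):
    sample = sample.replace(label, ',')

# Drop empty fragments and surrounding whitespace/newlines.
fields = [f.strip() for f in sample.split(',') if f.strip()]
print(fields)  # ['ABCD_RV', 'fastethernet01/01', '01441']
```

The fragility of this approach is visible here: every label must be listed explicitly, and any stray comma in the data would corrupt the split.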

Answer 1 (score: 1)

Or, if you prefer the ugly regex route:


import re

region_re = re.compile(r"^Instance\s+([^:]+):.*")
host_re = re.compile(r"^\s+(.*?)_RV.*")
interface_re = re.compile(r"^\s+Interface:\s+(.*?)\s+")
other_re = re.compile(r"^\s+([^\s]+).*?:\s+([^\s]*){0,1}")

myDict = {}
extra = None
with open('/tmp/t2.txt', 'r') as input_data:
    for line in input_data:
        if extra: # value on next line from key
            myDict[extra] = line.strip()
            extra = None
            continue

        region = region_re.match(line)
        if region:
            if len(myDict) > 1:
                print(myDict)
            myDict = {'Place': region.group(1)}
            continue

        host = host_re.match(line)
        if host:
            if len(myDict) > 1:
                print(myDict)
            myDict = {'Place': myDict['Place'], 'Host': host.group(1)}
            continue

        interface = interface_re.match(line)
        if interface:
            myDict['INTF'] = interface.group(1)
            continue

        other = other_re.match(line)
        if other:
            groups = other.groups()
            if groups[1]:
                myDict[groups[0]] = groups[1]
            else:
                extra = groups[0]

# dump out final one
if len(myDict) > 1:
    print(myDict)
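Since the question asks for CSV output, the `print(myDict)` calls above could be replaced with `csv.DictWriter` writes. A sketch, assuming all rows share the same keys (the dicts below are hypothetical examples in the shape the loop produces):

```python
import csv
import io

# Hypothetical parsed rows, shaped like the dicts the loop prints.
rows = [
    {'Place': 'REGION-1', 'Host': 'ABCD', 'INTF': 'fastethernet01/01'},
    {'Place': 'REGION-1', 'Host': 'EFGH', 'INTF': 'fastethernet01/01'},
]

# Write to an in-memory buffer; pass a real file object in practice.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['Place', 'Host', 'INTF'])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

If the blocks can carry differing keys, `fieldnames` would need to be the union of all keys, with `restval=''` to fill the gaps.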

Answer 2 (score: 1)

This doesn't need much regex and could be optimized further. Hope it helps!

import re
import pandas as pd
from collections import defaultdict

_level_1 = re.compile(r'instance region.*', re.IGNORECASE)
with open('stack_formatting.txt') as f:
    data = f.readlines()

"""
Format data so that it could be split easily
"""
data_blocks = defaultdict(lambda: defaultdict(str))
header = None
instance = None
for line in data:
    line = line.strip()
    if _level_1.match(line):
        header = line
    else:
        if "_RV" in line:
            instance = line
        elif not line.endswith(":"):
            data_blocks[header][instance] += line + ";"
        else:
            data_blocks[header][instance] += line


def parse_text(data_blocks):
    """
    Generate a dict which could be converted easily to a pandas dataframe
    :param data_blocks: splittable data
    :return: dict with row values for every column
    """
    final_data = defaultdict(list)
    for key1 in data_blocks.keys():
        for key2 in data_blocks.get(key1):
            final_data['instance'].append(key1)
            final_data['sub_instance'].append(key2)
            for items in data_blocks[key1][key2].split(";"):
                print(items)
                if items.isspace() or len(items) == 0:
                    continue
                a, b = re.split(r':\s*', items)
                final_data[a].append(b)
    return final_data


print(pd.DataFrame(parse_text(data_blocks)))
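The two-level accumulation step above can be sketched on its own with an inline, hypothetical sample (this simplified version omits the `endswith(":")` handling for multi-line values like `Topologies:`):

```python
from collections import defaultdict

# Outer key: the "Instance ..." header; inner key: the *_RV block name.
blocks = defaultdict(lambda: defaultdict(str))

lines = [
    'Instance REGION-1:',
    'ABCD_RV',
    'Interface: fastethernet01/01',
    'Sysid: 01441',
]

header = instance = None
for line in lines:
    if line.lower().startswith('instance region'):
        header = line
    elif '_RV' in line:
        instance = line
    else:
        # Join the key/value lines of one block into a ';'-separated string.
        blocks[header][instance] += line + ';'

print(dict(blocks['Instance REGION-1:']))
# {'ABCD_RV': 'Interface: fastethernet01/01;Sysid: 01441;'}
```

Each accumulated string is later split on `;` and then on `:` to recover one column/value pair per item, which is what `parse_text` above does.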