Parsing a large log file

Posted: 2016-01-10 15:26:41

Tags: python pandas

I have a log file containing 1,001,623 lines in the format:

[02/Jan/2012:09:07:32] "GET /click?id=162&prod=5475 HTTP/1.1" 200 4352

each separated by a newline.

I loop over it with a regular expression and extract the information I need (date, id, prod):

for txt in logfile:
    m = rg.search(txt)
    if m:
        l1 = m.group(1)
        l2 = m.group(2)
        l3 = m.group(3)
        # append each matched field to its growing Series
        dt = dt.append(pd.Series([l1]))
        art = art.append(pd.Series([l2]))
        usr = usr.append(pd.Series([l3]))

This works fine in tests on a small sample, but when I run it on the whole set it has been going for 12 hours with no sign of progress. Afterwards I want to build a DataFrame from these and do some analysis. Is there a better way?

Edit:

This is how I open the log file:

logfile = open("data/access.log", "r")

The regular expression:

re1='.*?'   # Non-greedy match on filler
re2='((?:(?:[0-2]?\\d{1})|(?:[3][01]{1}))[-:\\/.](?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Sept|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)[-:\\/.](?:(?:[1]{1}\\d{1}\\d{1}\\d{1})|(?:[2]{1}\\d{3})))(?![\\d])'  # DDMMMYYYY 1
re3='.*?'   # Non-greedy match on filler
re4='\\d+'  # Uninteresting: int
re5='.*?'   # Non-greedy match on filler
re6='\\d+'  # Uninteresting: int
re7='.*?'   # Non-greedy match on filler
re8='\\d+'  # Uninteresting: int
re9='.*?'   # Non-greedy match on filler
re10='(\\d+)'   # Integer Number 1
re11='.*?'  # Non-greedy match on filler
re12='(\\d+)'   # Integer Number 2

rg =  re.compile(re1+re2+re3+re4+re5+re6+re7+re8+re9+re10+re11+re12,re.IGNORECASE|re.DOTALL)
m = rg.search(txt)
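
As a side note on why the loop above crawls: every Series.append call builds a brand-new Series, so the cost grows with each of the million lines. A minimal sketch of the usual workaround, assuming a simplified pattern (not the original regex) that captures the same three fields directly, is to accumulate plain Python lists and build a DataFrame once at the end:

import re
import pandas as pd

# simplified pattern (an assumption): timestamp, id, prod
pattern = re.compile(r'\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})\].*?id=(\d+)&prod=(\d+)')

dates, ids, prods = [], [], []
with open("data/access.log", "r") as logfile:
    for line in logfile:
        m = pattern.search(line)
        if m:
            # list.append is cheap; no per-line Series copies
            dates.append(m.group(1))
            ids.append(m.group(2))
            prods.append(m.group(3))

# build the frame in one shot, then parse the dates
df = pd.DataFrame({'date': dates, 'id': ids, 'prod': prods})
df['date'] = pd.to_datetime(df['date'], format="%d/%b/%Y:%H:%M:%S")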

1 Answer:

Answer 0 (score: 1)

You can use pandas. First strip the [] with str.strip and convert the result with to_datetime.

Then parse out id and prod, and finally join everything together with concat:

import pandas as pd
import io

temp=u"""[02/Jan/2012:09:07:32] "GET /click?id=162&prod=5475 HTTP/1.1" 200 4352
[02/Jan/2012:09:07:32] "GET /click?id=162&prod=5475 HTTP/1.1" 200 4352
[02/Jan/2012:09:07:32] "GET /click?id=162&prod=5475 HTTP/1.1" 200 4352
[02/Jan/2012:09:07:32] "GET /click?id=162&prod=5475 HTTP/1.1" 200 4352
[02/Jan/2012:09:07:32] "GET /click?id=162&prod=5475 HTTP/1.1" 200 4352"""

#change io.StringIO(temp) to 'filename.csv'
df = pd.read_csv(io.StringIO(temp), sep=r"\s+", engine='python', header=None, 
                                    names=['date','get','data','http','no1','no2'])

#format - http://strftime.org/
df['date'] = pd.to_datetime(df['date'].str.strip('[]'), format="%d/%b/%Y:%H:%M:%S")

#split Dataframe
df1 = pd.DataFrame([ x.split('=') for x in df['data'].tolist() ], columns=['c','id','prod'])

#split Dataframe
df2 = pd.DataFrame([ x.split('&') for x in df1['id'].tolist() ], columns=['id', 'no3'])
    
print(df)

                 date   get                     data       http  no1   no2
0 2012-01-02 09:07:32  "GET  /click?id=162&prod=5475  HTTP/1.1"  200  4352
1 2012-01-02 09:07:32  "GET  /click?id=162&prod=5475  HTTP/1.1"  200  4352
2 2012-01-02 09:07:32  "GET  /click?id=162&prod=5475  HTTP/1.1"  200  4352
3 2012-01-02 09:07:32  "GET  /click?id=162&prod=5475  HTTP/1.1"  200  4352
4 2012-01-02 09:07:32  "GET  /click?id=162&prod=5475  HTTP/1.1"  200  4352
print(df1)

           c        id  prod
0  /click?id  162&prod  5475
1  /click?id  162&prod  5475
2  /click?id  162&prod  5475
3  /click?id  162&prod  5475
4  /click?id  162&prod  5475
print(df2)

    id   no3
0  162  prod
1  162  prod
2  162  prod
3  162  prod
4  162  prod

df = pd.concat([df['date'], df1['prod'], df2['id']], axis=1)
print(df)

                 date  prod   id
0 2012-01-02 09:07:32  5475  162
1 2012-01-02 09:07:32  5475  162
2 2012-01-02 09:07:32  5475  162
3 2012-01-02 09:07:32  5475  162
4 2012-01-02 09:07:32  5475  162
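
As a shorter variant of the two split steps, one could also pull id and prod straight out of the data column with Series.str.extract; a sketch, assuming df here is the frame as it comes out of read_csv (before the final concat):

# extract id and prod in one pass via named regex groups
extracted = df['data'].str.extract(r'id=(?P<id>\d+)&prod=(?P<prod>\d+)')
result = pd.concat([df['date'], extracted], axis=1)
print(result)

This avoids building the intermediate df1/df2 frames.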