Funnel analysis with pandas (large dataset)

Asked: 2014-09-17 16:35:23

Tags: python pandas dataframe

I'm trying to do some basic funnel analysis with a pandas DataFrame. That is, I have a DataFrame of user sessions, each consisting of a series of events. I want to group by session, determine which sessions contain a given event ordering (eventA followed by eventB), then group by date and get the counts over time.

For example, given my DataFrame:

import pandas as pd
import numpy as np
from datetime import datetime

sessions = ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'd', 'd', 'd']
events = ['dog', 'cat', 'tree', 'tree', 'dog', 'frog', 'cat', 'dog', 'cat', 'tree', 'cat', 'dog']
d1 = datetime(2014, 8, 1)
d2 = datetime(2014, 8, 2)
d3 = datetime(2014, 8, 3)
dates = [d1, d1, d1, d1, d1, d1, d1, d2, d2, d1, d1, d1]
dic = {'sessions': sessions, 'events': events, 'dates': dates}
df_tot = pd.DataFrame(dic)
which produces:

    dates        events  sessions
0   2014-08-01   dog     a
1   2014-08-01   cat     a
2   2014-08-01   tree    a
3   2014-08-01   tree    b
4   2014-08-01   dog     b
5   2014-08-01   frog    b
6   2014-08-01   cat     b
7   2014-08-02   dog     c
8   2014-08-02   cat     c
9   2014-08-01   tree    d
10  2014-08-01   cat     d
11  2014-08-01   dog     d

For the event ordering dog followed by cat, I'd like to get the following (reachedFirstEvent counts sessions that reached dog but never a cat afterwards, reachedSecondEvent counts sessions where a cat follows a dog, and total is the number of sessions whose earliest event falls on that date):

             reachedFirstEvent   reachedSecondEvent  total
2014-08-01   1                   2                   3
2014-08-02   0                   1                   1

My second problem is that my actual DataFrame has 3 million rows, so I built a hacky solution. It works, but it's slow. Any ideas on how to do this better or how to speed up the code?

def find_funnels_ex(dlist, event_list):
    # Walk the funnel: for each event, look for its first occurrence at or after
    # the position of the previous event. Returns how many funnel steps were reached.
    m = -1
    for i in range(0, len(event_list)):

        j = np.where(dlist == event_list[i])[0]  # all indices of the current event
        j = j[j >= m]                            # only indices at/after the previous event's index
        if j.size == 0:
            return i                             # funnel broke at step i
        else:
            m = np.min(j)

    return i + 1                                 # every event was reached in order

sessions = ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'd', 'd', 'd']
events = ['dog', 'cat', 'tree', 'tree', 'dog', 'frog', 'cat', 'dog', 'cat', 'tree', 'cat', 'dog']
d1 = datetime(2014,8,1)
d2 = datetime(2014,8,2)
d3 = datetime(2014,8,3)
dates = [d1, d1, d1, d1, d1, d1, d1, d2, d2, d1, d1, d1]
dic = {'sessions':sessions, 'events':events, 'dates':dates}
df_tot = pd.DataFrame(dic)


#keep only sessions that contain at least the first event
gb_tot = df_tot.groupby('sessions')
df_filt = gb_tot.filter(lambda x: 'dog' in x['events'].values) #filter returns a DataFrame

#get the funnel position for each session:
#1 if only the first event is reached, 2 if the second event is reached, etc.
gb_filt = df_filt.groupby('sessions')
gb_funn = gb_filt.aggregate(lambda x: find_funnels_ex(x['events'].values, ['dog', 'cat']))

#join the funnel position to each session's start date
gb_filt = gb_filt.aggregate({'dates':np.min})
gb_filt['funnel'] = gb_funn['events']
df_funn = gb_filt.reset_index() #back to a plain DataFrame

#pivot so the funnel positions become columns (named 1 and 2, holding the funnel value)
#note: rows=/cols= are the old pivot_table keywords; newer pandas calls them index=/columns=
df_piv = pd.pivot_table(df_funn,'funnel', cols='funnel', rows=['sessions','dates'], aggfunc=np.sum)
df_piv = df_piv.reset_index()

#group by date and sum the funnel columns
df_piv = df_piv.set_index('dates')
gb_piv = df_piv.groupby(lambda x: x) #group by the date index
gb_final = gb_piv.aggregate({1:np.sum, 2:np.sum})

#total number of sessions per start date
gb_tot = df_tot.groupby('sessions')
gb_tot = gb_tot.aggregate({'dates':np.min})
gb_tot = gb_tot.set_index('dates')
gb_tot = gb_tot.groupby(lambda x: x).size() #count sessions per date
gb_final['total'] = gb_tot

#the pivot stored the value 2 (not a 1/0 indicator) for each session that reached the
#second event, so that column sums to twice the count -- halve it
gb_final[2] = gb_final[2] / 2.0
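
For readability, the funnel-position columns can then be relabeled to match the desired table; this assumes the pivot left them named 1 and 2 after the funnel values, as the aggregate call above implies:

gb_final = gb_final.rename(columns={1: 'reachedFirstEvent', 2: 'reachedSecondEvent'})
print(gb_final)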

1 Answer:

Answer 0 (score: 0):

Here's another version I came up with. Again, it's slow on a large amount of data :(

This function determines whether a session contains the current funnel:

def find_funnels(dlist, event_list):
    # Walk the funnel: each event must occur at or after the previous event's
    # first match. Returns True only if every event is reached in order.
    m = -1
    for i in range(0, len(event_list)):

        j = np.where(dlist == event_list[i])[0]
        j = j[j >= m]
        if j.size == 0:
            return False
        else:
            m = np.min(j)

    return True
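
As a quick sanity check on two of the toy sessions (session a reaches dog then cat; session d's only cat comes before its dog):

print(find_funnels(np.array(['dog', 'cat', 'tree']), ['dog', 'cat']))   # session 'a' -> True
print(find_funnels(np.array(['tree', 'cat', 'dog']), ['dog', 'cat']))   # session 'd' -> False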

This function applies the funnel, one step at a time, to the DataFrame:

def funnelz(df, eventList, groups, dates, events):

    dfz = pd.DataFrame()
    count = 1
    eventList2 = [eventList[0]]    # the funnel prefix checked so far

    while eventList:

        # after the first pass, keep only sessions that satisfy the current prefix
        if not dfz.empty:
            gb = df.groupby(groups)
            df = gb.filter(lambda x: find_funnels(x[events].values, eventList2))

        # count the surviving sessions by their start date
        gb = df.groupby(groups)
        gb = gb.aggregate({dates:np.min})
        gb = gb.set_index(dates)
        gb = gb.groupby(lambda x: x).size()

        if not dfz.empty:
            dfz['event'+str(count)+'Reached'] = gb
            count += 1
            eventList = eventList[1:]
            if eventList:
                eventList2.append(eventList[0])

        else:
            # first pass: total sessions per start date, before any filtering
            dfz['total'] = gb

    return dfz

It seems to work, but it's slow:

funnelz(df_tot, eventList=['dog', 'cat'], groups='sessions', dates='dates', events='events')
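
For what it's worth, here is a rough vectorized sketch of the same funnel logic that avoids the per-session Python loop: compute each event's position within its session, take the position of the first dog per session, look for a cat at a later position, and aggregate by the session's start date. The function name funnel_counts and its column handling are my own illustration, under the assumption that reachedFirstEvent means "reached dog but no later cat" and total means "sessions whose earliest event falls on that date":

import pandas as pd

def funnel_counts(df, first='dog', second='cat'):
    # position of each row within its session (original row order preserved)
    pos = df.groupby('sessions').cumcount()
    sess = df['sessions']

    # position of the first `first` event in each session, broadcast to every row
    first_pos = pos.where(df['events'] == first)
    first_min = first_pos.groupby(sess).transform('min')

    # positions of `second` events that occur strictly after the first event
    second_pos = pos.where((df['events'] == second) & (pos > first_min))

    per_sess = pd.DataFrame({
        'start': df.groupby('sessions')['dates'].min(),  # session start date
        'hit1': first_pos.groupby(sess).min().notna(),   # reached the first event
        'hit2': second_pos.groupby(sess).min().notna(),  # reached the second event after it
    })
    per_sess['reachedFirstEvent'] = per_sess['hit1'] & ~per_sess['hit2']
    per_sess['reachedSecondEvent'] = per_sess['hit2']

    out = per_sess.groupby('start')[['reachedFirstEvent', 'reachedSecondEvent']].sum()
    out['total'] = per_sess.groupby('start').size()
    return out

funnel_counts(df_tot)

On the toy df_tot this should reproduce the desired table; the heavy lifting is groupby/transform calls rather than a Python-level call per session, which may help at 3 million rows.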