Create groups/classes based on conditions within columns

Asked: 2016-09-26 21:20:29

Tags: python pandas

I need some help transforming my data so that I can read across the transaction data.

Business case

I am trying to group some related transactions together to create leave event groups or classes. This dataset represents workers taking various leave-of-absence events. I want to group any transactions that occur within 365 days of a leave event into the same leave event class. For charting trends, I would like to number the classes so that I get a sequence/pattern.

My code lets me see when the first event occurs, and it can identify when a new class starts, but it does not put every transaction into a class.

Requirements:

  • Label every row with the leave event class it belongs to.
  • Number each unique leave event. Using this example, index 0 would be Unique Leave Event 2, index 1 would be Unique Leave Event 2, index 2 would be Unique Leave Event 2, index 3 would be Unique Leave Event 1, and so on.

I have added the desired output in a column labeled "Desired Output". Note that each person can have more rows/events, and there can be more people.
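To make the 365-day rule concrete, here is a quick, illustration-only check of the day gaps between Employee 100's dates from the sample data below:

import pandas as pd

# Illustration only: day gaps between Employee 100's effective dates, oldest first
dates = pd.Series(pd.to_datetime(["2013-01-01", "2014-07-01", "2015-06-05", "2016-01-01"]))
print(dates.diff())
# The 2013 -> 2014 gap is 546 days (> 365), so a new leave event class starts there;
# the later gaps are under 365 days, so those rows stay in the same class.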

Sample data

import pandas as pd

data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
        'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01"],
        'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])

Some code I have tried

df['Effective Date'] = df['Effective Date'].astype('datetime64[ns]')
df['EmplidShift'] = df['Employee ID'].shift(-1)
df['Effdt-Shift'] = df['Effective Date'].shift(-1)
df['Prior Row in Same Emplid Class'] = "No"
df['Effdt Diff'] = df['Effdt-Shift'] - df['Effective Date']
df['Effdt Diff'] = df['Effdt Diff'].dt.days  # gap to the next row, in whole days (negative, since dates are descending)
df['Cumul. Count'] = df.groupby('Employee ID').cumcount()

# Mark the last row of each employee block (the earliest date, given the descending sort)
df['Groupby'] = df.groupby('Employee ID')['Cumul. Count'].transform('max')
df['First Row Appears?'] = ""
df.loc[df['Cumul. Count'] == df['Groupby'], 'First Row Appears?'] = "First Row"

# Mark rows whose next row belongs to the same employee
df.loc[df['Employee ID'] == df['EmplidShift'], 'Prior Row in Same Emplid Class'] = "Yes"

# Mark rows where the next (older) row of the same employee is more than a year earlier
df['Effdt > 1 Yr?'] = ""
df.loc[(df['Prior Row in Same Emplid Class'] == "Yes") & (df['Effdt Diff'] < -365), 'Effdt > 1 Yr?'] = "Yes"

df['Unique Leave Event'] = ""
df.loc[(df['Effdt > 1 Yr?'] == "Yes") | (df['First Row Appears?'] == "First Row"), 'Unique Leave Event'] = "Unique Leave Event"

df

2 Answers:

Answer 0 (score: 3)

This is a bit clunky, but it at least produces the correct output for your small example:

import pandas as pd

data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
        'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01-01"],
        'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])

df["Effective Date"] = pd.to_datetime(df["Effective Date"])
df = df.sort_values(["Employee ID","Effective Date"]).reset_index(drop=True)

# Seed the first row, then compare each row to the next one
df.loc[0, "Result"] = "Unique Leave Event 1"
for i, _ in df.iterrows():
    if i < len(df) - 1:
        if df.loc[i + 1, "Employee ID"] == df.loc[i, "Employee ID"]:
            if df.loc[i + 1, "Effective Date"] - df.loc[i, "Effective Date"] > pd.Timedelta('365 days'):
                # More than a year since the previous event: start a new event number
                df.loc[i + 1, "Result"] = "Unique Leave Event " + str(int(df.loc[i, "Result"].split()[-1]) + 1)
            else:
                df.loc[i + 1, "Result"] = df.loc[i, "Result"]
        else:
            # New employee: restart the numbering
            df.loc[i + 1, "Result"] = "Unique Leave Event 1"

Note that this code assumes the first row is always assigned the string Unique Leave Event 1.
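If you want to sanity-check the result against the question's Desired Output column, a quick comparison (not part of the solution itself; the Match column name here is just an example) would be:

df["Match"] = df["Result"] == df["Desired Output"]   # throwaway helper column
print(df[["Employee ID", "Effective Date", "Result", "Match"]])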

Edit: some explanation.

First I convert the dates to datetime format and then re-sort the dataframe so that the dates are in ascending order for each Employee ID.

Then I iterate over the rows of the frame using the built-in iterator iterrows. The _ in for i,_ is just a placeholder for the second variable, which I don't use here: the iterator returns both the row number and the row itself, and I only need the number.
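As a tiny standalone sketch (not taken from the answer's code) of what iterrows() yields, using a hypothetical two-row frame:

import pandas as pd

demo = pd.DataFrame({"Employee ID": ["100", "200"]})  # hypothetical frame
for i, row in demo.iterrows():
    # i is the row's index label, row is a Series holding that row's values
    print(i, row["Employee ID"])
# prints: 0 100, then 1 200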

Inside the iterator I do row-by-row comparisons, so by default I fill the first row by hand and then always assign to row i+1. I do it this way because I know the value of the first row but not the value of the last one. I then compare row i+1 to row i inside an if i < len(df)-1 check, because i+1 would raise an index error on the last iteration.

Within the loop I first check whether the Employee ID changes between the two rows. If it doesn't, I compare the dates of the two rows to see whether they are more than 365 days apart. If they are, I read the string "Unique Leave Event X" from row i, increment the number by one, and write it to row i+1. If the dates are closer together, I simply copy the string from the previous row.

If the Employee ID does change, I simply write "Unique Leave Event 1" to start over.
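For example, the increment step on its own, with a hypothetical previous value, works like this:

prev = "Unique Leave Event 1"   # hypothetical value read from row i
print("Unique Leave Event " + str(int(prev.split()[-1]) + 1))   # Unique Leave Event 2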

Note 1: iterrows() has no option to set this, so I can't iterate over just a subset of the rows.

Note 2: When you do have to iterate, always use one of the built-in iterators, and only iterate at all if the problem can't be solved any other way.

Note 3: When assigning values while iterating, always use .loc or .iloc.
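As a generic, made-up illustration of label-based assignment while iterating (demo is not a frame from the answer):

import pandas as pd

demo = pd.DataFrame({"Result": [""] * 3})
demo.loc[1, "Result"] = "Unique Leave Event 1"   # .loc writes into demo itself
print(demo)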

Answer 1 (score: 2)

You can do this without having to loop or iterate over the dataframe. Per the pandas documentation, you can use .apply() on a groupby object and define a function to apply to each group. If you combine this with .shift() (an approach shown by Wes McKinney), you can get the result without any loops.
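As a small sketch, separate from the actual solution below, of how .shift(1) lines each date up with the previous one so the gap can be tested:

import pandas as pd

dates = pd.Series(pd.to_datetime(["2013-01-01", "2014-07-01", "2015-06-05"]))  # hypothetical dates
print(dates - dates.shift(1) > pd.Timedelta('365 days'))
# 0    False  (no previous date)
# 1     True  (more than a year later -> new event number)
# 2    False  (under a year -> same event number)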

Terse example:

# Group by Employee ID
grouped = df.groupby("Employee ID")
# Define function 
def get_unique_events(group):
    # Convert to date and sort by date, like @Khris did
    group["Effective Date"] = pd.to_datetime(group["Effective Date"])
    group = group.sort_values("Effective Date")
    event_series = (group["Effective Date"] - group["Effective Date"].shift(1) > pd.Timedelta('365 days')).apply(lambda x: int(x)).cumsum()+1
    return event_series

event_df = pd.DataFrame(grouped.apply(get_unique_events).rename("Unique Event")).reset_index(level=0)
df = pd.merge(df, event_df[['Unique Event']], left_index=True, right_index=True)
df['Output'] = df['Unique Event'].apply(lambda x: "Unique Leave Event " + str(x))
df['Match'] = df['Desired Output'] == df['Output']

print(df)

Output:

  Employee ID Effective Date        Desired Output  Unique Event  \
3         100     2013-01-01  Unique Leave Event 1             1
2         100     2014-07-01  Unique Leave Event 2             2
1         100     2015-06-05  Unique Leave Event 2             2
0         100     2016-01-01  Unique Leave Event 2             2
6         200     2013-01-01  Unique Leave Event 1             1
5         200     2015-01-01  Unique Leave Event 2             2
4         200     2016-01-01  Unique Leave Event 2             2
7         300        2014-01  Unique Leave Event 1             1

                 Output Match
3  Unique Leave Event 1  True
2  Unique Leave Event 2  True
1  Unique Leave Event 2  True
0  Unique Leave Event 2  True
6  Unique Leave Event 1  True
5  Unique Leave Event 2  True
4  Unique Leave Event 2  True
7  Unique Leave Event 1  True

A more verbose example, for clarity:

import pandas as pd

data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
        'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01"],
        'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])

# Group by Employee ID
grouped = df.groupby("Employee ID")

# Define a function to get the unique events
def get_unique_events(group):
     # Convert to date and sort by date, like @Khris did
    group["Effective Date"] = pd.to_datetime(group["Effective Date"])
    group = group.sort_values("Effective Date")
    # Define a series of booleans to determine whether the time between dates is over 365 days
    # Use .shift(1) to look back one row
    is_year = group["Effective Date"] - group["Effective Date"].shift(1) > pd.Timedelta('365 days')
    # Convert booleans to integers (0 for False, 1 for True)
    is_year_int = is_year.apply(lambda x: int(x))    
    # Use the cumulative sum function in pandas to get the cumulative adjustment from the first date.
    # Add one to start the first event as 1 instead of 0
    event_series = is_year_int.cumsum() + 1
    return event_series

# Run function on df and put results into a new dataframe
# Convert Employee ID back from an index to a column with .reset_index(level=0)
event_df = pd.DataFrame(grouped.apply(get_unique_events).rename("Unique Event")).reset_index(level=0)

# Merge the dataframes
df = pd.merge(df, event_df[['Unique Event']], left_index=True, right_index=True)

# Add string to match desired format
df['Output'] = df['Unique Event'].apply(lambda x: "Unique Leave Event " + str(x))

# Check to see if output matches desired output
df['Match'] = df['Desired Output'] == df['Output']

print(df)

You get the same output:

  Employee ID Effective Date        Desired Output  Unique Event  \
3         100     2013-01-01  Unique Leave Event 1             1
2         100     2014-07-01  Unique Leave Event 2             2
1         100     2015-06-05  Unique Leave Event 2             2
0         100     2016-01-01  Unique Leave Event 2             2
6         200     2013-01-01  Unique Leave Event 1             1
5         200     2015-01-01  Unique Leave Event 2             2
4         200     2016-01-01  Unique Leave Event 2             2
7         300        2014-01  Unique Leave Event 1             1

                 Output Match
3  Unique Leave Event 1  True
2  Unique Leave Event 2  True
1  Unique Leave Event 2  True
0  Unique Leave Event 2  True
6  Unique Leave Event 1  True
5  Unique Leave Event 2  True
4  Unique Leave Event 2  True
7  Unique Leave Event 1  True