I have a pandas DataFrame with multiple columns on which I am computing a cumulative value.
I would now like to derive the incremental values from it.
This is my current dataset:
Gender  Cubic_Cap  Branch  UWYear  yhat_all
M       1000       A       2015          19
M       1000       A       2015          20
M       1000       A       2015          26
M       1000       A       2015          30
F       1500       B       2016           1
F       1500       B       2016          25
F       1500       B       2016          36
F       1500       B       2016          49
My desired result is:
yhat_incremental
0
1
6
4
1
24
11
13
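For reference, a minimal sketch that rebuilds the sample above (rating_factors is assumed here to be the four grouping columns used in the attempts below):

import pandas as pd

# Reproducible copy of the sample data shown above
all_c1 = pd.DataFrame({
    'Gender':    ['M', 'M', 'M', 'M', 'F', 'F', 'F', 'F'],
    'Cubic_Cap': [1000, 1000, 1000, 1000, 1500, 1500, 1500, 1500],
    'Branch':    ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'],
    'UWYear':    [2015, 2015, 2015, 2015, 2016, 2016, 2016, 2016],
    'yhat_all':  [19, 20, 26, 30, 1, 25, 36, 49],
})

# Assumed grouping columns (not stated explicitly above)
rating_factors = ['Gender', 'Cubic_Cap', 'Branch', 'UWYear']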
I've tried the following methods (but to no avail):
all_c1['incremental_yhat'] = all_c1.groupby(rating_factors)['yhat_all'].diff().fillna(all_c1['yhat_all'])
I've also tried this:
all_c1['incremental_yhat'] = all_c1['yhat_all'].shift().where(all_c1[rating_factors].eq(all_c1[rating_factors].shift()))
Are there any other methods I can use to obtain this?
1 Answer
If you need only the very first value set to 0 and the differences computed per group:
rating_factors = ['Var1','Var2','Var3','Var4']

# Difference within each group; the first row of each group has no previous
# value, so its NaN is filled with the cumulative value itself
all_c1['incr Value'] = (all_c1.groupby(rating_factors, dropna=False)['CumValue'].diff()
                              .fillna(all_c1['CumValue']))
# Force the very first row to 0, as in the desired output
all_c1.loc[0, 'incr Value'] = 0
print(all_c1)
  Var1 Var2 Var3 Var4  CumValue  incr Value
0   V1   X1   L1   R1        19         0.0
1   V1   X1   L1   R1        20         1.0
2   V1   X1   L1   R1        26         6.0
3   V1   X1   L1   R1        30         4.0
4   V2   X1   L1   R1         1         1.0
5   V2   X1   L1   R1        25        24.0
6   V2   X1   L1   R1        36        11.0
7   V2   X1   L1   R1        49        13.0
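The same idea with the question's column names would look roughly like this (a sketch; it assumes rating_factors is the four grouping columns and that the cumulative column is yhat_all):

rating_factors = ['Gender', 'Cubic_Cap', 'Branch', 'UWYear']

all_c1['yhat_incremental'] = (all_c1.groupby(rating_factors, dropna=False)['yhat_all']
                                    .diff()
                                    .fillna(all_c1['yhat_all']))
# Force the very first row to 0, matching the desired output
all_c1.loc[0, 'yhat_incremental'] = 0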
Alternative:
rating_factors = ['Var1','Var2','Var3','Var4']

# Subtract the previous cumulative value within each group; fill_value=0
# makes the first row of each group subtract 0 (i.e. keep its own value)
all_c1['incr Value'] = (all_c1['CumValue']
                        .sub(all_c1.groupby(rating_factors, dropna=False)['CumValue']
                             .shift(fill_value=0)))
all_c1.loc[0, 'incr Value'] = 0
print(all_c1)
  Var1 Var2 Var3 Var4  CumValue  incr Value
0   V1   X1   L1   R1        19           0
1   V1   X1   L1   R1        20           1
2   V1   X1   L1   R1        26           6
3   V1   X1   L1   R1        30           4
4   V2   X1   L1   R1         1           1
5   V2   X1   L1   R1        25          24
6   V2   X1   L1   R1        36          11
7   V2   X1   L1   R1        49          13
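As a quick sanity check (a sketch using the answer's column names): if the first row is left as diff().fillna(...) produces it, i.e. not forced to 0, the per-group cumulative sum of the increments reconstructs CumValue exactly:

# Recompute the increments without the loc[0] override
incr = (all_c1.groupby(rating_factors, dropna=False)['CumValue'].diff()
              .fillna(all_c1['CumValue']))

# Cumulative sum of the increments within each group should round-trip
# back to the original cumulative column
roundtrip = all_c1.assign(incr=incr).groupby(rating_factors, dropna=False)['incr'].cumsum()
print(roundtrip.eq(all_c1['CumValue']).all())   # expected: True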
Comments:
… df provided. Is this "CumValue"? Also, do you really have duplicate column labels? I mean: 4x "Var". Because pd doesn't really like that, so it might be for that reason that your first attempt doesn't work, because that looks fine otherwise. To dedup column labels, see here. Finally, why does group 1 start with 0, but group 2 with 1? Why not 19 and 1, or 2x 0? – ouroboros1 Commented Jan 31 at 11:50
… rating_factors? One assumes rating_factors = ['Gender', 'Cubic_Cap', 'Branch', 'UWYear']. But if so, your first attempt should work just fine, with the minor point that the first value will be 19 (on that, see my comment above: it is not clear why it should be 0), but you can use df.loc for that, as in the answer by @jezrael. Do you get the correct values with that code for this sample? If so, I take that to mean that all_c1 is simply not a correct repr. of your actual data. – ouroboros1 Commented Jan 31 at 12:05
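If the frame really does have duplicate column labels, as the comment above suspects, one way to deduplicate them before grouping is a sketch along these lines (dedup_columns is a hypothetical helper, not a pandas function):

import pandas as pd

def dedup_columns(df: pd.DataFrame) -> pd.DataFrame:
    # Append a running counter to repeated labels, e.g. Var, Var.1, Var.2
    counts = {}
    new_cols = []
    for col in df.columns:
        n = counts.get(col, 0)
        new_cols.append(col if n == 0 else f"{col}.{n}")
        counts[col] = n + 1
    out = df.copy()
    out.columns = new_cols
    return out

# Usage (sketch): deduplicate first, then group as before
# all_c1 = dedup_columns(all_c1)
# all_c1.groupby(rating_factors)['yhat_all'].diff()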