Split the index into separate columns in pandas

亚纳德姆

I have a large dataframe from which I get the data I need with groupby. From the index of the new dataframe I need to get several separate columns.

Part of the original dataframe looks like this:

        code         place     vl   year    week
0   111.0002.0056   region1     1   2017    29
1   112.6500.2285   region2     1   2017    31
2   112.5600.6325   region2     1   2017    30
3   112.5600.6325   region2     1   2017    30
4   112.5600.8159   region2     1   2017    30
5   111.0002.0056   region2     1   2017    29
6   111.0002.0056   region2     1   2017    30
7   111.0002.0056   region2     1   2017    28
8   112.5600.8159   region3     1   2017    31
9   112.5600.8159   region3     1   2017    28
10  111.0002.0114   region3     1   2017    31
....
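
For reference, a minimal sketch that rebuilds the sample rows shown above (dtypes are assumptions), so the snippets below can be run end to end:

import pandas as pd

df_test = pd.DataFrame({
    'code':  ['111.0002.0056', '112.6500.2285', '112.5600.6325', '112.5600.6325',
              '112.5600.8159', '111.0002.0056', '111.0002.0056', '111.0002.0056',
              '112.5600.8159', '112.5600.8159', '111.0002.0114'],
    'place': ['region1', 'region2', 'region2', 'region2', 'region2', 'region2',
              'region2', 'region2', 'region3', 'region3', 'region3'],
    'vl':    [1] * 11,            # assumed integer counts, as printed above
    'year':  [2017] * 11,
    'week':  [29, 31, 30, 30, 30, 29, 30, 28, 31, 28, 31],
})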

After applying groupby (code: df_test1 = df_test.groupby(['code', 'year', 'week', 'place'])['vl'].sum().unstack(fill_value=0)), it looks like this:

                        place  region1  region2  region3  region4  index1
code            year    week                    
111.0002.0006   2017     29       0        3        0        0     (111.0002.0006, 2017, 29)
                         30       0        7        0        0     (111.0002.0006, 2017, 30)
111.0002.0018   2017     29       0        0        0        0     (111.0002.0018, 2017, 29)
111.0002.0029   2017     30       0        0        0        0     (111.0002.0029, 2017, 30)
111.0002.0055   2017     28       0        33       0        8     (111.0002.0055, 2017, 28)
                         29       1        155      2        41    (111.0002.0055, 2017, 29)
                         30       0        142      1        39    (111.0002.0055, 2017, 30)
                         31       0        31       0        13    (111.0002.0055, 2017, 31)
111.0002.0056   2017     28       9        36       0        4     (111.0002.0056, 2017, 28)
                         29       20       75       2        37    (111.0002.0056, 2017, 29)
                         30       17       81       2        33    (111.0002.0056, 2017, 30)
....
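
As the printout suggests, code, year and week are levels of the (Multi)Index rather than regular columns, and place is the name of the column axis; a quick check (expected output in the comments):

print(list(df_test1.index.names))   # ['code', 'year', 'week']
print(df_test1.columns.name)        # place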

I saved the index in a separate column index1 (code: df_test1['index1'] = df_test1.index), and now I need to split that index1 column into three separate columns: code, year and week.

The result should look like this:

region1 region2 region3 region4       code     year  week                   
   0       3       0       0    111.0002.0006  2017   29
   0       7       0       0    111.0002.0006  2017   30
   0       0       0       0    111.0002.0018  2017   29
   0       0       0       0    111.0002.0029  2017   30
   0       33      0       8    111.0002.0055  2017   28
   1       155     2       41   111.0002.0055  2017   29
   0       142     1       39   111.0002.0055  2017   30
   0       31      0       13   111.0002.0055  2017   31
....

Any suggestions would be greatly appreciated!

jezrael

Instead of adding df_test1['index1'] = df_test1.index, use reset_index for a clean df, and add rename_axis to remove the column name place:

df_test1 = df_test.groupby(['code', 'year', 'week', 'place'])['vl'].sum() \
                  .unstack(fill_value=0) \
                  .reset_index() \
                  .rename_axis(None, axis=1)  # removes the leftover column name 'place'
print(df_test1)

            code  year  week  region1  region2  region3
0  111.0002.0056  2017    28        0        1        0
1  111.0002.0056  2017    29        1        1        0
2  111.0002.0056  2017    30        0        1        0
3  111.0002.0114  2017    31        0        0        1
4  112.5600.6325  2017    30        0        2        0
5  112.5600.8159  2017    28        0        0        1
6  112.5600.8159  2017    30        0        1        0
7  112.5600.8159  2017    31        0        0        1
8  112.6500.2285  2017    31        0        1        0
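
If the index1 column of tuples from the question is kept instead, it can also be split directly; a minimal sketch, assuming index1 holds the (code, year, week) tuples shown above:

# expand each (code, year, week) tuple into its own column
df_split = pd.DataFrame(df_test1['index1'].tolist(),
                        index=df_test1.index,
                        columns=['code', 'year', 'week'])
# attach the new columns, drop the helper column and the old MultiIndex
df_test1 = pd.concat([df_test1.drop('index1', axis=1), df_split], axis=1) \
             .reset_index(drop=True)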

Finally, change the order of the columns if necessary:

# all names in cols are existing columns of df_test1
cols = ['code', 'year', 'week']
df_test1 = df_test1[[x for x in df_test1.columns if x not in cols] + cols]
print(df_test1)
   region1  region2  region3           code  year  week
0        0        1        0  111.0002.0056  2017    28
1        1        1        0  111.0002.0056  2017    29
2        0        1        0  111.0002.0056  2017    30
3        0        0        1  111.0002.0114  2017    31
4        0        2        0  112.5600.6325  2017    30
5        0        0        1  112.5600.8159  2017    28
6        0        1        0  112.5600.8159  2017    30
7        0        0        1  112.5600.8159  2017    31
8        0        1        0  112.6500.2285  2017    31
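
An equivalent way to reorder is DataFrame.reindex, purely a matter of taste (a sketch that behaves the same for these columns):

cols = ['code', 'year', 'week']
# keep the region columns first, then append code/year/week at the end
df_test1 = df_test1.reindex(columns=[c for c in df_test1.columns if c not in cols] + cols)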
