Cross-validating a decision tree


After finishing the decision tree function, I decided to check the tree's accuracy and to confirm that, if I built another tree from the same data, at least the first split would be the same.

from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import os
from sklearn import tree
from sklearn import preprocessing
import sys
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
import matplotlib.pyplot as plt  # needed for plt.figure / plt.show below

.....

def desicion_tree(data_set:pd.DataFrame,val_1 : str, val_2 : str):
    # Encoder --> fit doesn't accept strings
    feature_cols = data_set.columns[0:-1]
    X = data_set[feature_cols] # Independent variables
    y = data_set.Mut #class
    y = y.to_list()
    le = preprocessing.LabelBinarizer()
    y = le.fit_transform(y)
    # Split data set into training set and test set
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1) # 75% 
    # Create Decision Tree classifier object
    clf = DecisionTreeClassifier(max_depth= 4, criterion= 'entropy')
    # Train Decision Tree classifier
    clf.fit(X_train, y_train)
    # Predict the response for test dataset
    y_pred = clf.predict(X_test)
    #Perform cross validation
    for i in range(2, 8):
        plt.figure(figsize=(14, 7))
        # Perform Kfold cross validation
        #cv = ShuffleSplit(test_size=0.25, random_state=0)
        kf = KFold(n_splits=5,shuffle= True)
        scores = cross_val_score(estimator=clf, X=X, y=y, n_jobs=4, cv=kf)
        print("%0.2f accuracy with a standard deviation of %0.2f" % (scores.mean(), scores.std()))
        tree.plot_tree(clf,filled = True,feature_names=feature_cols,class_names=[val_1,val_2])
        plt.show()
desicion_tree(car_rep_sep_20, 'Categorial', 'Non categorial')

Next, I wrote a loop in order to shrink the tree with the split values using KFold. The accuracy varies (around 90%), but the tree stays the same. Where am I going wrong?

Ben Reiniger

cross_val_score clones the estimator in order to fit and score on each fold, so the clf object stays the same as when you fit it on the entire dataset before the loop, and the tree you plot is that one rather than any of the cross-validated ones.
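
A minimal, self-contained sketch of that behaviour, using the iris dataset as a stand-in for the question's data: the tree stored on clf is identical before and after the call, because cross_val_score only ever fits clones.

from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)        # stand-in data for illustration
clf = DecisionTreeClassifier(max_depth=4, criterion='entropy')
clf.fit(X, y)                            # the fit done before the loop
nodes_before = clf.tree_.node_count      # structure of the tree that gets plotted

cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True))
nodes_after = clf.tree_.node_count

print(nodes_before == nodes_after)       # True: clf itself was never refit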

To get what you want, I think you can use cross_validate with the option return_estimator=True. If your cv object has the desired number of splits, you don't need the loop either:

from sklearn.model_selection import cross_validate  # not imported in the snippet above

kf = KFold(n_splits=5, shuffle=True)
cv_results = cross_validate(
    estimator=clf,
    X=X,
    y=y,
    n_jobs=4,
    cv=kf,
    return_estimator=True,
)
print("%0.2f accuracy with a standard deviation of %0.2f" % (
    cv_results['test_score'].mean(),
    cv_results['test_score'].std(),
))
for est in cv_results['estimator']:
    tree.plot_tree(est, filled=True, feature_names=feature_cols, class_names=[val_1, val_2])
    plt.show()

Alternatively, loop over the folds (or another cv iterator) manually, fitting the model and plotting its tree inside the loop, as in the sketch below.
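
A sketch of that manual loop, reusing clf, X, y, feature_cols, val_1 and val_2 from the question's snippet; clone from sklearn.base gives each fold a fresh, unfitted copy of the estimator, and np.ravel flattens the binarized labels:

from sklearn.base import clone

kf = KFold(n_splits=5, shuffle=True)
for train_idx, test_idx in kf.split(X):
    fold_clf = clone(clf)                                    # fresh, unfitted copy for this fold
    fold_clf.fit(X.iloc[train_idx], np.ravel(y[train_idx]))
    acc = fold_clf.score(X.iloc[test_idx], np.ravel(y[test_idx]))
    print("fold accuracy: %0.2f" % acc)
    plt.figure(figsize=(14, 7))
    tree.plot_tree(fold_clf, filled=True, feature_names=feature_cols,
                   class_names=[val_1, val_2])
    plt.show()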
