Training the decision tree classifier
Let's train the decision tree classifier, as shown in the following code snippet:
In []: from sklearn import tree
       tree_model = tree.DecisionTreeClassifier(criterion='entropy',
                                                random_state=42)
       tree_model = tree_model.fit(X_train, y_train)
       tree_model
Out[]: DecisionTreeClassifier(class_weight=None, criterion='entropy',
           max_depth=None, max_features=None, max_leaf_nodes=None,
           min_impurity_split=1e-07, min_samples_leaf=1,
           min_samples_split=2, min_weight_fraction_leaf=0.0,
           presort=False, random_state=42, splitter='best')
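Before looking at the attributes, we can sanity-check the fitted model on held-out data. The following is a minimal sketch, assuming that X_test and y_test from the same train/test split are available:

In []: from sklearn.metrics import accuracy_score
       # predict labels for the held-out samples and compare them
       # with the ground truth
       y_pred = tree_model.predict(X_test)
       accuracy_score(y_test, y_pred)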
The most interesting of these for us are the following attributes of DecisionTreeClassifier (a short sketch after this list shows them in action):
- criterion: The function used to estimate the quality of a split (see the How decision tree learning works section).
- max_depth: The maximum depth of the tree.
- max_features: The maximum number of attributes to consider when looking for the best split.
- min_samples_leaf: The minimum number of objects in a leaf; for example, if it is equal to 3, then the tree will generate only those classification rules that are true for at least three objects.
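To see how these knobs affect the resulting model, here is a small sketch that refits the classifier with tighter constraints; the particular values (max_depth=3, min_samples_leaf=3) are illustrative, not recommendations:

In []: # a deliberately constrained tree: at most 3 levels deep,
       # at least 3 training objects per leaf
       shallow_tree = tree.DecisionTreeClassifier(criterion='entropy',
                                                  max_depth=3,
                                                  min_samples_leaf=3,
                                                  random_state=42)
       shallow_tree = shallow_tree.fit(X_train, y_train)
       # actual depth of the fitted tree; never exceeds max_depth
       shallow_tree.tree_.max_depth

A shallower tree usually generalizes better at the cost of some training accuracy, which is exactly the trade-off these hyperparameters control.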
These attributes are known as hyperparameters. They are different from model parameters: hyperparameters are properties of the model that are not learned from the data but are available for the user to adjust, while model parameters are what the machine learning algorithm itself learns. In a decision tree, the parameters are the specific split rules in its nodes; the hyperparameters (class_weight, criterion, max_depth, max_features, and so on) are like knobs you can turn to adjust the model to your specific needs. Hyperparameters must be tuned to the input data, and this is usually done using cross-validation (stay tuned).
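As a preview of that cross-validation step, the following sketch tunes two of the hyperparameters with scikit-learn's GridSearchCV; the grid values here are illustrative, and the best combination depends entirely on your data:

In []: from sklearn.model_selection import GridSearchCV
       # candidate values for two hyperparameters; every combination
       # is evaluated with 5-fold cross-validation on the training set
       param_grid = {'max_depth': [3, 5, 7, None],
                     'min_samples_leaf': [1, 3, 5]}
       search = GridSearchCV(
           tree.DecisionTreeClassifier(criterion='entropy',
                                       random_state=42),
           param_grid, cv=5)
       search = search.fit(X_train, y_train)
       # the combination with the highest mean cross-validated score
       search.best_params_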