Random forest accuracy evaluation
Accuracy on the training data:
In []: rf_model.score(X_train, y_train)
Out[]: 0.98999999999999999
Accuracy on the test data:
In []: rf_model.score(X_test, y_test)
Out[]: 0.90333333333333332
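The gap between roughly 0.99 on the training set and 0.90 on the test set hints at some overfitting. If you want to probe it, a sketch like the following (the depth values and random_state are illustrative, not the book's settings) compares the two scores at different tree depths:
from sklearn.ensemble import RandomForestClassifier

# Compare training vs. test accuracy for a few tree depths (illustrative values).
for depth in [3, 5, 10, None]:
    probe = RandomForestClassifier(n_estimators=100, max_depth=depth, random_state=42)
    probe.fit(X_train, y_train)
    print(depth, probe.score(X_train, y_train), probe.score(X_test, y_test))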
Cross-validation:
In []:
from sklearn.model_selection import cross_val_score
scores = cross_val_score(rf_model, features, df.label, cv=10)
np.mean(scores)
Out[]: 0.89700000000000002
In []: print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Accuracy: 0.90 (+/- 0.06)
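The ±0.06 in the printout is two standard deviations of the ten per-fold accuracies. cross_val_score uses the estimator's default scorer (accuracy here); as a sketch, any other built-in scorer string can be cross-validated the same way, for example macro-averaged F1:
from sklearn.model_selection import cross_val_score

# Cross-validate a different metric by passing one of sklearn's scorer strings.
f1_scores = cross_val_score(rf_model, features, df.label, cv=10, scoring='f1_macro')
print("F1 (macro): %0.2f (+/- %0.2f)" % (f1_scores.mean(), f1_scores.std() * 2))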
Precision and recall:
In []:
from sklearn.metrics import precision_score, recall_score
predictions = rf_model.predict(X_test)
predictions = np.array(list(map(lambda x: x == 'rabbosaurus', predictions)), dtype='int')
true_labels = np.array(list(map(lambda x: x == 'rabbosaurus', y_test)), dtype='int')
precision_score(true_labels, predictions)
Out[]: 0.9072847682119205
In []: recall_score(true_labels, predictions)
Out[]: 0.90131578947368418
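The manual conversion to 0/1 arrays is only needed because the labels are strings. As a sketch (not from the book), scikit-learn's metrics can also take the string labels directly if you tell them which class counts as positive:
# Sketch: the same metrics without the manual 0/1 conversion; pos_label
# tells scikit-learn which class is the positive one (string labels are fine).
from sklearn.metrics import precision_score, recall_score

raw_predictions = rf_model.predict(X_test)
print(precision_score(y_test, raw_predictions, pos_label='rabbosaurus'))
print(recall_score(y_test, raw_predictions, pos_label='rabbosaurus'))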
F1-score:
In []:
from sklearn.metrics import f1_score
f1_score(true_labels, predictions)
Out[]: 0.90429042904290435
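F1 is the harmonic mean of precision and recall, so the value above can be checked by hand from the two numbers just computed:
# Quick check: F1 is the harmonic mean of precision and recall.
p, r = 0.9072847682119205, 0.90131578947368418
print(2 * p * r / (p + r))   # ~0.9043, matching f1_score above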
Confusion matrix:
In []:
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, rf_model.predict(X_test))
Out[]:
array([[134,  14],
       [ 15, 137]])
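The earlier precision and recall follow directly from these counts. scikit-learn orders the rows and columns by sorted label, so assuming rabbosaurus is the second class here, cell [1, 1] holds its true positives:
# Reading precision and recall off the confusion matrix (assuming the
# second row/column corresponds to rabbosaurus).
tp, fn = 137, 15    # second row: true rabbosaurus, predicted right / wrong
fp = 14             # first row, second column: other class mistaken for rabbosaurus
print(tp / float(tp + fp))   # precision ~0.907
print(tp / float(tp + fn))   # recall    ~0.901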
You export a random forest for iOS in the same way you do a decision tree.
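A minimal sketch of that export, assuming the same coremltools conversion used earlier for the decision tree; the feature names and the file name here are placeholders, not the book's exact values:
import coremltools

# Convert the fitted scikit-learn forest into a Core ML model.
coreml_model = coremltools.converters.sklearn.convert(
    rf_model,
    input_features=list(features.columns),   # hypothetical feature names
    output_feature_names='label')             # hypothetical output name
coreml_model.save('RandomForest.mlmodel')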