Commit 4b6a3d15 authored by isabelle

Merge branch 'master' of gitlab.cl.uni-heidelberg.de:igraf/exp-ml-2-hillengass-graf

parents 2ab3d82d 7e1e7d01
@@ -96,6 +96,9 @@ The following table summarizes the performance of different baseline models on t

### Decision Tree




### Random Forest

### CNN (Convolutional Neural Network)
@@ -152,9 +155,11 @@ param_grid = { 'var_smoothing': np.logspace(0,-20, num=20)}
- for some classes, the diagonal is quite bright (e.g. apricots and passion fruits) :arrow_right: the classifier is quite good at predicting these classes
- but we also see that the classifier has a **strong bias** towards some classes (e.g. apricots, jostaberries, passion fruits and figs)

![Naive Bayes Best Parameters](figures/naive_bayes/grid_search_results_50x50_hsv_sobel_var_smoothing_best_params.png)
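For reference, the `var_smoothing` grid search above can be sketched with scikit-learn's `GridSearchCV`; the data here is a random synthetic stand-in for the flattened image features, not the actual fruit dataset:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the flattened image features (hypothetical data).
rng = np.random.default_rng(0)
X = rng.random((60, 20))
y = rng.integers(0, 3, size=60)

# The grid from the report: 20 smoothing values from 1e0 down to 1e-20.
param_grid = {'var_smoothing': np.logspace(0, -20, num=20)}

search = GridSearchCV(GaussianNB(), param_grid, cv=3, scoring='accuracy')
search.fit(X, y)
print(search.best_params_)
```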

### Decision Tree

![Decision Tree Best Parameters](figures/decision_tree/grid_search_results_50x50_hsv_sobel_decision_tree_best_params.png)

### Random Forest
**Feature Combinations:**
@@ -225,15 +230,6 @@ Results for RandomForestClassifier classifier on 100x100_standard images:
| 125x125 | Canny 300 threshold | 0.200 |  `{'max_depth': 70, 'max_features': 'sqrt', 'min_samples_leaf': 2, 'min_samples_split': 2, 'n_estimators': 100}` | 0.97 min | 0.0007 min


- **No filters** - best parameters:
![Confusion Matrix](figures/random_forest/RandomForestClassifier_50x50__confusion_matrix_max_depth_70_max_features_sqrt_min_samples_leaf_2_min_samples_split_2_n_estimators_100.png)

- **HSV + Sobel** - best parameters:
![Confusion Matrix](figures/random_forest/RandomForestClassifier_50x50_hsv_sobel_confusion_matrix_max_depth_40_max_features_sqrt_min_samples_leaf_2_min_samples_split_5_n_estimators_100.png)

- **HSV only** - best parameters:
![Confusion Matrix](figures/random_forest/RandomForestClassifier_50x50_hsv-only_confusion_matrix_max_depth_40_max_features_sqrt_min_samples_leaf_2_min_samples_split_2_n_estimators_100.png)

- Observations:
    - Both classifiers make the same mistakes, e.g. confusing raspberries, redcurrants and strawberries :strawberry: (see the bottom-right corner of the confusion matrix)
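This kind of systematic confusion can be read off a row-normalized confusion matrix; a minimal sketch with scikit-learn's `confusion_matrix` and hypothetical labels for three berry classes (not the real dev-set predictions):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical dev-set labels and predictions for three berry classes.
classes = ['raspberry', 'redcurrant', 'strawberry']
y_true = ['raspberry'] * 4 + ['redcurrant'] * 4 + ['strawberry'] * 4
y_pred = ['raspberry', 'raspberry', 'redcurrant', 'strawberry',
          'redcurrant', 'raspberry', 'redcurrant', 'strawberry',
          'strawberry', 'strawberry', 'raspberry', 'redcurrant']

cm = confusion_matrix(y_true, y_pred, labels=classes)
# Row-normalize so each row shows the per-class prediction distribution;
# off-diagonal mass marks systematic confusions between the berries.
cm_norm = cm / cm.sum(axis=1, keepdims=True)
```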

@@ -243,9 +239,14 @@ Results for RandomForestClassifier classifier on 100x100_standard images:
    - if we also want to find out how the parameters influence the accuracy, we can visualize the results of the grid search as below; the code we used for this is slightly adapted from a [stackoverflow response](https://stackoverflow.com/questions/37161563/how-to-graph-grid-scores-from-gridsearchcv)
        - :mag: the figure shows the accuracy (for both the train and dev set) when all parameters are fixed to their best values except the one on the x-axis
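The "all parameters fixed to their best value except one" selection described above can be sketched as follows; the `cv_results` dict here is a minimal hypothetical stand-in for a real `cv_results_` from `GridSearchCV`, and the returned arrays are what would be handed to matplotlib for plotting:

```python
import numpy as np

def fixed_at_best_curve(cv_results, best_params, param):
    """Select the grid-search rows where every parameter except `param`
    is at its best value; return (values, train_acc, dev_acc) sorted by
    `param`. Sketch of the adapted-from-stackoverflow idea above."""
    mask = np.array([
        all(p[k] == best_params[k] for k in best_params if k != param)
        for p in cv_results['params']
    ])
    xs = np.array([p[param] for p in cv_results['params']])[mask]
    order = np.argsort(xs)
    return (xs[order],
            np.asarray(cv_results['mean_train_score'])[mask][order],
            np.asarray(cv_results['mean_test_score'])[mask][order])

# Hypothetical minimal grid-search results over two parameters:
cv_results = {
    'params': [{'max_depth': d, 'n_estimators': n}
               for d in (10, 40, 70) for n in (50, 100)],
    'mean_train_score': [.90, .95, .97, .91, .96, .99],
    'mean_test_score':  [.50, .55, .54, .52, .57, .56],
}
best = {'max_depth': 40, 'n_estimators': 100}
xs, train_acc, dev_acc = fixed_at_best_curve(cv_results, best, 'max_depth')
```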

![GridSearch](figures/random_forest/grid_search_results_50x50_standard_max_depth_10_80.png)
![GridSearch](figures/random_forest/grid_search_results_50x50_hsv_sobel_max_depth_10_80.png)
![GridSearch](figures/random_forest/grid_search_results_50x50_hsv-only_max_depth_10_80.png)

![Random Forest Best Parameters](figures/random_forest/grid_search_results_50x50_hsv_random_forest_best_params.png)


Confusion Matrix -  No filters  - best parameters        |  Confusion Matrix -  HSV features - best parameters
:-------------------------:|:-------------------------:
![Random Forest Grid Search](figures/random_forest/RandomForestClassifier_50x50_standard_confusion_matrix_max_depth_70_max_features_sqrt_min_samples_leaf_2_min_samples_split_2_n_estimators_100.png)  |  ![Random Forest Grid Search](figures/random_forest/RandomForestClassifier_50x50_hsv-only_confusion_matrix_max_depth_40_max_features_sqrt_min_samples_leaf_2_min_samples_split_2_n_estimators_100.png)



### CNN (Convolutional Neural Network)
@@ -284,10 +285,21 @@ The performance on the dev and test set is (as expected) nearly the same, which

### Feature Importance

*What are the most important features for our classification models?*

To answer this question, we can use the `feature_importances_` attribute of the Decision Tree and Random Forest models. Because the Naive Bayes and CNN models do not have a direct feature importance attribute, we will focus on the Decision Tree and Random Forest models for this analysis.

When using the RGB or HSV values as features, we have three features for each pixel. In order to visualize the feature importance, we **sum the feature importances for each pixel** and reshape the resulting array to the original image shape that was used for training the model. This way, we can visualize the feature importance for each pixel in the image.
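The per-pixel summation can be sketched as follows, assuming the features were flattened in (row, column, channel) order; random values stand in for a trained model's `feature_importances_`:

```python
import numpy as np

# Hypothetical importances for a 50x50 image with 3 channels (e.g. HSV),
# standing in for model.feature_importances_ (length 50*50*3, sums to 1).
h = w = 50
importances = np.random.default_rng(0).random(h * w * 3)
importances /= importances.sum()

# Sum the three channel importances per pixel and reshape to the original
# image shape, giving one importance value per pixel for visualization.
per_pixel = importances.reshape(h, w, 3).sum(axis=2)
```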

As can be seen in the following plot, the **pixels in the middle** have higher values and are thus more important for the classification than the pixels near the edges. The same pattern is found for all decision tree and random forest models that we have trained. This meets our expectations, as the middle of the image is **where the fruit is typically located** and the edges are often just the background.

<img src="figures/random_forest/RandomForestClassifier_50x50_hsv_feature_importances_max_depth_70_max_features_sqrt_min_samples_leaf_2_min_samples_split_2_n_estimators_100.png" alt= "Random Forest Feature Importance" width="500" height="auto">


### Data Reduction

*How well do our models perform with a reduced dataset size for training?*

We have also tested training a random forest model and a CNN model on reduced dataset sizes. In the diagram below you can see the results of these tests: the performance of the models decreases with a reduced dataset size, but we can still achieve good performance with 50% or more of the original dataset size.
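A minimal sketch of such a data-reduction experiment for the random forest, using synthetic data and a hypothetical train/dev split in place of the actual fruit dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the flattened image dataset (hypothetical).
rng = np.random.default_rng(0)
X = rng.random((300, 30))
y = (X[:, 0] + X[:, 1] > 1).astype(int)
X_train, X_dev, y_train, y_dev = train_test_split(X, y, random_state=0)

# Retrain on random subsets of the training set and score on the dev set.
scores = {}
for frac in (0.1, 0.25, 0.5, 0.75, 1.0):
    n = int(len(X_train) * frac)
    idx = rng.choice(len(X_train), size=n, replace=False)
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_train[idx], y_train[idx])
    scores[frac] = clf.score(X_dev, y_dev)
```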

![Data Reduction](figures/data_reduction.png)